* [PATCH 00 of 24] [RFC] libxc: hypercall buffers
@ 2010-09-06 13:38 Ian Campbell
  2010-09-06 13:38 ` [PATCH 01 of 24] xen: define raw version of set_xen_guest_handle Ian Campbell
                   ` (25 more replies)
  0 siblings, 26 replies; 34+ messages in thread
From: Ian Campbell @ 2010-09-06 13:38 UTC (permalink / raw)
  To: xen-devel; +Cc: Ian Campbell

libxc currently locks various on-stack data structures using mlock(2)
in order to try to make them safe to pass to hypercalls (which
require the memory to be mapped).

There are several issues with this approach:

1) mlock/munlock do not nest, so mlocking multiple pieces of data on
   the stack which happen to share a page causes everything to be
   unlocked on the first munlock, not the last (see the sketch after
   this list). This is likely to be OK today for the uses in libxc
   taken in isolation but could impact any caller of libxc which
   uses mlock itself.
2) mlocking only parts of the stack is considered by many to be a
   dubious use of mlock, even if it is strictly speaking allowed by
   the relevant specifications.
3) mlock may not provide the semantics required for hypercall-safe
   memory. mlock simply ensures that there can be no major faults
   (page faults requiring I/O to satisfy) but does not necessarily
   rule out minor faults (e.g. due to page migration).
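
To illustrate (1), here is a minimal sketch (hypothetical code, not
anything from libxc):

    #include <sys/mman.h>

    void example(void)
    {
        char a[64], b[64];     /* very likely share a stack page */

        mlock(a, sizeof(a));   /* locks the page containing a (and b) */
        mlock(b, sizeof(b));   /* same page; the kernel keeps no count */

        munlock(a, sizeof(a)); /* unlocks the whole page ... */
        /* ... so b is no longer locked, despite the mlock above */
        munlock(b, sizeof(b));
    }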

The following introduces an explicit hypercall-safe memory pool API
which includes support for bouncing user-supplied memory buffers into
suitable memory.
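
As a rough sketch of the intended usage (all names below are
illustrative only, not necessarily the final interface; see the
individual patches):

    /* Hypothetical example: pool allocation plus bouncing. */
    int example(xc_interface *xch, void *user_buf, size_t len)
    {
        /* memory obtained directly from the hypercall-safe pool */
        void *arg = xc_hypercall_buffer_alloc(xch, len);
        if ( arg == NULL )
            return -1;

        /* a user-supplied buffer is instead "bounced": copied into
         * pool memory before the hypercall and back out afterwards */
        memcpy(arg, user_buf, len);
        /* ... issue the hypercall using arg here ... */
        memcpy(user_buf, arg, len);

        xc_hypercall_buffer_free(xch, arg);
        return 0;
    }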

This series addresses (1) and (2) but does not directly address (3),
other than by encapsulating the code which acquires hypercall-safe
memory in one place where it can more easily be fixed.

There is also the slightly separate issue of code which forgets to
lock buffers as necessary, so this series overrides the Xen
guest-handle interfaces in an attempt to improve compile-time
checking of correct use of the memory pool. This scheme works for the
pointers contained within hypercall argument structures but doesn't
catch the actual hypercall arguments themselves. I'm open to
suggestions on how to extend it cleanly to catch those cases.
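
One way the compile-time check can work is sketched below (assumed
names, relying on the raw variant introduced in patch 01):

    /* Pool allocations are handed out wrapped in a distinct type, so
     * assigning a bare pointer to a guest handle no longer compiles;
     * set_xen_guest_handle_raw() remains as the escape hatch. */
    typedef struct xc_hypercall_buffer {
        void *hbuf; /* the underlying hypercall-safe memory */
    } xc_hypercall_buffer_t;

    #define xc_set_xen_guest_handle(hnd, buf) \
        set_xen_guest_handle_raw(hnd, (buf)->hbuf)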

This RFC series only partially converts libxc to the new scheme. It
is intended that the final series end with a patch which effectively
does s/xc_set_xen_guest_handle/set_xen_guest_handle/g in order to
catch future errors (it should also remove the now-redundant
hcall_buf_prep and hcall_buf_release calls and their associated
infrastructure).

The RFC has already grown to many more patches than I originally
intended, so I'd like to solicit some comments on the basic premise,
the usability of the interface, etc., before I dig in and
convert/clean up the rest.

I've tried in this initial pass to keep the locking/bouncing at the
same level of the call stack. There seem to be several opportunities
for pushing this up or down the stack to reduce unnecessary
bouncing. While it would be nice to avoid exposing the explicit
allocation to users of libxc (by using bounce buffers at all public
interfaces), I do not think this will be possible in many cases for
performance reasons. Already there are several users of libxc which
lock their own buffers.


* [PATCH 01 of 24] xen: define raw version of set_xen_guest_handle
  2010-09-06 13:38 [PATCH 00 of 24] [RFC] libxc: hypercall buffers Ian Campbell
@ 2010-09-06 13:38 ` Ian Campbell
  2010-09-06 13:38 ` [PATCH 02 of 24] libxc: flask: use (un)lock pages rather than open coding m(un)lock Ian Campbell
                   ` (24 subsequent siblings)
  25 siblings, 0 replies; 34+ messages in thread
From: Ian Campbell @ 2010-09-06 13:38 UTC (permalink / raw)
  To: xen-devel; +Cc: Ian Campbell

# HG changeset patch
# User Ian Campbell <ian.campbell@citrix.com>
# Date 1283778426 -3600
# Node ID f3b0732b41af4dd3218e59dae6e0a005c323774f
# Parent  8848895e50ded4d37879bf63635f5774d9dc4959
xen: define raw version of set_xen_guest_handle

Allows users to define more complex (e.g. more type-safe) variations
of set_xen_guest_handle if they wish.
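
For instance, a more defensive variation could now be layered on top
(hypothetical, not part of this patch):

    /* reject a NULL pointer at set time, then defer to the raw setter */
    #define set_xen_guest_handle_nonnull(hnd, val)   \
        do {                                          \
            assert((val) != NULL);                    \
            set_xen_guest_handle_raw(hnd, (val));     \
        } while (0)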

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>

diff -r 8848895e50de -r f3b0732b41af xen/include/public/arch-ia64.h
--- a/xen/include/public/arch-ia64.h	Mon Sep 06 14:07:06 2010 +0100
+++ b/xen/include/public/arch-ia64.h	Mon Sep 06 14:07:06 2010 +0100
@@ -49,10 +49,11 @@
 #define XEN_GUEST_HANDLE(name)          __guest_handle_ ## name
 #define XEN_GUEST_HANDLE_64(name)       XEN_GUEST_HANDLE(name)
 #define uint64_aligned_t                uint64_t
-#define set_xen_guest_handle(hnd, val)  do { (hnd).p = val; } while (0)
+#define set_xen_guest_handle_raw(hnd, val)  do { (hnd).p = val; } while (0)
 #ifdef __XEN_TOOLS__
 #define get_xen_guest_handle(val, hnd)  do { val = (hnd).p; } while (0)
 #endif
+#define set_xen_guest_handle(hnd, val) set_xen_guest_handle_raw(hnd, val)
 
 #ifndef __ASSEMBLY__
 typedef unsigned long xen_pfn_t;
diff -r 8848895e50de -r f3b0732b41af xen/include/public/arch-x86/xen-x86_32.h
--- a/xen/include/public/arch-x86/xen-x86_32.h	Mon Sep 06 14:07:06 2010 +0100
+++ b/xen/include/public/arch-x86/xen-x86_32.h	Mon Sep 06 14:07:06 2010 +0100
@@ -108,8 +108,8 @@
         __guest_handle_ ## name;                                \
     typedef struct { union { type *p; uint64_aligned_t q; }; }  \
         __guest_handle_64_ ## name
-#undef set_xen_guest_handle
-#define set_xen_guest_handle(hnd, val)                      \
+#undef set_xen_guest_handle_raw
+#define set_xen_guest_handle_raw(hnd, val)                  \
     do { if ( sizeof(hnd) == 8 ) *(uint64_t *)&(hnd) = 0;   \
          (hnd).p = val;                                     \
     } while ( 0 )
diff -r 8848895e50de -r f3b0732b41af xen/include/public/arch-x86/xen.h
--- a/xen/include/public/arch-x86/xen.h	Mon Sep 06 14:07:06 2010 +0100
+++ b/xen/include/public/arch-x86/xen.h	Mon Sep 06 14:07:06 2010 +0100
@@ -44,10 +44,11 @@
 #define DEFINE_XEN_GUEST_HANDLE(name)   __DEFINE_XEN_GUEST_HANDLE(name, name)
 #define __XEN_GUEST_HANDLE(name)        __guest_handle_ ## name
 #define XEN_GUEST_HANDLE(name)          __XEN_GUEST_HANDLE(name)
-#define set_xen_guest_handle(hnd, val)  do { (hnd).p = val; } while (0)
+#define set_xen_guest_handle_raw(hnd, val)  do { (hnd).p = val; } while (0)
 #ifdef __XEN_TOOLS__
 #define get_xen_guest_handle(val, hnd)  do { val = (hnd).p; } while (0)
 #endif
+#define set_xen_guest_handle(hnd, val) set_xen_guest_handle_raw(hnd, val)
 
 #if defined(__i386__)
 #include "xen-x86_32.h"


* [PATCH 02 of 24] libxc: flask: use (un)lock pages rather than open coding m(un)lock
  2010-09-06 13:38 [PATCH 00 of 24] [RFC] libxc: hypercall buffers Ian Campbell
  2010-09-06 13:38 ` [PATCH 01 of 24] xen: define raw version of set_xen_guest_handle Ian Campbell
@ 2010-09-06 13:38 ` Ian Campbell
  2010-09-06 13:38 ` [PATCH 03 of 24] libxc: pass an xc_interface handle to page locking functions Ian Campbell
                   ` (23 subsequent siblings)
  25 siblings, 0 replies; 34+ messages in thread
From: Ian Campbell @ 2010-09-06 13:38 UTC (permalink / raw)
  To: xen-devel; +Cc: Ian Campbell

# HG changeset patch
# User Ian Campbell <ian.campbell@citrix.com>
# Date 1283779690 -3600
# Node ID 89333e9d1d90fba9e5a493ae4f541a956a04e3c0
# Parent  f3b0732b41af4dd3218e59dae6e0a005c323774f
libxc: flask: use (un)lock pages rather than open coding m(un)lock.

Allows us to do away with safe_munlock and merge it into unlock_pages.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>

diff -r f3b0732b41af -r 89333e9d1d90 tools/libxc/xc_flask.c
--- a/tools/libxc/xc_flask.c	Mon Sep 06 14:07:06 2010 +0100
+++ b/tools/libxc/xc_flask.c	Mon Sep 06 14:28:10 2010 +0100
@@ -44,7 +44,7 @@ int xc_flask_op(xc_interface *xch, flask
     hypercall.op     = __HYPERVISOR_xsm_op;
     hypercall.arg[0] = (unsigned long)op;
 
-    if ( mlock(op, sizeof(*op)) != 0 )
+    if ( lock_pages(op, sizeof(*op)) != 0 )
     {
         PERROR("Could not lock memory for Xen hypercall");
         goto out;
@@ -56,7 +56,7 @@ int xc_flask_op(xc_interface *xch, flask
             fprintf(stderr, "XSM operation failed!\n");
     }
 
-    safe_munlock(op, sizeof(*op));
+    unlock_pages(op, sizeof(*op));
 
  out:
     return ret;
diff -r f3b0732b41af -r 89333e9d1d90 tools/libxc/xc_private.c
--- a/tools/libxc/xc_private.c	Mon Sep 06 14:07:06 2010 +0100
+++ b/tools/libxc/xc_private.c	Mon Sep 06 14:28:10 2010 +0100
@@ -218,7 +218,9 @@ void unlock_pages(void *addr, size_t len
     void *laddr = (void *)((unsigned long)addr & PAGE_MASK);
     size_t llen = (len + ((unsigned long)addr - (unsigned long)laddr) +
                    PAGE_SIZE - 1) & PAGE_MASK;
-    safe_munlock(laddr, llen);
+    int saved_errno = errno;
+    (void)munlock(laddr, llen);
+    errno = saved_errno;
 }
 
 static pthread_key_t hcall_buf_pkey;
diff -r f3b0732b41af -r 89333e9d1d90 tools/libxc/xc_private.h
--- a/tools/libxc/xc_private.h	Mon Sep 06 14:07:06 2010 +0100
+++ b/tools/libxc/xc_private.h	Mon Sep 06 14:28:10 2010 +0100
@@ -105,13 +105,6 @@ void unlock_pages(void *addr, size_t len
 
 int hcall_buf_prep(void **addr, size_t len);
 void hcall_buf_release(void **addr, size_t len);
-
-static inline void safe_munlock(const void *addr, size_t len)
-{
-    int saved_errno = errno;
-    (void)munlock(addr, len);
-    errno = saved_errno;
-}
 
 int do_xen_hypercall(xc_interface *xch, privcmd_hypercall_t *hypercall);


* [PATCH 03 of 24] libxc: pass an xc_interface handle to page locking functions
  2010-09-06 13:38 [PATCH 00 of 24] [RFC] libxc: hypercall buffers Ian Campbell
  2010-09-06 13:38 ` [PATCH 01 of 24] xen: define raw version of set_xen_guest_handle Ian Campbell
  2010-09-06 13:38 ` [PATCH 02 of 24] libxc: flask: use (un)lock pages rather than open coding m(un)lock Ian Campbell
@ 2010-09-06 13:38 ` Ian Campbell
  2010-09-06 13:38 ` [PATCH 04 of 24] libxc: Remove unnecessary double indirection from xc_readconsolering Ian Campbell
                   ` (22 subsequent siblings)
  25 siblings, 0 replies; 34+ messages in thread
From: Ian Campbell @ 2010-09-06 13:38 UTC (permalink / raw)
  To: xen-devel; +Cc: Ian Campbell

# HG changeset patch
# User Ian Campbell <ian.campbell@citrix.com>
# Date 1283779691 -3600
# Node ID 7e23b1acc3f23c9f06c88b6f4480a614c49c9a96
# Parent  89333e9d1d90fba9e5a493ae4f541a956a04e3c0
libxc: pass an xc_interface handle to page locking functions

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>

diff -r 89333e9d1d90 -r 7e23b1acc3f2 tools/libxc/xc_acm.c
--- a/tools/libxc/xc_acm.c	Mon Sep 06 14:28:10 2010 +0100
+++ b/tools/libxc/xc_acm.c	Mon Sep 06 14:28:11 2010 +0100
@@ -92,7 +92,7 @@ int xc_acm_op(xc_interface *xch, int cmd
 
     hypercall.op = __HYPERVISOR_xsm_op;
     hypercall.arg[0] = (unsigned long)&acmctl;
-    if ( lock_pages(&acmctl, sizeof(acmctl)) != 0)
+    if ( lock_pages(xch, &acmctl, sizeof(acmctl)) != 0)
     {
         PERROR("Could not lock memory for Xen hypercall");
         return -EFAULT;
@@ -103,7 +103,7 @@ int xc_acm_op(xc_interface *xch, int cmd
             DPRINTF("acmctl operation failed -- need to"
                     " rebuild the user-space tool set?\n");
     }
-    unlock_pages(&acmctl, sizeof(acmctl));
+    unlock_pages(xch, &acmctl, sizeof(acmctl));
 
     switch (cmd) {
         case ACMOP_getdecision: {
diff -r 89333e9d1d90 -r 7e23b1acc3f2 tools/libxc/xc_cpupool.c
--- a/tools/libxc/xc_cpupool.c	Mon Sep 06 14:28:10 2010 +0100
+++ b/tools/libxc/xc_cpupool.c	Mon Sep 06 14:28:11 2010 +0100
@@ -85,13 +85,13 @@ int xc_cpupool_getinfo(xc_interface *xch
         set_xen_guest_handle(sysctl.u.cpupool_op.cpumap.bitmap, local);
         sysctl.u.cpupool_op.cpumap.nr_cpus = sizeof(info->cpumap) * 8;
 
-        if ( (err = lock_pages(local, sizeof(local))) != 0 )
+        if ( (err = lock_pages(xch, local, sizeof(local))) != 0 )
         {
             PERROR("Could not lock memory for Xen hypercall");
             break;
         }
         err = do_sysctl_save(xch, &sysctl);
-        unlock_pages(local, sizeof (local));
+        unlock_pages(xch, local, sizeof (local));
 
         if ( err < 0 )
             break;
@@ -161,14 +161,14 @@ int xc_cpupool_freeinfo(xc_interface *xc
     set_xen_guest_handle(sysctl.u.cpupool_op.cpumap.bitmap, local);
     sysctl.u.cpupool_op.cpumap.nr_cpus = sizeof(*cpumap) * 8;
 
-    if ( (err = lock_pages(local, sizeof(local))) != 0 )
+    if ( (err = lock_pages(xch, local, sizeof(local))) != 0 )
     {
         PERROR("Could not lock memory for Xen hypercall");
         return err;
     }
 
     err = do_sysctl_save(xch, &sysctl);
-    unlock_pages(local, sizeof (local));
+    unlock_pages(xch, local, sizeof (local));
 
     if (err < 0)
         return err;
diff -r 89333e9d1d90 -r 7e23b1acc3f2 tools/libxc/xc_domain.c
--- a/tools/libxc/xc_domain.c	Mon Sep 06 14:28:10 2010 +0100
+++ b/tools/libxc/xc_domain.c	Mon Sep 06 14:28:11 2010 +0100
@@ -94,7 +94,7 @@ int xc_domain_shutdown(xc_interface *xch
     arg.domain_id = domid;
     arg.reason = reason;
 
-    if ( lock_pages(&arg, sizeof(arg)) != 0 )
+    if ( lock_pages(xch, &arg, sizeof(arg)) != 0 )
     {
         PERROR("Could not lock memory for Xen hypercall");
         goto out1;
@@ -102,7 +102,7 @@ int xc_domain_shutdown(xc_interface *xch
 
     ret = do_xen_hypercall(xch, &hypercall);
 
-    unlock_pages(&arg, sizeof(arg));
+    unlock_pages(xch, &arg, sizeof(arg));
 
  out1:
     return ret;
@@ -133,7 +133,7 @@ int xc_vcpu_setaffinity(xc_interface *xc
 
     domctl.u.vcpuaffinity.cpumap.nr_cpus = cpusize * 8;
     
-    if ( lock_pages(local, cpusize) != 0 )
+    if ( lock_pages(xch, local, cpusize) != 0 )
     {
         PERROR("Could not lock memory for Xen hypercall");
         goto out;
@@ -141,7 +141,7 @@ int xc_vcpu_setaffinity(xc_interface *xc
 
     ret = do_domctl(xch, &domctl);
 
-    unlock_pages(local, cpusize);
+    unlock_pages(xch, local, cpusize);
 
  out:
     free(local);
@@ -172,7 +172,7 @@ int xc_vcpu_getaffinity(xc_interface *xc
     set_xen_guest_handle(domctl.u.vcpuaffinity.cpumap.bitmap, local);
     domctl.u.vcpuaffinity.cpumap.nr_cpus = cpusize * 8;
     
-    if ( lock_pages(local, sizeof(local)) != 0 )
+    if ( lock_pages(xch, local, sizeof(local)) != 0 )
     {
         PERROR("Could not lock memory for Xen hypercall");
         goto out;
@@ -180,7 +180,7 @@ int xc_vcpu_getaffinity(xc_interface *xc
 
     ret = do_domctl(xch, &domctl);
 
-    unlock_pages(local, sizeof (local));
+    unlock_pages(xch, local, sizeof (local));
     bitmap_byte_to_64(cpumap, local, cpusize * 8);
 out:
     free(local);
@@ -257,7 +257,7 @@ int xc_domain_getinfolist(xc_interface *
     int ret = 0;
     DECLARE_SYSCTL;
 
-    if ( lock_pages(info, max_domains*sizeof(xc_domaininfo_t)) != 0 )
+    if ( lock_pages(xch, info, max_domains*sizeof(xc_domaininfo_t)) != 0 )
         return -1;
 
     sysctl.cmd = XEN_SYSCTL_getdomaininfolist;
@@ -270,7 +270,7 @@ int xc_domain_getinfolist(xc_interface *
     else
         ret = sysctl.u.getdomaininfolist.num_domains;
 
-    unlock_pages(info, max_domains*sizeof(xc_domaininfo_t));
+    unlock_pages(xch, info, max_domains*sizeof(xc_domaininfo_t));
 
     return ret;
 }
@@ -290,13 +290,13 @@ int xc_domain_hvm_getcontext(xc_interfac
     set_xen_guest_handle(domctl.u.hvmcontext.buffer, ctxt_buf);
 
     if ( ctxt_buf ) 
-        if ( (ret = lock_pages(ctxt_buf, size)) != 0 )
+        if ( (ret = lock_pages(xch, ctxt_buf, size)) != 0 )
             return ret;
 
     ret = do_domctl(xch, &domctl);
 
     if ( ctxt_buf ) 
-        unlock_pages(ctxt_buf, size);
+        unlock_pages(xch, ctxt_buf, size);
 
     return (ret < 0 ? -1 : domctl.u.hvmcontext.size);
 }
@@ -322,13 +322,13 @@ int xc_domain_hvm_getcontext_partial(xc_
     domctl.u.hvmcontext_partial.instance = instance;
     set_xen_guest_handle(domctl.u.hvmcontext_partial.buffer, ctxt_buf);
 
-    if ( (ret = lock_pages(ctxt_buf, size)) != 0 )
+    if ( (ret = lock_pages(xch, ctxt_buf, size)) != 0 )
         return ret;
     
     ret = do_domctl(xch, &domctl);
 
     if ( ctxt_buf ) 
-        unlock_pages(ctxt_buf, size);
+        unlock_pages(xch, ctxt_buf, size);
 
     return ret ? -1 : 0;
 }
@@ -347,12 +347,12 @@ int xc_domain_hvm_setcontext(xc_interfac
     domctl.u.hvmcontext.size = size;
     set_xen_guest_handle(domctl.u.hvmcontext.buffer, ctxt_buf);
 
-    if ( (ret = lock_pages(ctxt_buf, size)) != 0 )
+    if ( (ret = lock_pages(xch, ctxt_buf, size)) != 0 )
         return ret;
 
     ret = do_domctl(xch, &domctl);
 
-    unlock_pages(ctxt_buf, size);
+    unlock_pages(xch, ctxt_buf, size);
 
     return ret;
 }
@@ -372,10 +372,10 @@ int xc_vcpu_getcontext(xc_interface *xch
     set_xen_guest_handle(domctl.u.vcpucontext.ctxt, &ctxt->c);
 
     
-    if ( (rc = lock_pages(ctxt, sz)) != 0 )
+    if ( (rc = lock_pages(xch, ctxt, sz)) != 0 )
         return rc;
     rc = do_domctl(xch, &domctl);
-    unlock_pages(ctxt, sz);
+    unlock_pages(xch, ctxt, sz);
 
     return rc;
 }
@@ -394,7 +394,7 @@ int xc_watchdog(xc_interface *xch,
     arg.id = id;
     arg.timeout = timeout;
 
-    if ( lock_pages(&arg, sizeof(arg)) != 0 )
+    if ( lock_pages(xch, &arg, sizeof(arg)) != 0 )
     {
         PERROR("Could not lock memory for Xen hypercall");
         goto out1;
@@ -402,7 +402,7 @@ int xc_watchdog(xc_interface *xch,
 
     ret = do_xen_hypercall(xch, &hypercall);
 
-    unlock_pages(&arg, sizeof(arg));
+    unlock_pages(xch, &arg, sizeof(arg));
 
  out1:
     return ret;
@@ -488,7 +488,7 @@ int xc_domain_set_memmap_limit(xc_interf
 
     set_xen_guest_handle(fmap.map.buffer, &e820);
 
-    if ( lock_pages(&fmap, sizeof(fmap)) || lock_pages(&e820, sizeof(e820)) )
+    if ( lock_pages(xch, &fmap, sizeof(fmap)) || lock_pages(xch, &e820, sizeof(e820)) )
     {
         PERROR("Could not lock memory for Xen hypercall");
         rc = -1;
@@ -498,8 +498,8 @@ int xc_domain_set_memmap_limit(xc_interf
     rc = xc_memory_op(xch, XENMEM_set_memory_map, &fmap);
 
  out:
-    unlock_pages(&fmap, sizeof(fmap));
-    unlock_pages(&e820, sizeof(e820));
+    unlock_pages(xch, &fmap, sizeof(fmap));
+    unlock_pages(xch, &e820, sizeof(e820));
     return rc;
 }
 #else
@@ -564,7 +564,7 @@ int xc_domain_get_tsc_info(xc_interface 
     domctl.cmd = XEN_DOMCTL_gettscinfo;
     domctl.domain = (domid_t)domid;
     set_xen_guest_handle(domctl.u.tsc_info.out_info, &info);
-    if ( (rc = lock_pages(&info, sizeof(info))) != 0 )
+    if ( (rc = lock_pages(xch, &info, sizeof(info))) != 0 )
         return rc;
     rc = do_domctl(xch, &domctl);
     if ( rc == 0 )
@@ -574,7 +574,7 @@ int xc_domain_get_tsc_info(xc_interface 
         *gtsc_khz = info.gtsc_khz;
         *incarnation = info.incarnation;
     }
-    unlock_pages(&info,sizeof(info));
+    unlock_pages(xch, &info,sizeof(info));
     return rc;
 }
 
@@ -849,11 +849,11 @@ int xc_vcpu_setcontext(xc_interface *xch
     domctl.u.vcpucontext.vcpu = vcpu;
     set_xen_guest_handle(domctl.u.vcpucontext.ctxt, &ctxt->c);
 
-    if ( (rc = lock_pages(ctxt, sz)) != 0 )
+    if ( (rc = lock_pages(xch, ctxt, sz)) != 0 )
         return rc;
     rc = do_domctl(xch, &domctl);
     
-    unlock_pages(ctxt, sz);
+    unlock_pages(xch, ctxt, sz);
 
     return rc;
 }
@@ -917,10 +917,10 @@ int xc_set_hvm_param(xc_interface *handl
     arg.domid = dom;
     arg.index = param;
     arg.value = value;
-    if ( lock_pages(&arg, sizeof(arg)) != 0 )
+    if ( lock_pages(handle, &arg, sizeof(arg)) != 0 )
         return -1;
     rc = do_xen_hypercall(handle, &hypercall);
-    unlock_pages(&arg, sizeof(arg));
+    unlock_pages(handle, &arg, sizeof(arg));
     return rc;
 }
 
@@ -935,10 +935,10 @@ int xc_get_hvm_param(xc_interface *handl
     hypercall.arg[1] = (unsigned long)&arg;
     arg.domid = dom;
     arg.index = param;
-    if ( lock_pages(&arg, sizeof(arg)) != 0 )
+    if ( lock_pages(handle, &arg, sizeof(arg)) != 0 )
         return -1;
     rc = do_xen_hypercall(handle, &hypercall);
-    unlock_pages(&arg, sizeof(arg));
+    unlock_pages(handle, &arg, sizeof(arg));
     *value = arg.value;
     return rc;
 }
@@ -988,13 +988,13 @@ int xc_get_device_group(
 
     set_xen_guest_handle(domctl.u.get_device_group.sdev_array, sdev_array);
 
-    if ( lock_pages(sdev_array, max_sdevs * sizeof(*sdev_array)) != 0 )
+    if ( lock_pages(xch, sdev_array, max_sdevs * sizeof(*sdev_array)) != 0 )
     {
         PERROR("Could not lock memory for xc_get_device_group");
         return -ENOMEM;
     }
     rc = do_domctl(xch, &domctl);
-    unlock_pages(sdev_array, max_sdevs * sizeof(*sdev_array));
+    unlock_pages(xch, sdev_array, max_sdevs * sizeof(*sdev_array));
 
     *num_sdevs = domctl.u.get_device_group.num_sdevs;
     return rc;
diff -r 89333e9d1d90 -r 7e23b1acc3f2 tools/libxc/xc_domain_restore.c
--- a/tools/libxc/xc_domain_restore.c	Mon Sep 06 14:28:10 2010 +0100
+++ b/tools/libxc/xc_domain_restore.c	Mon Sep 06 14:28:11 2010 +0100
@@ -1181,13 +1181,13 @@ int xc_domain_restore(xc_interface *xch,
     memset(ctx->p2m_batch, 0,
            ROUNDUP(MAX_BATCH_SIZE * sizeof(xen_pfn_t), PAGE_SHIFT)); 
 
-    if ( lock_pages(region_mfn, sizeof(xen_pfn_t) * MAX_BATCH_SIZE) )
+    if ( lock_pages(xch, region_mfn, sizeof(xen_pfn_t) * MAX_BATCH_SIZE) )
     {
         PERROR("Could not lock region_mfn");
         goto out;
     }
 
-    if ( lock_pages(ctx->p2m_batch, sizeof(xen_pfn_t) * MAX_BATCH_SIZE) )
+    if ( lock_pages(xch, ctx->p2m_batch, sizeof(xen_pfn_t) * MAX_BATCH_SIZE) )
     {
         ERROR("Could not lock p2m_batch");
         goto out;
@@ -1547,7 +1547,7 @@ int xc_domain_restore(xc_interface *xch,
         }
     }
 
-    if ( lock_pages(&ctxt, sizeof(ctxt)) )
+    if ( lock_pages(xch, &ctxt, sizeof(ctxt)) )
     {
         PERROR("Unable to lock ctxt");
         return 1;
diff -r 89333e9d1d90 -r 7e23b1acc3f2 tools/libxc/xc_domain_save.c
--- a/tools/libxc/xc_domain_save.c	Mon Sep 06 14:28:10 2010 +0100
+++ b/tools/libxc/xc_domain_save.c	Mon Sep 06 14:28:11 2010 +0100
@@ -1046,14 +1046,14 @@ int xc_domain_save(xc_interface *xch, in
 
     memset(to_send, 0xff, BITMAP_SIZE);
 
-    if ( lock_pages(to_send, BITMAP_SIZE) )
+    if ( lock_pages(xch, to_send, BITMAP_SIZE) )
     {
         PERROR("Unable to lock to_send");
         return 1;
     }
 
     /* (to fix is local only) */
-    if ( lock_pages(to_skip, BITMAP_SIZE) )
+    if ( lock_pages(xch, to_skip, BITMAP_SIZE) )
     {
         PERROR("Unable to lock to_skip");
         return 1;
@@ -1091,7 +1091,7 @@ int xc_domain_save(xc_interface *xch, in
     memset(pfn_type, 0,
            ROUNDUP(MAX_BATCH_SIZE * sizeof(*pfn_type), PAGE_SHIFT));
 
-    if ( lock_pages(pfn_type, MAX_BATCH_SIZE * sizeof(*pfn_type)) )
+    if ( lock_pages(xch, pfn_type, MAX_BATCH_SIZE * sizeof(*pfn_type)) )
     {
         PERROR("Unable to lock pfn_type array");
         goto out;
diff -r 89333e9d1d90 -r 7e23b1acc3f2 tools/libxc/xc_evtchn.c
--- a/tools/libxc/xc_evtchn.c	Mon Sep 06 14:28:10 2010 +0100
+++ b/tools/libxc/xc_evtchn.c	Mon Sep 06 14:28:11 2010 +0100
@@ -33,7 +33,7 @@ static int do_evtchn_op(xc_interface *xc
     hypercall.arg[0] = cmd;
     hypercall.arg[1] = (unsigned long)arg;
 
-    if ( lock_pages(arg, arg_size) != 0 )
+    if ( lock_pages(xch, arg, arg_size) != 0 )
     {
         PERROR("do_evtchn_op: arg lock failed");
         goto out;
@@ -42,7 +42,7 @@ static int do_evtchn_op(xc_interface *xc
     if ((ret = do_xen_hypercall(xch, &hypercall)) < 0 && !silently_fail)
         ERROR("do_evtchn_op: HYPERVISOR_event_channel_op failed: %d", ret);
 
-    unlock_pages(arg, arg_size);
+    unlock_pages(xch, arg, arg_size);
  out:
     return ret;
 }
diff -r 89333e9d1d90 -r 7e23b1acc3f2 tools/libxc/xc_flask.c
--- a/tools/libxc/xc_flask.c	Mon Sep 06 14:28:10 2010 +0100
+++ b/tools/libxc/xc_flask.c	Mon Sep 06 14:28:11 2010 +0100
@@ -44,7 +44,7 @@ int xc_flask_op(xc_interface *xch, flask
     hypercall.op     = __HYPERVISOR_xsm_op;
     hypercall.arg[0] = (unsigned long)op;
 
-    if ( lock_pages(op, sizeof(*op)) != 0 )
+    if ( lock_pages(xch, op, sizeof(*op)) != 0 )
     {
         PERROR("Could not lock memory for Xen hypercall");
         goto out;
@@ -56,7 +56,7 @@ int xc_flask_op(xc_interface *xch, flask
             fprintf(stderr, "XSM operation failed!\n");
     }
 
-    unlock_pages(op, sizeof(*op));
+    unlock_pages(xch, op, sizeof(*op));
 
  out:
     return ret;
diff -r 89333e9d1d90 -r 7e23b1acc3f2 tools/libxc/xc_linux.c
--- a/tools/libxc/xc_linux.c	Mon Sep 06 14:28:10 2010 +0100
+++ b/tools/libxc/xc_linux.c	Mon Sep 06 14:28:11 2010 +0100
@@ -618,7 +618,7 @@ int xc_gnttab_op(xc_interface *xch, int 
     hypercall.arg[1] = (unsigned long)op;
     hypercall.arg[2] = count;
 
-    if ( lock_pages(op, count* op_size) != 0 )
+    if ( lock_pages(xch, op, count* op_size) != 0 )
     {
         PERROR("Could not lock memory for Xen hypercall");
         goto out1;
@@ -626,7 +626,7 @@ int xc_gnttab_op(xc_interface *xch, int 
 
     ret = do_xen_hypercall(xch, &hypercall);
 
-    unlock_pages(op, count * op_size);
+    unlock_pages(xch, op, count * op_size);
 
  out1:
     return ret;
@@ -670,7 +670,7 @@ static void *_gnttab_map_table(xc_interf
     *gnt_num = query.nr_frames * (PAGE_SIZE / sizeof(grant_entry_v1_t) );
 
     frame_list = malloc(query.nr_frames * sizeof(unsigned long));
-    if ( !frame_list || lock_pages(frame_list,
+    if ( !frame_list || lock_pages(xch, frame_list,
                                    query.nr_frames * sizeof(unsigned long)) )
     {
         ERROR("Alloc/lock frame_list in xc_gnttab_map_table\n");
@@ -714,7 +714,7 @@ err:
 err:
     if ( frame_list )
     {
-        unlock_pages(frame_list, query.nr_frames * sizeof(unsigned long));
+        unlock_pages(xch, frame_list, query.nr_frames * sizeof(unsigned long));
         free(frame_list);
     }
     if ( pfn_list )
diff -r 89333e9d1d90 -r 7e23b1acc3f2 tools/libxc/xc_misc.c
--- a/tools/libxc/xc_misc.c	Mon Sep 06 14:28:10 2010 +0100
+++ b/tools/libxc/xc_misc.c	Mon Sep 06 14:28:11 2010 +0100
@@ -42,7 +42,7 @@ int xc_readconsolering(xc_interface *xch
         sysctl.u.readconsole.incremental = incremental;
     }
 
-    if ( (ret = lock_pages(buffer, nr_chars)) != 0 )
+    if ( (ret = lock_pages(xch, buffer, nr_chars)) != 0 )
         return ret;
 
     if ( (ret = do_sysctl(xch, &sysctl)) == 0 )
@@ -52,7 +52,7 @@ int xc_readconsolering(xc_interface *xch
             *pindex = sysctl.u.readconsole.index;
     }
 
-    unlock_pages(buffer, nr_chars);
+    unlock_pages(xch, buffer, nr_chars);
 
     return ret;
 }
@@ -66,12 +66,12 @@ int xc_send_debug_keys(xc_interface *xch
     set_xen_guest_handle(sysctl.u.debug_keys.keys, keys);
     sysctl.u.debug_keys.nr_keys = len;
 
-    if ( (ret = lock_pages(keys, len)) != 0 )
+    if ( (ret = lock_pages(xch, keys, len)) != 0 )
         return ret;
 
     ret = do_sysctl(xch, &sysctl);
 
-    unlock_pages(keys, len);
+    unlock_pages(xch, keys, len);
 
     return ret;
 }
@@ -154,7 +154,7 @@ int xc_mca_op(xc_interface *xch, struct 
     DECLARE_HYPERCALL;
 
     mc->interface_version = XEN_MCA_INTERFACE_VERSION;
-    if ( lock_pages(mc, sizeof(mc)) )
+    if ( lock_pages(xch, mc, sizeof(mc)) )
     {
         PERROR("Could not lock xen_mc memory");
         return -EINVAL;
@@ -163,7 +163,7 @@ int xc_mca_op(xc_interface *xch, struct 
     hypercall.op = __HYPERVISOR_mca;
     hypercall.arg[0] = (unsigned long)mc;
     ret = do_xen_hypercall(xch, &hypercall);
-    unlock_pages(mc, sizeof(mc));
+    unlock_pages(xch, mc, sizeof(mc));
     return ret;
 }
 #endif
@@ -227,12 +227,12 @@ int xc_getcpuinfo(xc_interface *xch, int
     sysctl.u.getcpuinfo.max_cpus = max_cpus; 
     set_xen_guest_handle(sysctl.u.getcpuinfo.info, info); 
 
-    if ( (rc = lock_pages(info, max_cpus*sizeof(*info))) != 0 )
+    if ( (rc = lock_pages(xch, info, max_cpus*sizeof(*info))) != 0 )
         return rc;
 
     rc = do_sysctl(xch, &sysctl);
 
-    unlock_pages(info, max_cpus*sizeof(*info));
+    unlock_pages(xch, info, max_cpus*sizeof(*info));
 
     if ( nr_cpus )
         *nr_cpus = sysctl.u.getcpuinfo.nr_cpus; 
@@ -250,7 +250,7 @@ int xc_hvm_set_pci_intx_level(
     struct xen_hvm_set_pci_intx_level _arg, *arg = &_arg;
     int rc;
 
-    if ( (rc = hcall_buf_prep((void **)&arg, sizeof(*arg))) != 0 )
+    if ( (rc = hcall_buf_prep(xch, (void **)&arg, sizeof(*arg))) != 0 )
     {
         PERROR("Could not lock memory");
         return rc;
@@ -269,7 +269,7 @@ int xc_hvm_set_pci_intx_level(
 
     rc = do_xen_hypercall(xch, &hypercall);
 
-    hcall_buf_release((void **)&arg, sizeof(*arg));
+    hcall_buf_release(xch, (void **)&arg, sizeof(*arg));
 
     return rc;
 }
@@ -283,7 +283,7 @@ int xc_hvm_set_isa_irq_level(
     struct xen_hvm_set_isa_irq_level _arg, *arg = &_arg;
     int rc;
 
-    if ( (rc = hcall_buf_prep((void **)&arg, sizeof(*arg))) != 0 )
+    if ( (rc = hcall_buf_prep(xch, (void **)&arg, sizeof(*arg))) != 0 )
     {
         PERROR("Could not lock memory");
         return rc;
@@ -299,7 +299,7 @@ int xc_hvm_set_isa_irq_level(
 
     rc = do_xen_hypercall(xch, &hypercall);
 
-    hcall_buf_release((void **)&arg, sizeof(*arg));
+    hcall_buf_release(xch, (void **)&arg, sizeof(*arg));
 
     return rc;
 }
@@ -319,7 +319,7 @@ int xc_hvm_set_pci_link_route(
     arg.link    = link;
     arg.isa_irq = isa_irq;
 
-    if ( (rc = lock_pages(&arg, sizeof(arg))) != 0 )
+    if ( (rc = lock_pages(xch, &arg, sizeof(arg))) != 0 )
     {
         PERROR("Could not lock memory");
         return rc;
@@ -327,7 +327,7 @@ int xc_hvm_set_pci_link_route(
 
     rc = do_xen_hypercall(xch, &hypercall);
 
-    unlock_pages(&arg, sizeof(arg));
+    unlock_pages(xch, &arg, sizeof(arg));
 
     return rc;
 }
@@ -350,7 +350,7 @@ int xc_hvm_track_dirty_vram(
     arg.nr        = nr;
     set_xen_guest_handle(arg.dirty_bitmap, (uint8_t *)dirty_bitmap);
 
-    if ( (rc = lock_pages(&arg, sizeof(arg))) != 0 )
+    if ( (rc = lock_pages(xch, &arg, sizeof(arg))) != 0 )
     {
         PERROR("Could not lock memory");
         return rc;
@@ -358,7 +358,7 @@ int xc_hvm_track_dirty_vram(
 
     rc = do_xen_hypercall(xch, &hypercall);
 
-    unlock_pages(&arg, sizeof(arg));
+    unlock_pages(xch, &arg, sizeof(arg));
 
     return rc;
 }
@@ -378,7 +378,7 @@ int xc_hvm_modified_memory(
     arg.first_pfn = first_pfn;
     arg.nr        = nr;
 
-    if ( (rc = lock_pages(&arg, sizeof(arg))) != 0 )
+    if ( (rc = lock_pages(xch, &arg, sizeof(arg))) != 0 )
     {
         PERROR("Could not lock memory");
         return rc;
@@ -386,7 +386,7 @@ int xc_hvm_modified_memory(
 
     rc = do_xen_hypercall(xch, &hypercall);
 
-    unlock_pages(&arg, sizeof(arg));
+    unlock_pages(xch, &arg, sizeof(arg));
 
     return rc;
 }
@@ -407,7 +407,7 @@ int xc_hvm_set_mem_type(
     arg.first_pfn    = first_pfn;
     arg.nr           = nr;
 
-    if ( (rc = lock_pages(&arg, sizeof(arg))) != 0 )
+    if ( (rc = lock_pages(xch, &arg, sizeof(arg))) != 0 )
     {
         PERROR("Could not lock memory");
         return rc;
@@ -415,7 +415,7 @@ int xc_hvm_set_mem_type(
 
     rc = do_xen_hypercall(xch, &hypercall);
 
-    unlock_pages(&arg, sizeof(arg));
+    unlock_pages(xch, &arg, sizeof(arg));
 
     return rc;
 }
diff -r 89333e9d1d90 -r 7e23b1acc3f2 tools/libxc/xc_offline_page.c
--- a/tools/libxc/xc_offline_page.c	Mon Sep 06 14:28:10 2010 +0100
+++ b/tools/libxc/xc_offline_page.c	Mon Sep 06 14:28:11 2010 +0100
@@ -71,7 +71,7 @@ int xc_mark_page_online(xc_interface *xc
     if ( !status || (end < start) )
         return -EINVAL;
 
-    if (lock_pages(status, sizeof(uint32_t)*(end - start + 1)))
+    if (lock_pages(xch, status, sizeof(uint32_t)*(end - start + 1)))
     {
         ERROR("Could not lock memory for xc_mark_page_online\n");
         return -EINVAL;
@@ -84,7 +84,7 @@ int xc_mark_page_online(xc_interface *xc
     set_xen_guest_handle(sysctl.u.page_offline.status, status);
     ret = xc_sysctl(xch, &sysctl);
 
-    unlock_pages(status, sizeof(uint32_t)*(end - start + 1));
+    unlock_pages(xch, status, sizeof(uint32_t)*(end - start + 1));
 
     return ret;
 }
@@ -98,7 +98,7 @@ int xc_mark_page_offline(xc_interface *x
     if ( !status || (end < start) )
         return -EINVAL;
 
-    if (lock_pages(status, sizeof(uint32_t)*(end - start + 1)))
+    if (lock_pages(xch, status, sizeof(uint32_t)*(end - start + 1)))
     {
         ERROR("Could not lock memory for xc_mark_page_offline");
         return -EINVAL;
@@ -111,7 +111,7 @@ int xc_mark_page_offline(xc_interface *x
     set_xen_guest_handle(sysctl.u.page_offline.status, status);
     ret = xc_sysctl(xch, &sysctl);
 
-    unlock_pages(status, sizeof(uint32_t)*(end - start + 1));
+    unlock_pages(xch, status, sizeof(uint32_t)*(end - start + 1));
 
     return ret;
 }
@@ -125,7 +125,7 @@ int xc_query_page_offline_status(xc_inte
     if ( !status || (end < start) )
         return -EINVAL;
 
-    if (lock_pages(status, sizeof(uint32_t)*(end - start + 1)))
+    if (lock_pages(xch, status, sizeof(uint32_t)*(end - start + 1)))
     {
         ERROR("Could not lock memory for xc_query_page_offline_status\n");
         return -EINVAL;
@@ -138,7 +138,7 @@ int xc_query_page_offline_status(xc_inte
     set_xen_guest_handle(sysctl.u.page_offline.status, status);
     ret = xc_sysctl(xch, &sysctl);
 
-    unlock_pages(status, sizeof(uint32_t)*(end - start + 1));
+    unlock_pages(xch, status, sizeof(uint32_t)*(end - start + 1));
 
     return ret;
 }
@@ -291,7 +291,7 @@ static int init_mem_info(xc_interface *x
         minfo->pfn_type[i] = pfn_to_mfn(i, minfo->p2m_table,
                                         minfo->guest_width);
 
-    if ( lock_pages(minfo->pfn_type, minfo->p2m_size * sizeof(*minfo->pfn_type)) )
+    if ( lock_pages(xch, minfo->pfn_type, minfo->p2m_size * sizeof(*minfo->pfn_type)) )
     {
         ERROR("Unable to lock pfn_type array");
         goto failed;
@@ -310,7 +310,7 @@ static int init_mem_info(xc_interface *x
     return 0;
 
 unlock:
-    unlock_pages(minfo->pfn_type, minfo->p2m_size * sizeof(*minfo->pfn_type));
+    unlock_pages(xch, minfo->pfn_type, minfo->p2m_size * sizeof(*minfo->pfn_type));
 failed:
     if (minfo->pfn_type)
     {
diff -r 89333e9d1d90 -r 7e23b1acc3f2 tools/libxc/xc_pm.c
--- a/tools/libxc/xc_pm.c	Mon Sep 06 14:28:10 2010 +0100
+++ b/tools/libxc/xc_pm.c	Mon Sep 06 14:28:11 2010 +0100
@@ -53,14 +53,14 @@ int xc_pm_get_pxstat(xc_interface *xch, 
     if ( (ret = xc_pm_get_max_px(xch, cpuid, &max_px)) != 0)
         return ret;
 
-    if ( (ret = lock_pages(pxpt->trans_pt, 
+    if ( (ret = lock_pages(xch, pxpt->trans_pt, 
         max_px * max_px * sizeof(uint64_t))) != 0 )
         return ret;
 
-    if ( (ret = lock_pages(pxpt->pt, 
+    if ( (ret = lock_pages(xch, pxpt->pt, 
         max_px * sizeof(struct xc_px_val))) != 0 )
     {
-        unlock_pages(pxpt->trans_pt, max_px * max_px * sizeof(uint64_t));
+        unlock_pages(xch, pxpt->trans_pt, max_px * max_px * sizeof(uint64_t));
         return ret;
     }
 
@@ -75,8 +75,8 @@ int xc_pm_get_pxstat(xc_interface *xch, 
     ret = xc_sysctl(xch, &sysctl);
     if ( ret )
     {
-        unlock_pages(pxpt->trans_pt, max_px * max_px * sizeof(uint64_t));
-        unlock_pages(pxpt->pt, max_px * sizeof(struct xc_px_val));
+        unlock_pages(xch, pxpt->trans_pt, max_px * max_px * sizeof(uint64_t));
+        unlock_pages(xch, pxpt->pt, max_px * sizeof(struct xc_px_val));
         return ret;
     }
 
@@ -85,8 +85,8 @@ int xc_pm_get_pxstat(xc_interface *xch, 
     pxpt->last = sysctl.u.get_pmstat.u.getpx.last;
     pxpt->cur = sysctl.u.get_pmstat.u.getpx.cur;
 
-    unlock_pages(pxpt->trans_pt, max_px * max_px * sizeof(uint64_t));
-    unlock_pages(pxpt->pt, max_px * sizeof(struct xc_px_val));
+    unlock_pages(xch, pxpt->trans_pt, max_px * max_px * sizeof(uint64_t));
+    unlock_pages(xch, pxpt->pt, max_px * sizeof(struct xc_px_val));
 
     return ret;
 }
@@ -128,11 +128,11 @@ int xc_pm_get_cxstat(xc_interface *xch, 
     if ( (ret = xc_pm_get_max_cx(xch, cpuid, &max_cx)) )
         goto unlock_0;
 
-    if ( (ret = lock_pages(cxpt, sizeof(struct xc_cx_stat))) )
+    if ( (ret = lock_pages(xch, cxpt, sizeof(struct xc_cx_stat))) )
         goto unlock_0;
-    if ( (ret = lock_pages(cxpt->triggers, max_cx * sizeof(uint64_t))) )
+    if ( (ret = lock_pages(xch, cxpt->triggers, max_cx * sizeof(uint64_t))) )
         goto unlock_1;
-    if ( (ret = lock_pages(cxpt->residencies, max_cx * sizeof(uint64_t))) )
+    if ( (ret = lock_pages(xch, cxpt->residencies, max_cx * sizeof(uint64_t))) )
         goto unlock_2;
 
     sysctl.cmd = XEN_SYSCTL_get_pmstat;
@@ -155,11 +155,11 @@ int xc_pm_get_cxstat(xc_interface *xch, 
     cxpt->cc6 = sysctl.u.get_pmstat.u.getcx.cc6;
 
 unlock_3:
-    unlock_pages(cxpt->residencies, max_cx * sizeof(uint64_t));
+    unlock_pages(xch, cxpt->residencies, max_cx * sizeof(uint64_t));
 unlock_2:
-    unlock_pages(cxpt->triggers, max_cx * sizeof(uint64_t));
+    unlock_pages(xch, cxpt->triggers, max_cx * sizeof(uint64_t));
 unlock_1:
-    unlock_pages(cxpt, sizeof(struct xc_cx_stat));
+    unlock_pages(xch, cxpt, sizeof(struct xc_cx_stat));
 unlock_0:
     return ret;
 }
@@ -200,13 +200,13 @@ int xc_get_cpufreq_para(xc_interface *xc
              (!user_para->scaling_available_governors) )
             return -EINVAL;
 
-        if ( (ret = lock_pages(user_para->affected_cpus,
+        if ( (ret = lock_pages(xch, user_para->affected_cpus,
                                user_para->cpu_num * sizeof(uint32_t))) )
             goto unlock_1;
-        if ( (ret = lock_pages(user_para->scaling_available_frequencies,
+        if ( (ret = lock_pages(xch, user_para->scaling_available_frequencies,
                                user_para->freq_num * sizeof(uint32_t))) )
             goto unlock_2;
-        if ( (ret = lock_pages(user_para->scaling_available_governors,
+        if ( (ret = lock_pages(xch, user_para->scaling_available_governors,
                  user_para->gov_num * CPUFREQ_NAME_LEN * sizeof(char))) )
             goto unlock_3;
 
@@ -263,13 +263,13 @@ int xc_get_cpufreq_para(xc_interface *xc
     }
 
 unlock_4:
-    unlock_pages(user_para->scaling_available_governors,
+    unlock_pages(xch, user_para->scaling_available_governors,
                  user_para->gov_num * CPUFREQ_NAME_LEN * sizeof(char));
 unlock_3:
-    unlock_pages(user_para->scaling_available_frequencies,
+    unlock_pages(xch, user_para->scaling_available_frequencies,
                  user_para->freq_num * sizeof(uint32_t));
 unlock_2:
-    unlock_pages(user_para->affected_cpus,
+    unlock_pages(xch, user_para->affected_cpus,
                  user_para->cpu_num * sizeof(uint32_t));
 unlock_1:
     return ret;
diff -r 89333e9d1d90 -r 7e23b1acc3f2 tools/libxc/xc_private.c
--- a/tools/libxc/xc_private.c	Mon Sep 06 14:28:10 2010 +0100
+++ b/tools/libxc/xc_private.c	Mon Sep 06 14:28:11 2010 +0100
@@ -71,7 +71,7 @@ xc_interface *xc_interface_open(xentooll
     return 0;
 }
 
-static void xc_clean_hcall_buf(void);
+static void xc_clean_hcall_buf(xc_interface *xch);
 
 int xc_interface_close(xc_interface *xch)
 {
@@ -85,7 +85,7 @@ int xc_interface_close(xc_interface *xch
         if (rc) PERROR("Could not close hypervisor interface");
     }
 
-    xc_clean_hcall_buf();
+    xc_clean_hcall_buf(xch);
 
     free(xch);
     return rc;
@@ -193,17 +193,17 @@ void xc_report_progress_step(xc_interfac
 
 #ifdef __sun__
 
-int lock_pages(void *addr, size_t len) { return 0; }
-void unlock_pages(void *addr, size_t len) { }
+int lock_pages(xc_interface *xch, void *addr, size_t len) { return 0; }
+void unlock_pages(xc_interface *xch, void *addr, size_t len) { }
 
-int hcall_buf_prep(void **addr, size_t len) { return 0; }
-void hcall_buf_release(void **addr, size_t len) { }
+int hcall_buf_prep(xc_interface *xch, void **addr, size_t len) { return 0; }
+void hcall_buf_release(xc_interface *xch, void **addr, size_t len) { }
 
-static void xc_clean_hcall_buf(void) { }
+static void xc_clean_hcall_buf(xc_interface *xch) { }
 
 #else /* !__sun__ */
 
-int lock_pages(void *addr, size_t len)
+int lock_pages(xc_interface *xch, void *addr, size_t len)
 {
       int e;
       void *laddr = (void *)((unsigned long)addr & PAGE_MASK);
@@ -213,7 +213,7 @@ int lock_pages(void *addr, size_t len)
       return e;
 }
 
-void unlock_pages(void *addr, size_t len)
+void unlock_pages(xc_interface *xch, void *addr, size_t len)
 {
     void *laddr = (void *)((unsigned long)addr & PAGE_MASK);
     size_t llen = (len + ((unsigned long)addr - (unsigned long)laddr) +
@@ -226,6 +226,7 @@ static pthread_key_t hcall_buf_pkey;
 static pthread_key_t hcall_buf_pkey;
 static pthread_once_t hcall_buf_pkey_once = PTHREAD_ONCE_INIT;
 struct hcall_buf {
+    xc_interface *xch;
     void *buf;
     void *oldbuf;
 };
@@ -238,7 +239,7 @@ static void _xc_clean_hcall_buf(void *m)
     {
         if ( hcall_buf->buf )
         {
-            unlock_pages(hcall_buf->buf, PAGE_SIZE);
+            unlock_pages(hcall_buf->xch, hcall_buf->buf, PAGE_SIZE);
             free(hcall_buf->buf);
         }
 
@@ -253,14 +254,14 @@ static void _xc_init_hcall_buf(void)
     pthread_key_create(&hcall_buf_pkey, _xc_clean_hcall_buf);
 }
 
-static void xc_clean_hcall_buf(void)
+static void xc_clean_hcall_buf(xc_interface *xch)
 {
     pthread_once(&hcall_buf_pkey_once, _xc_init_hcall_buf);
 
     _xc_clean_hcall_buf(pthread_getspecific(hcall_buf_pkey));
 }
 
-int hcall_buf_prep(void **addr, size_t len)
+int hcall_buf_prep(xc_interface *xch, void **addr, size_t len)
 {
     struct hcall_buf *hcall_buf;
 
@@ -272,13 +273,14 @@ int hcall_buf_prep(void **addr, size_t l
         hcall_buf = calloc(1, sizeof(*hcall_buf));
         if ( !hcall_buf )
             goto out;
+        hcall_buf->xch = xch;
         pthread_setspecific(hcall_buf_pkey, hcall_buf);
     }
 
     if ( !hcall_buf->buf )
     {
         hcall_buf->buf = xc_memalign(PAGE_SIZE, PAGE_SIZE);
-        if ( !hcall_buf->buf || lock_pages(hcall_buf->buf, PAGE_SIZE) )
+        if ( !hcall_buf->buf || lock_pages(xch, hcall_buf->buf, PAGE_SIZE) )
         {
             free(hcall_buf->buf);
             hcall_buf->buf = NULL;
@@ -295,10 +297,10 @@ int hcall_buf_prep(void **addr, size_t l
     }
 
  out:
-    return lock_pages(*addr, len);
+    return lock_pages(xch, *addr, len);
 }
 
-void hcall_buf_release(void **addr, size_t len)
+void hcall_buf_release(xc_interface *xch, void **addr, size_t len)
 {
     struct hcall_buf *hcall_buf = pthread_getspecific(hcall_buf_pkey);
 
@@ -310,7 +312,7 @@ void hcall_buf_release(void **addr, size
     }
     else
     {
-        unlock_pages(*addr, len);
+        unlock_pages(xch, *addr, len);
     }
 }
 
@@ -337,7 +339,7 @@ int xc_mmuext_op(
     DECLARE_HYPERCALL;
     long ret = -EINVAL;
 
-    if ( hcall_buf_prep((void **)&op, nr_ops*sizeof(*op)) != 0 )
+    if ( hcall_buf_prep(xch, (void **)&op, nr_ops*sizeof(*op)) != 0 )
     {
         PERROR("Could not lock memory for Xen hypercall");
         goto out1;
@@ -351,7 +353,7 @@ int xc_mmuext_op(
 
     ret = do_xen_hypercall(xch, &hypercall);
 
-    hcall_buf_release((void **)&op, nr_ops*sizeof(*op));
+    hcall_buf_release(xch, (void **)&op, nr_ops*sizeof(*op));
 
  out1:
     return ret;
@@ -371,7 +373,7 @@ static int flush_mmu_updates(xc_interfac
     hypercall.arg[2] = 0;
     hypercall.arg[3] = mmu->subject;
 
-    if ( lock_pages(mmu->updates, sizeof(mmu->updates)) != 0 )
+    if ( lock_pages(xch, mmu->updates, sizeof(mmu->updates)) != 0 )
     {
         PERROR("flush_mmu_updates: mmu updates lock_pages failed");
         err = 1;
@@ -386,7 +388,7 @@ static int flush_mmu_updates(xc_interfac
 
     mmu->idx = 0;
 
-    unlock_pages(mmu->updates, sizeof(mmu->updates));
+    unlock_pages(xch, mmu->updates, sizeof(mmu->updates));
 
  out:
     return err;
@@ -438,38 +440,38 @@ int xc_memory_op(xc_interface *xch,
     case XENMEM_increase_reservation:
     case XENMEM_decrease_reservation:
     case XENMEM_populate_physmap:
-        if ( lock_pages(reservation, sizeof(*reservation)) != 0 )
+        if ( lock_pages(xch, reservation, sizeof(*reservation)) != 0 )
         {
             PERROR("Could not lock");
             goto out1;
         }
         get_xen_guest_handle(extent_start, reservation->extent_start);
         if ( (extent_start != NULL) &&
-             (lock_pages(extent_start,
+             (lock_pages(xch, extent_start,
                     reservation->nr_extents * sizeof(xen_pfn_t)) != 0) )
         {
             PERROR("Could not lock");
-            unlock_pages(reservation, sizeof(*reservation));
+            unlock_pages(xch, reservation, sizeof(*reservation));
             goto out1;
         }
         break;
     case XENMEM_machphys_mfn_list:
-        if ( lock_pages(xmml, sizeof(*xmml)) != 0 )
+        if ( lock_pages(xch, xmml, sizeof(*xmml)) != 0 )
         {
             PERROR("Could not lock");
             goto out1;
         }
         get_xen_guest_handle(extent_start, xmml->extent_start);
-        if ( lock_pages(extent_start,
+        if ( lock_pages(xch, extent_start,
                    xmml->max_extents * sizeof(xen_pfn_t)) != 0 )
         {
             PERROR("Could not lock");
-            unlock_pages(xmml, sizeof(*xmml));
+            unlock_pages(xch, xmml, sizeof(*xmml));
             goto out1;
         }
         break;
     case XENMEM_add_to_physmap:
-        if ( lock_pages(arg, sizeof(struct xen_add_to_physmap)) )
+        if ( lock_pages(xch, arg, sizeof(struct xen_add_to_physmap)) )
         {
             PERROR("Could not lock");
             goto out1;
@@ -478,7 +480,7 @@ int xc_memory_op(xc_interface *xch,
     case XENMEM_current_reservation:
     case XENMEM_maximum_reservation:
     case XENMEM_maximum_gpfn:
-        if ( lock_pages(arg, sizeof(domid_t)) )
+        if ( lock_pages(xch, arg, sizeof(domid_t)) )
         {
             PERROR("Could not lock");
             goto out1;
@@ -486,7 +488,7 @@ int xc_memory_op(xc_interface *xch,
         break;
     case XENMEM_set_pod_target:
     case XENMEM_get_pod_target:
-        if ( lock_pages(arg, sizeof(struct xen_pod_target)) )
+        if ( lock_pages(xch, arg, sizeof(struct xen_pod_target)) )
         {
             PERROR("Could not lock");
             goto out1;
@@ -501,29 +503,29 @@ int xc_memory_op(xc_interface *xch,
     case XENMEM_increase_reservation:
     case XENMEM_decrease_reservation:
     case XENMEM_populate_physmap:
-        unlock_pages(reservation, sizeof(*reservation));
+        unlock_pages(xch, reservation, sizeof(*reservation));
         get_xen_guest_handle(extent_start, reservation->extent_start);
         if ( extent_start != NULL )
-            unlock_pages(extent_start,
+            unlock_pages(xch, extent_start,
                          reservation->nr_extents * sizeof(xen_pfn_t));
         break;
     case XENMEM_machphys_mfn_list:
-        unlock_pages(xmml, sizeof(*xmml));
+        unlock_pages(xch, xmml, sizeof(*xmml));
         get_xen_guest_handle(extent_start, xmml->extent_start);
-        unlock_pages(extent_start,
+        unlock_pages(xch, extent_start,
                      xmml->max_extents * sizeof(xen_pfn_t));
         break;
     case XENMEM_add_to_physmap:
-        unlock_pages(arg, sizeof(struct xen_add_to_physmap));
+        unlock_pages(xch, arg, sizeof(struct xen_add_to_physmap));
         break;
     case XENMEM_current_reservation:
     case XENMEM_maximum_reservation:
     case XENMEM_maximum_gpfn:
-        unlock_pages(arg, sizeof(domid_t));
+        unlock_pages(xch, arg, sizeof(domid_t));
         break;
     case XENMEM_set_pod_target:
     case XENMEM_get_pod_target:
-        unlock_pages(arg, sizeof(struct xen_pod_target));
+        unlock_pages(xch, arg, sizeof(struct xen_pod_target));
         break;
     }
 
@@ -565,7 +567,7 @@ int xc_get_pfn_list(xc_interface *xch,
     memset(pfn_buf, 0, max_pfns * sizeof(*pfn_buf));
 #endif
 
-    if ( lock_pages(pfn_buf, max_pfns * sizeof(*pfn_buf)) != 0 )
+    if ( lock_pages(xch, pfn_buf, max_pfns * sizeof(*pfn_buf)) != 0 )
     {
         PERROR("xc_get_pfn_list: pfn_buf lock failed");
         return -1;
@@ -573,7 +575,7 @@ int xc_get_pfn_list(xc_interface *xch,
 
     ret = do_domctl(xch, &domctl);
 
-    unlock_pages(pfn_buf, max_pfns * sizeof(*pfn_buf));
+    unlock_pages(xch, pfn_buf, max_pfns * sizeof(*pfn_buf));
 
     return (ret < 0) ? -1 : domctl.u.getmemlist.num_pfns;
 }
@@ -648,7 +650,7 @@ int xc_version(xc_interface *xch, int cm
         break;
     }
 
-    if ( (argsize != 0) && (lock_pages(arg, argsize) != 0) )
+    if ( (argsize != 0) && (lock_pages(xch, arg, argsize) != 0) )
     {
         PERROR("Could not lock memory for version hypercall");
         return -ENOMEM;
@@ -662,7 +664,7 @@ int xc_version(xc_interface *xch, int cm
     rc = do_xen_version(xch, cmd, arg);
 
     if ( argsize != 0 )
-        unlock_pages(arg, argsize);
+        unlock_pages(xch, arg, argsize);
 
     return rc;
 }
diff -r 89333e9d1d90 -r 7e23b1acc3f2 tools/libxc/xc_private.h
--- a/tools/libxc/xc_private.h	Mon Sep 06 14:28:10 2010 +0100
+++ b/tools/libxc/xc_private.h	Mon Sep 06 14:28:11 2010 +0100
@@ -100,11 +100,11 @@ void xc_report_progress_step(xc_interfac
 
 void *xc_memalign(size_t alignment, size_t size);
 
-int lock_pages(void *addr, size_t len);
-void unlock_pages(void *addr, size_t len);
+int lock_pages(xc_interface *xch, void *addr, size_t len);
+void unlock_pages(xc_interface *xch, void *addr, size_t len);
 
-int hcall_buf_prep(void **addr, size_t len);
-void hcall_buf_release(void **addr, size_t len);
+int hcall_buf_prep(xc_interface *xch, void **addr, size_t len);
+void hcall_buf_release(xc_interface *xch, void **addr, size_t len);
 
 int do_xen_hypercall(xc_interface *xch, privcmd_hypercall_t *hypercall);
 
@@ -125,7 +125,7 @@ static inline int do_physdev_op(xc_inter
 
     DECLARE_HYPERCALL;
 
-    if ( hcall_buf_prep(&op, len) != 0 )
+    if ( hcall_buf_prep(xch, &op, len) != 0 )
     {
         PERROR("Could not lock memory for Xen hypercall");
         goto out1;
@@ -142,7 +142,7 @@ static inline int do_physdev_op(xc_inter
                     " rebuild the user-space tool set?\n");
     }
 
-    hcall_buf_release(&op, len);
+    hcall_buf_release(xch, &op, len);
 
 out1:
     return ret;
@@ -153,7 +153,7 @@ static inline int do_domctl(xc_interface
     int ret = -1;
     DECLARE_HYPERCALL;
 
-    if ( hcall_buf_prep((void **)&domctl, sizeof(*domctl)) != 0 )
+    if ( hcall_buf_prep(xch, (void **)&domctl, sizeof(*domctl)) != 0 )
     {
         PERROR("Could not lock memory for Xen hypercall");
         goto out1;
@@ -171,7 +171,7 @@ static inline int do_domctl(xc_interface
                     " rebuild the user-space tool set?\n");
     }
 
-    hcall_buf_release((void **)&domctl, sizeof(*domctl));
+    hcall_buf_release(xch, (void **)&domctl, sizeof(*domctl));
 
  out1:
     return ret;
@@ -182,7 +182,7 @@ static inline int do_sysctl(xc_interface
     int ret = -1;
     DECLARE_HYPERCALL;
 
-    if ( hcall_buf_prep((void **)&sysctl, sizeof(*sysctl)) != 0 )
+    if ( hcall_buf_prep(xch, (void **)&sysctl, sizeof(*sysctl)) != 0 )
     {
         PERROR("Could not lock memory for Xen hypercall");
         goto out1;
@@ -200,7 +200,7 @@ static inline int do_sysctl(xc_interface
                     " rebuild the user-space tool set?\n");
     }
 
-    hcall_buf_release((void **)&sysctl, sizeof(*sysctl));
+    hcall_buf_release(xch, (void **)&sysctl, sizeof(*sysctl));
 
  out1:
     return ret;
diff -r 89333e9d1d90 -r 7e23b1acc3f2 tools/libxc/xc_resume.c
--- a/tools/libxc/xc_resume.c	Mon Sep 06 14:28:10 2010 +0100
+++ b/tools/libxc/xc_resume.c	Mon Sep 06 14:28:11 2010 +0100
@@ -196,7 +196,7 @@ static int xc_domain_resume_any(xc_inter
         goto out;
     }
 
-    if ( lock_pages(&ctxt, sizeof(ctxt)) )
+    if ( lock_pages(xch, &ctxt, sizeof(ctxt)) )
     {
         ERROR("Unable to lock ctxt");
         goto out;
@@ -235,7 +235,7 @@ static int xc_domain_resume_any(xc_inter
 
 #if defined(__i386__) || defined(__x86_64__)
  out:
-    unlock_pages((void *)&ctxt, sizeof ctxt);
+    unlock_pages(xch, (void *)&ctxt, sizeof ctxt);
     if (p2m)
         munmap(p2m, P2M_FL_ENTRIES*PAGE_SIZE);
     if (p2m_frame_list)
diff -r 89333e9d1d90 -r 7e23b1acc3f2 tools/libxc/xc_tbuf.c
--- a/tools/libxc/xc_tbuf.c	Mon Sep 06 14:28:10 2010 +0100
+++ b/tools/libxc/xc_tbuf.c	Mon Sep 06 14:28:11 2010 +0100
@@ -129,7 +129,7 @@ int xc_tbuf_set_cpu_mask(xc_interface *x
     set_xen_guest_handle(sysctl.u.tbuf_op.cpu_mask.bitmap, bytemap);
     sysctl.u.tbuf_op.cpu_mask.nr_cpus = sizeof(bytemap) * 8;
 
-    if ( lock_pages(&bytemap, sizeof(bytemap)) != 0 )
+    if ( lock_pages(xch, &bytemap, sizeof(bytemap)) != 0 )
     {
         PERROR("Could not lock memory for Xen hypercall");
         goto out;
@@ -137,7 +137,7 @@ int xc_tbuf_set_cpu_mask(xc_interface *x
 
     ret = do_sysctl(xch, &sysctl);
 
-    unlock_pages(&bytemap, sizeof(bytemap));
+    unlock_pages(xch, &bytemap, sizeof(bytemap));
 
  out:
     return ret;
diff -r 89333e9d1d90 -r 7e23b1acc3f2 tools/libxc/xc_tmem.c
--- a/tools/libxc/xc_tmem.c	Mon Sep 06 14:28:10 2010 +0100
+++ b/tools/libxc/xc_tmem.c	Mon Sep 06 14:28:11 2010 +0100
@@ -28,7 +28,7 @@ static int do_tmem_op(xc_interface *xch,
 
     hypercall.op = __HYPERVISOR_tmem_op;
     hypercall.arg[0] = (unsigned long)op;
-    if (lock_pages(op, sizeof(*op)) != 0)
+    if (lock_pages(xch, op, sizeof(*op)) != 0)
     {
         PERROR("Could not lock memory for Xen hypercall");
         return -EFAULT;
@@ -39,7 +39,7 @@ static int do_tmem_op(xc_interface *xch,
             DPRINTF("tmem operation failed -- need to"
                     " rebuild the user-space tool set?\n");
     }
-    unlock_pages(op, sizeof(*op));
+    unlock_pages(xch, op, sizeof(*op));
 
     return ret;
 }
@@ -66,7 +66,7 @@ int xc_tmem_control(xc_interface *xch,
     op.u.ctrl.arg3 = arg3;
 
     if (subop == TMEMC_LIST) {
-        if ((arg1 != 0) && (lock_pages(buf, arg1) != 0))
+        if ((arg1 != 0) && (lock_pages(xch, buf, arg1) != 0))
         {
             PERROR("Could not lock memory for Xen hypercall");
             return -ENOMEM;
@@ -82,7 +82,7 @@ int xc_tmem_control(xc_interface *xch,
 
     if (subop == TMEMC_LIST) {
         if (arg1 != 0)
-            unlock_pages(buf, arg1);
+            unlock_pages(xch, buf, arg1);
     }
 
     return rc;


* [PATCH 04 of 24] libxc: Remove unnecessary double indirection from xc_readconsolering
  2010-09-06 13:38 [PATCH 00 of 24] [RFC] libxc: hypercall buffers Ian Campbell
                   ` (2 preceding siblings ...)
  2010-09-06 13:38 ` [PATCH 03 of 24] libxc: pass an xc_interface handle to page locking functions Ian Campbell
@ 2010-09-06 13:38 ` Ian Campbell
  2010-09-06 13:38 ` [PATCH 05 of 24] libxc: use correct size of struct xen_mc Ian Campbell
                   ` (21 subsequent siblings)
  25 siblings, 0 replies; 34+ messages in thread
From: Ian Campbell @ 2010-09-06 13:38 UTC (permalink / raw)
  To: xen-devel; +Cc: Ian Campbell

# HG changeset patch
# User Ian Campbell <ian.campbell@citrix.com>
# Date 1283779691 -3600
# Node ID 8ad75cd961a081ce4ba60c97899b13112d7a5f3f
# Parent  7e23b1acc3f23c9f06c88b6f4480a614c49c9a96
libxc: Remove unnecessary double indirection from xc_readconsolering

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>

diff -r 7e23b1acc3f2 -r 8ad75cd961a0 tools/console/daemon/io.c
--- a/tools/console/daemon/io.c	Mon Sep 06 14:28:11 2010 +0100
+++ b/tools/console/daemon/io.c	Mon Sep 06 14:28:11 2010 +0100
@@ -887,7 +887,7 @@ static void handle_hv_logs(void)
 	if ((port = xc_evtchn_pending(xce_handle)) == -1)
 		return;
 
-	if (xc_readconsolering(xch, &bufptr, &size, 0, 1, &index) == 0 && size > 0) {
+	if (xc_readconsolering(xch, bufptr, &size, 0, 1, &index) == 0 && size > 0) {
 		int logret;
 		if (log_time_hv)
 			logret = write_with_timestamp(log_hv_fd, buffer, size,
diff -r 7e23b1acc3f2 -r 8ad75cd961a0 tools/libxc/xc_misc.c
--- a/tools/libxc/xc_misc.c	Mon Sep 06 14:28:11 2010 +0100
+++ b/tools/libxc/xc_misc.c	Mon Sep 06 14:28:11 2010 +0100
@@ -22,13 +22,12 @@
 #include <xen/hvm/hvm_op.h>
 
 int xc_readconsolering(xc_interface *xch,
-                       char **pbuffer,
+                       char *buffer,
                        unsigned int *pnr_chars,
                        int clear, int incremental, uint32_t *pindex)
 {
     int ret;
     DECLARE_SYSCTL;
-    char *buffer = *pbuffer;
     unsigned int nr_chars = *pnr_chars;
 
     sysctl.cmd = XEN_SYSCTL_readconsole;
diff -r 7e23b1acc3f2 -r 8ad75cd961a0 tools/libxc/xenctrl.h
--- a/tools/libxc/xenctrl.h	Mon Sep 06 14:28:11 2010 +0100
+++ b/tools/libxc/xenctrl.h	Mon Sep 06 14:28:11 2010 +0100
@@ -729,7 +729,7 @@ int xc_physdev_pci_access_modify(xc_inte
                                  int enable);
 
 int xc_readconsolering(xc_interface *xch,
-                       char **pbuffer,
+                       char *buffer,
                        unsigned int *pnr_chars,
                        int clear, int incremental, uint32_t *pindex);
 
diff -r 7e23b1acc3f2 -r 8ad75cd961a0 tools/libxl/libxl.c
--- a/tools/libxl/libxl.c	Mon Sep 06 14:28:11 2010 +0100
+++ b/tools/libxl/libxl.c	Mon Sep 06 14:28:11 2010 +0100
@@ -3162,7 +3162,7 @@ int libxl_xen_console_read_line(libxl_ct
     int ret;
 
     memset(cr->buffer, 0, cr->size);
-    ret = xc_readconsolering(ctx->xch, &cr->buffer, &cr->count,
+    ret = xc_readconsolering(ctx->xch, cr->buffer, &cr->count,
                              cr->clear, cr->incremental, &cr->index);
     if (ret < 0) {
         XL_LOG_ERRNO(ctx, XL_LOG_ERROR, "reading console ring buffer");
diff -r 7e23b1acc3f2 -r 8ad75cd961a0 tools/python/xen/lowlevel/xc/xc.c
--- a/tools/python/xen/lowlevel/xc/xc.c	Mon Sep 06 14:28:11 2010 +0100
+++ b/tools/python/xen/lowlevel/xc/xc.c	Mon Sep 06 14:28:11 2010 +0100
@@ -1116,7 +1116,7 @@ static PyObject *pyxc_readconsolering(Xc
          !str )
         return NULL;
 
-    ret = xc_readconsolering(self->xc_handle, &str, &count, clear,
+    ret = xc_readconsolering(self->xc_handle, str, &count, clear,
                              incremental, &index);
     if ( ret < 0 )
         return pyxc_error_to_exception(self->xc_handle);
@@ -1133,7 +1133,7 @@ static PyObject *pyxc_readconsolering(Xc
 
         str = ptr + count;
         count = size - count;
-        ret = xc_readconsolering(self->xc_handle, &str, &count, clear,
+        ret = xc_readconsolering(self->xc_handle, str, &count, clear,
                                  1, &index);
         if ( ret < 0 )
             break;

^ permalink raw reply	[flat|nested] 34+ messages in thread

* [PATCH 05 of 24] libxc: use correct size of struct xen_mc
  2010-09-06 13:38 [PATCH 00 of 24] [RFC] libxc: hypercall buffers Ian Campbell
                   ` (3 preceding siblings ...)
  2010-09-06 13:38 ` [PATCH 04 of 24] libxc: Remove unnecessary double indirection from xc_readconsolering Ian Campbell
@ 2010-09-06 13:38 ` Ian Campbell
  2010-09-06 13:38 ` [PATCH 06 of 24] libxc: add xc_domain_maximum_gpfn Ian Campbell
                   ` (20 subsequent siblings)
  25 siblings, 0 replies; 34+ messages in thread
From: Ian Campbell @ 2010-09-06 13:38 UTC (permalink / raw)
  To: xen-devel; +Cc: Ian Campbell

# HG changeset patch
# User Ian Campbell <ian.campbell@citrix.com>
# Date 1283779691 -3600
# Node ID b4aa42793b8a8434aa3d7371e22e86dcca4f5a7a
# Parent  8ad75cd961a081ce4ba60c97899b13112d7a5f3f
libxc: use correct size of struct xen_mc

We want the size of the struct, not the pointer (although rounding up
to page size in lock_pages probably saves us).
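
Concretely (an illustrative sketch, not code from the tree):

    struct xen_mc *mc;
    sizeof(mc);   /* size of the pointer: 4 or 8 bytes  */
    sizeof(*mc);  /* size of the structure being locked */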

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>

diff -r 8ad75cd961a0 -r b4aa42793b8a tools/libxc/xc_misc.c
--- a/tools/libxc/xc_misc.c	Mon Sep 06 14:28:11 2010 +0100
+++ b/tools/libxc/xc_misc.c	Mon Sep 06 14:28:11 2010 +0100
@@ -153,7 +153,7 @@ int xc_mca_op(xc_interface *xch, struct 
     DECLARE_HYPERCALL;
 
     mc->interface_version = XEN_MCA_INTERFACE_VERSION;
-    if ( lock_pages(xch, mc, sizeof(mc)) )
+    if ( lock_pages(xch, mc, sizeof(*mc)) )
     {
         PERROR("Could not lock xen_mc memory");
         return -EINVAL;
@@ -162,7 +162,7 @@ int xc_mca_op(xc_interface *xch, struct 
     hypercall.op = __HYPERVISOR_mca;
     hypercall.arg[0] = (unsigned long)mc;
     ret = do_xen_hypercall(xch, &hypercall);
-    unlock_pages(xch, mc, sizeof(mc));
+    unlock_pages(xch, mc, sizeof(*mc));
     return ret;
 }
 #endif

^ permalink raw reply	[flat|nested] 34+ messages in thread

* [PATCH 06 of 24] libxc: add xc_domain_maximum_gpfn
  2010-09-06 13:38 [PATCH 00 of 24] [RFC] libxc: hypercall buffers Ian Campbell
                   ` (4 preceding siblings ...)
  2010-09-06 13:38 ` [PATCH 05 of 24] libxc: use correct size of struct xen_mc Ian Campbell
@ 2010-09-06 13:38 ` Ian Campbell
  2010-09-06 13:38 ` [PATCH 07 of 24] libxc: replace open-coded use of XENMEM_decrease_reservation Ian Campbell
                   ` (19 subsequent siblings)
  25 siblings, 0 replies; 34+ messages in thread
From: Ian Campbell @ 2010-09-06 13:38 UTC (permalink / raw)
  To: xen-devel; +Cc: Ian Campbell

# HG changeset patch
# User Ian Campbell <ian.campbell@citrix.com>
# Date 1283779691 -3600
# Node ID e79d9a42c67e1d03f79d40529b2578578a5aa547
# Parent  b4aa42793b8a8434aa3d7371e22e86dcca4f5a7a
libxc: add xc_domain_maximum_gpfn
to replace various open-coded calls to XENMEM_maximum_gpfn. Note that
the helper returns one more than the maximum GPFN, i.e. the p2m size.
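
For example, in the xc_domain_save conversion in this patch the
caller's "+ 1" moves into the helper:

    /* Before: open-coded memory op; the caller adds one. */
    dinfo->p2m_size = xc_memory_op(xch, XENMEM_maximum_gpfn, &dom) + 1;

    /* After: the helper already returns max_gpfn + 1. */
    dinfo->p2m_size = xc_domain_maximum_gpfn(xch, dom);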

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>

diff -r b4aa42793b8a -r e79d9a42c67e tools/libxc/ia64/xc_ia64_linux_save.c
--- a/tools/libxc/ia64/xc_ia64_linux_save.c	Mon Sep 06 14:28:11 2010 +0100
+++ b/tools/libxc/ia64/xc_ia64_linux_save.c	Mon Sep 06 14:28:11 2010 +0100
@@ -487,7 +487,7 @@ xc_domain_save(xc_interface *xch, int io
         goto out;
     }
 
-    p2m_size = xc_memory_op(xch, XENMEM_maximum_gpfn, &dom) + 1;
+    p2m_size = xc_domain_maximum_gpfn(xch, dom);
 
     /* This is expected by xm restore.  */
     if (write_exact(io_fd, &p2m_size, sizeof(unsigned long))) {
diff -r b4aa42793b8a -r e79d9a42c67e tools/libxc/ia64/xc_ia64_stubs.c
--- a/tools/libxc/ia64/xc_ia64_stubs.c	Mon Sep 06 14:28:11 2010 +0100
+++ b/tools/libxc/ia64/xc_ia64_stubs.c	Mon Sep 06 14:28:11 2010 +0100
@@ -114,7 +114,7 @@ xc_ia64_copy_memmap(xc_interface *xch, u
 
     int ret;
 
-    gpfn_max_prev = xc_memory_op(xch, XENMEM_maximum_gpfn, &domid);
+    gpfn_max_prev = xc_domain_maximum_gpfn(xch, domid);
     if (gpfn_max_prev < 0)
         return -1;
 
@@ -143,7 +143,7 @@ xc_ia64_copy_memmap(xc_interface *xch, u
         goto again;
     }
 
-    gpfn_max_post = xc_memory_op(xch, XENMEM_maximum_gpfn, &domid);
+    gpfn_max_post = xc_domain_maximum_gpfn(xch, domid);
     if (gpfn_max_prev < 0) {
         free(memmap_info);
         return -1;
@@ -190,7 +190,7 @@ xc_ia64_map_foreign_p2m(xc_interface *xc
     int ret;
     int saved_errno;
 
-    gpfn_max = xc_memory_op(xch, XENMEM_maximum_gpfn, &dom);
+    gpfn_max = xc_domain_maximum_gpfn(xch, dom);
     if (gpfn_max < 0)
         return NULL;
     p2m_size =
diff -r b4aa42793b8a -r e79d9a42c67e tools/libxc/xc_core_x86.c
--- a/tools/libxc/xc_core_x86.c	Mon Sep 06 14:28:11 2010 +0100
+++ b/tools/libxc/xc_core_x86.c	Mon Sep 06 14:28:11 2010 +0100
@@ -40,11 +40,6 @@ xc_core_arch_gpfn_may_present(struct xc_
 }
 
 
-static int nr_gpfns(xc_interface *xch, domid_t domid)
-{
-    return xc_memory_op(xch, XENMEM_maximum_gpfn, &domid) + 1;
-}
-
 int
 xc_core_arch_auto_translated_physmap(const xc_dominfo_t *info)
 {
@@ -57,7 +52,7 @@ xc_core_arch_memory_map_get(xc_interface
                             xc_core_memory_map_t **mapp,
                             unsigned int *nr_entries)
 {
-    unsigned long p2m_size = nr_gpfns(xch, info->domid);
+    unsigned long p2m_size = xc_domain_maximum_gpfn(xch, info->domid);
     xc_core_memory_map_t *map;
 
     map = malloc(sizeof(*map));
@@ -92,7 +87,7 @@ xc_core_arch_map_p2m_rw(xc_interface *xc
     int err;
     int i;
 
-    dinfo->p2m_size = nr_gpfns(xch, info->domid);
+    dinfo->p2m_size = xc_domain_maximum_gpfn(xch, info->domid);
     if ( dinfo->p2m_size < info->nr_pages  )
     {
         ERROR("p2m_size < nr_pages -1 (%lx < %lx", dinfo->p2m_size, info->nr_pages - 1);
diff -r b4aa42793b8a -r e79d9a42c67e tools/libxc/xc_domain.c
--- a/tools/libxc/xc_domain.c	Mon Sep 06 14:28:11 2010 +0100
+++ b/tools/libxc/xc_domain.c	Mon Sep 06 14:28:11 2010 +0100
@@ -578,6 +578,11 @@ int xc_domain_get_tsc_info(xc_interface 
     return rc;
 }
 
+
+int xc_domain_maximum_gpfn(xc_interface *xch, domid_t domid)
+{
+    return xc_memory_op(xch, XENMEM_maximum_gpfn, &domid) + 1;
+}
 
 int xc_domain_memory_increase_reservation(xc_interface *xch,
                                           uint32_t domid,
diff -r b4aa42793b8a -r e79d9a42c67e tools/libxc/xc_domain_save.c
--- a/tools/libxc/xc_domain_save.c	Mon Sep 06 14:28:11 2010 +0100
+++ b/tools/libxc/xc_domain_save.c	Mon Sep 06 14:28:11 2010 +0100
@@ -979,7 +979,7 @@ int xc_domain_save(xc_interface *xch, in
     }
 
     /* Get the size of the P2M table */
-    dinfo->p2m_size = xc_memory_op(xch, XENMEM_maximum_gpfn, &dom) + 1;
+    dinfo->p2m_size = xc_domain_maximum_gpfn(xch, dom);
 
     if ( dinfo->p2m_size > ~XEN_DOMCTL_PFINFO_LTAB_MASK )
     {
diff -r b4aa42793b8a -r e79d9a42c67e tools/libxc/xenctrl.h
--- a/tools/libxc/xenctrl.h	Mon Sep 06 14:28:11 2010 +0100
+++ b/tools/libxc/xenctrl.h	Mon Sep 06 14:28:11 2010 +0100
@@ -784,6 +784,9 @@ int xc_domain_get_tsc_info(xc_interface 
                            uint32_t *incarnation);
 
 int xc_domain_disable_migrate(xc_interface *xch, uint32_t domid);
+
+int xc_domain_maximum_gpfn(xc_interface *xch, domid_t domid);
+
 
 int xc_domain_memory_increase_reservation(xc_interface *xch,
                                           uint32_t domid,

^ permalink raw reply	[flat|nested] 34+ messages in thread

* [PATCH 07 of 24] libxc: replace open-coded use of XENMEM_decrease_reservation
  2010-09-06 13:38 [PATCH 00 of 24] [RFC] libxc: hypercall buffers Ian Campbell
                   ` (5 preceding siblings ...)
  2010-09-06 13:38 ` [PATCH 06 of 24] libxc: add xc_domain_maximum_gpfn Ian Campbell
@ 2010-09-06 13:38 ` Ian Campbell
  2010-09-06 13:38 ` [PATCH 08 of 24] libxc: simplify performance counters API Ian Campbell
                   ` (18 subsequent siblings)
  25 siblings, 0 replies; 34+ messages in thread
From: Ian Campbell @ 2010-09-06 13:38 UTC (permalink / raw)
  To: xen-devel; +Cc: Ian Campbell

# HG changeset patch
# User Ian Campbell <ian.campbell@citrix.com>
# Date 1283779691 -3600
# Node ID 4d8b7ca524c7aeace21ba151cb21c784b268ae5f
# Parent  e79d9a42c67e1d03f79d40529b2578578a5aa547
libxc: replace open-coded use of XENMEM_decrease_reservation

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>

diff -r e79d9a42c67e -r 4d8b7ca524c7 tools/libxc/xc_domain_restore.c
--- a/tools/libxc/xc_domain_restore.c	Mon Sep 06 14:28:11 2010 +0100
+++ b/tools/libxc/xc_domain_restore.c	Mon Sep 06 14:28:11 2010 +0100
@@ -1529,15 +1529,7 @@ int xc_domain_restore(xc_interface *xch,
 
         if ( nr_frees > 0 )
         {
-            struct xen_memory_reservation reservation = {
-                .nr_extents   = nr_frees,
-                .extent_order = 0,
-                .domid        = dom
-            };
-            set_xen_guest_handle(reservation.extent_start, tailbuf.u.pv.pfntab);
-
-            if ( (frc = xc_memory_op(xch, XENMEM_decrease_reservation,
-                                     &reservation)) != nr_frees )
+            if ( (frc = xc_domain_memory_decrease_reservation(xch, dom, nr_frees, 0, tailbuf.u.pv.pfntab)) != nr_frees )
             {
                 PERROR("Could not decrease reservation : %d", frc);
                 goto out;

^ permalink raw reply	[flat|nested] 34+ messages in thread

* [PATCH 08 of 24] libxc: simplify performance counters API
  2010-09-06 13:38 [PATCH 00 of 24] [RFC] libxc: hypercall buffers Ian Campbell
                   ` (6 preceding siblings ...)
  2010-09-06 13:38 ` [PATCH 07 of 24] libxc: replace open-coded use of XENMEM_decrease_reservation Ian Campbell
@ 2010-09-06 13:38 ` Ian Campbell
  2010-09-06 13:38 ` [PATCH 09 of 24] libxc: simplify lock profiling API Ian Campbell
                   ` (17 subsequent siblings)
  25 siblings, 0 replies; 34+ messages in thread
From: Ian Campbell @ 2010-09-06 13:38 UTC (permalink / raw)
  To: xen-devel; +Cc: Ian Campbell

# HG changeset patch
# User Ian Campbell <ian.campbell@citrix.com>
# Date 1283779691 -3600
# Node ID 8e9ada79914009870ea06339eb4211519a43a927
# Parent  4d8b7ca524c7aeace21ba151cb21c784b268ae5f
libxc: simplify performance counters API

The current function has heavily overloaded semantics for its various
arguments. Separate it out into more specific functions; the new
calling convention is sketched below.
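
The new calling convention, as used by the xenperf conversion below:

    /* was: xc_perfc_control(xch, XEN_SYSCTL_PERFCOP_reset,
     *                       NULL, NULL, NULL, NULL);         */
    xc_perfc_reset(xch);

    /* was: a query with NULL desc/val, returning only the counts */
    xc_perfc_query_number(xch, &num_desc, &num_val);

    /* was: a query with NULL nbr_desc/nbr_val, returning the data */
    xc_perfc_query(xch, pcd, pcv);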

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>

diff -r 4d8b7ca524c7 -r 8e9ada799140 tools/libxc/xc_misc.c
--- a/tools/libxc/xc_misc.c	Mon Sep 06 14:28:11 2010 +0100
+++ b/tools/libxc/xc_misc.c	Mon Sep 06 14:28:11 2010 +0100
@@ -167,20 +167,29 @@ int xc_mca_op(xc_interface *xch, struct 
 }
 #endif
 
-int xc_perfc_control(xc_interface *xch,
-                     uint32_t opcode,
-                     xc_perfc_desc_t *desc,
-                     xc_perfc_val_t *val,
-                     int *nbr_desc,
-                     int *nbr_val)
+int xc_perfc_reset(xc_interface *xch)
+{
+    DECLARE_SYSCTL;
+
+    sysctl.cmd = XEN_SYSCTL_perfc_op;
+    sysctl.u.perfc_op.cmd = XEN_SYSCTL_PERFCOP_reset;
+    set_xen_guest_handle(sysctl.u.perfc_op.desc, NULL);
+    set_xen_guest_handle(sysctl.u.perfc_op.val, NULL);
+
+    return do_sysctl(xch, &sysctl);
+}
+
+int xc_perfc_query_number(xc_interface *xch,
+                          int *nbr_desc,
+                          int *nbr_val)
 {
     int rc;
     DECLARE_SYSCTL;
 
     sysctl.cmd = XEN_SYSCTL_perfc_op;
-    sysctl.u.perfc_op.cmd = opcode;
-    set_xen_guest_handle(sysctl.u.perfc_op.desc, desc);
-    set_xen_guest_handle(sysctl.u.perfc_op.val, val);
+    sysctl.u.perfc_op.cmd = XEN_SYSCTL_PERFCOP_query;
+    set_xen_guest_handle(sysctl.u.perfc_op.desc, NULL);
+    set_xen_guest_handle(sysctl.u.perfc_op.val, NULL);
 
     rc = do_sysctl(xch, &sysctl);
 
@@ -190,6 +199,20 @@ int xc_perfc_control(xc_interface *xch,
         *nbr_val = sysctl.u.perfc_op.nr_vals;
 
     return rc;
+}
+
+int xc_perfc_query(xc_interface *xch,
+                   xc_perfc_desc_t *desc,
+                   xc_perfc_val_t *val)
+{
+    DECLARE_SYSCTL;
+
+    sysctl.cmd = XEN_SYSCTL_perfc_op;
+    sysctl.u.perfc_op.cmd = XEN_SYSCTL_PERFCOP_query;
+    set_xen_guest_handle(sysctl.u.perfc_op.desc, desc);
+    set_xen_guest_handle(sysctl.u.perfc_op.val, val);
+
+    return do_sysctl(xch, &sysctl);
 }
 
 int xc_lockprof_control(xc_interface *xch,
diff -r 4d8b7ca524c7 -r 8e9ada799140 tools/libxc/xenctrl.h
--- a/tools/libxc/xenctrl.h	Mon Sep 06 14:28:11 2010 +0100
+++ b/tools/libxc/xenctrl.h	Mon Sep 06 14:28:11 2010 +0100
@@ -849,14 +849,15 @@ unsigned long xc_make_page_below_4G(xc_i
 
 typedef xen_sysctl_perfc_desc_t xc_perfc_desc_t;
 typedef xen_sysctl_perfc_val_t xc_perfc_val_t;
+int xc_perfc_reset(xc_interface *xch);
+int xc_perfc_query_number(xc_interface *xch,
+                          int *nbr_desc,
+                          int *nbr_val);
 /* IMPORTANT: The caller is responsible for mlock()'ing the @desc and @val
    arrays. */
-int xc_perfc_control(xc_interface *xch,
-                     uint32_t op,
-                     xc_perfc_desc_t *desc,
-                     xc_perfc_val_t *val,
-                     int *nbr_desc,
-                     int *nbr_val);
+int xc_perfc_query(xc_interface *xch,
+                   xc_perfc_desc_t *desc,
+                   xc_perfc_val_t *val);
 
 typedef xen_sysctl_lockprof_data_t xc_lockprof_data_t;
 /* IMPORTANT: The caller is responsible for mlock()'ing the @data array. */
diff -r 4d8b7ca524c7 -r 8e9ada799140 tools/misc/xenperf.c
--- a/tools/misc/xenperf.c	Mon Sep 06 14:28:11 2010 +0100
+++ b/tools/misc/xenperf.c	Mon Sep 06 14:28:11 2010 +0100
@@ -137,8 +137,7 @@ int main(int argc, char *argv[])
     
     if ( reset )
     {
-        if ( xc_perfc_control(xc_handle, XEN_SYSCTL_PERFCOP_reset,
-                              NULL, NULL, NULL, NULL) != 0 )
+        if ( xc_perfc_reset(xc_handle) != 0 )
         {
             fprintf(stderr, "Error reseting performance counters: %d (%s)\n",
                     errno, strerror(errno));
@@ -148,8 +147,7 @@ int main(int argc, char *argv[])
         return 0;
     }
 
-    if ( xc_perfc_control(xc_handle, XEN_SYSCTL_PERFCOP_query,
-                          NULL, NULL, &num_desc, &num_val) != 0 )
+    if ( xc_perfc_query_number(xc_handle, &num_desc, &num_val) != 0 )
     {
         fprintf(stderr, "Error getting number of perf counters: %d (%s)\n",
                 errno, strerror(errno));
@@ -169,8 +167,7 @@ int main(int argc, char *argv[])
         exit(-1);
     }
 
-    if ( xc_perfc_control(xc_handle, XEN_SYSCTL_PERFCOP_query,
-                          pcd, pcv, NULL, NULL) != 0 )
+    if ( xc_perfc_query(xc_handle, pcd, pcv) != 0 )
     {
         fprintf(stderr, "Error getting perf counter: %d (%s)\n",
                 errno, strerror(errno));

^ permalink raw reply	[flat|nested] 34+ messages in thread

* [PATCH 09 of 24] libxc: simplify lock profiling API
  2010-09-06 13:38 [PATCH 00 of 24] [RFC] libxc: hypercall buffers Ian Campbell
                   ` (7 preceding siblings ...)
  2010-09-06 13:38 ` [PATCH 08 of 24] libxc: simplify performance counters API Ian Campbell
@ 2010-09-06 13:38 ` Ian Campbell
  2010-09-06 13:38 ` [PATCH 10 of 24] libxc: infrastructure for hypercall safe data buffers Ian Campbell
                   ` (16 subsequent siblings)
  25 siblings, 0 replies; 34+ messages in thread
From: Ian Campbell @ 2010-09-06 13:38 UTC (permalink / raw)
  To: xen-devel; +Cc: Ian Campbell

# HG changeset patch
# User Ian Campbell <ian.campbell@citrix.com>
# Date 1283779691 -3600
# Node ID 7b45202f78cd82d320fb32fea67c0a618697baec
# Parent  8e9ada79914009870ea06339eb4211519a43a927
libxc: simplify lock profiling API

The current function has heavily overloaded semantics for its various
arguments. Separate it out into more specific functions; the new
calling convention is sketched below.
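
The equivalent lockprof calls, as used by the xenlockprof conversion
below:

    xc_lockprof_reset(xch);                  /* was: XEN_SYSCTL_LOCKPROF_reset */
    xc_lockprof_query_number(xch, &n);       /* was: query with NULL time/data */
    xc_lockprof_query(xch, &i, &time, data); /* was: query with all arguments  */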

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>

diff -r 8e9ada799140 -r 7b45202f78cd tools/libxc/xc_misc.c
--- a/tools/libxc/xc_misc.c	Mon Sep 06 14:28:11 2010 +0100
+++ b/tools/libxc/xc_misc.c	Mon Sep 06 14:28:11 2010 +0100
@@ -215,8 +215,36 @@ int xc_perfc_query(xc_interface *xch,
     return do_sysctl(xch, &sysctl);
 }
 
-int xc_lockprof_control(xc_interface *xch,
-                        uint32_t opcode,
+int xc_lockprof_reset(xc_interface *xch)
+{
+    DECLARE_SYSCTL;
+
+    sysctl.cmd = XEN_SYSCTL_lockprof_op;
+    sysctl.u.lockprof_op.cmd = XEN_SYSCTL_LOCKPROF_reset;
+    set_xen_guest_handle(sysctl.u.lockprof_op.data, NULL);
+
+    return do_sysctl(xch, &sysctl);
+}
+
+int xc_lockprof_query_number(xc_interface *xch,
+                             uint32_t *n_elems)
+{
+    int rc;
+    DECLARE_SYSCTL;
+
+    sysctl.cmd = XEN_SYSCTL_lockprof_op;
+    sysctl.u.lockprof_op.cmd = XEN_SYSCTL_LOCKPROF_query;
+    sysctl.u.lockprof_op.max_elem = 0;
+    set_xen_guest_handle(sysctl.u.lockprof_op.data, NULL);
+
+    rc = do_sysctl(xch, &sysctl);
+
+    *n_elems = sysctl.u.lockprof_op.nr_elem;
+
+    return rc;
+}
+
+int xc_lockprof_query(xc_interface *xch,
                         uint32_t *n_elems,
                         uint64_t *time,
                         xc_lockprof_data_t *data)
@@ -225,16 +253,14 @@ int xc_lockprof_control(xc_interfac
     DECLARE_SYSCTL;
 
     sysctl.cmd = XEN_SYSCTL_lockprof_op;
-    sysctl.u.lockprof_op.cmd = opcode;
-    sysctl.u.lockprof_op.max_elem = n_elems ? *n_elems : 0;
+    sysctl.u.lockprof_op.cmd = XEN_SYSCTL_LOCKPROF_query;
+    sysctl.u.lockprof_op.max_elem = *n_elems;
     set_xen_guest_handle(sysctl.u.lockprof_op.data, data);
 
     rc = do_sysctl(xch, &sysctl);
 
-    if (n_elems)
-        *n_elems = sysctl.u.lockprof_op.nr_elem;
-    if (time)
-        *time = sysctl.u.lockprof_op.time;
+    *n_elems = sysctl.u.lockprof_op.nr_elem;
+    *time = sysctl.u.lockprof_op.time;
 
     return rc;
 }
diff -r 8e9ada799140 -r 7b45202f78cd tools/libxc/xenctrl.h
--- a/tools/libxc/xenctrl.h	Mon Sep 06 14:28:11 2010 +0100
+++ b/tools/libxc/xenctrl.h	Mon Sep 06 14:28:11 2010 +0100
@@ -860,12 +860,14 @@ int xc_perfc_query(xc_interface *xch,
                    xc_perfc_val_t *val);
 
 typedef xen_sysctl_lockprof_data_t xc_lockprof_data_t;
+int xc_lockprof_reset(xc_interface *xch);
+int xc_lockprof_query_number(xc_interface *xch,
+                             uint32_t *n_elems);
 /* IMPORTANT: The caller is responsible for mlock()'ing the @data array. */
-int xc_lockprof_control(xc_interface *xch,
-                        uint32_t opcode,
-                        uint32_t *n_elems,
-                        uint64_t *time,
-                        xc_lockprof_data_t *data);
+int xc_lockprof_query(xc_interface *xch,
+                      uint32_t *n_elems,
+                      uint64_t *time,
+                      xc_lockprof_data_t *data);
 
 /**
  * Memory maps a range within one domain to a local address range.  Mappings
diff -r 8e9ada799140 -r 7b45202f78cd tools/misc/xenlockprof.c
--- a/tools/misc/xenlockprof.c	Mon Sep 06 14:28:11 2010 +0100
+++ b/tools/misc/xenlockprof.c	Mon Sep 06 14:28:11 2010 +0100
@@ -60,8 +60,7 @@ int main(int argc, char *argv[])
 
     if ( argc > 1 )
     {
-        if ( xc_lockprof_control(xc_handle, XEN_SYSCTL_LOCKPROF_reset, NULL,
-                                 NULL, NULL) != 0 )
+        if ( xc_lockprof_reset(xc_handle) != 0 )
         {
             fprintf(stderr, "Error reseting profile data: %d (%s)\n",
                     errno, strerror(errno));
@@ -71,8 +70,7 @@ int main(int argc, char *argv[])
     }
 
     n = 0;
-    if ( xc_lockprof_control(xc_handle, XEN_SYSCTL_LOCKPROF_query, &n,
-                             NULL, NULL) != 0 )
+    if ( xc_lockprof_query_number(xc_handle, &n) != 0 )
     {
         fprintf(stderr, "Error getting number of profile records: %d (%s)\n",
                 errno, strerror(errno));
@@ -89,8 +87,7 @@ int main(int argc, char *argv[])
     }
 
     i = n;
-    if ( xc_lockprof_control(xc_handle, XEN_SYSCTL_LOCKPROF_query, &i,
-                             &time, data) != 0 )
+    if ( xc_lockprof_query(xc_handle, &i, &time, data) != 0 )
     {
         fprintf(stderr, "Error getting profile records: %d (%s)\n",
                 errno, strerror(errno));

^ permalink raw reply	[flat|nested] 34+ messages in thread

* [PATCH 10 of 24] libxc: infrastructure for hypercall safe data buffers
  2010-09-06 13:38 [PATCH 00 of 24] [RFC] libxc: hypercall buffers Ian Campbell
                   ` (8 preceding siblings ...)
  2010-09-06 13:38 ` [PATCH 09 of 24] libxc: simplify lock profiling API Ian Campbell
@ 2010-09-06 13:38 ` Ian Campbell
  2010-09-07  8:44   ` Jeremy Fitzhardinge
  2010-09-06 13:38 ` [PATCH 11 of 24] libxc: convert xc_version over to hypercall buffers Ian Campbell
                   ` (15 subsequent siblings)
  25 siblings, 1 reply; 34+ messages in thread
From: Ian Campbell @ 2010-09-06 13:38 UTC (permalink / raw)
  To: xen-devel; +Cc: Ian Campbell

# HG changeset patch
# User Ian Campbell <ian.campbell@citrix.com>
# Date 1283779691 -3600
# Node ID bf7fb64762eb7decea9a6804460f0f966496ba07
# Parent  7b45202f78cd82d320fb32fea67c0a618697baec
libxc: infrastructure for hypercall safe data buffers.
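
A sketch of the intended usage (the function and its arguments are
illustrative; the interfaces themselves are introduced by the hunks
below):

    int get_pfns(xc_interface *xch, domid_t domid, unsigned long nr)
    {
        DECLARE_DOMCTL;
        DECLARE_HYPERCALL_BUFFER(xen_pfn_t, pfns);
        int rc;

        /* Allocate hypercall-safe (locked, page-aligned) memory. */
        pfns = xc_hypercall_buffer_alloc(xch, pfns, nr * sizeof(*pfns));
        if ( pfns == NULL )
            return -1;

        domctl.cmd = XEN_DOMCTL_getmemlist;
        domctl.domain = (domid_t)domid;
        domctl.u.getmemlist.max_pfns = nr;

        /* Type-checked: only a hypercall buffer may be handed to Xen. */
        xc_set_xen_guest_handle(domctl.u.getmemlist.buffer, pfns);

        rc = do_domctl(xch, &domctl);

        /* ... consume pfns[0..nr-1] ... */

        xc_hypercall_buffer_free(xch, pfns);
        return rc;
    }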

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>

diff -r 7b45202f78cd -r bf7fb64762eb tools/libxc/Makefile
--- a/tools/libxc/Makefile	Mon Sep 06 14:28:11 2010 +0100
+++ b/tools/libxc/Makefile	Mon Sep 06 14:28:11 2010 +0100
@@ -27,6 +27,7 @@ CTRL_SRCS-y       += xc_mem_event.c
 CTRL_SRCS-y       += xc_mem_event.c
 CTRL_SRCS-y       += xc_mem_paging.c
 CTRL_SRCS-y       += xc_memshr.c
+CTRL_SRCS-y       += xc_hcall_buf.c
 CTRL_SRCS-y       += xtl_core.c
 CTRL_SRCS-y       += xtl_logger_stdio.c
 CTRL_SRCS-$(CONFIG_X86) += xc_pagetab.c
diff -r 7b45202f78cd -r bf7fb64762eb tools/libxc/xc_hcall_buf.c
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/tools/libxc/xc_hcall_buf.c	Mon Sep 06 14:28:11 2010 +0100
@@ -0,0 +1,147 @@
+/*
+ * Copyright (c) 2010, Citrix Systems, Inc.
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation;
+ * version 2.1 of the License.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301  USA
+ */
+
+#include <inttypes.h>
+#include "xc_private.h"
+#include "xg_private.h"
+
+DECLARE_NAMED_HYPERCALL_BUFFER(HYPERCALL_BUFFER_NULL);
+
+void *xc__hypercall_buffer_alloc_pages(xc_interface *xch, xc_hypercall_buffer_t *b, int nr_pages)
+{
+    size_t size = nr_pages * PAGE_SIZE;
+    void *p;
+#if defined(_POSIX_C_SOURCE) && !defined(__sun__)
+    int ret;
+    ret = posix_memalign(&p, PAGE_SIZE, size);
+    if (ret != 0)
+        return NULL;
+#elif defined(__NetBSD__) || defined(__OpenBSD__)
+    p = valloc(size);
+#else
+    p = memalign(PAGE_SIZE, size);
+#endif
+
+    if (!p)
+        return NULL;
+
+#ifndef __sun__
+    if ( mlock(p, size) < 0 )
+    {
+        free(p);
+        return NULL;
+    }
+#endif
+
+    b->hbuf = p;
+
+    memset(p, 0, size);
+    return b->hbuf;
+}
+
+void xc__hypercall_buffer_free_pages(xc_interface *xch, xc_hypercall_buffer_t *b, int nr_pages)
+{
+    if ( b->hbuf == NULL )
+        return;
+
+#ifndef __sun__
+    (void) munlock(b->hbuf, nr_pages * PAGE_SIZE);
+#endif
+
+    free(b->hbuf);
+}
+
+struct allocation_header {
+    int nr_pages;
+};
+
+void *xc__hypercall_buffer_alloc(xc_interface *xch, xc_hypercall_buffer_t *b, size_t size)
+{
+    size_t actual_size = ROUNDUP(size + sizeof(struct allocation_header), PAGE_SHIFT);
+    int nr_pages = actual_size >> PAGE_SHIFT;
+    struct allocation_header *hdr;
+
+    hdr = xc__hypercall_buffer_alloc_pages(xch, b, nr_pages);
+    if ( hdr == NULL )
+        return NULL;
+
+    b->hbuf = (void *)(hdr+1);
+
+    hdr->nr_pages = nr_pages;
+    return b->hbuf;
+}
+
+void xc__hypercall_buffer_free(xc_interface *xch, xc_hypercall_buffer_t *b)
+{
+    struct allocation_header *hdr;
+
+    if (b->hbuf == NULL)
+        return;
+
+    hdr = b->hbuf;
+    b->hbuf = --hdr;
+
+    xc__hypercall_buffer_free_pages(xch, b, hdr->nr_pages);
+}
+
+int xc__hypercall_bounce_pre(xc_interface *xch, xc_hypercall_buffer_t *b)
+{
+    void *p;
+
+    /*
+     * Catch hypercall buffer declared other than with DECLARE_HYPERCALL_BOUNCE.
+     */
+    if ( b->ubuf == (void *)-1 || b->dir == XC_HYPERCALL_BUFFER_BOUNCE_NONE )
+        abort();
+
+    p = xc__hypercall_buffer_alloc(xch, b, b->sz);
+    if ( p == NULL )
+        return -1;
+
+    if ( b->dir == XC_HYPERCALL_BUFFER_BOUNCE_IN || b->dir == XC_HYPERCALL_BUFFER_BOUNCE_BOTH )
+        memcpy(b->hbuf, b->ubuf, b->sz);
+
+    return 0;
+}
+
+void xc__hypercall_bounce_post(xc_interface *xch, xc_hypercall_buffer_t *b)
+{
+    /*
+     * Catch hypercall buffer declared other than with DECLARE_HYPERCALL_BOUNCE.
+     */
+    if ( b->ubuf == (void *)-1 || b->dir == XC_HYPERCALL_BUFFER_BOUNCE_NONE )
+        abort();
+
+    if ( b->hbuf == NULL )
+        return;
+
+    if ( b->dir == XC_HYPERCALL_BUFFER_BOUNCE_OUT || b->dir == XC_HYPERCALL_BUFFER_BOUNCE_BOTH )
+        memcpy(b->ubuf, b->hbuf, b->sz);
+
+    xc__hypercall_buffer_free(xch, b);
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-set-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff -r 7b45202f78cd -r bf7fb64762eb tools/libxc/xc_private.h
--- a/tools/libxc/xc_private.h	Mon Sep 06 14:28:11 2010 +0100
+++ b/tools/libxc/xc_private.h	Mon Sep 06 14:28:11 2010 +0100
@@ -105,6 +105,62 @@ void unlock_pages(xc_interface *xch, voi
 
 int hcall_buf_prep(xc_interface *xch, void **addr, size_t len);
 void hcall_buf_release(xc_interface *xch, void **addr, size_t len);
+
+/*
+ * HYPERCALL ARGUMENT BUFFERS
+ *
+ * Augment the public hypercall buffer interface with the ability to
+ * bounce between user provided buffers and hypercall safe memory.
+ *
+ * Use xc_hypercall_bounce_pre/post instead of
+ * xc_hypercall_buffer_alloc/free(_pages).  The specified user
+ * supplied buffer is automatically copied in/out of the hypercall
+ * safe memory.
+ */
+enum {
+    XC_HYPERCALL_BUFFER_BOUNCE_NONE = 0,
+    XC_HYPERCALL_BUFFER_BOUNCE_IN   = 1,
+    XC_HYPERCALL_BUFFER_BOUNCE_OUT  = 2,
+    XC_HYPERCALL_BUFFER_BOUNCE_BOTH = 3
+};
+
+/*
+ * Declare a named bounce buffer.
+ *
+ * See the definition of DECLARE_NAMED_HYPERCALL_BUFFER for details of
+ * when it is acceptable to use this declaration rather than
+ * DECLARE_HYPERCALL_BOUNCE.
+ */
+#define DECLARE_NAMED_HYPERCALL_BOUNCE(_name, _ubuf, _sz, _dir) \
+    xc_hypercall_buffer_t XC__HYPERCALL_BUFFER_NAME(_name) = {  \
+        .hbuf = NULL,                                           \
+        .param_shadow = NULL,                                   \
+        .sz = _sz, .dir = _dir, .ubuf = _ubuf,                  \
+    }
+
+/*
+ * Declare a bounce buffer shadowing the named user data pointer.
+ */
+#define DECLARE_HYPERCALL_BOUNCE(_ubuf, _sz, _dir) DECLARE_NAMED_HYPERCALL_BOUNCE(_ubuf, _ubuf, _sz, _dir)
+
+/*
+ * Set the size of data to bounce. Useful when the size is not known
+ * when the bounce buffer is declared.
+ */
+#define HYPERCALL_BOUNCE_SET_SIZE(_buf, _sz) do { (HYPERCALL_BUFFER(_buf))->sz = _sz; } while (0)
+
+/*
+ * Initialise and free hypercall safe memory. Takes care of any required
+ * copying.
+ */
+int xc__hypercall_bounce_pre(xc_interface *xch, xc_hypercall_buffer_t *bounce);
+#define xc_hypercall_bounce_pre(_xch, _name) xc__hypercall_bounce_pre(_xch, HYPERCALL_BUFFER(_name))
+void xc__hypercall_bounce_post(xc_interface *xch, xc_hypercall_buffer_t *bounce);
+#define xc_hypercall_bounce_post(_xch, _name) xc__hypercall_bounce_post(_xch, HYPERCALL_BUFFER(_name))
+
+/*
+ * Hypercall interfaces.
+ */
 
 int do_xen_hypercall(xc_interface *xch, privcmd_hypercall_t *hypercall);
 
diff -r 7b45202f78cd -r bf7fb64762eb tools/libxc/xenctrl.h
--- a/tools/libxc/xenctrl.h	Mon Sep 06 14:28:11 2010 +0100
+++ b/tools/libxc/xenctrl.h	Mon Sep 06 14:28:11 2010 +0100
@@ -147,6 +147,149 @@ enum xc_open_flags {
  * @return 0 on success, -1 otherwise.
  */
 int xc_interface_close(xc_interface *xch);
+
+/*
+ * HYPERCALL SAFE MEMORY BUFFER
+ *
+ * Ensure that memory which is passed to a hypercall has been
+ * specially allocated in order to be safe to access from the
+ * hypervisor.
+ *
+ * Each user data pointer is shadowed by an xc_hypercall_buffer data
+ * structure. You should never define an xc_hypercall_buffer type
+ * directly, instead use the DECLARE_HYPERCALL_BUFFER* macros below.
+ *
+ * The structure should be considered opaque and all access should be
+ * via the macros and helper functions defined below.
+ *
+ * Once the buffer is declared the user is responsible for explicitly
+ * allocating and releasing the memory using
+ * xc_hypercall_buffer_alloc(_pages) and
+ * xc_hypercall_buffer_free(_pages).
+ *
+ * Once the buffer has been allocated the user can initialise the data
+ * via the normal pointer. The xc_hypercall_buffer structure is
+ * transparently referenced by the helper macros (such as
+ * xen_set_guest_handle) in order to check at compile time that the
+ * correct type of memory is being used.
+ */
+struct xc_hypercall_buffer {
+    /* Hypercall safe memory buffer. */
+    void *hbuf;
+
+    /*
+     * Reference to xc_hypercall_buffer passed as argument to the
+     * current function.
+     */
+    struct xc_hypercall_buffer *param_shadow;
+
+    /*
+     * Direction of copy for bounce buffering.
+     */
+    int dir;
+
+    /* Used iff dir != 0. */
+    void *ubuf;
+    size_t sz;
+};
+typedef struct xc_hypercall_buffer xc_hypercall_buffer_t;
+
+/*
+ * Construct the name of the hypercall buffer for a given variable.
+ * For internal use only
+ */
+#define XC__HYPERCALL_BUFFER_NAME(_name) xc__hypercall_buffer_##_name
+
+/*
+ * Returns the hypercall_buffer associated with a variable.
+ */
+#define HYPERCALL_BUFFER(_name)                                                              \
+    ({  xc_hypercall_buffer_t _val1;                                                         \
+        typeof(XC__HYPERCALL_BUFFER_NAME(_name)) *_val2 = &XC__HYPERCALL_BUFFER_NAME(_name); \
+        (void)(&_val1 == _val2);                                                             \
+        (_val2)->param_shadow ? (_val2)->param_shadow : (_val2);                             \
+     })
+
+#define HYPERCALL_BUFFER_INIT_NO_BOUNCE .dir = 0, .sz = 0, .ubuf = (void *)-1
+
+/*
+ * Defines a named hypercall buffer.
+ *
+ * Normally you should use DECLARE_HYPERCALL_BUFFER (see below).
+ *
+ * This declaration should only be used when the user pointer is
+ * non-trivial, e.g. when it is contained within an existing data
+ * structure.
+ */
+#define DECLARE_NAMED_HYPERCALL_BUFFER(_name)                  \
+    xc_hypercall_buffer_t XC__HYPERCALL_BUFFER_NAME(_name) = { \
+        .hbuf = NULL,                                          \
+        .param_shadow = NULL,                                  \
+        HYPERCALL_BUFFER_INIT_NO_BOUNCE                        \
+    }
+
+/*
+ * Defines a hypercall buffer and user pointer with _name of _type.
+ *
+ * The user accesses the data as normal via _name which will be
+ * transparently converted to the hypercall buffer as necessary.
+ */
+#define DECLARE_HYPERCALL_BUFFER(_type, _name) \
+    _type *_name = NULL;                       \
+    DECLARE_NAMED_HYPERCALL_BUFFER(_name)
+
+/*
+ * Declare the necessary data structure to allow a hypercall buffer
+ * passed as an argument to a function to be used in the normal way.
+ */
+#define DECLARE_HYPERCALL_BUFFER_ARGUMENT(_name)               \
+    xc_hypercall_buffer_t XC__HYPERCALL_BUFFER_NAME(_name) = { \
+        .hbuf = (void *)-1,                                    \
+        .param_shadow = _name,                                 \
+        HYPERCALL_BUFFER_INIT_NO_BOUNCE                        \
+    }
+
+/*
+ * Get the hypercall buffer data pointer in a form suitable for use
+ * directly as a hypercall argument.
+ */
+#define HYPERCALL_BUFFER_AS_ARG(_name)                                             \
+    ({  xc_hypercall_buffer_t _val1;                                               \
+        typeof(XC__HYPERCALL_BUFFER_NAME(_name)) *_val2 = HYPERCALL_BUFFER(_name); \
+        (void)(&_val1 == _val2);                                                   \
+        (unsigned long)(_val2)->hbuf;                                              \
+     })
+
+/*
+ * Set a xen_guest_handle in a type safe manner, ensuring that the
+ * data pointer has been correctly allocated.
+ */
+#define xc_set_xen_guest_handle(_hnd, _val)                                      \
+    do {                                                                         \
+        xc_hypercall_buffer_t _val1;                                             \
+        typeof(XC__HYPERCALL_BUFFER_NAME(_val)) *_val2 = HYPERCALL_BUFFER(_val); \
        (void)(&_val1 == _val2);                                                 \
+        set_xen_guest_handle_raw(_hnd, (_val2)->hbuf);                           \
+    } while (0)
+
+/* Use with xc_set_xen_guest_handle in place of NULL */
+extern xc_hypercall_buffer_t XC__HYPERCALL_BUFFER_NAME(HYPERCALL_BUFFER_NULL);
+
+/*
+ * Allocate and free hypercall buffers with byte granularity.
+ */
+void *xc__hypercall_buffer_alloc(xc_interface *xch, xc_hypercall_buffer_t *b, size_t size);
+#define xc_hypercall_buffer_alloc(_xch, _name, _size) xc__hypercall_buffer_alloc(_xch, HYPERCALL_BUFFER(_name), _size)
+void xc__hypercall_buffer_free(xc_interface *xch, xc_hypercall_buffer_t *b);
+#define xc_hypercall_buffer_free(_xch, _name) xc__hypercall_buffer_free(_xch, HYPERCALL_BUFFER(_name))
+
+/*
+ * Allocate and free hypercall buffers with page alignment.
+ */
+void *xc__hypercall_buffer_alloc_pages(xc_interface *xch, xc_hypercall_buffer_t *b, int nr_pages);
+#define xc_hypercall_buffer_alloc_pages(_xch, _name, _nr) xc__hypercall_buffer_alloc_pages(_xch, HYPERCALL_BUFFER(_name), _nr)
+void xc__hypercall_buffer_free_pages(xc_interface *xch, xc_hypercall_buffer_t *b, int nr_pages);
+#define xc_hypercall_buffer_free_pages(_xch, _name, _nr) xc__hypercall_buffer_free_pages(_xch, HYPERCALL_BUFFER(_name), _nr)
 
 /*
  * DOMAIN DEBUGGING FUNCTIONS

^ permalink raw reply	[flat|nested] 34+ messages in thread

* [PATCH 11 of 24] libxc: convert xc_version over to hypercall buffers
  2010-09-06 13:38 [PATCH 00 of 24] [RFC] libxc: hypercall buffers Ian Campbell
                   ` (9 preceding siblings ...)
  2010-09-06 13:38 ` [PATCH 10 of 24] libxc: infrastructure for hypercall safe data buffers Ian Campbell
@ 2010-09-06 13:38 ` Ian Campbell
  2010-09-06 13:38 ` [PATCH 12 of 24] libxc: convert domctl interfaces " Ian Campbell
                   ` (14 subsequent siblings)
  25 siblings, 0 replies; 34+ messages in thread
From: Ian Campbell @ 2010-09-06 13:38 UTC (permalink / raw)
  To: xen-devel; +Cc: Ian Campbell

# HG changeset patch
# User Ian Campbell <ian.campbell@citrix.com>
# Date 1283779691 -3600
# Node ID f3b26cbd7eb5cc0ce7321aaae9eefb821192e86f
# Parent  bf7fb64762eb7decea9a6804460f0f966496ba07
libxc: convert xc_version over to hypercall buffers

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>

diff -r bf7fb64762eb -r f3b26cbd7eb5 tools/libxc/xc_private.c
--- a/tools/libxc/xc_private.c	Mon Sep 06 14:28:11 2010 +0100
+++ b/tools/libxc/xc_private.c	Mon Sep 06 14:28:11 2010 +0100
@@ -629,42 +629,46 @@ int xc_sysctl(xc_interface *xch, struct 
 
 int xc_version(xc_interface *xch, int cmd, void *arg)
 {
-    int rc, argsize = 0;
+    DECLARE_HYPERCALL_BOUNCE(arg, 0, XC_HYPERCALL_BUFFER_BOUNCE_OUT); /* Size unknown until cmd decoded */
+    size_t sz = 0;
+    int rc;
 
     switch ( cmd )
     {
     case XENVER_extraversion:
-        argsize = sizeof(xen_extraversion_t);
+        sz = sizeof(xen_extraversion_t);
         break;
     case XENVER_compile_info:
-        argsize = sizeof(xen_compile_info_t);
+        sz = sizeof(xen_compile_info_t);
         break;
     case XENVER_capabilities:
-        argsize = sizeof(xen_capabilities_info_t);
+        sz = sizeof(xen_capabilities_info_t);
         break;
     case XENVER_changeset:
-        argsize = sizeof(xen_changeset_info_t);
+        sz = sizeof(xen_changeset_info_t);
         break;
     case XENVER_platform_parameters:
-        argsize = sizeof(xen_platform_parameters_t);
+        sz = sizeof(xen_platform_parameters_t);
         break;
     }
 
-    if ( (argsize != 0) && (lock_pages(xch, arg, argsize) != 0) )
+    HYPERCALL_BOUNCE_SET_SIZE(arg, sz);
+
+    if ( (sz != 0) && xc_hypercall_bounce_pre(xch, arg) )
     {
         PERROR("Could not lock memory for version hypercall");
         return -ENOMEM;
     }
 
 #ifdef VALGRIND
-    if (argsize != 0)
-        memset(arg, 0, argsize);
+    if (sz != 0)
+        memset(HYPERCALL_BUFFER(arg)->hbuf, 0, sz);
 #endif
 
-    rc = do_xen_version(xch, cmd, arg);
+    rc = do_xen_version(xch, cmd, HYPERCALL_BUFFER(arg));
 
-    if ( argsize != 0 )
-        unlock_pages(xch, arg, argsize);
+    if ( sz != 0 )
+        xc_hypercall_bounce_post(xch, arg);
 
     return rc;
 }
diff -r bf7fb64762eb -r f3b26cbd7eb5 tools/libxc/xc_private.h
--- a/tools/libxc/xc_private.h	Mon Sep 06 14:28:11 2010 +0100
+++ b/tools/libxc/xc_private.h	Mon Sep 06 14:28:11 2010 +0100
@@ -164,13 +164,14 @@ void xc__hypercall_bounce_post(xc_interf
 
 int do_xen_hypercall(xc_interface *xch, privcmd_hypercall_t *hypercall);
 
-static inline int do_xen_version(xc_interface *xch, int cmd, void *dest)
+static inline int do_xen_version(xc_interface *xch, int cmd, xc_hypercall_buffer_t *dest)
 {
     DECLARE_HYPERCALL;
+    DECLARE_HYPERCALL_BUFFER_ARGUMENT(dest);
 
     hypercall.op     = __HYPERVISOR_xen_version;
     hypercall.arg[0] = (unsigned long) cmd;
-    hypercall.arg[1] = (unsigned long) dest;
+    hypercall.arg[1] = HYPERCALL_BUFFER_AS_ARG(dest);
 
     return do_xen_hypercall(xch, &hypercall);
 }

^ permalink raw reply	[flat|nested] 34+ messages in thread

* [PATCH 12 of 24] libxc: convert domctl interfaces over to hypercall buffers
  2010-09-06 13:38 [PATCH 00 of 24] [RFC] libxc: hypercall buffers Ian Campbell
                   ` (10 preceding siblings ...)
  2010-09-06 13:38 ` [PATCH 11 of 24] libxc: convert xc_version over to hypercall buffers Ian Campbell
@ 2010-09-06 13:38 ` Ian Campbell
  2010-09-06 13:38 ` [PATCH 13 of 24] libxc: convert shadow domctl interfaces and save/restore " Ian Campbell
                   ` (13 subsequent siblings)
  25 siblings, 0 replies; 34+ messages in thread
From: Ian Campbell @ 2010-09-06 13:38 UTC (permalink / raw)
  To: xen-devel; +Cc: Ian Campbell

# HG changeset patch
# User Ian Campbell <ian.campbell@citrix.com>
# Date 1283779691 -3600
# Node ID 7f735088ac1d169d140d978441c772b62083fae0
# Parent  f3b26cbd7eb5cc0ce7321aaae9eefb821192e86f
libxc: convert domctl interfaces over to hypercall buffers

(defer save/restore and shadow-related interfaces until a later patch;
the conversion pattern is sketched below)
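
Each conversion follows the same pattern; a sketch based on the
xc_vcpu_setcontext hunk below:

    DECLARE_HYPERCALL_BOUNCE(ctxt, sizeof(vcpu_guest_context_any_t),
                             XC_HYPERCALL_BUFFER_BOUNCE_IN);

    /* Replaces lock_pages(): allocate safe memory, copy user data in. */
    if ( xc_hypercall_bounce_pre(xch, ctxt) )
        return -1;

    xc_set_xen_guest_handle(domctl.u.vcpucontext.ctxt, ctxt);
    rc = do_domctl(xch, &domctl);

    /* Replaces unlock_pages(): copy out (for OUT/BOTH) and free. */
    xc_hypercall_bounce_post(xch, ctxt);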

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>

diff -r f3b26cbd7eb5 -r 7f735088ac1d tools/libxc/ia64/xc_ia64_stubs.c
--- a/tools/libxc/ia64/xc_ia64_stubs.c	Mon Sep 06 14:28:11 2010 +0100
+++ b/tools/libxc/ia64/xc_ia64_stubs.c	Mon Sep 06 14:28:11 2010 +0100
@@ -46,7 +46,7 @@ xc_ia64_get_pfn_list(xc_interface *xch, 
     domctl.u.getmemlist.max_pfns = nr_pages;
     domctl.u.getmemlist.start_pfn = start_page;
     domctl.u.getmemlist.num_pfns = 0;
-    set_xen_guest_handle(domctl.u.getmemlist.buffer, pfn_buf);
+    xc_set_xen_guest_handle(domctl.u.getmemlist.buffer, pfn_buf);
 
     if (lock_pages(pfn_buf, nr_pages * sizeof(xen_pfn_t)) != 0) {
         PERROR("Could not lock pfn list buffer");
diff -r f3b26cbd7eb5 -r 7f735088ac1d tools/libxc/xc_dom_boot.c
--- a/tools/libxc/xc_dom_boot.c	Mon Sep 06 14:28:11 2010 +0100
+++ b/tools/libxc/xc_dom_boot.c	Mon Sep 06 14:28:11 2010 +0100
@@ -61,9 +61,10 @@ static int setup_hypercall_page(struct x
     return rc;
 }
 
-static int launch_vm(xc_interface *xch, domid_t domid, void *ctxt)
+static int launch_vm(xc_interface *xch, domid_t domid, xc_hypercall_buffer_t *ctxt)
 {
     DECLARE_DOMCTL;
+    DECLARE_HYPERCALL_BUFFER_ARGUMENT(ctxt);
     int rc;
 
     xc_dom_printf(xch, "%s: called, ctxt=%p", __FUNCTION__, ctxt);
@@ -71,7 +72,7 @@ static int launch_vm(xc_interface *xch, 
     domctl.cmd = XEN_DOMCTL_setvcpucontext;
     domctl.domain = domid;
     domctl.u.vcpucontext.vcpu = 0;
-    set_xen_guest_handle(domctl.u.vcpucontext.ctxt, ctxt);
+    xc_set_xen_guest_handle(domctl.u.vcpucontext.ctxt, ctxt);
     rc = do_domctl(xch, &domctl);
     if ( rc != 0 )
         xc_dom_panic(xch, XC_INTERNAL_ERROR,
@@ -202,8 +203,12 @@ int xc_dom_boot_image(struct xc_dom_imag
 int xc_dom_boot_image(struct xc_dom_image *dom)
 {
     DECLARE_DOMCTL;
-    vcpu_guest_context_any_t ctxt;
+    DECLARE_HYPERCALL_BUFFER(vcpu_guest_context_any_t, ctxt);
     int rc;
+
+    ctxt = xc_hypercall_buffer_alloc(dom->xch, ctxt, sizeof(*ctxt));
+    if ( ctxt == NULL )
+        return -1;
 
     DOMPRINTF_CALLED(dom->xch);
 
@@ -260,12 +265,13 @@ int xc_dom_boot_image(struct xc_dom_imag
         return rc;
 
     /* let the vm run */
-    memset(&ctxt, 0, sizeof(ctxt));
-    if ( (rc = dom->arch_hooks->vcpu(dom, &ctxt)) != 0 )
+    memset(ctxt, 0, sizeof(*ctxt));
+    if ( (rc = dom->arch_hooks->vcpu(dom, ctxt)) != 0 )
         return rc;
     xc_dom_unmap_all(dom);
-    rc = launch_vm(dom->xch, dom->guest_domid, &ctxt);
+    rc = launch_vm(dom->xch, dom->guest_domid, HYPERCALL_BUFFER(ctxt));
 
+    xc_hypercall_buffer_free(dom->xch, ctxt);
     return rc;
 }
 
diff -r f3b26cbd7eb5 -r 7f735088ac1d tools/libxc/xc_domain.c
--- a/tools/libxc/xc_domain.c	Mon Sep 06 14:28:11 2010 +0100
+++ b/tools/libxc/xc_domain.c	Mon Sep 06 14:28:11 2010 +0100
@@ -115,36 +115,31 @@ int xc_vcpu_setaffinity(xc_interface *xc
                         uint64_t *cpumap, int cpusize)
 {
     DECLARE_DOMCTL;
+    DECLARE_HYPERCALL_BUFFER(uint8_t, local);
     int ret = -1;
-    uint8_t *local = malloc(cpusize); 
 
-    if(local == NULL)
+    local = xc_hypercall_buffer_alloc(xch, local, cpusize);
+    if ( local == NULL )
     {
-        PERROR("Could not alloc memory for Xen hypercall");
+        PERROR("Could not alloc locked memory for Xen hypercall");
         goto out;
     }
+
     domctl.cmd = XEN_DOMCTL_setvcpuaffinity;
     domctl.domain = (domid_t)domid;
     domctl.u.vcpuaffinity.vcpu    = vcpu;
 
     bitmap_64_to_byte(local, cpumap, cpusize * 8);
 
-    set_xen_guest_handle(domctl.u.vcpuaffinity.cpumap.bitmap, local);
+    xc_set_xen_guest_handle(domctl.u.vcpuaffinity.cpumap.bitmap, local);
 
     domctl.u.vcpuaffinity.cpumap.nr_cpus = cpusize * 8;
-    
-    if ( lock_pages(xch, local, cpusize) != 0 )
-    {
-        PERROR("Could not lock memory for Xen hypercall");
-        goto out;
-    }
 
     ret = do_domctl(xch, &domctl);
 
-    unlock_pages(xch, local, cpusize);
+    xc_hypercall_buffer_free(xch, local);
 
  out:
-    free(local);
     return ret;
 }
 
@@ -155,9 +150,10 @@ int xc_vcpu_getaffinity(xc_interface *xc
                         uint64_t *cpumap, int cpusize)
 {
     DECLARE_DOMCTL;
+    DECLARE_HYPERCALL_BUFFER(uint8_t, local);
     int ret = -1;
-    uint8_t * local = malloc(cpusize);
 
+    local = xc_hypercall_buffer_alloc(xch, local, cpusize);
     if(local == NULL)
     {
         PERROR("Could not alloc memory for Xen hypercall");
@@ -168,22 +164,15 @@ int xc_vcpu_getaffinity(xc_interface *xc
     domctl.domain = (domid_t)domid;
     domctl.u.vcpuaffinity.vcpu = vcpu;
 
-
-    set_xen_guest_handle(domctl.u.vcpuaffinity.cpumap.bitmap, local);
+    xc_set_xen_guest_handle(domctl.u.vcpuaffinity.cpumap.bitmap, local);
     domctl.u.vcpuaffinity.cpumap.nr_cpus = cpusize * 8;
-    
-    if ( lock_pages(xch, local, sizeof(local)) != 0 )
-    {
-        PERROR("Could not lock memory for Xen hypercall");
-        goto out;
-    }
 
     ret = do_domctl(xch, &domctl);
 
-    unlock_pages(xch, local, sizeof (local));
     bitmap_byte_to_64(cpumap, local, cpusize * 8);
+
+    xc_hypercall_buffer_free(xch, local);
 out:
-    free(local);
     return ret;
 }
 
@@ -283,20 +272,23 @@ int xc_domain_hvm_getcontext(xc_interfac
 {
     int ret;
     DECLARE_DOMCTL;
+    DECLARE_HYPERCALL_BOUNCE(ctxt_buf, size, XC_HYPERCALL_BUFFER_BOUNCE_OUT);
+
+    if ( ctxt_buf && xc_hypercall_bounce_pre(xch, ctxt_buf) )
+        return -1;
 
     domctl.cmd = XEN_DOMCTL_gethvmcontext;
     domctl.domain = (domid_t)domid;
     domctl.u.hvmcontext.size = size;
-    set_xen_guest_handle(domctl.u.hvmcontext.buffer, ctxt_buf);
-
-    if ( ctxt_buf ) 
-        if ( (ret = lock_pages(xch, ctxt_buf, size)) != 0 )
-            return ret;
+    if ( ctxt_buf )
+        xc_set_xen_guest_handle(domctl.u.hvmcontext.buffer, ctxt_buf);
+    else
+        xc_set_xen_guest_handle(domctl.u.hvmcontext.buffer, HYPERCALL_BUFFER_NULL);
 
     ret = do_domctl(xch, &domctl);
 
-    if ( ctxt_buf ) 
-        unlock_pages(xch, ctxt_buf, size);
+    if ( ctxt_buf )
+        xc_hypercall_bounce_post(xch, ctxt_buf);
 
     return (ret < 0 ? -1 : domctl.u.hvmcontext.size);
 }
@@ -312,23 +304,21 @@ int xc_domain_hvm_getcontext_partial(xc_
 {
     int ret;
     DECLARE_DOMCTL;
+    DECLARE_HYPERCALL_BOUNCE(ctxt_buf, size, XC_HYPERCALL_BUFFER_BOUNCE_OUT);
 
-    if ( !ctxt_buf ) 
-        return -EINVAL;
+    if ( !ctxt_buf || xc_hypercall_bounce_pre(xch, ctxt_buf) )
+        return -1;
 
     domctl.cmd = XEN_DOMCTL_gethvmcontext_partial;
     domctl.domain = (domid_t) domid;
     domctl.u.hvmcontext_partial.type = typecode;
     domctl.u.hvmcontext_partial.instance = instance;
-    set_xen_guest_handle(domctl.u.hvmcontext_partial.buffer, ctxt_buf);
+    xc_set_xen_guest_handle(domctl.u.hvmcontext_partial.buffer, ctxt_buf);
 
-    if ( (ret = lock_pages(xch, ctxt_buf, size)) != 0 )
-        return ret;
-    
     ret = do_domctl(xch, &domctl);
 
-    if ( ctxt_buf ) 
-        unlock_pages(xch, ctxt_buf, size);
+    if ( ctxt_buf )
+        xc_hypercall_bounce_post(xch, ctxt_buf);
 
     return ret ? -1 : 0;
 }
@@ -341,18 +331,19 @@ int xc_domain_hvm_setcontext(xc_interfac
 {
     int ret;
     DECLARE_DOMCTL;
+    DECLARE_HYPERCALL_BOUNCE(ctxt_buf, size, XC_HYPERCALL_BUFFER_BOUNCE_IN);
+
+    if ( xc_hypercall_bounce_pre(xch, ctxt_buf) )
+        return -1;
 
     domctl.cmd = XEN_DOMCTL_sethvmcontext;
     domctl.domain = domid;
     domctl.u.hvmcontext.size = size;
-    set_xen_guest_handle(domctl.u.hvmcontext.buffer, ctxt_buf);
-
-    if ( (ret = lock_pages(xch, ctxt_buf, size)) != 0 )
-        return ret;
+    xc_set_xen_guest_handle(domctl.u.hvmcontext.buffer, ctxt_buf);
 
     ret = do_domctl(xch, &domctl);
 
-    unlock_pages(xch, ctxt_buf, size);
+    xc_hypercall_bounce_post(xch, ctxt_buf);
 
     return ret;
 }
@@ -364,18 +355,19 @@ int xc_vcpu_getcontext(xc_interface *xch
 {
     int rc;
     DECLARE_DOMCTL;
-    size_t sz = sizeof(vcpu_guest_context_any_t);
+    DECLARE_HYPERCALL_BOUNCE(ctxt, sizeof(vcpu_guest_context_any_t), XC_HYPERCALL_BUFFER_BOUNCE_OUT);
+
+    if ( xc_hypercall_bounce_pre(xch, ctxt) )
+        return -1;
 
     domctl.cmd = XEN_DOMCTL_getvcpucontext;
     domctl.domain = (domid_t)domid;
     domctl.u.vcpucontext.vcpu   = (uint16_t)vcpu;
-    set_xen_guest_handle(domctl.u.vcpucontext.ctxt, &ctxt->c);
+    xc_set_xen_guest_handle(domctl.u.vcpucontext.ctxt, ctxt);
 
-    
-    if ( (rc = lock_pages(xch, ctxt, sz)) != 0 )
-        return rc;
     rc = do_domctl(xch, &domctl);
-    unlock_pages(xch, ctxt, sz);
+
+    xc_hypercall_bounce_post(xch, ctxt);
 
     return rc;
 }
@@ -559,22 +551,24 @@ int xc_domain_get_tsc_info(xc_interface 
 {
     int rc;
     DECLARE_DOMCTL;
-    xen_guest_tsc_info_t info = { 0 };
+    DECLARE_HYPERCALL_BUFFER(xen_guest_tsc_info_t, info);
+
+    info = xc_hypercall_buffer_alloc(xch, info, sizeof(*info));
+    if ( info == NULL )
+        return -ENOMEM;
 
     domctl.cmd = XEN_DOMCTL_gettscinfo;
     domctl.domain = (domid_t)domid;
-    set_xen_guest_handle(domctl.u.tsc_info.out_info, &info);
-    if ( (rc = lock_pages(xch, &info, sizeof(info))) != 0 )
-        return rc;
+    xc_set_xen_guest_handle(domctl.u.tsc_info.out_info, info);
     rc = do_domctl(xch, &domctl);
     if ( rc == 0 )
     {
-        *tsc_mode = info.tsc_mode;
-        *elapsed_nsec = info.elapsed_nsec;
-        *gtsc_khz = info.gtsc_khz;
-        *incarnation = info.incarnation;
+        *tsc_mode = info->tsc_mode;
+        *elapsed_nsec = info->elapsed_nsec;
+        *gtsc_khz = info->gtsc_khz;
+        *incarnation = info->incarnation;
     }
-    unlock_pages(xch, &info,sizeof(info));
+    xc_hypercall_buffer_free(xch, info);
     return rc;
 }
 
@@ -840,8 +834,8 @@ int xc_vcpu_setcontext(xc_interface *xch
                        vcpu_guest_context_any_t *ctxt)
 {
     DECLARE_DOMCTL;
+    DECLARE_HYPERCALL_BOUNCE(ctxt, sizeof(vcpu_guest_context_any_t), XC_HYPERCALL_BUFFER_BOUNCE_IN);
     int rc;
-    size_t sz = sizeof(vcpu_guest_context_any_t);
 
     if (ctxt == NULL)
     {
@@ -849,16 +843,17 @@ int xc_vcpu_setcontext(xc_interface *xch
         return -1;
     }
 
+    if ( xc_hypercall_bounce_pre(xch, ctxt) )
+        return -1;
+
     domctl.cmd = XEN_DOMCTL_setvcpucontext;
     domctl.domain = domid;
     domctl.u.vcpucontext.vcpu = vcpu;
-    set_xen_guest_handle(domctl.u.vcpucontext.ctxt, &ctxt->c);
+    xc_set_xen_guest_handle(domctl.u.vcpucontext.ctxt, ctxt);
 
-    if ( (rc = lock_pages(xch, ctxt, sz)) != 0 )
-        return rc;
     rc = do_domctl(xch, &domctl);
-    
-    unlock_pages(xch, ctxt, sz);
+
+    xc_hypercall_bounce_post(xch, ctxt);
 
     return rc;
 }
@@ -984,6 +979,13 @@ int xc_get_device_group(
 {
     int rc;
     DECLARE_DOMCTL;
+    DECLARE_HYPERCALL_BOUNCE(sdev_array, max_sdevs * sizeof(*sdev_array), XC_HYPERCALL_BUFFER_BOUNCE_OUT);
+
+    if ( xc_hypercall_bounce_pre(xch, sdev_array) )
+    {
+        PERROR("Could not lock memory for xc_get_device_group");
+        return -1;
+    }
 
     domctl.cmd = XEN_DOMCTL_get_device_group;
     domctl.domain = (domid_t)domid;
@@ -991,17 +993,14 @@ int xc_get_device_group(
     domctl.u.get_device_group.machine_bdf = machine_bdf;
     domctl.u.get_device_group.max_sdevs = max_sdevs;
 
-    set_xen_guest_handle(domctl.u.get_device_group.sdev_array, sdev_array);
+    xc_set_xen_guest_handle(domctl.u.get_device_group.sdev_array, sdev_array);
 
-    if ( lock_pages(xch, sdev_array, max_sdevs * sizeof(*sdev_array)) != 0 )
-    {
-        PERROR("Could not lock memory for xc_get_device_group");
-        return -ENOMEM;
-    }
     rc = do_domctl(xch, &domctl);
-    unlock_pages(xch, sdev_array, max_sdevs * sizeof(*sdev_array));
 
     *num_sdevs = domctl.u.get_device_group.num_sdevs;
+
+    xc_hypercall_bounce_post(xch, sdev_array);
+
     return rc;
 }
 
diff -r f3b26cbd7eb5 -r 7f735088ac1d tools/libxc/xc_private.c
--- a/tools/libxc/xc_private.c	Mon Sep 06 14:28:11 2010 +0100
+++ b/tools/libxc/xc_private.c	Mon Sep 06 14:28:11 2010 +0100
@@ -322,12 +322,18 @@ int xc_get_pfn_type_batch(xc_interface *
 int xc_get_pfn_type_batch(xc_interface *xch, uint32_t dom,
                           unsigned int num, xen_pfn_t *arr)
 {
+    int rc;
     DECLARE_DOMCTL;
+    DECLARE_HYPERCALL_BOUNCE(arr, sizeof(*arr) * num, XC_HYPERCALL_BUFFER_BOUNCE_BOTH);
+    if ( xc_hypercall_bounce_pre(xch, arr) )
+        return -1;
     domctl.cmd = XEN_DOMCTL_getpageframeinfo3;
     domctl.domain = (domid_t)dom;
     domctl.u.getpageframeinfo3.num = num;
-    set_xen_guest_handle(domctl.u.getpageframeinfo3.array, arr);
-    return do_domctl(xch, &domctl);
+    xc_set_xen_guest_handle(domctl.u.getpageframeinfo3.array, arr);
+    rc = do_domctl(xch, &domctl);
+    xc_hypercall_bounce_post(xch, arr);
+    return rc;
 }
 
 int xc_mmuext_op(
@@ -557,25 +563,27 @@ int xc_get_pfn_list(xc_interface *xch,
                     unsigned long max_pfns)
 {
     DECLARE_DOMCTL;
+    DECLARE_HYPERCALL_BOUNCE(pfn_buf, max_pfns * sizeof(*pfn_buf), XC_HYPERCALL_BUFFER_BOUNCE_OUT);
     int ret;
-    domctl.cmd = XEN_DOMCTL_getmemlist;
-    domctl.domain   = (domid_t)domid;
-    domctl.u.getmemlist.max_pfns = max_pfns;
-    set_xen_guest_handle(domctl.u.getmemlist.buffer, pfn_buf);
 
 #ifdef VALGRIND
     memset(pfn_buf, 0, max_pfns * sizeof(*pfn_buf));
 #endif
 
-    if ( lock_pages(xch, pfn_buf, max_pfns * sizeof(*pfn_buf)) != 0 )
+    if ( xc_hypercall_bounce_pre(xch, pfn_buf) )
     {
         PERROR("xc_get_pfn_list: pfn_buf lock failed");
         return -1;
     }
 
+    domctl.cmd = XEN_DOMCTL_getmemlist;
+    domctl.domain   = (domid_t)domid;
+    domctl.u.getmemlist.max_pfns = max_pfns;
+    xc_set_xen_guest_handle(domctl.u.getmemlist.buffer, pfn_buf);
+
     ret = do_domctl(xch, &domctl);
 
-    unlock_pages(xch, pfn_buf, max_pfns * sizeof(*pfn_buf));
+    xc_hypercall_bounce_post(xch, pfn_buf);
 
     return (ret < 0) ? -1 : domctl.u.getmemlist.num_pfns;
 }
diff -r f3b26cbd7eb5 -r 7f735088ac1d tools/libxc/xc_private.h
--- a/tools/libxc/xc_private.h	Mon Sep 06 14:28:11 2010 +0100
+++ b/tools/libxc/xc_private.h	Mon Sep 06 14:28:11 2010 +0100
@@ -209,17 +209,18 @@ static inline int do_domctl(xc_interface
 {
     int ret = -1;
     DECLARE_HYPERCALL;
+    DECLARE_HYPERCALL_BOUNCE(domctl, sizeof(*domctl), XC_HYPERCALL_BUFFER_BOUNCE_BOTH);
 
-    if ( hcall_buf_prep(xch, (void **)&domctl, sizeof(*domctl)) != 0 )
+    domctl->interface_version = XEN_DOMCTL_INTERFACE_VERSION;
+
+    if ( xc_hypercall_bounce_pre(xch, domctl) )
     {
         PERROR("Could not lock memory for Xen hypercall");
         goto out1;
     }
 
-    domctl->interface_version = XEN_DOMCTL_INTERFACE_VERSION;
-
     hypercall.op     = __HYPERVISOR_domctl;
-    hypercall.arg[0] = (unsigned long)domctl;
+    hypercall.arg[0] = HYPERCALL_BUFFER_AS_ARG(domctl);
 
     if ( (ret = do_xen_hypercall(xch, &hypercall)) < 0 )
     {
@@ -228,8 +229,7 @@ static inline int do_domctl(xc_interface
                     " rebuild the user-space tool set?\n");
     }
 
-    hcall_buf_release(xch, (void **)&domctl, sizeof(*domctl));
-
+    xc_hypercall_bounce_post(xch, domctl);
  out1:
     return ret;
 }
diff -r f3b26cbd7eb5 -r 7f735088ac1d tools/libxc/xc_resume.c
--- a/tools/libxc/xc_resume.c	Mon Sep 06 14:28:11 2010 +0100
+++ b/tools/libxc/xc_resume.c	Mon Sep 06 14:28:11 2010 +0100
@@ -196,12 +196,6 @@ static int xc_domain_resume_any(xc_inter
         goto out;
     }
 
-    if ( lock_pages(xch, &ctxt, sizeof(ctxt)) )
-    {
-        ERROR("Unable to lock ctxt");
-        goto out;
-    }
-
     if ( xc_vcpu_getcontext(xch, domid, 0, &ctxt) )
     {
         ERROR("Could not get vcpu context");
@@ -235,7 +229,6 @@ static int xc_domain_resume_any(xc_inter
 
 #if defined(__i386__) || defined(__x86_64__)
  out:
-    unlock_pages(xch, (void *)&ctxt, sizeof ctxt);
     if (p2m)
         munmap(p2m, P2M_FL_ENTRIES*PAGE_SIZE);
     if (p2m_frame_list)

^ permalink raw reply	[flat|nested] 34+ messages in thread

* [PATCH 13 of 24] libxc: convert shadow domctl interfaces and save/restore over to hypercall buffers
  2010-09-06 13:38 [PATCH 00 of 24] [RFC] libxc: hypercall buffers Ian Campbell
                   ` (11 preceding siblings ...)
  2010-09-06 13:38 ` [PATCH 12 of 24] libxc: convert domctl interfaces " Ian Campbell
@ 2010-09-06 13:38 ` Ian Campbell
  2010-09-06 13:38 ` [PATCH 14 of 24] libxc: convert sysctl interfaces " Ian Campbell
                   ` (12 subsequent siblings)
  25 siblings, 0 replies; 34+ messages in thread
From: Ian Campbell @ 2010-09-06 13:38 UTC (permalink / raw)
  To: xen-devel; +Cc: Ian Campbell

# HG changeset patch
# User Ian Campbell <ian.campbell@citrix.com>
# Date 1283779691 -3600
# Node ID 2a5e84fe718ae25e91785643388411b70d4c013b
# Parent  7f735088ac1d169d140d978441c772b62083fae0
libxc: convert shadow domctl interfaces and save/restore over to hypercall buffers

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
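
To illustrate the calling convention adopted here: the caller allocates
hypercall-safe memory and passes an opaque handle, and the callee unwraps
that handle before placing it in a guest handle. A rough sketch (xc_do_foo
is illustrative only; the buffer, domctl and guest-handle macros are the
ones used in the diff below):

    /* Caller: allocate, pass the handle, free. */
    DECLARE_HYPERCALL_BUFFER(unsigned long, bitmap);

    bitmap = xc_hypercall_buffer_alloc_pages(xch, bitmap, NRPAGES(BITMAP_SIZE));
    if ( bitmap == NULL )
        return -1;
    rc = xc_do_foo(xch, domid, HYPERCALL_BUFFER(bitmap));
    xc_hypercall_buffer_free_pages(xch, bitmap, NRPAGES(BITMAP_SIZE));

    /* Callee: unwrap the handle argument before use. */
    int xc_do_foo(xc_interface *xch, uint32_t domid,
                  xc_hypercall_buffer_t *bitmap)
    {
        DECLARE_DOMCTL;
        DECLARE_HYPERCALL_BUFFER_ARGUMENT(bitmap);

        domctl.cmd = XEN_DOMCTL_shadow_op;
        domctl.domain = (domid_t)domid;
        xc_set_xen_guest_handle(domctl.u.shadow_op.dirty_bitmap, bitmap);
        return do_domctl(xch, &domctl);
    }

The NRPAGES macro added to xg_private.h below rounds a byte count up to a
whole number of pages: with 4k pages, NRPAGES(5000) is 2.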

diff -r 7f735088ac1d -r 2a5e84fe718a tools/libxc/xc_domain.c
--- a/tools/libxc/xc_domain.c	Mon Sep 06 14:28:11 2010 +0100
+++ b/tools/libxc/xc_domain.c	Mon Sep 06 14:28:11 2010 +0100
@@ -404,7 +404,7 @@ int xc_shadow_control(xc_interface *xch,
 int xc_shadow_control(xc_interface *xch,
                       uint32_t domid,
                       unsigned int sop,
-                      unsigned long *dirty_bitmap,
+                      xc_hypercall_buffer_t *dirty_bitmap,
                       unsigned long pages,
                       unsigned long *mb,
                       uint32_t mode,
@@ -412,14 +412,17 @@ int xc_shadow_control(xc_interface *xch,
 {
     int rc;
     DECLARE_DOMCTL;
+    DECLARE_HYPERCALL_BUFFER_ARGUMENT(dirty_bitmap);
+
     domctl.cmd = XEN_DOMCTL_shadow_op;
     domctl.domain = (domid_t)domid;
     domctl.u.shadow_op.op     = sop;
     domctl.u.shadow_op.pages  = pages;
     domctl.u.shadow_op.mb     = mb ? *mb : 0;
     domctl.u.shadow_op.mode   = mode;
-    set_xen_guest_handle(domctl.u.shadow_op.dirty_bitmap,
-                         (uint8_t *)dirty_bitmap);
+    if ( dirty_bitmap != NULL )
+        xc_set_xen_guest_handle(domctl.u.shadow_op.dirty_bitmap,
+                                dirty_bitmap);
 
     rc = do_domctl(xch, &domctl);
 
diff -r 7f735088ac1d -r 2a5e84fe718a tools/libxc/xc_domain_restore.c
--- a/tools/libxc/xc_domain_restore.c	Mon Sep 06 14:28:11 2010 +0100
+++ b/tools/libxc/xc_domain_restore.c	Mon Sep 06 14:28:11 2010 +0100
@@ -1063,7 +1063,7 @@ int xc_domain_restore(xc_interface *xch,
     shared_info_any_t *new_shared_info;
 
     /* A copy of the CPU context of the guest. */
-    vcpu_guest_context_any_t ctxt;
+    DECLARE_HYPERCALL_BUFFER(vcpu_guest_context_any_t, ctxt);
 
     /* A table containing the type of each PFN (/not/ MFN!). */
     unsigned long *pfn_type = NULL;
@@ -1112,6 +1112,15 @@ int xc_domain_restore(xc_interface *xch,
 
     if ( superpages )
         return 1;
+
+    ctxt = xc_hypercall_buffer_alloc(xch, ctxt, sizeof(*ctxt));
+
+    if ( ctxt == NULL )
+    {
+        PERROR("Unable to lock ctxt");
+        return 1;
+    }
+
 
     if ( (orig_io_fd_flags = fcntl(io_fd, F_GETFL, 0)) < 0 ) {
         PERROR("unable to read IO FD flags");
@@ -1539,26 +1548,20 @@ int xc_domain_restore(xc_interface *xch,
         }
     }
 
-    if ( lock_pages(xch, &ctxt, sizeof(ctxt)) )
-    {
-        PERROR("Unable to lock ctxt");
-        return 1;
-    }
-
     vcpup = tailbuf.u.pv.vcpubuf;
     for ( i = 0; i <= max_vcpu_id; i++ )
     {
         if ( !(vcpumap & (1ULL << i)) )
             continue;
 
-        memcpy(&ctxt, vcpup, ((dinfo->guest_width == 8) ? sizeof(ctxt.x64)
-                              : sizeof(ctxt.x32)));
-        vcpup += (dinfo->guest_width == 8) ? sizeof(ctxt.x64) : sizeof(ctxt.x32);
+        memcpy(ctxt, vcpup, ((dinfo->guest_width == 8) ? sizeof(ctxt->x64)
+                              : sizeof(ctxt->x32)));
+        vcpup += (dinfo->guest_width == 8) ? sizeof(ctxt->x64) : sizeof(ctxt->x32);
 
         DPRINTF("read VCPU %d\n", i);
 
         if ( !new_ctxt_format )
-            SET_FIELD(&ctxt, flags, GET_FIELD(&ctxt, flags) | VGCF_online);
+            SET_FIELD(ctxt, flags, GET_FIELD(ctxt, flags) | VGCF_online);
 
         if ( i == 0 )
         {
@@ -1566,7 +1569,7 @@ int xc_domain_restore(xc_interface *xch,
              * Uncanonicalise the suspend-record frame number and poke
              * resume record.
              */
-            pfn = GET_FIELD(&ctxt, user_regs.edx);
+            pfn = GET_FIELD(ctxt, user_regs.edx);
             if ( (pfn >= dinfo->p2m_size) ||
                  (pfn_type[pfn] != XEN_DOMCTL_PFINFO_NOTAB) )
             {
@@ -1574,7 +1577,7 @@ int xc_domain_restore(xc_interface *xch,
                 goto out;
             }
             mfn = ctx->p2m[pfn];
-            SET_FIELD(&ctxt, user_regs.edx, mfn);
+            SET_FIELD(ctxt, user_regs.edx, mfn);
             start_info = xc_map_foreign_range(
                 xch, dom, PAGE_SIZE, PROT_READ | PROT_WRITE, mfn);
             SET_FIELD(start_info, nr_pages, dinfo->p2m_size);
@@ -1589,15 +1592,15 @@ int xc_domain_restore(xc_interface *xch,
             munmap(start_info, PAGE_SIZE);
         }
         /* Uncanonicalise each GDT frame number. */
-        if ( GET_FIELD(&ctxt, gdt_ents) > 8192 )
+        if ( GET_FIELD(ctxt, gdt_ents) > 8192 )
         {
             ERROR("GDT entry count out of range");
             goto out;
         }
 
-        for ( j = 0; (512*j) < GET_FIELD(&ctxt, gdt_ents); j++ )
+        for ( j = 0; (512*j) < GET_FIELD(ctxt, gdt_ents); j++ )
         {
-            pfn = GET_FIELD(&ctxt, gdt_frames[j]);
+            pfn = GET_FIELD(ctxt, gdt_frames[j]);
             if ( (pfn >= dinfo->p2m_size) ||
                  (pfn_type[pfn] != XEN_DOMCTL_PFINFO_NOTAB) )
             {
@@ -1605,10 +1608,10 @@ int xc_domain_restore(xc_interface *xch,
                       j, (unsigned long)pfn);
                 goto out;
             }
-            SET_FIELD(&ctxt, gdt_frames[j], ctx->p2m[pfn]);
+            SET_FIELD(ctxt, gdt_frames[j], ctx->p2m[pfn]);
         }
         /* Uncanonicalise the page table base pointer. */
-        pfn = UNFOLD_CR3(GET_FIELD(&ctxt, ctrlreg[3]));
+        pfn = UNFOLD_CR3(GET_FIELD(ctxt, ctrlreg[3]));
 
         if ( pfn >= dinfo->p2m_size )
         {
@@ -1625,12 +1628,12 @@ int xc_domain_restore(xc_interface *xch,
                   (unsigned long)ctx->pt_levels<<XEN_DOMCTL_PFINFO_LTAB_SHIFT);
             goto out;
         }
-        SET_FIELD(&ctxt, ctrlreg[3], FOLD_CR3(ctx->p2m[pfn]));
+        SET_FIELD(ctxt, ctrlreg[3], FOLD_CR3(ctx->p2m[pfn]));
 
         /* Guest pagetable (x86/64) stored in otherwise-unused CR1. */
-        if ( (ctx->pt_levels == 4) && (ctxt.x64.ctrlreg[1] & 1) )
+        if ( (ctx->pt_levels == 4) && (ctxt->x64.ctrlreg[1] & 1) )
         {
-            pfn = UNFOLD_CR3(ctxt.x64.ctrlreg[1] & ~1);
+            pfn = UNFOLD_CR3(ctxt->x64.ctrlreg[1] & ~1);
             if ( pfn >= dinfo->p2m_size )
             {
                 ERROR("User PT base is bad: pfn=%lu p2m_size=%lu",
@@ -1645,12 +1648,12 @@ int xc_domain_restore(xc_interface *xch,
                       (unsigned long)ctx->pt_levels<<XEN_DOMCTL_PFINFO_LTAB_SHIFT);
                 goto out;
             }
-            ctxt.x64.ctrlreg[1] = FOLD_CR3(ctx->p2m[pfn]);
+            ctxt->x64.ctrlreg[1] = FOLD_CR3(ctx->p2m[pfn]);
         }
         domctl.cmd = XEN_DOMCTL_setvcpucontext;
         domctl.domain = (domid_t)dom;
         domctl.u.vcpucontext.vcpu = i;
-        set_xen_guest_handle(domctl.u.vcpucontext.ctxt, &ctxt.c);
+        xc_set_xen_guest_handle(domctl.u.vcpucontext.ctxt, ctxt);
         frc = xc_domctl(xch, &domctl);
         if ( frc != 0 )
         {
@@ -1791,6 +1794,7 @@ int xc_domain_restore(xc_interface *xch,
  out:
     if ( (rc != 0) && (dom != 0) )
         xc_domain_destroy(xch, dom);
+    xc_hypercall_buffer_free(xch, ctxt);
     free(mmu);
     free(ctx->p2m);
     free(pfn_type);
diff -r 7f735088ac1d -r 2a5e84fe718a tools/libxc/xc_domain_save.c
--- a/tools/libxc/xc_domain_save.c	Mon Sep 06 14:28:11 2010 +0100
+++ b/tools/libxc/xc_domain_save.c	Mon Sep 06 14:28:11 2010 +0100
@@ -411,7 +411,7 @@ static int print_stats(xc_interface *xch
 
 
 static int analysis_phase(xc_interface *xch, uint32_t domid, struct save_ctx *ctx,
-                          unsigned long *arr, int runs)
+                          xc_hypercall_buffer_t *arr, int runs)
 {
     long long start, now;
     xc_shadow_op_stats_t stats;
@@ -915,7 +915,9 @@ int xc_domain_save(xc_interface *xch, in
        - that should be sent this iteration (unless later marked as skip);
        - to skip this iteration because already dirty;
        - to fixup by sending at the end if not already resent; */
-    unsigned long *to_send = NULL, *to_skip = NULL, *to_fix = NULL;
+    DECLARE_HYPERCALL_BUFFER(unsigned long, to_skip);
+    DECLARE_HYPERCALL_BUFFER(unsigned long, to_send);
+    unsigned long *to_fix = NULL;
 
     xc_shadow_op_stats_t stats;
 
@@ -1034,9 +1036,9 @@ int xc_domain_save(xc_interface *xch, in
     sent_last_iter = dinfo->p2m_size;
 
     /* Setup to_send / to_fix and to_skip bitmaps */
-    to_send = xc_memalign(PAGE_SIZE, ROUNDUP(BITMAP_SIZE, PAGE_SHIFT)); 
+    to_send = xc_hypercall_buffer_alloc_pages(xch, to_send, NRPAGES(BITMAP_SIZE));
+    to_skip = xc_hypercall_buffer_alloc_pages(xch, to_skip, NRPAGES(BITMAP_SIZE));
     to_fix  = calloc(1, BITMAP_SIZE);
-    to_skip = xc_memalign(PAGE_SIZE, ROUNDUP(BITMAP_SIZE, PAGE_SHIFT)); 
 
     if ( !to_send || !to_fix || !to_skip )
     {
@@ -1046,20 +1048,7 @@ int xc_domain_save(xc_interface *xch, in
 
     memset(to_send, 0xff, BITMAP_SIZE);
 
-    if ( lock_pages(xch, to_send, BITMAP_SIZE) )
-    {
-        PERROR("Unable to lock to_send");
-        return 1;
-    }
-
-    /* (to fix is local only) */
-    if ( lock_pages(xch, to_skip, BITMAP_SIZE) )
-    {
-        PERROR("Unable to lock to_skip");
-        return 1;
-    }
-
-    if ( hvm ) 
+    if ( hvm )
     {
         /* Need another buffer for HVM context */
         hvm_buf_size = xc_domain_hvm_getcontext(xch, dom, 0, 0);
@@ -1076,7 +1065,7 @@ int xc_domain_save(xc_interface *xch, in
         }
     }
 
-    analysis_phase(xch, dom, ctx, to_skip, 0);
+    analysis_phase(xch, dom, ctx, HYPERCALL_BUFFER(to_skip), 0);
 
     pfn_type   = xc_memalign(PAGE_SIZE, ROUNDUP(
                               MAX_BATCH_SIZE * sizeof(*pfn_type), PAGE_SHIFT));
@@ -1188,7 +1177,7 @@ int xc_domain_save(xc_interface *xch, in
-                /* Slightly wasteful to peek the whole array evey time,
+                /* Slightly wasteful to peek the whole array every time,
                    but this is fast enough for the moment. */
                 frc = xc_shadow_control(
-                    xch, dom, XEN_DOMCTL_SHADOW_OP_PEEK, to_skip, 
+                    xch, dom, XEN_DOMCTL_SHADOW_OP_PEEK, HYPERCALL_BUFFER(to_skip),
                     dinfo->p2m_size, NULL, 0, NULL);
                 if ( frc != dinfo->p2m_size )
                 {
@@ -1528,8 +1517,8 @@ int xc_domain_save(xc_interface *xch, in
 
             }
 
-            if ( xc_shadow_control(xch, dom, 
-                                   XEN_DOMCTL_SHADOW_OP_CLEAN, to_send, 
+            if ( xc_shadow_control(xch, dom,
+                                   XEN_DOMCTL_SHADOW_OP_CLEAN, HYPERCALL_BUFFER(to_send),
                                    dinfo->p2m_size, NULL, 0, &stats) != dinfo->p2m_size )
             {
                 PERROR("Error flushing shadow PT");
@@ -1857,7 +1846,7 @@ int xc_domain_save(xc_interface *xch, in
         print_stats(xch, dom, 0, &stats, 1);
 
         if ( xc_shadow_control(xch, dom,
-                               XEN_DOMCTL_SHADOW_OP_CLEAN, to_send,
+                               XEN_DOMCTL_SHADOW_OP_CLEAN, HYPERCALL_BUFFER(to_send),
                                dinfo->p2m_size, NULL, 0, &stats) != dinfo->p2m_size )
         {
             PERROR("Error flushing shadow PT");
@@ -1888,12 +1877,13 @@ int xc_domain_save(xc_interface *xch, in
     if ( ctx->live_m2p )
         munmap(ctx->live_m2p, M2P_SIZE(ctx->max_mfn));
 
+    xc_hypercall_buffer_free_pages(xch, to_send, NRPAGES(BITMAP_SIZE));
+    xc_hypercall_buffer_free_pages(xch, to_skip, NRPAGES(BITMAP_SIZE));
+
     free(pfn_type);
     free(pfn_batch);
     free(pfn_err);
-    free(to_send);
     free(to_fix);
-    free(to_skip);
 
     DPRINTF("Save exit rc=%d\n",rc);
 
diff -r 7f735088ac1d -r 2a5e84fe718a tools/libxc/xenctrl.h
--- a/tools/libxc/xenctrl.h	Mon Sep 06 14:28:11 2010 +0100
+++ b/tools/libxc/xenctrl.h	Mon Sep 06 14:28:11 2010 +0100
@@ -610,7 +610,7 @@ int xc_shadow_control(xc_interface *xch,
 int xc_shadow_control(xc_interface *xch,
                       uint32_t domid,
                       unsigned int sop,
-                      unsigned long *dirty_bitmap,
+                      xc_hypercall_buffer_t *dirty_bitmap,
                       unsigned long pages,
                       unsigned long *mb,
                       uint32_t mode,
diff -r 7f735088ac1d -r 2a5e84fe718a tools/libxc/xg_private.h
--- a/tools/libxc/xg_private.h	Mon Sep 06 14:28:11 2010 +0100
+++ b/tools/libxc/xg_private.h	Mon Sep 06 14:28:11 2010 +0100
@@ -157,6 +157,7 @@ typedef l4_pgentry_64_t l4_pgentry_t;
 #define PAGE_MASK_IA64          (~(PAGE_SIZE_IA64-1))
 
 #define ROUNDUP(_x,_w) (((unsigned long)(_x)+(1UL<<(_w))-1) & ~((1UL<<(_w))-1))
+#define NRPAGES(x) (ROUNDUP(x, PAGE_SHIFT) >> PAGE_SHIFT)
 
 
 /* XXX SMH: following skanky macros rely on variable p2m_size being set */

^ permalink raw reply	[flat|nested] 34+ messages in thread

* [PATCH 14 of 24] libxc: convert sysctl interfaces over to hypercall buffers
  2010-09-06 13:38 [PATCH 00 of 24] [RFC] libxc: hypercall buffers Ian Campbell
                   ` (12 preceding siblings ...)
  2010-09-06 13:38 ` [PATCH 13 of 24] libxc: convert shadow domctl interfaces and save/restore " Ian Campbell
@ 2010-09-06 13:38 ` Ian Campbell
  2010-09-06 13:38 ` [PATCH 15 of 24] libxc: convert watchdog interface " Ian Campbell
                   ` (11 subsequent siblings)
  25 siblings, 0 replies; 34+ messages in thread
From: Ian Campbell @ 2010-09-06 13:38 UTC (permalink / raw)
  To: xen-devel; +Cc: Ian Campbell

# HG changeset patch
# User Ian Campbell <ian.campbell@citrix.com>
# Date 1283779691 -3600
# Node ID 92e9795d06413325c84f2220cf33c0dd831e8355
# Parent  2a5e84fe718ae25e91785643388411b70d4c013b
libxc: convert sysctl interfaces over to hypercall buffers

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
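
Most of the conversions below follow the bounce pattern: the caller's own
buffer is copied into hypercall-safe memory before the call and/or copied
back out afterwards, depending on the direction flag (BOUNCE_IN, BOUNCE_OUT
or BOUNCE_BOTH). The general shape, as a sketch (the sysctl sub-op and
field names are illustrative):

    int xc_get_stuff(xc_interface *xch, uint32_t *status, unsigned int n)
    {
        int ret;
        DECLARE_SYSCTL;
        DECLARE_HYPERCALL_BOUNCE(status, n * sizeof(*status),
                                 XC_HYPERCALL_BUFFER_BOUNCE_OUT);

        if ( xc_hypercall_bounce_pre(xch, status) ) /* allocate (+ copy in) */
            return -1;

        sysctl.cmd = XEN_SYSCTL_get_stuff;
        xc_set_xen_guest_handle(sysctl.u.get_stuff.status, status);
        ret = do_sysctl(xch, &sysctl);

        xc_hypercall_bounce_post(xch, status); /* copy out (if OUT/BOTH) + free */
        return ret;
    }

Where the size is not known at declaration time (as in xc_pm.c below), a
named bounce is declared with size 0 and fixed up with
HYPERCALL_BOUNCE_SET_SIZE before calling xc_hypercall_bounce_pre.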

diff -r 2a5e84fe718a -r 92e9795d0641 tools/libxc/xc_cpupool.c
--- a/tools/libxc/xc_cpupool.c	Mon Sep 06 14:28:11 2010 +0100
+++ b/tools/libxc/xc_cpupool.c	Mon Sep 06 14:28:11 2010 +0100
@@ -72,8 +72,14 @@ int xc_cpupool_getinfo(xc_interface *xch
     int err = 0;
     int p;
     uint32_t poolid = first_poolid;
-    uint8_t local[sizeof (info->cpumap)];
     DECLARE_SYSCTL;
+    DECLARE_HYPERCALL_BUFFER(uint8_t, local);
+
+    local = xc_hypercall_buffer_alloc(xch, local, sizeof (info->cpumap));
+    if ( local == NULL ) {
+        PERROR("Could not allocate locked memory for Xen hypercall");
+        return -ENOMEM;
+    }
 
     memset(info, 0, n_max * sizeof(xc_cpupoolinfo_t));
 
@@ -82,17 +88,10 @@ int xc_cpupool_getinfo(xc_interface *xch
         sysctl.cmd = XEN_SYSCTL_cpupool_op;
         sysctl.u.cpupool_op.op = XEN_SYSCTL_CPUPOOL_OP_INFO;
         sysctl.u.cpupool_op.cpupool_id = poolid;
-        set_xen_guest_handle(sysctl.u.cpupool_op.cpumap.bitmap, local);
+        xc_set_xen_guest_handle(sysctl.u.cpupool_op.cpumap.bitmap, local);
         sysctl.u.cpupool_op.cpumap.nr_cpus = sizeof(info->cpumap) * 8;
 
-        if ( (err = lock_pages(xch, local, sizeof(local))) != 0 )
-        {
-            PERROR("Could not lock memory for Xen hypercall");
-            break;
-        }
         err = do_sysctl_save(xch, &sysctl);
-        unlock_pages(xch, local, sizeof (local));
-
         if ( err < 0 )
             break;
 
@@ -103,6 +102,8 @@ int xc_cpupool_getinfo(xc_interface *xch
         poolid = sysctl.u.cpupool_op.cpupool_id + 1;
         info++;
     }
+
+    xc_hypercall_buffer_free(xch, local);
 
     if ( p == 0 )
         return err;
@@ -153,27 +154,31 @@ int xc_cpupool_freeinfo(xc_interface *xc
                         uint64_t *cpumap)
 {
     int err;
-    uint8_t local[sizeof (*cpumap)];
     DECLARE_SYSCTL;
+    DECLARE_HYPERCALL_BUFFER(uint8_t, local);
+
+    local = xc_hypercall_buffer_alloc(xch, local, sizeof (*cpumap));
+    if ( local == NULL ) {
+        PERROR("Could not allocate locked memory for Xen hypercall");
+        return -ENOMEM;
+    }
 
     sysctl.cmd = XEN_SYSCTL_cpupool_op;
     sysctl.u.cpupool_op.op = XEN_SYSCTL_CPUPOOL_OP_FREEINFO;
-    set_xen_guest_handle(sysctl.u.cpupool_op.cpumap.bitmap, local);
+    xc_set_xen_guest_handle(sysctl.u.cpupool_op.cpumap.bitmap, local);
     sysctl.u.cpupool_op.cpumap.nr_cpus = sizeof(*cpumap) * 8;
 
-    if ( (err = lock_pages(xch, local, sizeof(local))) != 0 )
-    {
-        PERROR("Could not lock memory for Xen hypercall");
-        return err;
-    }
-
     err = do_sysctl_save(xch, &sysctl);
-    unlock_pages(xch, local, sizeof (local));
 
-    if (err < 0)
-        return err;
+    if ( err < 0 )
+    {
+        xc_hypercall_buffer_free(xch, local);
+        return err;
+    }
 
-    bitmap_byte_to_64(cpumap, local, sizeof(local) * 8);
+    bitmap_byte_to_64(cpumap, local, sizeof(*cpumap) * 8);
 
+    xc_hypercall_buffer_free(xch, local);
+
     return 0;
 }
diff -r 2a5e84fe718a -r 92e9795d0641 tools/libxc/xc_domain.c
--- a/tools/libxc/xc_domain.c	Mon Sep 06 14:28:11 2010 +0100
+++ b/tools/libxc/xc_domain.c	Mon Sep 06 14:28:11 2010 +0100
@@ -245,21 +245,22 @@ int xc_domain_getinfolist(xc_interface *
 {
     int ret = 0;
     DECLARE_SYSCTL;
+    DECLARE_HYPERCALL_BOUNCE(info, max_domains*sizeof(*info), XC_HYPERCALL_BUFFER_BOUNCE_OUT);
 
-    if ( lock_pages(xch, info, max_domains*sizeof(xc_domaininfo_t)) != 0 )
+    if ( xc_hypercall_bounce_pre(xch, info) )
         return -1;
 
     sysctl.cmd = XEN_SYSCTL_getdomaininfolist;
     sysctl.u.getdomaininfolist.first_domain = first_domain;
     sysctl.u.getdomaininfolist.max_domains  = max_domains;
-    set_xen_guest_handle(sysctl.u.getdomaininfolist.buffer, info);
+    xc_set_xen_guest_handle(sysctl.u.getdomaininfolist.buffer, info);
 
     if ( xc_sysctl(xch, &sysctl) < 0 )
         ret = -1;
     else
         ret = sysctl.u.getdomaininfolist.num_domains;
 
-    unlock_pages(xch, info, max_domains*sizeof(xc_domaininfo_t));
+    xc_hypercall_bounce_post(xch, info);
 
     return ret;
 }
diff -r 2a5e84fe718a -r 92e9795d0641 tools/libxc/xc_misc.c
--- a/tools/libxc/xc_misc.c	Mon Sep 06 14:28:11 2010 +0100
+++ b/tools/libxc/xc_misc.c	Mon Sep 06 14:28:11 2010 +0100
@@ -27,11 +27,15 @@ int xc_readconsolering(xc_interface *xch
                        int clear, int incremental, uint32_t *pindex)
 {
     int ret;
+    unsigned int nr_chars = *pnr_chars;
     DECLARE_SYSCTL;
-    unsigned int nr_chars = *pnr_chars;
+    DECLARE_HYPERCALL_BOUNCE(buffer, nr_chars, XC_HYPERCALL_BUFFER_BOUNCE_OUT);
+
+    if ( xc_hypercall_bounce_pre(xch, buffer) )
+        return -1;
 
     sysctl.cmd = XEN_SYSCTL_readconsole;
-    set_xen_guest_handle(sysctl.u.readconsole.buffer, buffer);
+    xc_set_xen_guest_handle(sysctl.u.readconsole.buffer, buffer);
     sysctl.u.readconsole.count = nr_chars;
     sysctl.u.readconsole.clear = clear;
     sysctl.u.readconsole.incremental = 0;
@@ -41,9 +45,6 @@ int xc_readconsolering(xc_interface *xch
         sysctl.u.readconsole.incremental = incremental;
     }
 
-    if ( (ret = lock_pages(xch, buffer, nr_chars)) != 0 )
-        return ret;
-
     if ( (ret = do_sysctl(xch, &sysctl)) == 0 )
     {
         *pnr_chars = sysctl.u.readconsole.count;
@@ -51,7 +52,7 @@ int xc_readconsolering(xc_interface *xch
             *pindex = sysctl.u.readconsole.index;
     }
 
-    unlock_pages(xch, buffer, nr_chars);
+    xc_hypercall_bounce_post(xch, buffer);
 
     return ret;
 }
@@ -60,17 +61,18 @@ int xc_send_debug_keys(xc_interface *xch
 {
     int ret, len = strlen(keys);
     DECLARE_SYSCTL;
+    DECLARE_HYPERCALL_BOUNCE(keys, len, XC_HYPERCALL_BUFFER_BOUNCE_IN);
+
+    if ( xc_hypercall_bounce_pre(xch, keys) )
+        return -1;
 
     sysctl.cmd = XEN_SYSCTL_debug_keys;
-    set_xen_guest_handle(sysctl.u.debug_keys.keys, keys);
+    xc_set_xen_guest_handle(sysctl.u.debug_keys.keys, keys);
     sysctl.u.debug_keys.nr_keys = len;
-
-    if ( (ret = lock_pages(xch, keys, len)) != 0 )
-        return ret;
 
     ret = do_sysctl(xch, &sysctl);
 
-    unlock_pages(xch, keys, len);
+    xc_hypercall_bounce_post(xch, keys);
 
     return ret;
 }
@@ -173,8 +175,8 @@ int xc_perfc_reset(xc_interface *xch)
 
     sysctl.cmd = XEN_SYSCTL_perfc_op;
     sysctl.u.perfc_op.cmd = XEN_SYSCTL_PERFCOP_reset;
-    set_xen_guest_handle(sysctl.u.perfc_op.desc, NULL);
-    set_xen_guest_handle(sysctl.u.perfc_op.val, NULL);
+    xc_set_xen_guest_handle(sysctl.u.perfc_op.desc, HYPERCALL_BUFFER_NULL);
+    xc_set_xen_guest_handle(sysctl.u.perfc_op.val, HYPERCALL_BUFFER_NULL);
 
     return do_sysctl(xch, &sysctl);
 }
@@ -188,8 +190,8 @@ int xc_perfc_query_number(xc_interface *
 
     sysctl.cmd = XEN_SYSCTL_perfc_op;
     sysctl.u.perfc_op.cmd = XEN_SYSCTL_PERFCOP_query;
-    set_xen_guest_handle(sysctl.u.perfc_op.desc, NULL);
-    set_xen_guest_handle(sysctl.u.perfc_op.val, NULL);
+    xc_set_xen_guest_handle(sysctl.u.perfc_op.desc, HYPERCALL_BUFFER_NULL);
+    xc_set_xen_guest_handle(sysctl.u.perfc_op.val, HYPERCALL_BUFFER_NULL);
 
     rc = do_sysctl(xch, &sysctl);
 
@@ -202,15 +204,17 @@ int xc_perfc_query_number(xc_interface *
 }
 
 int xc_perfc_query(xc_interface *xch,
-                   xc_perfc_desc_t *desc,
-                   xc_perfc_val_t *val)
+                   struct xc_hypercall_buffer *desc,
+                   struct xc_hypercall_buffer *val)
 {
     DECLARE_SYSCTL;
+    DECLARE_HYPERCALL_BUFFER_ARGUMENT(desc);
+    DECLARE_HYPERCALL_BUFFER_ARGUMENT(val);
 
     sysctl.cmd = XEN_SYSCTL_perfc_op;
     sysctl.u.perfc_op.cmd = XEN_SYSCTL_PERFCOP_query;
-    set_xen_guest_handle(sysctl.u.perfc_op.desc, desc);
-    set_xen_guest_handle(sysctl.u.perfc_op.val, val);
+    xc_set_xen_guest_handle(sysctl.u.perfc_op.desc, desc);
+    xc_set_xen_guest_handle(sysctl.u.perfc_op.val, val);
 
     return do_sysctl(xch, &sysctl);
 }
@@ -221,7 +225,7 @@ int xc_lockprof_reset(xc_interface *xch)
 
     sysctl.cmd = XEN_SYSCTL_lockprof_op;
     sysctl.u.lockprof_op.cmd = XEN_SYSCTL_LOCKPROF_reset;
-    set_xen_guest_handle(sysctl.u.lockprof_op.data, NULL);
+    xc_set_xen_guest_handle(sysctl.u.lockprof_op.data, HYPERCALL_BUFFER_NULL);
 
     return do_sysctl(xch, &sysctl);
 }
@@ -234,7 +238,7 @@ int xc_lockprof_query_number(xc_interfac
 
     sysctl.cmd = XEN_SYSCTL_lockprof_op;
     sysctl.u.lockprof_op.cmd = XEN_SYSCTL_LOCKPROF_query;
-    set_xen_guest_handle(sysctl.u.lockprof_op.data, NULL);
+    xc_set_xen_guest_handle(sysctl.u.lockprof_op.data, HYPERCALL_BUFFER_NULL);
 
     rc = do_sysctl(xch, &sysctl);
 
@@ -244,17 +248,18 @@ int xc_lockprof_query_number(xc_interfac
 }
 
 int xc_lockprof_query(xc_interface *xch,
-                        uint32_t *n_elems,
-                        uint64_t *time,
-                        xc_lockprof_data_t *data)
+                      uint32_t *n_elems,
+                      uint64_t *time,
+                      struct xc_hypercall_buffer *data)
 {
     int rc;
     DECLARE_SYSCTL;
+    DECLARE_HYPERCALL_BUFFER_ARGUMENT(data);
 
     sysctl.cmd = XEN_SYSCTL_lockprof_op;
     sysctl.u.lockprof_op.cmd = XEN_SYSCTL_LOCKPROF_query;
     sysctl.u.lockprof_op.max_elem = *n_elems;
-    set_xen_guest_handle(sysctl.u.lockprof_op.data, data);
+    xc_set_xen_guest_handle(sysctl.u.lockprof_op.data, data);
 
     rc = do_sysctl(xch, &sysctl);
 
@@ -268,20 +273,21 @@ int xc_getcpuinfo(xc_interface *xch, int
 {
     int rc;
     DECLARE_SYSCTL;
+    DECLARE_HYPERCALL_BOUNCE(info, max_cpus*sizeof(*info), XC_HYPERCALL_BUFFER_BOUNCE_OUT);
+
+    if ( xc_hypercall_bounce_pre(xch, info) )
+        return -1;
 
     sysctl.cmd = XEN_SYSCTL_getcpuinfo;
-    sysctl.u.getcpuinfo.max_cpus = max_cpus; 
-    set_xen_guest_handle(sysctl.u.getcpuinfo.info, info); 
-
-    if ( (rc = lock_pages(xch, info, max_cpus*sizeof(*info))) != 0 )
-        return rc;
+    sysctl.u.getcpuinfo.max_cpus = max_cpus;
+    xc_set_xen_guest_handle(sysctl.u.getcpuinfo.info, info);
 
     rc = do_sysctl(xch, &sysctl);
 
-    unlock_pages(xch, info, max_cpus*sizeof(*info));
+    xc_hypercall_bounce_post(xch, info);
 
     if ( nr_cpus )
-        *nr_cpus = sysctl.u.getcpuinfo.nr_cpus; 
+        *nr_cpus = sysctl.u.getcpuinfo.nr_cpus;
 
     return rc;
 }
diff -r 2a5e84fe718a -r 92e9795d0641 tools/libxc/xc_offline_page.c
--- a/tools/libxc/xc_offline_page.c	Mon Sep 06 14:28:11 2010 +0100
+++ b/tools/libxc/xc_offline_page.c	Mon Sep 06 14:28:11 2010 +0100
@@ -66,12 +66,13 @@ int xc_mark_page_online(xc_interface *xc
                         unsigned long end, uint32_t *status)
 {
     DECLARE_SYSCTL;
+    DECLARE_HYPERCALL_BOUNCE(status, sizeof(uint32_t)*(end - start + 1), XC_HYPERCALL_BUFFER_BOUNCE_BOTH);
     int ret = -1;
 
     if ( !status || (end < start) )
         return -EINVAL;
 
-    if (lock_pages(xch, status, sizeof(uint32_t)*(end - start + 1)))
+    if ( xc_hypercall_bounce_pre(xch, status) )
     {
         ERROR("Could not lock memory for xc_mark_page_online\n");
         return -EINVAL;
@@ -81,10 +82,10 @@ int xc_mark_page_online(xc_interface *xc
     sysctl.u.page_offline.start = start;
     sysctl.u.page_offline.cmd = sysctl_page_online;
     sysctl.u.page_offline.end = end;
-    set_xen_guest_handle(sysctl.u.page_offline.status, status);
+    xc_set_xen_guest_handle(sysctl.u.page_offline.status, status);
     ret = xc_sysctl(xch, &sysctl);
 
-    unlock_pages(xch, status, sizeof(uint32_t)*(end - start + 1));
+    xc_hypercall_bounce_post(xch, status);
 
     return ret;
 }
@@ -93,12 +94,13 @@ int xc_mark_page_offline(xc_interface *x
                           unsigned long end, uint32_t *status)
 {
     DECLARE_SYSCTL;
+    DECLARE_HYPERCALL_BOUNCE(status, sizeof(uint32_t)*(end - start + 1), XC_HYPERCALL_BUFFER_BOUNCE_BOTH);
     int ret = -1;
 
     if ( !status || (end < start) )
         return -EINVAL;
 
-    if (lock_pages(xch, status, sizeof(uint32_t)*(end - start + 1)))
+    if ( xc_hypercall_bounce_pre(xch, status) )
     {
         ERROR("Could not lock memory for xc_mark_page_offline");
         return -EINVAL;
@@ -108,10 +110,10 @@ int xc_mark_page_offline(xc_interface *x
     sysctl.u.page_offline.start = start;
     sysctl.u.page_offline.cmd = sysctl_page_offline;
     sysctl.u.page_offline.end = end;
-    set_xen_guest_handle(sysctl.u.page_offline.status, status);
+    xc_set_xen_guest_handle(sysctl.u.page_offline.status, status);
     ret = xc_sysctl(xch, &sysctl);
 
-    unlock_pages(xch, status, sizeof(uint32_t)*(end - start + 1));
+    xc_hypercall_bounce_post(xch, status);
 
     return ret;
 }
@@ -120,12 +122,13 @@ int xc_query_page_offline_status(xc_inte
                                  unsigned long end, uint32_t *status)
 {
     DECLARE_SYSCTL;
+    DECLARE_HYPERCALL_BOUNCE(status, sizeof(uint32_t)*(end - start + 1), XC_HYPERCALL_BUFFER_BOUNCE_BOTH);
     int ret = -1;
 
     if ( !status || (end < start) )
         return -EINVAL;
 
-    if (lock_pages(xch, status, sizeof(uint32_t)*(end - start + 1)))
+    if ( xc_hypercall_bounce_pre(xch, status) )
     {
         ERROR("Could not lock memory for xc_query_page_offline_status\n");
         return -EINVAL;
@@ -135,10 +138,10 @@ int xc_query_page_offline_status(xc_inte
     sysctl.u.page_offline.start = start;
     sysctl.u.page_offline.cmd = sysctl_query_page_offline;
     sysctl.u.page_offline.end = end;
-    set_xen_guest_handle(sysctl.u.page_offline.status, status);
+    xc_set_xen_guest_handle(sysctl.u.page_offline.status, status);
     ret = xc_sysctl(xch, &sysctl);
 
-    unlock_pages(xch, status, sizeof(uint32_t)*(end - start + 1));
+    xc_hypercall_bounce_post(xch, status);
 
     return ret;
 }
diff -r 2a5e84fe718a -r 92e9795d0641 tools/libxc/xc_pm.c
--- a/tools/libxc/xc_pm.c	Mon Sep 06 14:28:11 2010 +0100
+++ b/tools/libxc/xc_pm.c	Mon Sep 06 14:28:11 2010 +0100
@@ -45,6 +45,10 @@ int xc_pm_get_pxstat(xc_interface *xch, 
 int xc_pm_get_pxstat(xc_interface *xch, int cpuid, struct xc_px_stat *pxpt)
 {
     DECLARE_SYSCTL;
+    /* Sizes unknown until xc_pm_get_max_px */
+    DECLARE_NAMED_HYPERCALL_BOUNCE(trans, &pxpt->trans_pt, 0, XC_HYPERCALL_BUFFER_BOUNCE_BOTH);
+    DECLARE_NAMED_HYPERCALL_BOUNCE(pt, &pxpt->pt, 0, XC_HYPERCALL_BUFFER_BOUNCE_BOTH);
+
     int max_px, ret;
 
     if ( !pxpt || !(pxpt->trans_pt) || !(pxpt->pt) )
@@ -53,14 +57,15 @@ int xc_pm_get_pxstat(xc_interface *xch, 
     if ( (ret = xc_pm_get_max_px(xch, cpuid, &max_px)) != 0)
         return ret;
 
-    if ( (ret = lock_pages(xch, pxpt->trans_pt, 
-        max_px * max_px * sizeof(uint64_t))) != 0 )
+    HYPERCALL_BOUNCE_SET_SIZE(trans, max_px * max_px * sizeof(uint64_t));
+    HYPERCALL_BOUNCE_SET_SIZE(pt, max_px * sizeof(struct xc_px_val));
+    ret = -1;
+    if ( xc_hypercall_bounce_pre(xch, trans) )
         return ret;
 
-    if ( (ret = lock_pages(xch, pxpt->pt, 
-        max_px * sizeof(struct xc_px_val))) != 0 )
+    if ( xc_hypercall_bounce_pre(xch, pt) )
     {
-        unlock_pages(xch, pxpt->trans_pt, max_px * max_px * sizeof(uint64_t));
+        xc_hypercall_bounce_post(xch, trans);
         return ret;
     }
 
@@ -68,15 +73,14 @@ int xc_pm_get_pxstat(xc_interface *xch, 
     sysctl.u.get_pmstat.type = PMSTAT_get_pxstat;
     sysctl.u.get_pmstat.cpuid = cpuid;
     sysctl.u.get_pmstat.u.getpx.total = max_px;
-    set_xen_guest_handle(sysctl.u.get_pmstat.u.getpx.trans_pt, pxpt->trans_pt);
-    set_xen_guest_handle(sysctl.u.get_pmstat.u.getpx.pt, 
-                        (pm_px_val_t *)pxpt->pt);
+    xc_set_xen_guest_handle(sysctl.u.get_pmstat.u.getpx.trans_pt, trans);
+    xc_set_xen_guest_handle(sysctl.u.get_pmstat.u.getpx.pt, pt);
 
     ret = xc_sysctl(xch, &sysctl);
     if ( ret )
     {
-        unlock_pages(xch, pxpt->trans_pt, max_px * max_px * sizeof(uint64_t));
-        unlock_pages(xch, pxpt->pt, max_px * sizeof(struct xc_px_val));
+        xc_hypercall_bounce_post(xch, trans);
+        xc_hypercall_bounce_post(xch, pt);
         return ret;
     }
 
@@ -85,8 +89,8 @@ int xc_pm_get_pxstat(xc_interface *xch, 
     pxpt->last = sysctl.u.get_pmstat.u.getpx.last;
     pxpt->cur = sysctl.u.get_pmstat.u.getpx.cur;
 
-    unlock_pages(xch, pxpt->trans_pt, max_px * max_px * sizeof(uint64_t));
-    unlock_pages(xch, pxpt->pt, max_px * sizeof(struct xc_px_val));
+    xc_hypercall_bounce_post(xch, trans);
+    xc_hypercall_bounce_post(xch, pt);
 
     return ret;
 }
@@ -120,6 +124,8 @@ int xc_pm_get_cxstat(xc_interface *xch, 
 int xc_pm_get_cxstat(xc_interface *xch, int cpuid, struct xc_cx_stat *cxpt)
 {
     DECLARE_SYSCTL;
+    DECLARE_NAMED_HYPERCALL_BOUNCE(triggers, &cxpt->triggers, 0, XC_HYPERCALL_BUFFER_BOUNCE_BOTH);
+    DECLARE_NAMED_HYPERCALL_BOUNCE(residencies, &cxpt->residencies, 0, XC_HYPERCALL_BUFFER_BOUNCE_BOTH);
     int max_cx, ret;
 
     if( !cxpt || !(cxpt->triggers) || !(cxpt->residencies) )
@@ -128,22 +134,23 @@ int xc_pm_get_cxstat(xc_interface *xch, 
     if ( (ret = xc_pm_get_max_cx(xch, cpuid, &max_cx)) )
         goto unlock_0;
 
-    if ( (ret = lock_pages(xch, cxpt, sizeof(struct xc_cx_stat))) )
+    HYPERCALL_BOUNCE_SET_SIZE(triggers, max_cx * sizeof(uint64_t));
+    HYPERCALL_BOUNCE_SET_SIZE(residencies, max_cx * sizeof(uint64_t));
+
+    ret = -1;
+    if ( xc_hypercall_bounce_pre(xch, triggers) )
         goto unlock_0;
-    if ( (ret = lock_pages(xch, cxpt->triggers, max_cx * sizeof(uint64_t))) )
+    if ( xc_hypercall_bounce_pre(xch, residencies) )
         goto unlock_1;
-    if ( (ret = lock_pages(xch, cxpt->residencies, max_cx * sizeof(uint64_t))) )
-        goto unlock_2;
 
     sysctl.cmd = XEN_SYSCTL_get_pmstat;
     sysctl.u.get_pmstat.type = PMSTAT_get_cxstat;
     sysctl.u.get_pmstat.cpuid = cpuid;
-    set_xen_guest_handle(sysctl.u.get_pmstat.u.getcx.triggers, cxpt->triggers);
-    set_xen_guest_handle(sysctl.u.get_pmstat.u.getcx.residencies, 
-                         cxpt->residencies);
+    xc_set_xen_guest_handle(sysctl.u.get_pmstat.u.getcx.triggers, triggers);
+    xc_set_xen_guest_handle(sysctl.u.get_pmstat.u.getcx.residencies, residencies);
 
     if ( (ret = xc_sysctl(xch, &sysctl)) )
-        goto unlock_3;
+        goto unlock_2;
 
     cxpt->nr = sysctl.u.get_pmstat.u.getcx.nr;
     cxpt->last = sysctl.u.get_pmstat.u.getcx.last;
@@ -154,12 +161,10 @@ int xc_pm_get_cxstat(xc_interface *xch, 
     cxpt->cc3 = sysctl.u.get_pmstat.u.getcx.cc3;
     cxpt->cc6 = sysctl.u.get_pmstat.u.getcx.cc6;
 
-unlock_3:
-    unlock_pages(xch, cxpt->residencies, max_cx * sizeof(uint64_t));
 unlock_2:
-    unlock_pages(xch, cxpt->triggers, max_cx * sizeof(uint64_t));
+    xc_hypercall_bounce_post(xch, residencies);
 unlock_1:
-    unlock_pages(xch, cxpt, sizeof(struct xc_cx_stat));
+    xc_hypercall_bounce_post(xch, triggers);
 unlock_0:
     return ret;
 }
@@ -186,12 +191,19 @@ int xc_get_cpufreq_para(xc_interface *xc
     DECLARE_SYSCTL;
     int ret = 0;
     struct xen_get_cpufreq_para *sys_para = &sysctl.u.pm_op.u.get_para;
+    DECLARE_NAMED_HYPERCALL_BOUNCE(affected_cpus,
+                         user_para->affected_cpus,
+                         user_para->cpu_num * sizeof(uint32_t), XC_HYPERCALL_BUFFER_BOUNCE_BOTH);
+    DECLARE_NAMED_HYPERCALL_BOUNCE(scaling_available_frequencies,
+                         user_para->scaling_available_frequencies,
+                         user_para->freq_num * sizeof(uint32_t), XC_HYPERCALL_BUFFER_BOUNCE_BOTH);
+    DECLARE_NAMED_HYPERCALL_BOUNCE(scaling_available_governors,
+                         user_para->scaling_available_governors,
+                         user_para->gov_num * CPUFREQ_NAME_LEN * sizeof(char), XC_HYPERCALL_BUFFER_BOUNCE_BOTH);
+
     bool has_num = user_para->cpu_num &&
                      user_para->freq_num &&
                      user_para->gov_num;
-
-    if ( (xch < 0) || !user_para )
-        return -EINVAL;
 
     if ( has_num )
     {
@@ -200,22 +212,17 @@ int xc_get_cpufreq_para(xc_interface *xc
              (!user_para->scaling_available_governors) )
             return -EINVAL;
 
-        if ( (ret = lock_pages(xch, user_para->affected_cpus,
-                               user_para->cpu_num * sizeof(uint32_t))) )
+        ret = -1;
+        if ( xc_hypercall_bounce_pre(xch, affected_cpus) )
             goto unlock_1;
-        if ( (ret = lock_pages(xch, user_para->scaling_available_frequencies,
-                               user_para->freq_num * sizeof(uint32_t))) )
+        if ( xc_hypercall_bounce_pre(xch, scaling_available_frequencies) )
             goto unlock_2;
-        if ( (ret = lock_pages(xch, user_para->scaling_available_governors,
-                 user_para->gov_num * CPUFREQ_NAME_LEN * sizeof(char))) )
+        if ( xc_hypercall_bounce_pre(xch, scaling_available_governors) )
             goto unlock_3;
 
-        set_xen_guest_handle(sys_para->affected_cpus,
-                             user_para->affected_cpus);
-        set_xen_guest_handle(sys_para->scaling_available_frequencies,
-                             user_para->scaling_available_frequencies);
-        set_xen_guest_handle(sys_para->scaling_available_governors,
-                             user_para->scaling_available_governors);
+        xc_set_xen_guest_handle(sys_para->affected_cpus, affected_cpus);
+        xc_set_xen_guest_handle(sys_para->scaling_available_frequencies, scaling_available_frequencies);
+        xc_set_xen_guest_handle(sys_para->scaling_available_governors, scaling_available_governors);
     }
 
     sysctl.cmd = XEN_SYSCTL_pm_op;
@@ -250,7 +257,7 @@ int xc_get_cpufreq_para(xc_interface *xc
         user_para->scaling_min_freq = sys_para->scaling_min_freq;
         user_para->turbo_enabled    = sys_para->turbo_enabled;
 
-        memcpy(user_para->scaling_driver, 
+        memcpy(user_para->scaling_driver,
                 sys_para->scaling_driver, CPUFREQ_NAME_LEN);
         memcpy(user_para->scaling_governor,
                 sys_para->scaling_governor, CPUFREQ_NAME_LEN);
@@ -263,14 +270,11 @@ int xc_get_cpufreq_para(xc_interface *xc
     }
 
 unlock_4:
-    unlock_pages(xch, user_para->scaling_available_governors,
-                 user_para->gov_num * CPUFREQ_NAME_LEN * sizeof(char));
+    xc_hypercall_bounce_post(xch, scaling_available_governors);
 unlock_3:
-    unlock_pages(xch, user_para->scaling_available_frequencies,
-                 user_para->freq_num * sizeof(uint32_t));
+    xc_hypercall_bounce_post(xch, scaling_available_frequencies);
 unlock_2:
-    unlock_pages(xch, user_para->affected_cpus,
-                 user_para->cpu_num * sizeof(uint32_t));
+    xc_hypercall_bounce_post(xch, affected_cpus);
 unlock_1:
     return ret;
 }
diff -r 2a5e84fe718a -r 92e9795d0641 tools/libxc/xc_private.h
--- a/tools/libxc/xc_private.h	Mon Sep 06 14:28:11 2010 +0100
+++ b/tools/libxc/xc_private.h	Mon Sep 06 14:28:11 2010 +0100
@@ -238,18 +238,18 @@ static inline int do_sysctl(xc_interface
 {
     int ret = -1;
     DECLARE_HYPERCALL;
+    DECLARE_HYPERCALL_BOUNCE(sysctl, sizeof(*sysctl), XC_HYPERCALL_BUFFER_BOUNCE_BOTH);
 
-    if ( hcall_buf_prep(xch, (void **)&sysctl, sizeof(*sysctl)) != 0 )
+    sysctl->interface_version = XEN_SYSCTL_INTERFACE_VERSION;
+
+    if ( xc_hypercall_bounce_pre(xch, sysctl) )
     {
         PERROR("Could not lock memory for Xen hypercall");
         goto out1;
     }
 
-    sysctl->interface_version = XEN_SYSCTL_INTERFACE_VERSION;
-
     hypercall.op     = __HYPERVISOR_sysctl;
-    hypercall.arg[0] = (unsigned long)sysctl;
-
+    hypercall.arg[0] = HYPERCALL_BUFFER_AS_ARG(sysctl);
     if ( (ret = do_xen_hypercall(xch, &hypercall)) < 0 )
     {
         if ( errno == EACCES )
@@ -257,8 +257,7 @@ static inline int do_sysctl(xc_interface
                     " rebuild the user-space tool set?\n");
     }
 
-    hcall_buf_release(xch, (void **)&sysctl, sizeof(*sysctl));
-
+    xc_hypercall_bounce_post(xch, sysctl);
  out1:
     return ret;
 }
diff -r 2a5e84fe718a -r 92e9795d0641 tools/libxc/xc_tbuf.c
--- a/tools/libxc/xc_tbuf.c	Mon Sep 06 14:28:11 2010 +0100
+++ b/tools/libxc/xc_tbuf.c	Mon Sep 06 14:28:11 2010 +0100
@@ -116,9 +116,15 @@ int xc_tbuf_set_cpu_mask(xc_interface *x
 int xc_tbuf_set_cpu_mask(xc_interface *xch, uint32_t mask)
 {
     DECLARE_SYSCTL;
+    DECLARE_HYPERCALL_BUFFER(uint8_t, bytemap);
     int ret = -1;
     uint64_t mask64 = mask;
-    uint8_t bytemap[sizeof(mask64)];
+    bytemap = xc_hypercall_buffer_alloc(xch, bytemap, sizeof(mask64));
+    if ( bytemap == NULL )
+    {
+        PERROR("Could not lock memory for Xen hypercall");
+        goto out;
+    }
 
     sysctl.cmd = XEN_SYSCTL_tbuf_op;
     sysctl.interface_version = XEN_SYSCTL_INTERFACE_VERSION;
@@ -126,18 +132,12 @@ int xc_tbuf_set_cpu_mask(xc_interface *x
 
     bitmap_64_to_byte(bytemap, &mask64, sizeof (mask64) * 8);
 
-    set_xen_guest_handle(sysctl.u.tbuf_op.cpu_mask.bitmap, bytemap);
+    xc_set_xen_guest_handle(sysctl.u.tbuf_op.cpu_mask.bitmap, bytemap);
-    sysctl.u.tbuf_op.cpu_mask.nr_cpus = sizeof(bytemap) * 8;
+    sysctl.u.tbuf_op.cpu_mask.nr_cpus = sizeof(mask64) * 8;
-
-    if ( lock_pages(xch, &bytemap, sizeof(bytemap)) != 0 )
-    {
-        PERROR("Could not lock memory for Xen hypercall");
-        goto out;
-    }
 
     ret = do_sysctl(xch, &sysctl);
 
-    unlock_pages(xch, &bytemap, sizeof(bytemap));
+    xc_hypercall_buffer_free(xch, bytemap);
 
  out:
     return ret;
diff -r 2a5e84fe718a -r 92e9795d0641 tools/libxc/xenctrl.h
--- a/tools/libxc/xenctrl.h	Mon Sep 06 14:28:11 2010 +0100
+++ b/tools/libxc/xenctrl.h	Mon Sep 06 14:28:11 2010 +0100
@@ -996,21 +996,18 @@ int xc_perfc_query_number(xc_interface *
 int xc_perfc_query_number(xc_interface *xch,
                           int *nbr_desc,
                           int *nbr_val);
-/* IMPORTANT: The caller is responsible for mlock()'ing the @desc and @val
-   arrays. */
 int xc_perfc_query(xc_interface *xch,
-                   xc_perfc_desc_t *desc,
-                   xc_perfc_val_t *val);
+                   xc_hypercall_buffer_t *desc,
+                   xc_hypercall_buffer_t *val);
 
 typedef xen_sysctl_lockprof_data_t xc_lockprof_data_t;
 int xc_lockprof_reset(xc_interface *xch);
 int xc_lockprof_query_number(xc_interface *xch,
                              uint32_t *n_elems);
-/* IMPORTANT: The caller is responsible for mlock()'ing the @data array. */
 int xc_lockprof_query(xc_interface *xch,
                       uint32_t *n_elems,
                       uint64_t *time,
-                      xc_lockprof_data_t *data);
+                      xc_hypercall_buffer_t *data);
 
 /**
  * Memory maps a range within one domain to a local address range.  Mappings
diff -r 2a5e84fe718a -r 92e9795d0641 tools/misc/xenlockprof.c
--- a/tools/misc/xenlockprof.c	Mon Sep 06 14:28:11 2010 +0100
+++ b/tools/misc/xenlockprof.c	Mon Sep 06 14:28:11 2010 +0100
@@ -18,22 +18,6 @@
 #include <string.h>
 #include <inttypes.h>
 
-static int lock_pages(void *addr, size_t len)
-{
-    int e = 0;
-#ifndef __sun__
-    e = mlock(addr, len);
-#endif
-    return (e);
-}
-
-static void unlock_pages(void *addr, size_t len)
-{
-#ifndef __sun__
-    munlock(addr, len);
-#endif
-}
-
 int main(int argc, char *argv[])
 {
     xc_interface      *xc_handle;
@@ -41,7 +25,7 @@ int main(int argc, char *argv[])
     uint64_t           time;
     double             l, b, sl, sb;
     char               name[60];
-    xc_lockprof_data_t *data;
+    DECLARE_HYPERCALL_BUFFER(xc_lockprof_data_t, data);
 
     if ( (argc > 2) || ((argc == 2) && (strcmp(argv[1], "-r") != 0)) )
     {
@@ -78,8 +62,8 @@ int main(int argc, char *argv[])
     }
 
     n += 32;    /* just to be sure */
-    data = malloc(sizeof(*data) * n);
-    if ( (data == NULL) || (lock_pages(data, sizeof(*data) * n) != 0) )
+    data = xc_hypercall_buffer_alloc(xc_handle, data, sizeof(*data) * n);
+    if ( data == NULL )
     {
         fprintf(stderr, "Could not alloc or lock buffers: %d (%s)\n",
                 errno, strerror(errno));
@@ -87,14 +71,12 @@ int main(int argc, char *argv[])
     }
 
     i = n;
-    if ( xc_lockprof_query(xc_handle, &i, &time, data) != 0 )
+    if ( xc_lockprof_query(xc_handle, &i, &time, HYPERCALL_BUFFER(data)) != 0 )
     {
         fprintf(stderr, "Error getting profile records: %d (%s)\n",
                 errno, strerror(errno));
         return 1;
     }
-
-    unlock_pages(data, sizeof(*data) * n);
 
     if ( i > n )
     {
@@ -132,5 +114,7 @@ int main(int argc, char *argv[])
     printf("total locked time:    %20.9fs\n", sl);
     printf("total blocked time:   %20.9fs\n", sb);
 
+    xc_hypercall_buffer_free(xc_handle, data);
+
     return 0;
 }
diff -r 2a5e84fe718a -r 92e9795d0641 tools/misc/xenperf.c
--- a/tools/misc/xenperf.c	Mon Sep 06 14:28:11 2010 +0100
+++ b/tools/misc/xenperf.c	Mon Sep 06 14:28:11 2010 +0100
@@ -68,28 +68,12 @@ const char *hypercall_name_table[64] =
 };
 #undef X
 
-static int lock_pages(void *addr, size_t len)
-{
-    int e = 0;
-#ifndef __sun__
-    e = mlock(addr, len);
-#endif
-    return (e);
-}
-
-static void unlock_pages(void *addr, size_t len)
-{
-#ifndef __sun__
-    munlock(addr, len);
-#endif
-}
-
 int main(int argc, char *argv[])
 {
     int              i, j;
     xc_interface    *xc_handle;
-    xc_perfc_desc_t *pcd;
-    xc_perfc_val_t  *pcv;
+    DECLARE_HYPERCALL_BUFFER(xc_perfc_desc_t, pcd);
+    DECLARE_HYPERCALL_BUFFER(xc_perfc_val_t, pcv);
     xc_perfc_val_t  *val;
     int num_desc, num_val;
     unsigned int    sum, reset = 0, full = 0, pretty = 0;
@@ -154,28 +138,22 @@ int main(int argc, char *argv[])
         return 1;
     }
 
-    pcd = malloc(sizeof(*pcd) * num_desc);
-    pcv = malloc(sizeof(*pcv) * num_val);
+    pcd = xc_hypercall_buffer_alloc(xc_handle, pcd, sizeof(*pcd) * num_desc);
+    pcv = xc_hypercall_buffer_alloc(xc_handle, pcv, sizeof(*pcv) * num_val);
 
-    if ( pcd == NULL
-         || lock_pages(pcd, sizeof(*pcd) * num_desc) != 0
-         || pcv == NULL
-         || lock_pages(pcv, sizeof(*pcv) * num_val) != 0)
+    if ( pcd == NULL || pcv == NULL )
     {
         fprintf(stderr, "Could not alloc or lock buffers: %d (%s)\n",
                 errno, strerror(errno));
         exit(-1);
     }
 
-    if ( xc_perfc_query(xc_handle, pcd, pcv) != 0 )
+    if ( xc_perfc_query(xc_handle, HYPERCALL_BUFFER(pcd), HYPERCALL_BUFFER(pcv)) != 0 )
     {
         fprintf(stderr, "Error getting perf counter: %d (%s)\n",
                 errno, strerror(errno));
         return 1;
     }
-
-    unlock_pages(pcd, sizeof(*pcd) * num_desc);
-    unlock_pages(pcv, sizeof(*pcv) * num_val);
 
     val = pcv;
     for ( i = 0; i < num_desc; i++ )
@@ -221,5 +199,7 @@ int main(int argc, char *argv[])
         val += pcd[i].nr_vals;
     }
 
+    xc_hypercall_buffer_free(xc_handle, pcd);
+    xc_hypercall_buffer_free(xc_handle, pcv);
     return 0;
 }

^ permalink raw reply	[flat|nested] 34+ messages in thread

* [PATCH 15 of 24] libxc: convert watchdog interface over to hypercall buffers
  2010-09-06 13:38 [PATCH 00 of 24] [RFC] libxc: hypercall buffers Ian Campbell
                   ` (13 preceding siblings ...)
  2010-09-06 13:38 ` [PATCH 14 of 24] libxc: convert sysctl interfaces " Ian Campbell
@ 2010-09-06 13:38 ` Ian Campbell
  2010-09-06 13:38 ` [PATCH 16 of 24] libxc: convert acm interfaces " Ian Campbell
                   ` (10 subsequent siblings)
  25 siblings, 0 replies; 34+ messages in thread
From: Ian Campbell @ 2010-09-06 13:38 UTC (permalink / raw)
  To: xen-devel; +Cc: Ian Campbell

# HG changeset patch
# User Ian Campbell <ian.campbell@citrix.com>
# Date 1283779691 -3600
# Node ID 44b73f8b623f9e9567d0188f1fa1a566f4e00b1b
# Parent  92e9795d06413325c84f2220cf33c0dd831e8355
libxc: convert watchdog interface over to hypercall buffers

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
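
Unlike the guest-handle cases, the watchdog argument is passed directly in
a hypercall argument slot, so the buffer's address is obtained with
HYPERCALL_BUFFER_AS_ARG rather than via xc_set_xen_guest_handle. In outline:

    DECLARE_HYPERCALL;
    DECLARE_HYPERCALL_BUFFER(sched_watchdog_t, arg);

    arg = xc_hypercall_buffer_alloc(xch, arg, sizeof(*arg));
    hypercall.op     = __HYPERVISOR_sched_op;
    hypercall.arg[0] = (unsigned long)SCHEDOP_watchdog;
    hypercall.arg[1] = HYPERCALL_BUFFER_AS_ARG(arg);

Note that the argument setup moves after the allocation, since the buffer's
address is only known once xc_hypercall_buffer_alloc returns.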

diff -r 92e9795d0641 -r 44b73f8b623f tools/libxc/xc_domain.c
--- a/tools/libxc/xc_domain.c	Mon Sep 06 14:28:11 2010 +0100
+++ b/tools/libxc/xc_domain.c	Mon Sep 06 14:28:11 2010 +0100
@@ -378,24 +378,25 @@ int xc_watchdog(xc_interface *xch,
                 uint32_t timeout)
 {
     int ret = -1;
-    sched_watchdog_t arg;
     DECLARE_HYPERCALL;
+    DECLARE_HYPERCALL_BUFFER(sched_watchdog_t, arg);
 
-    hypercall.op     = __HYPERVISOR_sched_op;
-    hypercall.arg[0] = (unsigned long)SCHEDOP_watchdog;
-    hypercall.arg[1] = (unsigned long)&arg;
-    arg.id = id;
-    arg.timeout = timeout;
-
-    if ( lock_pages(xch, &arg, sizeof(arg)) != 0 )
+    arg = xc_hypercall_buffer_alloc(xch, arg, sizeof(*arg));
+    if ( arg == NULL )
     {
         PERROR("Could not lock memory for Xen hypercall");
         goto out1;
     }
 
+    hypercall.op     = __HYPERVISOR_sched_op;
+    hypercall.arg[0] = (unsigned long)SCHEDOP_watchdog;
+    hypercall.arg[1] = HYPERCALL_BUFFER_AS_ARG(arg);
+    arg->id = id;
+    arg->timeout = timeout;
+
     ret = do_xen_hypercall(xch, &hypercall);
 
-    unlock_pages(xch, &arg, sizeof(arg));
+    xc_hypercall_buffer_free(xch, arg);
 
  out1:
     return ret;

^ permalink raw reply	[flat|nested] 34+ messages in thread

* [PATCH 16 of 24] libxc: convert acm interfaces over to hypercall buffers
  2010-09-06 13:38 [PATCH 00 of 24] [RFC] libxc: hypercall buffers Ian Campbell
                   ` (14 preceding siblings ...)
  2010-09-06 13:38 ` [PATCH 15 of 24] libxc: convert watchdog interface " Ian Campbell
@ 2010-09-06 13:38 ` Ian Campbell
  2010-09-06 13:38 ` [PATCH 17 of 24] libxc: convert evtchn " Ian Campbell
                   ` (9 subsequent siblings)
  25 siblings, 0 replies; 34+ messages in thread
From: Ian Campbell @ 2010-09-06 13:38 UTC (permalink / raw)
  To: xen-devel; +Cc: Ian Campbell

# HG changeset patch
# User Ian Campbell <ian.campbell@citrix.com>
# Date 1283779691 -3600
# Node ID c42a409a5c56732462d9effa844ebc2f7d06ba60
# Parent  44b73f8b623f9e9567d0188f1fa1a566f4e00b1b
libxc: convert acm interfaces over to hypercall buffers

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
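
Because xc_acm_op receives its sub-op payload through a void * argument,
the conversion below open-codes the bounce rather than using the bounce
macros: the payload is memcpy'd into the allocated hypercall buffer before
the call and, for ACMOP_getdecision (the only sub-op which returns data),
memcpy'd back out afterwards. Schematically:

    memcpy(&acmctl->u.setpolicy, arg, sizeof(struct acm_setpolicy)); /* in */
    ret = do_xen_hypercall(xch, &hypercall);
    memcpy(arg, &acmctl->u.getdecision, sizeof(struct acm_getdecision)); /* out */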

diff -r 44b73f8b623f -r c42a409a5c56 tools/libxc/xc_acm.c
--- a/tools/libxc/xc_acm.c	Mon Sep 06 14:28:11 2010 +0100
+++ b/tools/libxc/xc_acm.c	Mon Sep 06 14:28:11 2010 +0100
@@ -27,12 +27,19 @@ int xc_acm_op(xc_interface *xch, int cmd
 {
     int ret;
     DECLARE_HYPERCALL;
-    struct xen_acmctl acmctl;
+    DECLARE_HYPERCALL_BUFFER(struct xen_acmctl, acmctl);
+
+    acmctl = xc_hypercall_buffer_alloc(xch, acmctl, sizeof(*acmctl));
+    if ( acmctl == NULL )
+    {
+        PERROR("Could not lock memory for Xen hypercall");
+        return -EFAULT;
+    }
 
     switch (cmd) {
         case ACMOP_setpolicy: {
             struct acm_setpolicy *setpolicy = (struct acm_setpolicy *)arg;
-            memcpy(&acmctl.u.setpolicy,
+            memcpy(&acmctl->u.setpolicy,
                    setpolicy,
                    sizeof(struct acm_setpolicy));
         }
@@ -40,7 +47,7 @@ int xc_acm_op(xc_interface *xch, int cmd
 
         case ACMOP_getpolicy: {
             struct acm_getpolicy *getpolicy = (struct acm_getpolicy *)arg;
-            memcpy(&acmctl.u.getpolicy,
+            memcpy(&acmctl->u.getpolicy,
                    getpolicy,
                    sizeof(struct acm_getpolicy));
         }
@@ -48,7 +55,7 @@ int xc_acm_op(xc_interface *xch, int cmd
 
         case ACMOP_dumpstats: {
             struct acm_dumpstats *dumpstats = (struct acm_dumpstats *)arg;
-            memcpy(&acmctl.u.dumpstats,
+            memcpy(&acmctl->u.dumpstats,
                    dumpstats,
                    sizeof(struct acm_dumpstats));
         }
@@ -56,7 +63,7 @@ int xc_acm_op(xc_interface *xch, int cmd
 
         case ACMOP_getssid: {
             struct acm_getssid *getssid = (struct acm_getssid *)arg;
-            memcpy(&acmctl.u.getssid,
+            memcpy(&acmctl->u.getssid,
                    getssid,
                    sizeof(struct acm_getssid));
         }
@@ -64,7 +71,7 @@ int xc_acm_op(xc_interface *xch, int cmd
 
         case ACMOP_getdecision: {
             struct acm_getdecision *getdecision = (struct acm_getdecision *)arg;
-            memcpy(&acmctl.u.getdecision,
+            memcpy(&acmctl->u.getdecision,
                    getdecision,
                    sizeof(struct acm_getdecision));
         }
@@ -72,7 +79,7 @@ int xc_acm_op(xc_interface *xch, int cmd
 
         case ACMOP_chgpolicy: {
             struct acm_change_policy *change_policy = (struct acm_change_policy *)arg;
-            memcpy(&acmctl.u.change_policy,
+            memcpy(&acmctl->u.change_policy,
                    change_policy,
                    sizeof(struct acm_change_policy));
         }
@@ -80,40 +87,36 @@ int xc_acm_op(xc_interface *xch, int cmd
 
         case ACMOP_relabeldoms: {
             struct acm_relabel_doms *relabel_doms = (struct acm_relabel_doms *)arg;
-            memcpy(&acmctl.u.relabel_doms,
+            memcpy(&acmctl->u.relabel_doms,
                    relabel_doms,
                    sizeof(struct acm_relabel_doms));
         }
         break;
     }
 
-    acmctl.cmd = cmd;
-    acmctl.interface_version = ACM_INTERFACE_VERSION;
+    acmctl->cmd = cmd;
+    acmctl->interface_version = ACM_INTERFACE_VERSION;
 
     hypercall.op = __HYPERVISOR_xsm_op;
-    hypercall.arg[0] = (unsigned long)&acmctl;
-    if ( lock_pages(xch, &acmctl, sizeof(acmctl)) != 0)
-    {
-        PERROR("Could not lock memory for Xen hypercall");
-        return -EFAULT;
-    }
+    hypercall.arg[0] = HYPERCALL_BUFFER_AS_ARG(acmctl);
     if ( (ret = do_xen_hypercall(xch, &hypercall)) < 0)
     {
         if ( errno == EACCES )
             DPRINTF("acmctl operation failed -- need to"
                     " rebuild the user-space tool set?\n");
     }
-    unlock_pages(xch, &acmctl, sizeof(acmctl));
 
     switch (cmd) {
         case ACMOP_getdecision: {
             struct acm_getdecision *getdecision = (struct acm_getdecision *)arg;
             memcpy(getdecision,
-                   &acmctl.u.getdecision,
+                   &acmctl->u.getdecision,
                    sizeof(struct acm_getdecision));
             break;
         }
     }
+
+    xc_hypercall_buffer_free(xch, acmctl);
 
     return ret;
 }

^ permalink raw reply	[flat|nested] 34+ messages in thread

* [PATCH 17 of 24] libxc: convert evtchn interfaces over to hypercall buffers
  2010-09-06 13:38 [PATCH 00 of 24] [RFC] libxc: hypercall buffers Ian Campbell
                   ` (15 preceding siblings ...)
  2010-09-06 13:38 ` [PATCH 16 of 24] libxc: convert acm interfaces " Ian Campbell
@ 2010-09-06 13:38 ` Ian Campbell
  2010-09-06 13:38 ` [PATCH 18 of 24] libxc: convert schedop " Ian Campbell
                   ` (8 subsequent siblings)
  25 siblings, 0 replies; 34+ messages in thread
From: Ian Campbell @ 2010-09-06 13:38 UTC (permalink / raw)
  To: xen-devel; +Cc: Ian Campbell

# HG changeset patch
# User Ian Campbell <ian.campbell@citrix.com>
# Date 1283779691 -3600
# Node ID 0a24ab4ac4a43f4b36feea9e7c5ddd72b5f23872
# Parent  c42a409a5c56732462d9effa844ebc2f7d06ba60
libxc: convert evtchn interfaces over to hypercall buffers

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
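
All the evtchn wrappers funnel through the do_evtchn_op helper, so bouncing
its void * argument (with the caller-supplied arg_size) makes every event
channel operation hypercall-safe in one place:

    DECLARE_HYPERCALL_BOUNCE(arg, arg_size, XC_HYPERCALL_BUFFER_BOUNCE_BOTH);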

diff -r c42a409a5c56 -r 0a24ab4ac4a4 tools/libxc/xc_evtchn.c
--- a/tools/libxc/xc_evtchn.c	Mon Sep 06 14:28:11 2010 +0100
+++ b/tools/libxc/xc_evtchn.c	Mon Sep 06 14:28:11 2010 +0100
@@ -22,31 +22,30 @@
 
 #include "xc_private.h"
 
-
 static int do_evtchn_op(xc_interface *xch, int cmd, void *arg,
                         size_t arg_size, int silently_fail)
 {
     int ret = -1;
     DECLARE_HYPERCALL;
+    DECLARE_HYPERCALL_BOUNCE(arg, arg_size, XC_HYPERCALL_BUFFER_BOUNCE_BOTH);
 
-    hypercall.op     = __HYPERVISOR_event_channel_op;
-    hypercall.arg[0] = cmd;
-    hypercall.arg[1] = (unsigned long)arg;
-
-    if ( lock_pages(xch, arg, arg_size) != 0 )
+    if ( xc_hypercall_bounce_pre(xch, arg) )
     {
         PERROR("do_evtchn_op: arg lock failed");
         goto out;
     }
 
+    hypercall.op     = __HYPERVISOR_event_channel_op;
+    hypercall.arg[0] = cmd;
+    hypercall.arg[1] = HYPERCALL_BUFFER_AS_ARG(arg);
+
     if ((ret = do_xen_hypercall(xch, &hypercall)) < 0 && !silently_fail)
         ERROR("do_evtchn_op: HYPERVISOR_event_channel_op failed: %d", ret);
 
-    unlock_pages(xch, arg, arg_size);
+    xc_hypercall_bounce_post(xch, arg);
  out:
     return ret;
 }
-
 
 evtchn_port_or_error_t
 xc_evtchn_alloc_unbound(xc_interface *xch,

^ permalink raw reply	[flat|nested] 34+ messages in thread

* [PATCH 18 of 24] libxc: convert schedop interfaces over to hypercall buffers
  2010-09-06 13:38 [PATCH 00 of 24] [RFC] libxc: hypercall buffers Ian Campbell
                   ` (16 preceding siblings ...)
  2010-09-06 13:38 ` [PATCH 17 of 24] libxc: convert evtchn " Ian Campbell
@ 2010-09-06 13:38 ` Ian Campbell
  2010-09-06 13:38 ` [PATCH 19 of 24] libxc: convert physdevop interface " Ian Campbell
                   ` (7 subsequent siblings)
  25 siblings, 0 replies; 34+ messages in thread
From: Ian Campbell @ 2010-09-06 13:38 UTC (permalink / raw)
  To: xen-devel; +Cc: Ian Campbell

# HG changeset patch
# User Ian Campbell <ian.campbell@citrix.com>
# Date 1283779691 -3600
# Node ID d781e6711016a2f15b276499b2ebbd69b16d5dfe
# Parent  0a24ab4ac4a43f4b36feea9e7c5ddd72b5f23872
libxc: convert schedop interfaces over to hypercall buffers

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
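
This is the same transformation as the watchdog conversion: the on-stack
sched_remote_shutdown_t becomes an allocated hypercall buffer, and the
hypercall argument setup moves after the allocation because the buffer's
address is not known until xc_hypercall_buffer_alloc returns.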

diff -r 0a24ab4ac4a4 -r d781e6711016 tools/libxc/xc_domain.c
--- a/tools/libxc/xc_domain.c	Mon Sep 06 14:28:11 2010 +0100
+++ b/tools/libxc/xc_domain.c	Mon Sep 06 14:28:11 2010 +0100
@@ -85,24 +85,25 @@ int xc_domain_shutdown(xc_interface *xch
                        int reason)
 {
     int ret = -1;
-    sched_remote_shutdown_t arg;
     DECLARE_HYPERCALL;
+    DECLARE_HYPERCALL_BUFFER(sched_remote_shutdown_t, arg);
 
-    hypercall.op     = __HYPERVISOR_sched_op;
-    hypercall.arg[0] = (unsigned long)SCHEDOP_remote_shutdown;
-    hypercall.arg[1] = (unsigned long)&arg;
-    arg.domain_id = domid;
-    arg.reason = reason;
-
-    if ( lock_pages(xch, &arg, sizeof(arg)) != 0 )
+    arg = xc_hypercall_buffer_alloc(xch, arg, sizeof(*arg));
+    if ( arg == NULL )
     {
         PERROR("Could not lock memory for Xen hypercall");
         goto out1;
     }
 
+    hypercall.op     = __HYPERVISOR_sched_op;
+    hypercall.arg[0] = (unsigned long)SCHEDOP_remote_shutdown;
+    hypercall.arg[1] = HYPERCALL_BUFFER_AS_ARG(arg);
+    arg->domain_id = domid;
+    arg->reason = reason;
+
     ret = do_xen_hypercall(xch, &hypercall);
 
-    unlock_pages(xch, &arg, sizeof(arg));
+    xc_hypercall_buffer_free(xch, arg);
 
  out1:
     return ret;

^ permalink raw reply	[flat|nested] 34+ messages in thread

* [PATCH 19 of 24] libxc: convert physdevop interface over to hypercall buffers
  2010-09-06 13:38 [PATCH 00 of 24] [RFC] libxc: hypercall buffers Ian Campbell
                   ` (17 preceding siblings ...)
  2010-09-06 13:38 ` [PATCH 18 of 24] libxc: convert schedop " Ian Campbell
@ 2010-09-06 13:38 ` Ian Campbell
  2010-09-06 13:38 ` [PATCH 20 of 24] libxc: convert flask interfaces " Ian Campbell
                   ` (6 subsequent siblings)
  25 siblings, 0 replies; 34+ messages in thread
From: Ian Campbell @ 2010-09-06 13:38 UTC (permalink / raw)
  To: xen-devel; +Cc: Ian Campbell

# HG changeset patch
# User Ian Campbell <ian.campbell@citrix.com>
# Date 1283779691 -3600
# Node ID 30ca51399b65a029af07904ba9b0529ac99c0754
# Parent  d781e6711016a2f15b276499b2ebbd69b16d5dfe
libxc: convert physdevop interface over to hypercall buffers

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>

diff -r d781e6711016 -r 30ca51399b65 tools/libxc/xc_private.h
--- a/tools/libxc/xc_private.h	Mon Sep 06 14:28:11 2010 +0100
+++ b/tools/libxc/xc_private.h	Mon Sep 06 14:28:11 2010 +0100
@@ -179,10 +179,10 @@ static inline int do_physdev_op(xc_inter
 static inline int do_physdev_op(xc_interface *xch, int cmd, void *op, size_t len)
 {
     int ret = -1;
+    DECLARE_HYPERCALL;
+    DECLARE_HYPERCALL_BOUNCE(op, len, XC_HYPERCALL_BUFFER_BOUNCE_BOTH);
 
-    DECLARE_HYPERCALL;
-
-    if ( hcall_buf_prep(xch, &op, len) != 0 )
+    if ( xc_hypercall_bounce_pre(xch, op) )
     {
         PERROR("Could not lock memory for Xen hypercall");
         goto out1;
@@ -190,7 +190,7 @@ static inline int do_physdev_op(xc_inter
 
     hypercall.op = __HYPERVISOR_physdev_op;
     hypercall.arg[0] = (unsigned long) cmd;
-    hypercall.arg[1] = (unsigned long) op;
+    hypercall.arg[1] = HYPERCALL_BUFFER_AS_ARG(op);
 
     if ( (ret = do_xen_hypercall(xch, &hypercall)) < 0 )
     {
@@ -199,8 +199,7 @@ static inline int do_physdev_op(xc_inter
                     " rebuild the user-space tool set?\n");
     }
 
-    hcall_buf_release(xch, &op, len);
-
+    xc_hypercall_bounce_post(xch, op);
 out1:
     return ret;
 }


* [PATCH 20 of 24] libxc: convert flask interfaces over to hypercall buffers
  2010-09-06 13:38 [PATCH 00 of 24] [RFC] libxc: hypercall buffers Ian Campbell
                   ` (18 preceding siblings ...)
  2010-09-06 13:38 ` [PATCH 19 of 24] libxc: convert physdevop interface " Ian Campbell
@ 2010-09-06 13:38 ` Ian Campbell
  2010-09-06 13:38 ` [PATCH 21 of 24] libxc: convert hvmop " Ian Campbell
                   ` (5 subsequent siblings)
  25 siblings, 0 replies; 34+ messages in thread
From: Ian Campbell @ 2010-09-06 13:38 UTC (permalink / raw)
  To: xen-devel; +Cc: Ian Campbell

# HG changeset patch
# User Ian Campbell <ian.campbell@citrix.com>
# Date 1283779691 -3600
# Node ID 68dfe4921429a857123a5f926f30b54abf3f5f80
# Parent  30ca51399b65a029af07904ba9b0529ac99c0754
libxc: convert flask interfaces over to hypercall buffers

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>

diff -r 30ca51399b65 -r 68dfe4921429 tools/libxc/xc_flask.c
--- a/tools/libxc/xc_flask.c	Mon Sep 06 14:28:11 2010 +0100
+++ b/tools/libxc/xc_flask.c	Mon Sep 06 14:28:11 2010 +0100
@@ -40,15 +40,16 @@ int xc_flask_op(xc_interface *xch, flask
 {
     int ret = -1;
     DECLARE_HYPERCALL;
+    DECLARE_HYPERCALL_BOUNCE(op, sizeof(*op), XC_HYPERCALL_BUFFER_BOUNCE_BOTH);
 
-    hypercall.op     = __HYPERVISOR_xsm_op;
-    hypercall.arg[0] = (unsigned long)op;
-
-    if ( lock_pages(xch, op, sizeof(*op)) != 0 )
+    if ( xc_hypercall_bounce_pre(xch, op) )
     {
         PERROR("Could not lock memory for Xen hypercall");
         goto out;
     }
+
+    hypercall.op     = __HYPERVISOR_xsm_op;
+    hypercall.arg[0] = HYPERCALL_BUFFER_AS_ARG(op);
 
     if ( (ret = do_xen_hypercall(xch, &hypercall)) < 0 )
     {
@@ -56,7 +57,7 @@ int xc_flask_op(xc_interface *xch, flask
             fprintf(stderr, "XSM operation failed!\n");
     }
 
-    unlock_pages(xch, op, sizeof(*op));
+    xc_hypercall_bounce_post(xch, op);
 
  out:
     return ret;


* [PATCH 21 of 24] libxc: convert hvmop interfaces over to hypercall buffers
  2010-09-06 13:38 [PATCH 00 of 24] [RFC] libxc: hypercall buffers Ian Campbell
                   ` (19 preceding siblings ...)
  2010-09-06 13:38 ` [PATCH 20 of 24] libxc: convert flask interfaces " Ian Campbell
@ 2010-09-06 13:38 ` Ian Campbell
  2010-09-06 13:38 ` [PATCH 22 of 24] libxc: convert mca interface " Ian Campbell
                   ` (4 subsequent siblings)
  25 siblings, 0 replies; 34+ messages in thread
From: Ian Campbell @ 2010-09-06 13:38 UTC (permalink / raw)
  To: xen-devel; +Cc: Ian Campbell

# HG changeset patch
# User Ian Campbell <ian.campbell@citrix.com>
# Date 1283779691 -3600
# Node ID 413c6e963a87945e05a8fc1eb761c1e976445d9c
# Parent  68dfe4921429a857123a5f926f30b54abf3f5f80
libxc: convert hvmop interfaces over to hypercall buffers

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>

diff -r 68dfe4921429 -r 413c6e963a87 tools/libxc/xc_domain.c
--- a/tools/libxc/xc_domain.c	Mon Sep 06 14:28:11 2010 +0100
+++ b/tools/libxc/xc_domain.c	Mon Sep 06 14:28:11 2010 +0100
@@ -914,38 +914,42 @@ int xc_set_hvm_param(xc_interface *handl
 int xc_set_hvm_param(xc_interface *handle, domid_t dom, int param, unsigned long value)
 {
     DECLARE_HYPERCALL;
-    xen_hvm_param_t arg;
+    DECLARE_HYPERCALL_BUFFER(xen_hvm_param_t, arg);
     int rc;
+
+    arg = xc_hypercall_buffer_alloc(handle, arg, sizeof(*arg));
+    if ( arg == NULL )
+        return -1;
 
     hypercall.op     = __HYPERVISOR_hvm_op;
     hypercall.arg[0] = HVMOP_set_param;
-    hypercall.arg[1] = (unsigned long)&arg;
-    arg.domid = dom;
-    arg.index = param;
-    arg.value = value;
-    if ( lock_pages(handle, &arg, sizeof(arg)) != 0 )
-        return -1;
+    hypercall.arg[1] = HYPERCALL_BUFFER_AS_ARG(arg);
+    arg->domid = dom;
+    arg->index = param;
+    arg->value = value;
     rc = do_xen_hypercall(handle, &hypercall);
-    unlock_pages(handle, &arg, sizeof(arg));
+    xc_hypercall_buffer_free(handle, arg);
     return rc;
 }
 
 int xc_get_hvm_param(xc_interface *handle, domid_t dom, int param, unsigned long *value)
 {
     DECLARE_HYPERCALL;
-    xen_hvm_param_t arg;
+    DECLARE_HYPERCALL_BUFFER(xen_hvm_param_t, arg);
     int rc;
+
+    arg = xc_hypercall_buffer_alloc(handle, arg, sizeof(*arg));
+    if ( arg == NULL )
+        return -1;
 
     hypercall.op     = __HYPERVISOR_hvm_op;
     hypercall.arg[0] = HVMOP_get_param;
-    hypercall.arg[1] = (unsigned long)&arg;
-    arg.domid = dom;
-    arg.index = param;
-    if ( lock_pages(handle, &arg, sizeof(arg)) != 0 )
-        return -1;
+    hypercall.arg[1] = HYPERCALL_BUFFER_AS_ARG(arg);
+    arg->domid = dom;
+    arg->index = param;
     rc = do_xen_hypercall(handle, &hypercall);
-    unlock_pages(handle, &arg, sizeof(arg));
-    *value = arg.value;
+    *value = arg->value;
+    xc_hypercall_buffer_free(handle, arg);
     return rc;
 }
 
diff -r 68dfe4921429 -r 413c6e963a87 tools/libxc/xc_misc.c
--- a/tools/libxc/xc_misc.c	Mon Sep 06 14:28:11 2010 +0100
+++ b/tools/libxc/xc_misc.c	Mon Sep 06 14:28:11 2010 +0100
@@ -299,18 +299,19 @@ int xc_hvm_set_pci_intx_level(
     unsigned int level)
 {
     DECLARE_HYPERCALL;
-    struct xen_hvm_set_pci_intx_level _arg, *arg = &_arg;
+    DECLARE_HYPERCALL_BUFFER(struct xen_hvm_set_pci_intx_level, arg);
     int rc;
 
-    if ( (rc = hcall_buf_prep(xch, (void **)&arg, sizeof(*arg))) != 0 )
+    arg = xc_hypercall_buffer_alloc(xch, arg, sizeof(*arg));
+    if ( arg == NULL )
     {
         PERROR("Could not lock memory");
-        return rc;
+        return -1;
     }
 
     hypercall.op     = __HYPERVISOR_hvm_op;
     hypercall.arg[0] = HVMOP_set_pci_intx_level;
-    hypercall.arg[1] = (unsigned long)arg;
+    hypercall.arg[1] = HYPERCALL_BUFFER_AS_ARG(arg);
 
     arg->domid  = dom;
     arg->domain = domain;
@@ -321,7 +322,7 @@ int xc_hvm_set_pci_intx_level(
 
     rc = do_xen_hypercall(xch, &hypercall);
 
-    hcall_buf_release(xch, (void **)&arg, sizeof(*arg));
+    xc_hypercall_buffer_free(xch, arg);
 
     return rc;
 }
@@ -332,18 +333,19 @@ int xc_hvm_set_isa_irq_level(
     unsigned int level)
 {
     DECLARE_HYPERCALL;
-    struct xen_hvm_set_isa_irq_level _arg, *arg = &_arg;
+    DECLARE_HYPERCALL_BUFFER(struct xen_hvm_set_isa_irq_level, arg);
     int rc;
 
-    if ( (rc = hcall_buf_prep(xch, (void **)&arg, sizeof(*arg))) != 0 )
+    arg = xc_hypercall_buffer_alloc(xch, arg, sizeof(*arg));
+    if ( arg == NULL )
     {
         PERROR("Could not lock memory");
-        return rc;
+        return -1;
     }
 
     hypercall.op     = __HYPERVISOR_hvm_op;
     hypercall.arg[0] = HVMOP_set_isa_irq_level;
-    hypercall.arg[1] = (unsigned long)arg;
+    hypercall.arg[1] = HYPERCALL_BUFFER_AS_ARG(arg);
 
     arg->domid   = dom;
     arg->isa_irq = isa_irq;
@@ -351,7 +353,7 @@ int xc_hvm_set_isa_irq_level(
 
     rc = do_xen_hypercall(xch, &hypercall);
 
-    hcall_buf_release(xch, (void **)&arg, sizeof(*arg));
+    xc_hypercall_buffer_free(xch, arg);
 
     return rc;
 }
@@ -360,26 +362,27 @@ int xc_hvm_set_pci_link_route(
     xc_interface *xch, domid_t dom, uint8_t link, uint8_t isa_irq)
 {
     DECLARE_HYPERCALL;
-    struct xen_hvm_set_pci_link_route arg;
+    DECLARE_HYPERCALL_BUFFER(struct xen_hvm_set_pci_link_route, arg);
     int rc;
+
+    arg = xc_hypercall_buffer_alloc(xch, arg, sizeof(*arg));
+    if ( arg == NULL )
+    {
+        PERROR("Could not lock memory");
+        return -1;
+    }
 
     hypercall.op     = __HYPERVISOR_hvm_op;
     hypercall.arg[0] = HVMOP_set_pci_link_route;
-    hypercall.arg[1] = (unsigned long)&arg;
+    hypercall.arg[1] = HYPERCALL_BUFFER_AS_ARG(arg);
 
-    arg.domid   = dom;
-    arg.link    = link;
-    arg.isa_irq = isa_irq;
-
-    if ( (rc = lock_pages(xch, &arg, sizeof(arg))) != 0 )
-    {
-        PERROR("Could not lock memory");
-        return rc;
-    }
+    arg->domid   = dom;
+    arg->link    = link;
+    arg->isa_irq = isa_irq;
 
     rc = do_xen_hypercall(xch, &hypercall);
 
-    unlock_pages(xch, &arg, sizeof(arg));
+    xc_hypercall_buffer_free(xch, arg);
 
     return rc;
 }
@@ -390,28 +393,32 @@ int xc_hvm_track_dirty_vram(
     unsigned long *dirty_bitmap)
 {
     DECLARE_HYPERCALL;
-    struct xen_hvm_track_dirty_vram arg;
+    DECLARE_HYPERCALL_BOUNCE(dirty_bitmap, (nr+31) / 32, XC_HYPERCALL_BUFFER_BOUNCE_BOTH);
+    DECLARE_HYPERCALL_BUFFER(struct xen_hvm_track_dirty_vram, arg);
     int rc;
+
+    arg = xc_hypercall_buffer_alloc(xch, arg, sizeof(*arg));
+    if ( arg == NULL || xc_hypercall_bounce_pre(xch, dirty_bitmap) )
+    {
+        PERROR("Could not lock memory");
+        rc = -1;
+        goto out;
+    }
 
     hypercall.op     = __HYPERVISOR_hvm_op;
     hypercall.arg[0] = HVMOP_track_dirty_vram;
-    hypercall.arg[1] = (unsigned long)&arg;
+    hypercall.arg[1] = HYPERCALL_BUFFER_AS_ARG(arg);
 
-    arg.domid     = dom;
-    arg.first_pfn = first_pfn;
-    arg.nr        = nr;
-    set_xen_guest_handle(arg.dirty_bitmap, (uint8_t *)dirty_bitmap);
-
-    if ( (rc = lock_pages(xch, &arg, sizeof(arg))) != 0 )
-    {
-        PERROR("Could not lock memory");
-        return rc;
-    }
+    arg->domid     = dom;
+    arg->first_pfn = first_pfn;
+    arg->nr        = nr;
+    xc_set_xen_guest_handle(arg->dirty_bitmap, dirty_bitmap);
 
     rc = do_xen_hypercall(xch, &hypercall);
 
-    unlock_pages(xch, &arg, sizeof(arg));
-
+out:
+    xc_hypercall_buffer_free(xch, arg);
+    xc_hypercall_bounce_post(xch, dirty_bitmap);
     return rc;
 }
 
@@ -419,26 +426,27 @@ int xc_hvm_modified_memory(
     xc_interface *xch, domid_t dom, uint64_t first_pfn, uint64_t nr)
 {
     DECLARE_HYPERCALL;
-    struct xen_hvm_modified_memory arg;
+    DECLARE_HYPERCALL_BUFFER(struct xen_hvm_modified_memory, arg);
     int rc;
+
+    arg = xc_hypercall_buffer_alloc(xch, arg, sizeof(*arg));
+    if ( arg == NULL )
+    {
+        PERROR("Could not lock memory");
+        return -1;
+    }
 
     hypercall.op     = __HYPERVISOR_hvm_op;
     hypercall.arg[0] = HVMOP_modified_memory;
-    hypercall.arg[1] = (unsigned long)&arg;
+    hypercall.arg[1] = HYPERCALL_BUFFER_AS_ARG(arg);
 
-    arg.domid     = dom;
-    arg.first_pfn = first_pfn;
-    arg.nr        = nr;
-
-    if ( (rc = lock_pages(xch, &arg, sizeof(arg))) != 0 )
-    {
-        PERROR("Could not lock memory");
-        return rc;
-    }
+    arg->domid     = dom;
+    arg->first_pfn = first_pfn;
+    arg->nr        = nr;
 
     rc = do_xen_hypercall(xch, &hypercall);
 
-    unlock_pages(xch, &arg, sizeof(arg));
+    xc_hypercall_buffer_free(xch, arg);
 
     return rc;
 }
@@ -447,27 +455,28 @@ int xc_hvm_set_mem_type(
     xc_interface *xch, domid_t dom, hvmmem_type_t mem_type, uint64_t first_pfn, uint64_t nr)
 {
     DECLARE_HYPERCALL;
-    struct xen_hvm_set_mem_type arg;
+    DECLARE_HYPERCALL_BUFFER(struct xen_hvm_set_mem_type, arg);
     int rc;
+
+    arg = xc_hypercall_buffer_alloc(xch, arg, sizeof(*arg));
+    if ( arg == NULL )
+    {
+        PERROR("Could not lock memory");
+        return -1;
+    }
+
+    arg->domid        = dom;
+    arg->hvmmem_type  = mem_type;
+    arg->first_pfn    = first_pfn;
+    arg->nr           = nr;
 
     hypercall.op     = __HYPERVISOR_hvm_op;
     hypercall.arg[0] = HVMOP_set_mem_type;
-    hypercall.arg[1] = (unsigned long)&arg;
-
-    arg.domid        = dom;
-    arg.hvmmem_type  = mem_type;
-    arg.first_pfn    = first_pfn;
-    arg.nr           = nr;
-
-    if ( (rc = lock_pages(xch, &arg, sizeof(arg))) != 0 )
-    {
-        PERROR("Could not lock memory");
-        return rc;
-    }
+    hypercall.arg[1] = HYPERCALL_BUFFER_AS_ARG(arg);
 
     rc = do_xen_hypercall(xch, &hypercall);
 
-    unlock_pages(xch, &arg, sizeof(arg));
+    xc_hypercall_buffer_free(xch, arg);
 
     return rc;
 }


* [PATCH 22 of 24] libxc: convert mca interface over to hypercall buffers
  2010-09-06 13:38 [PATCH 00 of 24] [RFC] libxc: hypercall buffers Ian Campbell
                   ` (20 preceding siblings ...)
  2010-09-06 13:38 ` [PATCH 21 of 24] libxc: convert hvmop " Ian Campbell
@ 2010-09-06 13:38 ` Ian Campbell
  2010-09-06 13:38 ` [PATCH 23 of 24] libxc: convert tmem " Ian Campbell
                   ` (3 subsequent siblings)
  25 siblings, 0 replies; 34+ messages in thread
From: Ian Campbell @ 2010-09-06 13:38 UTC (permalink / raw)
  To: xen-devel; +Cc: Ian Campbell

# HG changeset patch
# User Ian Campbell <ian.campbell@citrix.com>
# Date 1283779691 -3600
# Node ID 649c4386d5904838f801d30e908b0f3bb1387d2c
# Parent  413c6e963a87945e05a8fc1eb761c1e976445d9c
libxc: convert mca interface over to hypercall buffers

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>

diff -r 413c6e963a87 -r 649c4386d590 tools/libxc/xc_misc.c
--- a/tools/libxc/xc_misc.c	Mon Sep 06 14:28:11 2010 +0100
+++ b/tools/libxc/xc_misc.c	Mon Sep 06 14:28:11 2010 +0100
@@ -153,18 +153,19 @@ int xc_mca_op(xc_interface *xch, struct 
 {
     int ret = 0;
     DECLARE_HYPERCALL;
+    DECLARE_HYPERCALL_BOUNCE(mc, sizeof(*mc), XC_HYPERCALL_BUFFER_BOUNCE_BOTH);
 
-    mc->interface_version = XEN_MCA_INTERFACE_VERSION;
-    if ( lock_pages(xch, mc, sizeof(*mc)) )
+    if ( xc_hypercall_bounce_pre(xch, mc) )
     {
         PERROR("Could not lock xen_mc memory");
-        return -EINVAL;
+        return -1;
     }
+    mc->interface_version = XEN_MCA_INTERFACE_VERSION;
 
     hypercall.op = __HYPERVISOR_mca;
-    hypercall.arg[0] = (unsigned long)mc;
+    hypercall.arg[0] = HYPERCALL_BUFFER_AS_ARG(mc);
     ret = do_xen_hypercall(xch, &hypercall);
-    unlock_pages(xch, mc, sizeof(*mc));
+    xc_hypercall_bounce_post(xch, mc);
     return ret;
 }
 #endif


* [PATCH 23 of 24] libxc: convert tmem interface over to hypercall buffers
  2010-09-06 13:38 [PATCH 00 of 24] [RFC] libxc: hypercall buffers Ian Campbell
                   ` (21 preceding siblings ...)
  2010-09-06 13:38 ` [PATCH 22 of 24] libxc: convert mca interface " Ian Campbell
@ 2010-09-06 13:38 ` Ian Campbell
  2010-09-06 13:38 ` [PATCH 24 of 24] libxc: convert gnttab interfaces " Ian Campbell
                   ` (2 subsequent siblings)
  25 siblings, 0 replies; 34+ messages in thread
From: Ian Campbell @ 2010-09-06 13:38 UTC (permalink / raw)
  To: xen-devel; +Cc: Ian Campbell

# HG changeset patch
# User Ian Campbell <ian.campbell@citrix.com>
# Date 1283779691 -3600
# Node ID afdedd2e14c342a381094df14de9446636780283
# Parent  649c4386d5904838f801d30e908b0f3bb1387d2c
libxc: convert tmem interface over to hypercall buffers

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>

diff -r 649c4386d590 -r afdedd2e14c3 tools/libxc/xc_tmem.c
--- a/tools/libxc/xc_tmem.c	Mon Sep 06 14:28:11 2010 +0100
+++ b/tools/libxc/xc_tmem.c	Mon Sep 06 14:28:11 2010 +0100
@@ -25,21 +25,23 @@ static int do_tmem_op(xc_interface *xch,
 {
     int ret;
     DECLARE_HYPERCALL;
+    DECLARE_HYPERCALL_BOUNCE(op, sizeof(*op), XC_HYPERCALL_BUFFER_BOUNCE_BOTH);
 
-    hypercall.op = __HYPERVISOR_tmem_op;
-    hypercall.arg[0] = (unsigned long)op;
-    if (lock_pages(xch, op, sizeof(*op)) != 0)
+    if ( xc_hypercall_bounce_pre(xch, op) )
     {
         PERROR("Could not lock memory for Xen hypercall");
         return -EFAULT;
     }
+
+    hypercall.op = __HYPERVISOR_tmem_op;
+    hypercall.arg[0] = HYPERCALL_BUFFER_AS_ARG(op);
     if ((ret = do_xen_hypercall(xch, &hypercall)) < 0)
     {
         if ( errno == EACCES )
             DPRINTF("tmem operation failed -- need to"
                     " rebuild the user-space tool set?\n");
     }
-    unlock_pages(xch, op, sizeof(*op));
+    xc_hypercall_bounce_post(xch, op);
 
     return ret;
 }
@@ -54,36 +56,41 @@ int xc_tmem_control(xc_interface *xch,
                     void *buf)
 {
     tmem_op_t op;
+    DECLARE_HYPERCALL_BOUNCE(buf, arg1, XC_HYPERCALL_BUFFER_BOUNCE_OUT);
     int rc;
 
     op.cmd = TMEM_CONTROL;
     op.pool_id = pool_id;
     op.u.ctrl.subop = subop;
     op.u.ctrl.cli_id = cli_id;
-    set_xen_guest_handle(op.u.ctrl.buf,buf);
     op.u.ctrl.arg1 = arg1;
     op.u.ctrl.arg2 = arg2;
     op.u.ctrl.arg3 = arg3;
-
-    if (subop == TMEMC_LIST) {
-        if ((arg1 != 0) && (lock_pages(xch, buf, arg1) != 0))
-        {
-            PERROR("Could not lock memory for Xen hypercall");
-            return -ENOMEM;
-        }
-    }
 
 #ifdef VALGRIND
     if (arg1 != 0)
         memset(buf, 0, arg1);
 #endif
 
+    if ( arg1 != 0 )
+    {
+        if ( buf == NULL )
+            return -EINVAL;
+        if ( xc_hypercall_bounce_pre(xch, buf) )
+        {
+            PERROR("Could not lock memory for Xen hypercall");
+            return -ENOMEM;
+        }
+
+        xc_set_xen_guest_handle(op.u.ctrl.buf, buf);
+    }
+    else
+        xc_set_xen_guest_handle(op.u.ctrl.buf, HYPERCALL_BUFFER_NULL);
+
     rc = do_tmem_op(xch, &op);
 
-    if (subop == TMEMC_LIST) {
-        if (arg1 != 0)
-            unlock_pages(xch, buf, arg1);
-    }
+    if (arg1 != 0)
+        xc_hypercall_bounce_post(xch, buf);
 
     return rc;
 }


* [PATCH 24 of 24] libxc: convert gnttab interfaces over to hypercall buffers
  2010-09-06 13:38 [PATCH 00 of 24] [RFC] libxc: hypercall buffers Ian Campbell
                   ` (22 preceding siblings ...)
  2010-09-06 13:38 ` [PATCH 23 of 24] libxc: convert tmem " Ian Campbell
@ 2010-09-06 13:38 ` Ian Campbell
  2010-09-06 13:41 ` [PATCH 00 of 24] [RFC] libxc: " Ian Campbell
  2010-09-07 16:35 ` Ian Jackson
  25 siblings, 0 replies; 34+ messages in thread
From: Ian Campbell @ 2010-09-06 13:38 UTC (permalink / raw)
  To: xen-devel; +Cc: Ian Campbell

# HG changeset patch
# User Ian Campbell <ian.campbell@citrix.com>
# Date 1283779691 -3600
# Node ID b0a0fd294854b3a0bf778125c85e56702ffecbc2
# Parent  afdedd2e14c342a381094df14de9446636780283
libxc: convert gnttab interfaces over to hypercall buffers

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>

diff -r afdedd2e14c3 -r b0a0fd294854 tools/libxc/xc_linux.c
--- a/tools/libxc/xc_linux.c	Mon Sep 06 14:28:11 2010 +0100
+++ b/tools/libxc/xc_linux.c	Mon Sep 06 14:28:11 2010 +0100
@@ -612,21 +612,22 @@ int xc_gnttab_op(xc_interface *xch, int 
 {
     int ret = 0;
     DECLARE_HYPERCALL;
+    DECLARE_HYPERCALL_BOUNCE(op, count * op_size, XC_HYPERCALL_BUFFER_BOUNCE_BOTH);
 
-    hypercall.op = __HYPERVISOR_grant_table_op;
-    hypercall.arg[0] = cmd;
-    hypercall.arg[1] = (unsigned long)op;
-    hypercall.arg[2] = count;
-
-    if ( lock_pages(xch, op, count* op_size) != 0 )
+    if ( xc_hypercall_bounce_pre(xch, op) )
     {
         PERROR("Could not lock memory for Xen hypercall");
         goto out1;
     }
 
+    hypercall.op = __HYPERVISOR_grant_table_op;
+    hypercall.arg[0] = cmd;
+    hypercall.arg[1] = HYPERCALL_BUFFER_AS_ARG(op);
+    hypercall.arg[2] = count;
+
     ret = do_xen_hypercall(xch, &hypercall);
 
-    unlock_pages(xch, op, count * op_size);
+    xc_hypercall_bounce_post(xch, op);
 
  out1:
     return ret;
@@ -651,7 +652,7 @@ static void *_gnttab_map_table(xc_interf
     int rc, i;
     struct gnttab_query_size query;
     struct gnttab_setup_table setup;
-    unsigned long *frame_list = NULL;
+    DECLARE_HYPERCALL_BUFFER(unsigned long, frame_list);
     xen_pfn_t *pfn_list = NULL;
     grant_entry_v1_t *gnt = NULL;
 
@@ -669,13 +670,10 @@ static void *_gnttab_map_table(xc_interf
 
     *gnt_num = query.nr_frames * (PAGE_SIZE / sizeof(grant_entry_v1_t) );
 
-    frame_list = malloc(query.nr_frames * sizeof(unsigned long));
-    if ( !frame_list || lock_pages(xch, frame_list,
-                                   query.nr_frames * sizeof(unsigned long)) )
+    frame_list = xc_hypercall_buffer_alloc(xch, frame_list, query.nr_frames * sizeof(unsigned long));
+    if ( !frame_list )
     {
         ERROR("Alloc/lock frame_list in xc_gnttab_map_table\n");
-        if ( frame_list )
-            free(frame_list);
         return NULL;
     }
 
@@ -688,7 +686,7 @@ static void *_gnttab_map_table(xc_interf
 
     setup.dom = domid;
     setup.nr_frames = query.nr_frames;
-    set_xen_guest_handle(setup.frame_list, frame_list);
+    xc_set_xen_guest_handle(setup.frame_list, frame_list);
 
     /* XXX Any race with other setup_table hypercall? */
     rc = xc_gnttab_op(xch, GNTTABOP_setup_table, &setup, sizeof(setup),
@@ -713,10 +711,7 @@ static void *_gnttab_map_table(xc_interf
 
 err:
     if ( frame_list )
-    {
-        unlock_pages(xch, frame_list, query.nr_frames * sizeof(unsigned long));
-        free(frame_list);
-    }
+        xc_hypercall_buffer_free(xch, frame_list);
     if ( pfn_list )
         free(pfn_list);


* Re: [PATCH 00 of 24] [RFC] libxc: hypercall buffers
  2010-09-06 13:38 [PATCH 00 of 24] [RFC] libxc: hypercall buffers Ian Campbell
                   ` (23 preceding siblings ...)
  2010-09-06 13:38 ` [PATCH 24 of 24] libxc: convert gnttab interfaces " Ian Campbell
@ 2010-09-06 13:41 ` Ian Campbell
  2010-09-07 16:35 ` Ian Jackson
  25 siblings, 0 replies; 34+ messages in thread
From: Ian Campbell @ 2010-09-06 13:41 UTC (permalink / raw)
  To: xen-devel

On Mon, 2010-09-06 at 14:38 +0100, Ian Campbell wrote:
> 
> The RFC has already grown to many more patches than I originally
> intended so I'd like to solicit some comments on the basic premise,
> usability of the interface etc, before I dig down and convert/cleanup
> the rest. 

To that end patch 10 of 24 and one or two of the subsequent patches
selected at random are the ones worth looking at...

Ian.


* Re: [PATCH 10 of 24] libxc: infrastructure for hypercall safe data buffers
  2010-09-06 13:38 ` [PATCH 10 of 24] libxc: infrastructure for hypercall safe data buffers Ian Campbell
@ 2010-09-07  8:44   ` Jeremy Fitzhardinge
  2010-09-07  9:56     ` Ian Campbell
  0 siblings, 1 reply; 34+ messages in thread
From: Jeremy Fitzhardinge @ 2010-09-07  8:44 UTC (permalink / raw)
  To: Ian Campbell; +Cc: xen-devel

 On 09/06/2010 11:38 PM, Ian Campbell wrote:
> # HG changeset patch
> # User Ian Campbell <ian.campbell@citrix.com>
> # Date 1283779691 -3600
> # Node ID bf7fb64762eb7decea9a6804460f0f966496ba07
> # Parent  7b45202f78cd82d320fb32fea67c0a618697baec
> libxc: infrastructure for hypercall safe data buffers.
>
> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
>
> diff -r 7b45202f78cd -r bf7fb64762eb tools/libxc/Makefile
> --- a/tools/libxc/Makefile	Mon Sep 06 14:28:11 2010 +0100
> +++ b/tools/libxc/Makefile	Mon Sep 06 14:28:11 2010 +0100
> @@ -27,6 +27,7 @@ CTRL_SRCS-y       += xc_mem_event.c
>  CTRL_SRCS-y       += xc_mem_event.c
>  CTRL_SRCS-y       += xc_mem_paging.c
>  CTRL_SRCS-y       += xc_memshr.c
> +CTRL_SRCS-y       += xc_hcall_buf.c
>  CTRL_SRCS-y       += xtl_core.c
>  CTRL_SRCS-y       += xtl_logger_stdio.c
>  CTRL_SRCS-$(CONFIG_X86) += xc_pagetab.c
> diff -r 7b45202f78cd -r bf7fb64762eb tools/libxc/xc_hcall_buf.c
> --- /dev/null	Thu Jan 01 00:00:00 1970 +0000
> +++ b/tools/libxc/xc_hcall_buf.c	Mon Sep 06 14:28:11 2010 +0100
> @@ -0,0 +1,147 @@
> +/*
> + * Copyright (c) 2010, Citrix Systems, Inc.
> + *
> + * This library is free software; you can redistribute it and/or
> + * modify it under the terms of the GNU Lesser General Public
> + * License as published by the Free Software Foundation;
> + * version 2.1 of the License.
> + *
> + * This library is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
> + * Lesser General Public License for more details.
> + *
> + * You should have received a copy of the GNU Lesser General Public
> + * License along with this library; if not, write to the Free Software
> + * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301  USA
> + */
> +
> +#include <inttypes.h>
> +#include "xc_private.h"
> +#include "xg_private.h"
> +
> +DECLARE_NAMED_HYPERCALL_BUFFER(HYPERCALL_BUFFER_NULL);
> +
> +void *xc__hypercall_buffer_alloc_pages(xc_interface *xch, xc_hypercall_buffer_t *b, int nr_pages)
> +{
> +    size_t size = nr_pages * PAGE_SIZE;
> +    void *p;
> +#if defined(_POSIX_C_SOURCE) && !defined(__sun__)
> +    int ret;
> +    ret = posix_memalign(&p, PAGE_SIZE, size);
> +    if (ret != 0)
> +        return NULL;
> +#elif defined(__NetBSD__) || defined(__OpenBSD__)
> +    p = valloc(size);
> +#else
> +    p = memalign(PAGE_SIZE, size);
> +#endif
> +
> +    if (!p)
> +        return NULL;
> +
> +#ifndef __sun__
> +    if ( mlock(p, size) < 0 )
> +    {
> +        free(p);
> +        return NULL;
> +    }
> +#endif
> +
> +    b->hbuf = p;
> +
> +    memset(p, 0, size);
> +    return b->hbuf;
> +}
> +
> +void xc__hypercall_buffer_free_pages(xc_interface *xch, xc_hypercall_buffer_t *b, int nr_pages)
> +{
> +    if ( b->hbuf == NULL )
> +        return;
> +
> +#ifndef __sun__
> +    (void) munlock(b->hbuf, nr_pages * PAGE_SIZE);
> +#endif
> +
> +    free(b->hbuf);
> +}

How does this end up making the memory suitable for passing to Xen? 
Where does it get locked down in the non-__sun__ case?  And why just
__sun__ here?

Is there any way to make memory hypercall-safe with existing syscalls,
or does/will it end up copying from this memory into the kernel before
issuing the hypercall?  Or adding some other mechanism for pinning down
the pages?

    J


* Re: [PATCH 10 of 24] libxc: infrastructure for hypercall safe data buffers
  2010-09-07  8:44   ` Jeremy Fitzhardinge
@ 2010-09-07  9:56     ` Ian Campbell
  2010-09-07 17:23       ` Ian Jackson
  0 siblings, 1 reply; 34+ messages in thread
From: Ian Campbell @ 2010-09-07  9:56 UTC (permalink / raw)
  To: Jeremy Fitzhardinge; +Cc: xen-devel

On Tue, 2010-09-07 at 09:44 +0100, Jeremy Fitzhardinge wrote:
> How does this end up making the memory suitable for passing to Xen? 
> Where does it get locked down in the non-__sun__ case?  And why just
> __sun__ here?

As described in patch 0/24 the series still uses the same mlock
mechanism as before to actually obtain "suitable" memory. The __sun__
stuff is the same as before too -- this part was ported direct from the
existing bounce implementation in xc_private.c.

This series only:
      * ensures that everywhere which should be using special hypercall
        memory is actually using the correct (or any!) interface to
        obtain it. Not everywhere was, sometimes by omission but more
        often because the current implementation will only bounce one
        buffer at a time and just locks any subsequent nested bounce
        attempts in place. The current implementation also only
        bounces buffers smaller than 1 page and just locks anything else
        in place.
      * ensures that each buffer is only locked once -- some callchains
        were (un)locking the same buffer multiple times going down/up
        the stack (particularly concerning for buffers which are reused)
      * removes the use of mlock on portions of the stack (see the
        sketch after this list), which is considered more dubious than
        using mlock in general.
      * makes it easier to switch to a better mechanism than mlock in
        the future (i.e. phase 2) by consolidating the magic allocations
        into one place.
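
To make the stack-mlock pitfall concrete, here is a minimal sketch
(illustrative code, not from the series) of how two locks that land in
the same page defeat each other, because mlock/munlock do not nest:

    #include <sys/mman.h>

    static void demo(void)
    {
        int a, b;                 /* adjacent on-stack variables,
                                     typically sharing one page */

        mlock(&a, sizeof(a));     /* locks the whole containing page */
        mlock(&b, sizeof(b));     /* same page: effectively a no-op */

        munlock(&a, sizeof(a));   /* unlocks the whole page ... */
        /* ... so b's page is no longer locked here, even though
           munlock(&b) has not run; a hypercall issued now with &b
           as a buffer could fault inside Xen. */
        munlock(&b, sizeof(b));
    }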

> Is there any way to make memory hypercall-safe with existing syscalls,
> or does/will it end up copying from this memory into the kernel before
> issuing the hypercall?  Or adding some other mechanism for pinning 
> down the pages?

It's not clear what phase 2 actually is (although phase 3 is clearly
profit), I don't think any existing syscalls do what we need. mlock
(avoiding the stack) gets pretty close and so far the issues with mlock
seem to have been more potential than hurting us in practice, but it
pays to be prepared e.g. for more aggressive page migration/coalescing
in the future, I think.

It's not possible to copy the necessary buffers in the kernel without
adding deep introspection of the necessary hypercalls to the kernel
itself; I think we want to avoid this if possible.

Also some of the buffers can be quite large and/or potentially
performance sensitive so we would like to retain the ability to allocate
the correct sort of memory in userspace from the get go and therefore
avoid bouncing at all.
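
As a minimal sketch of that direct-allocation usage (the macros are
the ones this series introduces; xch, count and the pfns name are
illustrative), a caller which knows a buffer is hypercall-bound could
allocate it up front and skip the bounce entirely:

    DECLARE_HYPERCALL_BUFFER(xen_pfn_t, pfns);

    pfns = xc_hypercall_buffer_alloc(xch, pfns, count * sizeof(*pfns));
    if ( pfns == NULL )
        return -1;

    /* ... fill pfns[0..count-1] and pass it to the hypercall via
       HYPERCALL_BUFFER_AS_ARG() or xc_set_xen_guest_handle() ... */

    xc_hypercall_buffer_free(xch, pfns);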

I was thinking we might need to implement some sort of special anonymous
mmap on the privcmd device or an ioctl or something along those lines,
but I'm open to better suggestions.

Ian.


* Re: [PATCH 00 of 24] [RFC] libxc: hypercall buffers
  2010-09-06 13:38 [PATCH 00 of 24] [RFC] libxc: hypercall buffers Ian Campbell
                   ` (24 preceding siblings ...)
  2010-09-06 13:41 ` [PATCH 00 of 24] [RFC] libxc: " Ian Campbell
@ 2010-09-07 16:35 ` Ian Jackson
  2010-09-07 16:36   ` Ian Campbell
  25 siblings, 1 reply; 34+ messages in thread
From: Ian Jackson @ 2010-09-07 16:35 UTC (permalink / raw)
  To: Ian Campbell; +Cc: xen-devel

Ian Campbell writes ("[Xen-devel] [PATCH 00 of 24] [RFC] libxc: hypercall buffers"):
> This RFC series only partially translates over to the the new
> scheme. It is intended that the final series end with a patch which
> effectively does s/xc_set_xen_guest_handle/set_xen_guest_handle/g in
> order to catch future errors (it should also remove the now redundant
> hcall_buf_prep and hcall_buf_release calls and assiciated
> infrastructure).

This seems like a good idea.  Some of the early parts of your series
should probably go in right away, as they're just bugfixes, right ?

Ian.


* Re: [PATCH 00 of 24] [RFC] libxc: hypercall buffers
  2010-09-07 16:35 ` Ian Jackson
@ 2010-09-07 16:36   ` Ian Campbell
  2010-09-07 17:28     ` Ian Jackson
  0 siblings, 1 reply; 34+ messages in thread
From: Ian Campbell @ 2010-09-07 16:36 UTC (permalink / raw)
  To: Ian Jackson; +Cc: xen-devel

On Tue, 2010-09-07 at 17:35 +0100, Ian Jackson wrote:
> Ian Campbell writes ("[Xen-devel] [PATCH 00 of 24] [RFC] libxc: hypercall buffers"):
> > This RFC series only partially translates over to the the new
> > scheme. It is intended that the final series end with a patch which
> > effectively does s/xc_set_xen_guest_handle/set_xen_guest_handle/g in
> > order to catch future errors (it should also remove the now redundant
> > hcall_buf_prep and hcall_buf_release calls and assiciated
> > infrastructure).
> 
> This seems like a good idea.  Some of the early parts of your series
> should probably go in right away, as they're just bugfixes, right ?

Yes, although I probably need to just double check they are sane and/or
don't rely on later patches for correctness...


* Re: [PATCH 10 of 24] libxc: infrastructure for hypercall safe data buffers
  2010-09-07  9:56     ` Ian Campbell
@ 2010-09-07 17:23       ` Ian Jackson
  2010-09-07 18:44         ` Ian Campbell
  2010-09-07 23:31         ` Jeremy Fitzhardinge
  0 siblings, 2 replies; 34+ messages in thread
From: Ian Jackson @ 2010-09-07 17:23 UTC (permalink / raw)
  To: Ian Campbell; +Cc: Jeremy Fitzhardinge, xen-devel

Ian Campbell writes ("Re: [Xen-devel] [PATCH 10 of 24] libxc: infrastructure for hypercall safe data buffers"):
> It's not clear what phase 2 actually is (although phase 3 is clearly
> profit), I don't think any existing syscalls do what we need. mlock
> (avoiding the stack) gets pretty close and so far the issues with mlock
> seem to have been more potential than hurting us in practice, but it
> pays to be prepared e.g. for more aggressive page migration/coalescing
> in the future, I think.

Ian and I discussed this extensively on IRC, during which conversation
I became convinced that mlock() must do what we want.  Having read the
code in the kernel I'm now not so sure.

The ordinary userspace access functions are all written to cope with
pagefaults and retry the access.  So userspace addresses are not in
general valid in kernel mode even if you've called functions to try to
test them.  It's not clear what mlock prevents; does it prevent NUMA
page migration ?  If not then I think indeed the page could be made
not present by one VCPU editing the page tables while another VCPU is
entering the hypercall, so that the 2nd VCPU will get a spurious
EFAULT.

OTOH: there must be other things that work like Xen - what about user
mode device drivers of various kinds ?  Do X servers not mlock memory
and expect to be able to tell the video card to DMA to it ?  etc.
I think if linux-kernel think that people haven't assumed that mlock()
actually pins the page, they're mistaken - and it's likely to be not
just us.

Ian.


* Re: [PATCH 00 of 24] [RFC] libxc: hypercall buffers
  2010-09-07 16:36   ` Ian Campbell
@ 2010-09-07 17:28     ` Ian Jackson
  0 siblings, 0 replies; 34+ messages in thread
From: Ian Jackson @ 2010-09-07 17:28 UTC (permalink / raw)
  To: Ian Campbell; +Cc: xen-devel

Ian Campbell writes ("Re: [Xen-devel] [PATCH 00 of 24] [RFC] libxc: hypercall buffers"):
> Yes, although I probably need to just double check they are sane and/or
> don't rely on later patches for correctness...

OK, I'll hold off.  I have plenty of other things to go in first
anyway - rather a backlog, in fact ...

Ian.


* Re: [PATCH 10 of 24] libxc: infrastructure for hypercall safe data buffers
  2010-09-07 17:23       ` Ian Jackson
@ 2010-09-07 18:44         ` Ian Campbell
  2010-09-07 23:31         ` Jeremy Fitzhardinge
  1 sibling, 0 replies; 34+ messages in thread
From: Ian Campbell @ 2010-09-07 18:44 UTC (permalink / raw)
  To: Ian Jackson; +Cc: Jeremy Fitzhardinge, xen-devel

On Tue, 2010-09-07 at 18:23 +0100, Ian Jackson wrote: 
> Ian Campbell writes ("Re: [Xen-devel] [PATCH 10 of 24] libxc: infrastructure for hypercall safe data buffers"):
> > It's not clear what phase 2 actually is (although phase 3 is clearly
> > profit), I don't think any existing syscalls do what we need. mlock
> > (avoiding the stack) gets pretty close and so far the issues with mlock
> > seem to have been more potential than hurting us in practice, but it
> > pays to be prepared e.g. for more aggressive page migration/coalescing
> > in the future, I think.
> 
> Ian and I discussed this extensively on IRC, during which conversation
> I became convinced that mlock() must do what we want.  Having read the
> code in the kernel I'm not not so sure.

After our discussion I had some other conversation (I forget
where/with whom) which made me pretty sure we were wrong as well.

> The ordinary userspace access functions are all written to cope with
> pagefaults and retry the access.  So userspace addresses are not in
> general valid in kernel mode even if you've called functions to try to
> test them.

Correct, the difference between a normal userspace access function and a
hypercall is that it is possible to inject (and handle) a page fault
in the former case whereas we cannot inject a page fault to a VCPU while
it is processing a hypercall.

(Maybe it is possible in principle to make all hypercalls restartable
such that we can return to the guest in order to inject page faults but
it's not the case right now and I suspect it would be an enormous amount
of work to make it so)

>   It's not clear what mlock prevents; does it prevent NUMA
> page migration ?  If not then I think indeed the page could be made
> not present by one VCPU editing the page tables while another VCPU is
> entering the hypercall, so that the 2nd VCPU will get a spurious
> EFAULT.

I think you are right; these kinds of page faults are possible.

It seems that mlock is only specified to prevent major page faults (i.e.
those requiring I/O to service) but doesn't specify anything regarding
minor page faults. It ensures that the data is resident in RAM but not
necessarily that it is continuously mapped into your virtual address
space nor writeable.

Minor page faults could be caused by NUMA migration (as you say), CoW
mappings or by the kernel trying to consolidate free memory in order to
satisfy a higher order allocation (Linux has recently gained this exact
functionality, I believe). I'm sure there are a host of other potential
causes too...
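
As a rough illustration of the distinction (buf is an assumed
page-aligned allocation): after mlock you can confirm residency with
mincore(2), but nothing visible to userspace tells you whether the PTE
will still be present, or writable, at the instant the hypercall runs:

    #include <sys/mman.h>

    unsigned char vec;

    mlock(buf, 4096);
    if ( mincore(buf, 4096, &vec) == 0 && (vec & 1) )
    {
        /* Resident in RAM, as mlock guarantees -- but migration,
           compaction or CoW could still clear or write-protect the
           PTE before the hypercall touches the buffer. */
    }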

It's possible that historically most of these potential minor fault
causes were either not implemented in the kernels we were using for
domain 0 (e.g. consolidation is pretty new) or not likely to hit in
practice (e.g. perhaps libxc's usage patterns make it likely that any
CoW mappings are already dealt with by the time the hypercall happens).

Going forward I think it's likely that NUMA migration and memory
consolidation and the like will become more widespread.

> OTOH: there must be other things that work like Xen - what about user
> mode device drivers of various kinds ?  Do X servers not mlock memory
> and expect to be able to tell the video card to DMA to it ?  etc.

DMA would require physical (or more strictly DMA) addresses rather than
virtual addresses so locking the page into a particular virtual address
space doesn't matter all that much from a DMA point of view. I don't
think pure user mode device drivers can do DMA, there is always some
sort of kernel stub required.

In any case the kernel has been moving away from needing privileged X
servers with direct access to hardware in favour of KMS for a while so
I'm not sure an appeal to any similarity we may have with that case
helps us much.

> I think if linux-kernel think that people haven't assumed that mlock()
> actually pins the page, they're mistaken - and it's likely to be not
> just us.

Unfortunately, I think we're reasonably unique. 

Ian.


* Re: [PATCH 10 of 24] libxc: infrastructure for hypercall safe data buffers
  2010-09-07 17:23       ` Ian Jackson
  2010-09-07 18:44         ` Ian Campbell
@ 2010-09-07 23:31         ` Jeremy Fitzhardinge
  1 sibling, 0 replies; 34+ messages in thread
From: Jeremy Fitzhardinge @ 2010-09-07 23:31 UTC (permalink / raw)
  To: Ian Jackson; +Cc: Ian Campbell, xen-devel

 On 09/08/2010 03:23 AM, Ian Jackson wrote:
> Ian and I discussed this extensively on IRC, during which conversation
> I became convinced that mlock() must do what we want.  Having read the
> code in the kernel I'm now not so sure.
>
> The ordinary userspace access functions are all written to cope with
> pagefaults and retry the access.  So userspace addresses are not in
> general valid in kernel mode even if you've called functions to try to
> test them.  It's not clear what mlock prevents; does it prevent NUMA
> page migration ?  If not then I think indeed the page could be made
> not present by one VCPU editing the page tables while another VCPU is
> entering the hypercall, so that the 2nd VCPU will get a spurious
> EFAULT.

As IanC said, the only thing mlock() guarantees is that accessing the
page won't cause a major fault - i.e., one that needs to go to disk to
satisfy it.
You can and will get minor faults on mlocked pages, as a result of the
pte being either non-present or RO.  It can be non-present as a result
of page migration (not necessarily NUMA migration, just defragging
kernel memory to make it possible to allocate higher-order pages), and
RO when doing page-dirtiness tracking.  And I think they can happen
concurrently on different vcpus, so you may end up with a hypercall
being able to start reading the memory, but then fail writing back the
results.

I think the only way to do this properly is to do ioctls out of kernel
memory rather than user process memory.  Perhaps the easiest way to do
this is add an mmap operation to privcmd which allocates a set of kernel
pages and maps them into the process memory, which it can then use as
its hypercall buffer.  The alternatives would be to copy the argument
memory into/out of kernel space around the call, or do some ad-hoc
pinning of pages around the call.  But if we can arrange for all
argument memory to come from a particular buffer, then its easier to
just make sure that buffer has the right properties.
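
A userspace sketch of what that mmap variant might look like (entirely
hypothetical: privcmd has no such mmap operation today, and the device
path and offset semantics here are assumptions):

    #include <fcntl.h>
    #include <sys/mman.h>

    /* Hypothetically ask privcmd for kernel-backed, pinned pages. */
    int fd = open("/dev/xen/privcmd", O_RDWR);
    void *hbuf = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                      MAP_SHARED, fd, 0);

    if ( hbuf != MAP_FAILED )
    {
        /* hbuf would then be hypercall-safe for its whole lifetime;
           munmap would release the kernel pages. */
        munmap(hbuf, 4096);
    }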

> OTOH: there must be other things that work like Xen - what about user
> mode device drivers of various kinds ?  Do X servers not mlock memory
> and expect to be able to tell the video card to DMA to it ?  etc.
> I think if linux-kernel think that people haven't assumed that mlock()
> actually pins the page, they're mistaken - and it's likely to be not
> just us.

Not really - nothing much depends on keeping a page physically resident
and having a pte in a specific state.  DMA just cares about physical
residency, and you can't do usermode DMA without some way of also
getting the physical address of the page, which would mean you've
already got some kind of kernel driver.  And there would be no way to
make such DMA safe anyway (mlock wouldn't protect against a process
being killed, for example).

Trying to share memory via virtual addresses with an entity which is
entirely external to the kernel is just plain weird.

    J



Thread overview: 34+ messages
2010-09-06 13:38 [PATCH 00 of 24] [RFC] libxc: hypercall buffers Ian Campbell
2010-09-06 13:38 ` [PATCH 01 of 24] xen: define raw version of set_xen_guest_handle Ian Campbell
2010-09-06 13:38 ` [PATCH 02 of 24] libxc: flask: use (un)lock pages rather than open coding m(un)lock Ian Campbell
2010-09-06 13:38 ` [PATCH 03 of 24] libxc: pass an xc_interface handle to page locking functions Ian Campbell
2010-09-06 13:38 ` [PATCH 04 of 24] libxc: Remove unnecessary double indirection from xc_readconsolering Ian Campbell
2010-09-06 13:38 ` [PATCH 05 of 24] libxc: use correct size of struct xen_mc Ian Campbell
2010-09-06 13:38 ` [PATCH 06 of 24] libxc: add to xc_domain_maximum_gpfn Ian Campbell
2010-09-06 13:38 ` [PATCH 07 of 24] libxc: replace open-coded use of XENMEM_decrease_reservation Ian Campbell
2010-09-06 13:38 ` [PATCH 08 of 24] libxc: simplify performance counters API Ian Campbell
2010-09-06 13:38 ` [PATCH 09 of 24] libxc: simplify lock profiling API Ian Campbell
2010-09-06 13:38 ` [PATCH 10 of 24] libxc: infrastructure for hypercall safe data buffers Ian Campbell
2010-09-07  8:44   ` Jeremy Fitzhardinge
2010-09-07  9:56     ` Ian Campbell
2010-09-07 17:23       ` Ian Jackson
2010-09-07 18:44         ` Ian Campbell
2010-09-07 23:31         ` Jeremy Fitzhardinge
2010-09-06 13:38 ` [PATCH 11 of 24] libxc: convert xc_version over to hypercall buffers Ian Campbell
2010-09-06 13:38 ` [PATCH 12 of 24] libxc: convert domctl interfaces " Ian Campbell
2010-09-06 13:38 ` [PATCH 13 of 24] libxc: convert shadow domctl interfaces and save/restore " Ian Campbell
2010-09-06 13:38 ` [PATCH 14 of 24] libxc: convert sysctl interfaces " Ian Campbell
2010-09-06 13:38 ` [PATCH 15 of 24] libxc: convert watchdog interface " Ian Campbell
2010-09-06 13:38 ` [PATCH 16 of 24] libxc: convert acm interfaces " Ian Campbell
2010-09-06 13:38 ` [PATCH 17 of 24] libxc: convert evtchn " Ian Campbell
2010-09-06 13:38 ` [PATCH 18 of 24] libxc: convert schedop " Ian Campbell
2010-09-06 13:38 ` [PATCH 19 of 24] libxc: convert physdevop interface " Ian Campbell
2010-09-06 13:38 ` [PATCH 20 of 24] libxc: convert flask interfaces " Ian Campbell
2010-09-06 13:38 ` [PATCH 21 of 24] libxc: convert hvmop " Ian Campbell
2010-09-06 13:38 ` [PATCH 22 of 24] libxc: convert mca interface " Ian Campbell
2010-09-06 13:38 ` [PATCH 23 of 24] libxc: convert tmem " Ian Campbell
2010-09-06 13:38 ` [PATCH 24 of 24] libxc: convert gnttab interfaces " Ian Campbell
2010-09-06 13:41 ` [PATCH 00 of 24] [RFC] libxc: " Ian Campbell
2010-09-07 16:35 ` Ian Jackson
2010-09-07 16:36   ` Ian Campbell
2010-09-07 17:28     ` Ian Jackson
