* [PATCH v2] Tmem fixes for v4.9.
@ 2017-04-04 19:10 Konrad Rzeszutek Wilk
  2017-04-04 19:10 ` [PATCH v2 1/5] xen/libxc/tmem: Replace TMEM_RESTORE_NEW with XEN_SYSCTL_TMEM_OP_SET_POOLS Konrad Rzeszutek Wilk
                   ` (4 more replies)
  0 siblings, 5 replies; 12+ messages in thread
From: Konrad Rzeszutek Wilk @ 2017-04-04 19:10 UTC (permalink / raw)
  To: xen-devel; +Cc: wei.liu2, jbeulich

Hey!

Since v1:
 - Added Acks
 - Fixed hypervisor patches per Jan's review.

Please see the attached patches. I had hoped to have the
initial migration-v2 patches ready as well, but other things preempted me <sigh>.

This patchset moves two sub-ops (TMEM_RESTORE_NEW, TMEM_AUTH)
from the guest-accessible hypercall (HYPERVISOR_tmem_op) to the
system-admin-controlled one (XEN_SYSCTL_tmem_op).

It made no sense to have them in the guest-accessible hypercall, and
worse, it had security implications (a hostile guest could join
another shared pool without any authorization) - but since XSA-7
the tmem code is not yet out of "experimental", which means we can
treat these as fixes without worrying about the security aspects.
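
For illustration, a hypothetical guest-side sketch (not code from this
series; the struct/field names are from xen/include/public/tmem.h, the
hypercall wrapper name is assumed) of what was possible before:

    struct tmem_op op = { .cmd = TMEM_AUTH, .pool_id = 0 };
    op.u.creat.arg1 = cli_id;        /* no domid check was performed */
    op.u.creat.uuid[0] = uuid_lo;    /* guessed/known pool UUID */
    op.u.creat.uuid[1] = uuid_hi;
    op.u.creat.flags = 1;            /* 1 = authorize */
    HYPERVISOR_tmem_op(&op);         /* reachable from any guest */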

Please see the diffstat, shortlog, and the full diff below for easier
review.


 docs/misc/tmem-internals.html       |   6 +--
 docs/misc/xen-command-line.markdown |   3 --
 tools/libxc/include/xenctrl.h       |   2 +-
 tools/libxc/xc_tmem.c               |  78 +++++++++++-----------------
 tools/xl/xl_cmdtable.c              |   2 +-
 xen/common/tmem.c                   |  38 ++++++--------
 xen/common/tmem_control.c           | 101 ++++++++++++++++++++++++++++++++++++
 xen/common/tmem_xen.c               |   3 --
 xen/include/public/sysctl.h         |  15 ++++--
 xen/include/public/tmem.h           |   8 +--
 xen/include/xen/tmem_control.h      |   6 +++
 xen/include/xen/tmem_xen.h          |   9 ----
 12 files changed, 173 insertions(+), 98 deletions(-)

Konrad Rzeszutek Wilk (5):
      xen/libxc/tmem: Replace TMEM_RESTORE_NEW with XEN_SYSCTL_TMEM_OP_SET_POOLS
      xen/libxc: Move TMEM_AUTH to XEN_SYSCTL_TMEM_OP_SET_AUTH
      tmem: By default to join a shared pool it must be authorized.
      tmem: Fix tmem-shared-auth 'auth' values
      tmem: Parse UUIDs correctly.


diff --git a/docs/misc/tmem-internals.html b/docs/misc/tmem-internals.html
index 2d8635d..9b7e70e 100644
--- a/docs/misc/tmem-internals.html
+++ b/docs/misc/tmem-internals.html
@@ -715,11 +715,9 @@ Ocfs2 has only very limited security; it is assumed that anyone who can
 access the filesystem bits on the shared disk can mount the filesystem and use
 it.  But in a virtualized data center,
 higher isolation requirements may apply.
-As a result, a Xen boot option -- &quot;tmem_shared_auth&quot; -- was
-added.  The option defaults to disabled,
-but when it is enabled, management tools must explicitly authenticate (or may
+As a result, management tools must explicitly authenticate (or may
 explicitly deny) shared pool access to any client.
-On Xen, this is done with the &quot;xm
+On Xen, this is done with the &quot;xl
 tmem-shared-auth&quot; command.
 <P>
 <b><i>32-bit implementation</i>.</b>
diff --git a/docs/misc/xen-command-line.markdown b/docs/misc/xen-command-line.markdown
index 9eb85d6..9690512 100644
--- a/docs/misc/xen-command-line.markdown
+++ b/docs/misc/xen-command-line.markdown
@@ -1539,9 +1539,6 @@ pages) must also be specified via the tbuf\_size parameter.
 ### tmem\_compress
 > `= <boolean>`
 
-### tmem\_shared\_auth
-> `= <boolean>`
-
 ### tsc
 > `= unstable | skewed | stable:socket`
 
diff --git a/tools/libxc/include/xenctrl.h b/tools/libxc/include/xenctrl.h
index 2d97d36..e2b4cc1 100644
--- a/tools/libxc/include/xenctrl.h
+++ b/tools/libxc/include/xenctrl.h
@@ -1901,7 +1901,7 @@ int xc_tmem_control_oid(xc_interface *xch, int32_t pool_id, uint32_t subop,
 int xc_tmem_control(xc_interface *xch,
                     int32_t pool_id, uint32_t subop, uint32_t cli_id,
                     uint32_t len, uint32_t arg, void *buf);
-int xc_tmem_auth(xc_interface *xch, int cli_id, char *uuid_str, int arg1);
+int xc_tmem_auth(xc_interface *xch, int cli_id, char *uuid_str, int enable);
 int xc_tmem_save(xc_interface *xch, int dom, int live, int fd, int field_marker);
 int xc_tmem_save_extra(xc_interface *xch, int dom, int fd, int field_marker);
 void xc_tmem_save_done(xc_interface *xch, int dom);
diff --git a/tools/libxc/xc_tmem.c b/tools/libxc/xc_tmem.c
index 51d11ef..9bf5cc3 100644
--- a/tools/libxc/xc_tmem.c
+++ b/tools/libxc/xc_tmem.c
@@ -22,30 +22,6 @@
 #include <assert.h>
 #include <xen/tmem.h>
 
-static int do_tmem_op(xc_interface *xch, tmem_op_t *op)
-{
-    int ret;
-    DECLARE_HYPERCALL_BOUNCE(op, sizeof(*op), XC_HYPERCALL_BUFFER_BOUNCE_BOTH);
-
-    if ( xc_hypercall_bounce_pre(xch, op) )
-    {
-        PERROR("Could not bounce buffer for tmem op hypercall");
-        return -EFAULT;
-    }
-
-    ret = xencall1(xch->xcall, __HYPERVISOR_tmem_op,
-                   HYPERCALL_BUFFER_AS_ARG(op));
-    if ( ret < 0 )
-    {
-        if ( errno == EACCES )
-            DPRINTF("tmem operation failed -- need to"
-                    " rebuild the user-space tool set?\n");
-    }
-    xc_hypercall_bounce_post(xch, op);
-
-    return ret;
-}
-
 int xc_tmem_control(xc_interface *xch,
                     int32_t pool_id,
                     uint32_t cmd,
@@ -69,7 +45,8 @@ int xc_tmem_control(xc_interface *xch,
     sysctl.u.tmem_op.oid.oid[1] = 0;
     sysctl.u.tmem_op.oid.oid[2] = 0;
 
-    if ( cmd == XEN_SYSCTL_TMEM_OP_SET_CLIENT_INFO )
+    if ( cmd == XEN_SYSCTL_TMEM_OP_SET_CLIENT_INFO ||
+         cmd == XEN_SYSCTL_TMEM_OP_SET_AUTH )
         HYPERCALL_BOUNCE_SET_DIR(buf, XC_HYPERCALL_BUFFER_BOUNCE_IN);
     if ( len )
     {
@@ -161,9 +138,9 @@ static int xc_tmem_uuid_parse(char *uuid_str, uint64_t *uuid_lo, uint64_t *uuid_
         else if ( *p >= '0' && *p <= '9' )
             digit = *p - '0';
         else if ( *p >= 'A' && *p <= 'F' )
-            digit = *p - 'A';
+            digit = *p - 'A' + 10;
         else if ( *p >= 'a' && *p <= 'f' )
-            digit = *p - 'a';
+            digit = *p - 'a' + 10;
         else
             return -1;
         *x = (*x << 4) | digit;
@@ -176,22 +153,25 @@ static int xc_tmem_uuid_parse(char *uuid_str, uint64_t *uuid_lo, uint64_t *uuid_
 int xc_tmem_auth(xc_interface *xch,
                  int cli_id,
                  char *uuid_str,
-                 int arg1)
+                 int enable)
 {
-    tmem_op_t op;
-
-    op.cmd = TMEM_AUTH;
-    op.pool_id = 0;
-    op.u.creat.arg1 = cli_id;
-    op.u.creat.flags = arg1;
-    if ( xc_tmem_uuid_parse(uuid_str, &op.u.creat.uuid[0],
-                                      &op.u.creat.uuid[1]) < 0 )
+    xen_tmem_pool_info_t pool = {
+        .flags.u.auth = enable,
+        .id = 0,
+        .n_pages = 0,
+        .uuid[0] = 0,
+        .uuid[1] = 0,
+    };
+    if ( xc_tmem_uuid_parse(uuid_str, &pool.uuid[0],
+                                      &pool.uuid[1]) < 0 )
     {
         PERROR("Can't parse uuid, use xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx");
         return -1;
     }
-
-    return do_tmem_op(xch, &op);
+    return xc_tmem_control(xch, 0 /* pool_id */,
+                           XEN_SYSCTL_TMEM_OP_SET_AUTH,
+                           cli_id, sizeof(pool),
+                           0 /* arg */, &pool);
 }
 
 /* Save/restore/live migrate */
@@ -385,16 +365,18 @@ static int xc_tmem_restore_new_pool(
                     uint64_t uuid_lo,
                     uint64_t uuid_hi)
 {
-    tmem_op_t op;
-
-    op.cmd = TMEM_RESTORE_NEW;
-    op.pool_id = pool_id;
-    op.u.creat.arg1 = cli_id;
-    op.u.creat.flags = flags;
-    op.u.creat.uuid[0] = uuid_lo;
-    op.u.creat.uuid[1] = uuid_hi;
-
-    return do_tmem_op(xch, &op);
+    xen_tmem_pool_info_t pool = {
+        .flags.raw = flags,
+        .id = pool_id,
+        .n_pages = 0,
+        .uuid[0] = uuid_lo,
+        .uuid[1] = uuid_hi,
+    };
+
+    return xc_tmem_control(xch, pool_id,
+                           XEN_SYSCTL_TMEM_OP_SET_POOLS,
+                           cli_id, sizeof(pool),
+                           0 /* arg */, &pool);
 }
 
 int xc_tmem_restore(xc_interface *xch, int dom, int io_fd)
diff --git a/tools/xl/xl_cmdtable.c b/tools/xl/xl_cmdtable.c
index 7d97811..30eb93c 100644
--- a/tools/xl/xl_cmdtable.c
+++ b/tools/xl/xl_cmdtable.c
@@ -416,7 +416,7 @@ struct cmd_spec cmd_table[] = {
       "  -a                             Authenticate for all tmem pools\n"
       "  -u UUID                        Specify uuid\n"
       "                                 (abcdef01-2345-6789-1234-567890abcdef)\n"
-      "  -A AUTH                        0=auth,1=deauth",
+      "  -A AUTH                        0=deauth,1=auth",
     },
     { "tmem-freeable",
       &main_tmem_freeable, 0, 0,
diff --git a/xen/common/tmem.c b/xen/common/tmem.c
index 6d5de5b..4f77a8c 100644
--- a/xen/common/tmem.c
+++ b/xen/common/tmem.c
@@ -804,7 +804,7 @@ static void pool_flush(struct tmem_pool *pool, domid_t cli_id)
 
 /************ CLIENT MANIPULATION OPERATIONS **************************/
 
-static struct client *client_create(domid_t cli_id)
+struct client *client_create(domid_t cli_id)
 {
     struct client *client = xzalloc(struct client);
     int i, shift;
@@ -846,7 +846,6 @@ static struct client *client_create(domid_t cli_id)
     client->info.version = TMEM_SPEC_VERSION;
     client->info.maxpools = MAX_POOLS_PER_DOMAIN;
     client->info.flags.u.compress = tmem_compression_enabled();
-    client->shared_auth_required = tmem_shared_auth();
     for ( i = 0; i < MAX_GLOBAL_SHARED_POOLS; i++)
         client->shared_auth_uuid[i][0] =
             client->shared_auth_uuid[i][1] = -1L;
@@ -1435,9 +1434,9 @@ static int do_tmem_destroy_pool(uint32_t pool_id)
     return 1;
 }
 
-static int do_tmem_new_pool(domid_t this_cli_id,
-                                     uint32_t d_poolid, uint32_t flags,
-                                     uint64_t uuid_lo, uint64_t uuid_hi)
+int do_tmem_new_pool(domid_t this_cli_id,
+                     uint32_t d_poolid, uint32_t flags,
+                     uint64_t uuid_lo, uint64_t uuid_hi)
 {
     struct client *client;
     domid_t cli_id;
@@ -1530,7 +1529,8 @@ static int do_tmem_new_pool(domid_t this_cli_id,
             pool->shared = 0;
             goto out;
         }
-        if ( client->shared_auth_required && !tmem_global.shared_auth )
+        /* By default only join domains that are authorized by admin. */
+        if ( !tmem_global.shared_auth )
         {
             for ( i = 0; i < MAX_GLOBAL_SHARED_POOLS; i++)
                 if ( (client->shared_auth_uuid[i][0] == uuid_lo) &&
@@ -1604,8 +1604,8 @@ fail:
 
 /************ TMEM CONTROL OPERATIONS ************************************/
 
-static int tmemc_shared_pool_auth(domid_t cli_id, uint64_t uuid_lo,
-                                  uint64_t uuid_hi, bool_t auth)
+int tmemc_shared_pool_auth(domid_t cli_id, uint64_t uuid_lo,
+                           uint64_t uuid_hi, bool auth)
 {
     struct client *client;
     int i, free = -1;
@@ -1908,21 +1908,15 @@ long do_tmem_op(tmem_cli_op_t uops)
     /* Acquire write lock for all commands at first. */
     write_lock(&tmem_rwlock);
 
-    if ( op.cmd == TMEM_CONTROL )
+    switch ( op.cmd )
     {
+    case TMEM_CONTROL:
+    case TMEM_RESTORE_NEW:
+    case TMEM_AUTH:
         rc = -EOPNOTSUPP;
-    }
-    else if ( op.cmd == TMEM_AUTH )
-    {
-        rc = tmemc_shared_pool_auth(op.u.creat.arg1,op.u.creat.uuid[0],
-                         op.u.creat.uuid[1],op.u.creat.flags);
-    }
-    else if ( op.cmd == TMEM_RESTORE_NEW )
-    {
-        rc = do_tmem_new_pool(op.u.creat.arg1, op.pool_id, op.u.creat.flags,
-                         op.u.creat.uuid[0], op.u.creat.uuid[1]);
-    }
-    else {
+        break;
+
+    default:
     /*
 	 * For other commands, create per-client tmem structure dynamically on
 	 * first use by client.
@@ -1999,6 +1993,8 @@ long do_tmem_op(tmem_cli_op_t uops)
                 tmem_stats.errored_tmem_ops++;
             return rc;
         }
+        break;
+
     }
 out:
     write_unlock(&tmem_rwlock);
diff --git a/xen/common/tmem_control.c b/xen/common/tmem_control.c
index ddd9cfe..2d980e3 100644
--- a/xen/common/tmem_control.c
+++ b/xen/common/tmem_control.c
@@ -402,6 +402,101 @@ static int tmemc_get_pool(int cli_id,
     return rc ? : idx;
 }
 
+static int tmemc_set_pools(int cli_id,
+                           XEN_GUEST_HANDLE(xen_tmem_pool_info_t) pools,
+                           uint32_t len)
+{
+    unsigned int i;
+    int rc = 0;
+    unsigned int nr = len / sizeof(xen_tmem_pool_info_t);
+    struct client *client = tmem_client_from_cli_id(cli_id);
+
+    if ( len % sizeof(xen_tmem_pool_info_t) )
+        return -EINVAL;
+
+    if ( nr > MAX_POOLS_PER_DOMAIN )
+        return -E2BIG;
+
+    if ( !guest_handle_okay(pools, nr) )
+        return -EINVAL;
+
+    if ( !client )
+    {
+        client = client_create(cli_id);
+        if ( !client )
+            return -ENOMEM;
+    }
+    for ( i = 0; i < nr; i++ )
+    {
+        xen_tmem_pool_info_t pool;
+
+        if ( __copy_from_guest_offset(&pool, pools, i, 1 ) )
+            return -EFAULT;
+
+        if ( pool.n_pages )
+            return -EINVAL;
+
+        rc = do_tmem_new_pool(cli_id, pool.id, pool.flags.raw,
+                              pool.uuid[0], pool.uuid[1]);
+        if ( rc < 0 )
+            break;
+
+        pool.id = rc;
+        if ( __copy_to_guest_offset(pools, i, &pool, 1) )
+            return -EFAULT;
+    }
+
+    /* And how many we have processed. */
+    return rc ? : i;
+}
+
+static int tmemc_auth_pools(int cli_id,
+                            XEN_GUEST_HANDLE(xen_tmem_pool_info_t) pools,
+                            uint32_t len)
+{
+    unsigned int i;
+    int rc = 0;
+    unsigned int nr = len / sizeof(xen_tmem_pool_info_t);
+    struct client *client = tmem_client_from_cli_id(cli_id);
+
+    if ( len % sizeof(xen_tmem_pool_info_t) )
+        return -EINVAL;
+
+    if ( nr > MAX_POOLS_PER_DOMAIN )
+        return -E2BIG;
+
+    if ( !guest_handle_okay(pools, nr) )
+        return -EINVAL;
+
+    if ( !client )
+    {
+        client = client_create(cli_id);
+        if ( !client )
+            return -ENOMEM;
+    }
+
+    for ( i = 0; i < nr; i++ )
+    {
+        xen_tmem_pool_info_t pool;
+
+        if ( __copy_from_guest_offset(&pool, pools, i, 1 ) )
+            return -EFAULT;
+
+        if ( pool.n_pages )
+            return -EINVAL;
+
+        rc = tmemc_shared_pool_auth(cli_id, pool.uuid[0], pool.uuid[1],
+                                    pool.flags.u.auth);
+
+        if ( rc < 0 )
+            break;
+
+    }
+
+    /* And how many we have processed. */
+    return rc ? : i;
+}
+
 int tmem_control(struct xen_sysctl_tmem_op *op)
 {
     int ret;
@@ -438,6 +533,12 @@ int tmem_control(struct xen_sysctl_tmem_op *op)
     case XEN_SYSCTL_TMEM_OP_GET_POOLS:
         ret = tmemc_get_pool(op->cli_id, op->u.pool, op->len);
         break;
+    case XEN_SYSCTL_TMEM_OP_SET_POOLS: /* TMEM_RESTORE_NEW */
+        ret = tmemc_set_pools(op->cli_id, op->u.pool, op->len);
+        break;
+    case XEN_SYSCTL_TMEM_OP_SET_AUTH: /* TMEM_AUTH */
+        ret = tmemc_auth_pools(op->cli_id, op->u.pool, op->len);
+        break;
     default:
         ret = do_tmem_control(op);
         break;
diff --git a/xen/common/tmem_xen.c b/xen/common/tmem_xen.c
index 7d60b71..06ce3ef 100644
--- a/xen/common/tmem_xen.c
+++ b/xen/common/tmem_xen.c
@@ -20,9 +20,6 @@ boolean_param("tmem", opt_tmem);
 bool_t __read_mostly opt_tmem_compress = 0;
 boolean_param("tmem_compress", opt_tmem_compress);
 
-bool_t __read_mostly opt_tmem_shared_auth = 0;
-boolean_param("tmem_shared_auth", opt_tmem_shared_auth);
-
 atomic_t freeable_page_count = ATOMIC_INIT(0);
 
 /* these are a concurrency bottleneck, could be percpu and dynamically
diff --git a/xen/include/public/sysctl.h b/xen/include/public/sysctl.h
index 00f5e77..ee76a66 100644
--- a/xen/include/public/sysctl.h
+++ b/xen/include/public/sysctl.h
@@ -770,7 +770,9 @@ DEFINE_XEN_GUEST_HANDLE(xen_sysctl_psr_cat_op_t);
 #define XEN_SYSCTL_TMEM_OP_SET_CLIENT_INFO        6
 #define XEN_SYSCTL_TMEM_OP_GET_POOLS              7
 #define XEN_SYSCTL_TMEM_OP_QUERY_FREEABLE_MB      8
+#define XEN_SYSCTL_TMEM_OP_SET_POOLS              9
 #define XEN_SYSCTL_TMEM_OP_SAVE_BEGIN             10
+#define XEN_SYSCTL_TMEM_OP_SET_AUTH               11
 #define XEN_SYSCTL_TMEM_OP_SAVE_GET_NEXT_PAGE     19
 #define XEN_SYSCTL_TMEM_OP_SAVE_GET_NEXT_INV      20
 #define XEN_SYSCTL_TMEM_OP_SAVE_END               21
@@ -812,10 +814,14 @@ typedef struct xen_tmem_client xen_tmem_client_t;
 DEFINE_XEN_GUEST_HANDLE(xen_tmem_client_t);
 
 /*
- * XEN_SYSCTL_TMEM_OP_GET_POOLS uses the 'pool' array in
- * xen_sysctl_tmem_op with this structure. The hypercall will
+ * XEN_SYSCTL_TMEM_OP_[GET|SET]_POOLS or XEN_SYSCTL_TMEM_OP_SET_AUTH
+ * uses the 'pool' array in * xen_sysctl_tmem_op with this structure.
+ * The XEN_SYSCTL_TMEM_OP_GET_POOLS hypercall will
  * return the number of entries in 'pool' or a negative value
  * if an error was encountered.
+ * The XEN_SYSCTL_TMEM_OP_SET_[AUTH|POOLS] will return the number of
+ * entries in 'pool' processed or a negative value if an error
+ * was encountered.
  */
 struct xen_tmem_pool_info {
     union {
@@ -823,14 +829,15 @@ struct xen_tmem_pool_info {
         struct {
             uint32_t persist:1,    /* See TMEM_POOL_PERSIST. */
                      shared:1,     /* See TMEM_POOL_SHARED. */
-                     rsv:2,
+                     auth:1,       /* See TMEM_POOL_AUTH. */
+                     rsv1:1,
                      pagebits:8,   /* TMEM_POOL_PAGESIZE_[SHIFT,MASK]. */
                      rsv2:12,
                      version:8;    /* TMEM_POOL_VERSION_[SHIFT,MASK]. */
         } u;
     } flags;
     uint32_t id;                  /* Less than tmem_client.maxpools. */
-    uint64_t n_pages;
+    uint64_t n_pages;             /* Zero on XEN_SYSCTL_TMEM_OP_SET_[AUTH|POOLS]. */
     uint64_aligned_t uuid[2];
 };
 typedef struct xen_tmem_pool_info xen_tmem_pool_info_t;
diff --git a/xen/include/public/tmem.h b/xen/include/public/tmem.h
index 2d805fb..aa0aafa 100644
--- a/xen/include/public/tmem.h
+++ b/xen/include/public/tmem.h
@@ -51,9 +51,9 @@
 #define TMEM_XCHG                 10
 #endif
 
-/* Privileged commands to HYPERVISOR_tmem_op() */
-#define TMEM_AUTH                 101
-#define TMEM_RESTORE_NEW          102
+/* Privileged commands now called via XEN_SYSCTL_tmem_op. */
+#define TMEM_AUTH                 101 /* as XEN_SYSCTL_TMEM_OP_SET_AUTH. */
+#define TMEM_RESTORE_NEW          102 /* as XEN_SYSCTL_TMEM_OP_SET_POOL. */
 
 /* Bits for HYPERVISOR_tmem_op(TMEM_NEW_POOL) */
 #define TMEM_POOL_PERSIST          1
@@ -92,7 +92,7 @@ struct tmem_op {
             uint64_t uuid[2];
             uint32_t flags;
             uint32_t arg1;
-        } creat; /* for cmd == TMEM_NEW_POOL, TMEM_AUTH, TMEM_RESTORE_NEW */
+        } creat; /* for cmd == TMEM_NEW_POOL. */
         struct {
 #if __XEN_INTERFACE_VERSION__ < 0x00040600
             uint64_t oid[3];
diff --git a/xen/include/xen/tmem_control.h b/xen/include/xen/tmem_control.h
index 44bc07f..ad04cf7 100644
--- a/xen/include/xen/tmem_control.h
+++ b/xen/include/xen/tmem_control.h
@@ -18,6 +18,12 @@ extern rwlock_t tmem_rwlock;
 int tmem_evict(void);
 int do_tmem_control(struct xen_sysctl_tmem_op *op);
 
+struct client *client_create(domid_t cli_id);
+int do_tmem_new_pool(domid_t this_cli_id, uint32_t d_poolid, uint32_t flags,
+                     uint64_t uuid_lo, uint64_t uuid_hi);
+
+int tmemc_shared_pool_auth(domid_t cli_id, uint64_t uuid_lo,
+                           uint64_t uuid_hi, bool auth);
 #endif /* CONFIG_TMEM */
 
 #endif /* __XEN_TMEM_CONTROL_H__ */
diff --git a/xen/include/xen/tmem_xen.h b/xen/include/xen/tmem_xen.h
index 13cf7bc..dc5888c 100644
--- a/xen/include/xen/tmem_xen.h
+++ b/xen/include/xen/tmem_xen.h
@@ -41,12 +41,6 @@ static inline bool_t tmem_compression_enabled(void)
     return opt_tmem_compress;
 }
 
-extern bool_t opt_tmem_shared_auth;
-static inline bool_t tmem_shared_auth(void)
-{
-    return opt_tmem_shared_auth;
-}
-
 #ifdef CONFIG_TMEM
 extern bool_t opt_tmem;
 static inline bool_t tmem_enabled(void)
@@ -198,8 +192,6 @@ static inline int tmem_get_tmemop_from_client(tmem_op_t *op, tmem_cli_op_t uops)
         switch ( cop.cmd )
         {
         case TMEM_NEW_POOL:   u = XLAT_tmem_op_u_creat; break;
-        case TMEM_AUTH:       u = XLAT_tmem_op_u_creat; break;
-        case TMEM_RESTORE_NEW:u = XLAT_tmem_op_u_creat; break;
         default:              u = XLAT_tmem_op_u_gen ;  break;
         }
         XLAT_tmem_op(op, &cop);
@@ -293,7 +285,6 @@ struct client {
     long eph_count, eph_count_max;
     domid_t cli_id;
     xen_tmem_client_t info;
-    bool_t shared_auth_required;
     /* For save/restore/migration. */
     bool_t was_frozen;
     struct list_head persistent_invalidated_list;


* [PATCH v2 1/5] xen/libxc/tmem: Replace TMEM_RESTORE_NEW with XEN_SYSCTL_TMEM_OP_SET_POOLS
  2017-04-04 19:10 [PATCH v2] Tmem fixes for v4.9 Konrad Rzeszutek Wilk
@ 2017-04-04 19:10 ` Konrad Rzeszutek Wilk
  2017-04-05  9:28   ` Jan Beulich
  2017-04-04 19:10 ` [PATCH v2 2/5] xen/libxc: Move TMEM_AUTH to XEN_SYSCTL_TMEM_OP_SET_AUTH Konrad Rzeszutek Wilk
                   ` (3 subsequent siblings)
  4 siblings, 1 reply; 12+ messages in thread
From: Konrad Rzeszutek Wilk @ 2017-04-04 19:10 UTC (permalink / raw)
  To: xen-devel; +Cc: Ian Jackson, wei.liu2, jbeulich, Konrad Rzeszutek Wilk

This used to be done via TMEM_RESTORE_NEW, which was a hypercall
accessible by the guest. However there are a couple of reasons
not to do it that way:
 - No checking of domid on TMEM_RESTORE_NEW which meant that
   any guest could create TMEM pools for other guests.
 - The guest can already create pools using TMEM_NEW_POOL
   (which is limited to guest doing the hypercall)
 - This functionality is only needed during migration - there
   is no need for the guest to have this functionality.

However, to move this we also have to allocate the 'struct domain'
->tmem pointer. It is NULL by default and would normally be initialized
via the guest's do_tmem() hypercalls. Presumably that was the
initial reason that TMEM_RESTORE_NEW was among the guest-accessible
hypercalls.
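
For clarity, the resulting call chain (a sketch assembled from the
hunks below, not verbatim code):

    xc_tmem_restore_new_pool()                          /* libxc */
      -> xc_tmem_control(xch, pool_id,
                         XEN_SYSCTL_TMEM_OP_SET_POOLS,
                         cli_id, sizeof(pool), 0, &pool)
        -> tmem_control()                               /* hypervisor */
          -> tmemc_set_pools()    /* creates d->tmem via client_create()
                                     if the guest has not used tmem yet */
            -> do_tmem_new_pool()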

Acked-by: Wei Liu <wei.liu2@citrix.com> [libxc change] [libxc change] [libxc change] [libxc change]
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

---
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Wei Liu <wei.liu2@citrix.com>

v1: First version.

v2: Added Wei's Ack.
  - Used 'switch' in do_tmem_op.
  - Dropped 'idx' in tmemc_set_pools.
  - Update comment in sysctl.h about xen_tmem_pool_info_t structure.
---
 tools/libxc/xc_tmem.c          | 22 +++++++++---------
 xen/common/tmem.c              | 30 ++++++++++++-------------
 xen/common/tmem_control.c      | 51 ++++++++++++++++++++++++++++++++++++++++++
 xen/include/public/sysctl.h    | 11 ++++++---
 xen/include/public/tmem.h      |  5 +++--
 xen/include/xen/tmem_control.h |  4 ++++
 xen/include/xen/tmem_xen.h     |  1 -
 7 files changed, 93 insertions(+), 31 deletions(-)

diff --git a/tools/libxc/xc_tmem.c b/tools/libxc/xc_tmem.c
index 51d11ef..181de48 100644
--- a/tools/libxc/xc_tmem.c
+++ b/tools/libxc/xc_tmem.c
@@ -385,16 +385,18 @@ static int xc_tmem_restore_new_pool(
                     uint64_t uuid_lo,
                     uint64_t uuid_hi)
 {
-    tmem_op_t op;
-
-    op.cmd = TMEM_RESTORE_NEW;
-    op.pool_id = pool_id;
-    op.u.creat.arg1 = cli_id;
-    op.u.creat.flags = flags;
-    op.u.creat.uuid[0] = uuid_lo;
-    op.u.creat.uuid[1] = uuid_hi;
-
-    return do_tmem_op(xch, &op);
+    xen_tmem_pool_info_t pool = {
+        .flags.raw = flags,
+        .id = pool_id,
+        .n_pages = 0,
+        .uuid[0] = uuid_lo,
+        .uuid[1] = uuid_hi,
+    };
+
+    return xc_tmem_control(xch, pool_id,
+                           XEN_SYSCTL_TMEM_OP_SET_POOLS,
+                           cli_id, sizeof(pool),
+                           0 /* arg */, &pool);
 }
 
 int xc_tmem_restore(xc_interface *xch, int dom, int io_fd)
diff --git a/xen/common/tmem.c b/xen/common/tmem.c
index 6d5de5b..ee43f13 100644
--- a/xen/common/tmem.c
+++ b/xen/common/tmem.c
@@ -804,7 +804,7 @@ static void pool_flush(struct tmem_pool *pool, domid_t cli_id)
 
 /************ CLIENT MANIPULATION OPERATIONS **************************/
 
-static struct client *client_create(domid_t cli_id)
+struct client *client_create(domid_t cli_id)
 {
     struct client *client = xzalloc(struct client);
     int i, shift;
@@ -1435,9 +1435,9 @@ static int do_tmem_destroy_pool(uint32_t pool_id)
     return 1;
 }
 
-static int do_tmem_new_pool(domid_t this_cli_id,
-                                     uint32_t d_poolid, uint32_t flags,
-                                     uint64_t uuid_lo, uint64_t uuid_hi)
+int do_tmem_new_pool(domid_t this_cli_id,
+                     uint32_t d_poolid, uint32_t flags,
+                     uint64_t uuid_lo, uint64_t uuid_hi)
 {
     struct client *client;
     domid_t cli_id;
@@ -1908,21 +1908,19 @@ long do_tmem_op(tmem_cli_op_t uops)
     /* Acquire write lock for all commands at first. */
     write_lock(&tmem_rwlock);
 
-    if ( op.cmd == TMEM_CONTROL )
+    switch ( op.cmd )
     {
+    case TMEM_CONTROL:
+    case TMEM_RESTORE_NEW:
         rc = -EOPNOTSUPP;
-    }
-    else if ( op.cmd == TMEM_AUTH )
-    {
+        break;
+
+    case TMEM_AUTH:
         rc = tmemc_shared_pool_auth(op.u.creat.arg1,op.u.creat.uuid[0],
                          op.u.creat.uuid[1],op.u.creat.flags);
-    }
-    else if ( op.cmd == TMEM_RESTORE_NEW )
-    {
-        rc = do_tmem_new_pool(op.u.creat.arg1, op.pool_id, op.u.creat.flags,
-                         op.u.creat.uuid[0], op.u.creat.uuid[1]);
-    }
-    else {
+        break;
+
+    default:
     /*
 	 * For other commands, create per-client tmem structure dynamically on
 	 * first use by client.
@@ -1999,6 +1997,8 @@ long do_tmem_op(tmem_cli_op_t uops)
                 tmem_stats.errored_tmem_ops++;
             return rc;
         }
+        break;
+
     }
 out:
     write_unlock(&tmem_rwlock);
diff --git a/xen/common/tmem_control.c b/xen/common/tmem_control.c
index ddd9cfe..3e99257 100644
--- a/xen/common/tmem_control.c
+++ b/xen/common/tmem_control.c
@@ -402,6 +402,54 @@ static int tmemc_get_pool(int cli_id,
     return rc ? : idx;
 }
 
+static int tmemc_set_pools(int cli_id,
+                           XEN_GUEST_HANDLE(xen_tmem_pool_info_t) pools,
+                           uint32_t len)
+{
+    unsigned int i;
+    int rc = 0;
+    unsigned int nr = len / sizeof(xen_tmem_pool_info_t);
+    struct client *client = tmem_client_from_cli_id(cli_id);
+
+    if ( len % sizeof(xen_tmem_pool_info_t) )
+        return -EINVAL;
+
+    if ( nr > MAX_POOLS_PER_DOMAIN )
+        return -E2BIG;
+
+    if ( !guest_handle_okay(pools, nr) )
+        return -EINVAL;
+
+    if ( !client )
+    {
+        client = client_create(cli_id);
+        if ( !client )
+            return -ENOMEM;
+    }
+    for ( i = 0; i < nr; i++ )
+    {
+        xen_tmem_pool_info_t pool;
+
+        if ( __copy_from_guest_offset(&pool, pools, i, 1 ) )
+            return -EFAULT;
+
+        if ( pool.n_pages )
+            return -EINVAL;
+
+        rc = do_tmem_new_pool(cli_id, pool.id, pool.flags.raw,
+                              pool.uuid[0], pool.uuid[1]);
+        if ( rc < 0 )
+            break;
+
+        pool.id = rc;
+        if ( __copy_to_guest_offset(pools, i, &pool, 1) )
+            return -EFAULT;
+    }
+
+    /* And how many we have processed. */
+    return rc ? : i;
+}
+
 int tmem_control(struct xen_sysctl_tmem_op *op)
 {
     int ret;
@@ -438,6 +486,9 @@ int tmem_control(struct xen_sysctl_tmem_op *op)
     case XEN_SYSCTL_TMEM_OP_GET_POOLS:
         ret = tmemc_get_pool(op->cli_id, op->u.pool, op->len);
         break;
+    case XEN_SYSCTL_TMEM_OP_SET_POOLS: /* TMEM_RESTORE_NEW */
+        ret = tmemc_set_pools(op->cli_id, op->u.pool, op->len);
+        break;
     default:
         ret = do_tmem_control(op);
         break;
diff --git a/xen/include/public/sysctl.h b/xen/include/public/sysctl.h
index 00f5e77..c03d027 100644
--- a/xen/include/public/sysctl.h
+++ b/xen/include/public/sysctl.h
@@ -770,6 +770,7 @@ DEFINE_XEN_GUEST_HANDLE(xen_sysctl_psr_cat_op_t);
 #define XEN_SYSCTL_TMEM_OP_SET_CLIENT_INFO        6
 #define XEN_SYSCTL_TMEM_OP_GET_POOLS              7
 #define XEN_SYSCTL_TMEM_OP_QUERY_FREEABLE_MB      8
+#define XEN_SYSCTL_TMEM_OP_SET_POOLS              9
 #define XEN_SYSCTL_TMEM_OP_SAVE_BEGIN             10
 #define XEN_SYSCTL_TMEM_OP_SAVE_GET_NEXT_PAGE     19
 #define XEN_SYSCTL_TMEM_OP_SAVE_GET_NEXT_INV      20
@@ -812,10 +813,14 @@ typedef struct xen_tmem_client xen_tmem_client_t;
 DEFINE_XEN_GUEST_HANDLE(xen_tmem_client_t);
 
 /*
- * XEN_SYSCTL_TMEM_OP_GET_POOLS uses the 'pool' array in
- * xen_sysctl_tmem_op with this structure. The hypercall will
+ * XEN_SYSCTL_TMEM_OP_[GET|SET]_POOLS uses the 'pool' array in
+ * xen_sysctl_tmem_op with this structure.
+ * The XEN_SYSCTL_TMEM_OP_GET_POOLS hypercall will
  * return the number of entries in 'pool' or a negative value
  * if an error was encountered.
+ * The XEN_SYSCTL_TMEM_OP_SET_POOLS will return the number of
+ * entries in 'pool' processed or a negative value if an error
+ * was encountered.
  */
 struct xen_tmem_pool_info {
     union {
@@ -830,7 +835,7 @@ struct xen_tmem_pool_info {
         } u;
     } flags;
     uint32_t id;                  /* Less than tmem_client.maxpools. */
-    uint64_t n_pages;
+    uint64_t n_pages;             /* Zero on XEN_SYSCTL_TMEM_OP_SET_POOLS. */
     uint64_aligned_t uuid[2];
 };
 typedef struct xen_tmem_pool_info xen_tmem_pool_info_t;
diff --git a/xen/include/public/tmem.h b/xen/include/public/tmem.h
index 2d805fb..b9f3537 100644
--- a/xen/include/public/tmem.h
+++ b/xen/include/public/tmem.h
@@ -53,7 +53,8 @@
 
 /* Privileged commands to HYPERVISOR_tmem_op() */
 #define TMEM_AUTH                 101
-#define TMEM_RESTORE_NEW          102
+#define TMEM_RESTORE_NEW          102 /* Now called via XEN_SYSCTL_tmem_op as
+                                         XEN_SYSCTL_TMEM_OP_SET_POOL. */
 
 /* Bits for HYPERVISOR_tmem_op(TMEM_NEW_POOL) */
 #define TMEM_POOL_PERSIST          1
@@ -92,7 +93,7 @@ struct tmem_op {
             uint64_t uuid[2];
             uint32_t flags;
             uint32_t arg1;
-        } creat; /* for cmd == TMEM_NEW_POOL, TMEM_AUTH, TMEM_RESTORE_NEW */
+        } creat; /* for cmd == TMEM_NEW_POOL, TMEM_AUTH */
         struct {
 #if __XEN_INTERFACE_VERSION__ < 0x00040600
             uint64_t oid[3];
diff --git a/xen/include/xen/tmem_control.h b/xen/include/xen/tmem_control.h
index 44bc07f..91c185e 100644
--- a/xen/include/xen/tmem_control.h
+++ b/xen/include/xen/tmem_control.h
@@ -18,6 +18,10 @@ extern rwlock_t tmem_rwlock;
 int tmem_evict(void);
 int do_tmem_control(struct xen_sysctl_tmem_op *op);
 
+struct client *client_create(domid_t cli_id);
+int do_tmem_new_pool(domid_t this_cli_id, uint32_t d_poolid, uint32_t flags,
+                     uint64_t uuid_lo, uint64_t uuid_hi);
+
 #endif /* CONFIG_TMEM */
 
 #endif /* __XEN_TMEM_CONTROL_H__ */
diff --git a/xen/include/xen/tmem_xen.h b/xen/include/xen/tmem_xen.h
index 13cf7bc..b6bd61b 100644
--- a/xen/include/xen/tmem_xen.h
+++ b/xen/include/xen/tmem_xen.h
@@ -199,7 +199,6 @@ static inline int tmem_get_tmemop_from_client(tmem_op_t *op, tmem_cli_op_t uops)
         {
         case TMEM_NEW_POOL:   u = XLAT_tmem_op_u_creat; break;
         case TMEM_AUTH:       u = XLAT_tmem_op_u_creat; break;
-        case TMEM_RESTORE_NEW:u = XLAT_tmem_op_u_creat; break;
         default:              u = XLAT_tmem_op_u_gen ;  break;
         }
         XLAT_tmem_op(op, &cop);
-- 
2.9.3



* [PATCH v2 2/5] xen/libxc: Move TMEM_AUTH to XEN_SYSCTL_TMEM_OP_SET_AUTH
  2017-04-04 19:10 [PATCH v2] Tmem fixes for v4.9 Konrad Rzeszutek Wilk
  2017-04-04 19:10 ` [PATCH v2 1/5] xen/libxc/tmem: Replace TMEM_RESTORE_NEW with XEN_SYSCTL_TMEM_OP_SET_POOLS Konrad Rzeszutek Wilk
@ 2017-04-04 19:10 ` Konrad Rzeszutek Wilk
  2017-04-05  9:30   ` Jan Beulich
  2017-04-05 10:02   ` Wei Liu
  2017-04-04 19:10 ` [PATCH v2 3/5] tmem: By default to join a shared pool it must be authorized Konrad Rzeszutek Wilk
                   ` (2 subsequent siblings)
  4 siblings, 2 replies; 12+ messages in thread
From: Konrad Rzeszutek Wilk @ 2017-04-04 19:10 UTC (permalink / raw)
  To: xen-devel; +Cc: Ian Jackson, wei.liu2, jbeulich, Konrad Rzeszutek Wilk

which surprisingly (or maybe not) looks like
XEN_SYSCTL_TMEM_OP_SET_POOLS.

This hypercall came about, as explained in docs/misc/tmem-internals.html:

When tmem was first proposed to the linux kernel mailing list
(LKML), there was concern expressed about security of shared ephemeral
pools.  The initial tmem implementation only
required a client to provide a 128-bit UUID to identify a shared pool, and the
linux-side tmem implementation obtained this UUID from the superblock of the
shared filesystem (in ocfs2).  It was
pointed out on LKML that the UUID was essentially a security key and any
malicious domain that guessed it would have access to any data from the shared
filesystem that found its way into tmem.
..
As a result, a Xen boot option -- "tmem_shared_auth" -- was
added.  The option defaults to disabled,
but when it is enabled, management tools must explicitly authenticate (or may
explicitly deny) shared pool access to any client.
On Xen, this is done with the "xm tmem-shared-auth" command.
"

However the implementation has some rather large holes:

a) The hypercall was accessible from any guest.

b) If the ->domain id value is 0xFFFF then one can toggle the
   tmem_global.shared_auth knob on/off. That, combined with a),
   made it pretty bad.

c) If one toggles tmem_global.shared_auth off, then the
   'tmem_shared_auth=1' bootup parameter is ignored and
   one can join any shared pool (if the UUID is known)!

d) If 'tmem_shared_auth=1' and tmem_global.shared_auth is
   set to 1, then one can only join a shared pool if the
   UUID has been set via 'xl tmem-shared-auth'. Otherwise
   joining the pool fails and a non-shared pool is
   created (without errors to the guest). Not exactly sure if
   the shared pool creation at that point should error out
   or not.

e) If a guest is migrated, the policy values (which UUIDs
   can be shared, whether tmem_global.shared_auth is set, etc.)
   are completely ignored.

This patch only fixes a) and only allows the hypercall to
be called by the control domain. Subsequent patches will
fix the remaining issues.

We also have to call client_create as the guest at this
point may not have done any tmem hypercalls - and hence
the '->tmem' in 'struct domain' is still NULL. Calling
client_create fixes this.
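
The access-control part of the fix comes from the sysctl plumbing
itself - a rough sketch, paraphrased from xen/common/sysctl.c (the
exact code may differ):

    long do_sysctl(XEN_GUEST_HANDLE_PARAM(xen_sysctl_t) u_sysctl)
    {
        ...
        ret = xsm_sysctl(XSM_PRIV, op->cmd); /* rejects non-control domains */
        if ( ret )
            return ret;
        ...
        switch ( op->cmd )
        {
        ...
        case XEN_SYSCTL_tmem_op:
            ret = tmem_control(&op->u.tmem_op); /* now also hosts SET_AUTH */
            break;
        ...
        }
    }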

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

---
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Wei Liu <wei.liu2@citrix.com>

v1: First posting
v2: Drop 'idx' in tmemc_auth_pools
    Check for 'n_pages' in tmemc_auth_pools.
    Update comment for xen_tmem_pool_info_t.
---
 tools/libxc/include/xenctrl.h  |  2 +-
 tools/libxc/xc_tmem.c          | 52 +++++++++++++-----------------------------
 xen/common/tmem.c              | 10 +++-----
 xen/common/tmem_control.c      | 50 ++++++++++++++++++++++++++++++++++++++++
 xen/include/public/sysctl.h    | 12 ++++++----
 xen/include/public/tmem.h      |  9 ++++----
 xen/include/xen/tmem_control.h |  2 ++
 xen/include/xen/tmem_xen.h     |  1 -
 8 files changed, 83 insertions(+), 55 deletions(-)

diff --git a/tools/libxc/include/xenctrl.h b/tools/libxc/include/xenctrl.h
index 2d97d36..e2b4cc1 100644
--- a/tools/libxc/include/xenctrl.h
+++ b/tools/libxc/include/xenctrl.h
@@ -1901,7 +1901,7 @@ int xc_tmem_control_oid(xc_interface *xch, int32_t pool_id, uint32_t subop,
 int xc_tmem_control(xc_interface *xch,
                     int32_t pool_id, uint32_t subop, uint32_t cli_id,
                     uint32_t len, uint32_t arg, void *buf);
-int xc_tmem_auth(xc_interface *xch, int cli_id, char *uuid_str, int arg1);
+int xc_tmem_auth(xc_interface *xch, int cli_id, char *uuid_str, int enable);
 int xc_tmem_save(xc_interface *xch, int dom, int live, int fd, int field_marker);
 int xc_tmem_save_extra(xc_interface *xch, int dom, int fd, int field_marker);
 void xc_tmem_save_done(xc_interface *xch, int dom);
diff --git a/tools/libxc/xc_tmem.c b/tools/libxc/xc_tmem.c
index 181de48..5f5e18f 100644
--- a/tools/libxc/xc_tmem.c
+++ b/tools/libxc/xc_tmem.c
@@ -22,30 +22,6 @@
 #include <assert.h>
 #include <xen/tmem.h>
 
-static int do_tmem_op(xc_interface *xch, tmem_op_t *op)
-{
-    int ret;
-    DECLARE_HYPERCALL_BOUNCE(op, sizeof(*op), XC_HYPERCALL_BUFFER_BOUNCE_BOTH);
-
-    if ( xc_hypercall_bounce_pre(xch, op) )
-    {
-        PERROR("Could not bounce buffer for tmem op hypercall");
-        return -EFAULT;
-    }
-
-    ret = xencall1(xch->xcall, __HYPERVISOR_tmem_op,
-                   HYPERCALL_BUFFER_AS_ARG(op));
-    if ( ret < 0 )
-    {
-        if ( errno == EACCES )
-            DPRINTF("tmem operation failed -- need to"
-                    " rebuild the user-space tool set?\n");
-    }
-    xc_hypercall_bounce_post(xch, op);
-
-    return ret;
-}
-
 int xc_tmem_control(xc_interface *xch,
                     int32_t pool_id,
                     uint32_t cmd,
@@ -69,7 +45,8 @@ int xc_tmem_control(xc_interface *xch,
     sysctl.u.tmem_op.oid.oid[1] = 0;
     sysctl.u.tmem_op.oid.oid[2] = 0;
 
-    if ( cmd == XEN_SYSCTL_TMEM_OP_SET_CLIENT_INFO )
+    if ( cmd == XEN_SYSCTL_TMEM_OP_SET_CLIENT_INFO ||
+         cmd == XEN_SYSCTL_TMEM_OP_SET_AUTH )
         HYPERCALL_BOUNCE_SET_DIR(buf, XC_HYPERCALL_BUFFER_BOUNCE_IN);
     if ( len )
     {
@@ -176,22 +153,25 @@ static int xc_tmem_uuid_parse(char *uuid_str, uint64_t *uuid_lo, uint64_t *uuid_
 int xc_tmem_auth(xc_interface *xch,
                  int cli_id,
                  char *uuid_str,
-                 int arg1)
+                 int enable)
 {
-    tmem_op_t op;
-
-    op.cmd = TMEM_AUTH;
-    op.pool_id = 0;
-    op.u.creat.arg1 = cli_id;
-    op.u.creat.flags = arg1;
-    if ( xc_tmem_uuid_parse(uuid_str, &op.u.creat.uuid[0],
-                                      &op.u.creat.uuid[1]) < 0 )
+    xen_tmem_pool_info_t pool = {
+        .flags.u.auth = enable,
+        .id = 0,
+        .n_pages = 0,
+        .uuid[0] = 0,
+        .uuid[1] = 0,
+    };
+    if ( xc_tmem_uuid_parse(uuid_str, &pool.uuid[0],
+                                      &pool.uuid[1]) < 0 )
     {
         PERROR("Can't parse uuid, use xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx");
         return -1;
     }
-
-    return do_tmem_op(xch, &op);
+    return xc_tmem_control(xch, 0 /* pool_id */,
+                           XEN_SYSCTL_TMEM_OP_SET_AUTH,
+                           cli_id, sizeof(pool),
+                           0 /* arg */, &pool);
 }
 
 /* Save/restore/live migrate */
diff --git a/xen/common/tmem.c b/xen/common/tmem.c
index ee43f13..158d660 100644
--- a/xen/common/tmem.c
+++ b/xen/common/tmem.c
@@ -1604,8 +1604,8 @@ fail:
 
 /************ TMEM CONTROL OPERATIONS ************************************/
 
-static int tmemc_shared_pool_auth(domid_t cli_id, uint64_t uuid_lo,
-                                  uint64_t uuid_hi, bool_t auth)
+int tmemc_shared_pool_auth(domid_t cli_id, uint64_t uuid_lo,
+                           uint64_t uuid_hi, bool auth)
 {
     struct client *client;
     int i, free = -1;
@@ -1912,12 +1912,8 @@ long do_tmem_op(tmem_cli_op_t uops)
     {
     case TMEM_CONTROL:
     case TMEM_RESTORE_NEW:
-        rc = -EOPNOTSUPP;
-        break;
-
     case TMEM_AUTH:
-        rc = tmemc_shared_pool_auth(op.u.creat.arg1,op.u.creat.uuid[0],
-                         op.u.creat.uuid[1],op.u.creat.flags);
+        rc = -EOPNOTSUPP;
         break;
 
     default:
diff --git a/xen/common/tmem_control.c b/xen/common/tmem_control.c
index 3e99257..2d980e3 100644
--- a/xen/common/tmem_control.c
+++ b/xen/common/tmem_control.c
@@ -450,6 +450,53 @@ static int tmemc_set_pools(int cli_id,
     return rc ? : i;
 }
 
+static int tmemc_auth_pools(int cli_id,
+                            XEN_GUEST_HANDLE(xen_tmem_pool_info_t) pools,
+                            uint32_t len)
+{
+    unsigned int i;
+    int rc = 0;
+    unsigned int nr = len / sizeof(xen_tmem_pool_info_t);
+    struct client *client = tmem_client_from_cli_id(cli_id);
+
+    if ( len % sizeof(xen_tmem_pool_info_t) )
+        return -EINVAL;
+
+    if ( nr > MAX_POOLS_PER_DOMAIN )
+        return -E2BIG;
+
+    if ( !guest_handle_okay(pools, nr) )
+        return -EINVAL;
+
+    if ( !client )
+    {
+        client = client_create(cli_id);
+        if ( !client )
+            return -ENOMEM;
+    }
+
+    for ( i = 0; i < nr; i++ )
+    {
+        xen_tmem_pool_info_t pool;
+
+        if ( __copy_from_guest_offset(&pool, pools, i, 1 ) )
+            return -EFAULT;
+
+        if ( pool.n_pages )
+            return -EINVAL;
+
+        rc = tmemc_shared_pool_auth(cli_id, pool.uuid[0], pool.uuid[1],
+                                    pool.flags.u.auth);
+
+        if ( rc < 0 )
+            break;
+
+    }
+
+    /* And how many we have processed. */
+    return rc ? : i;
+}
+
 int tmem_control(struct xen_sysctl_tmem_op *op)
 {
     int ret;
@@ -489,6 +536,9 @@ int tmem_control(struct xen_sysctl_tmem_op *op)
     case XEN_SYSCTL_TMEM_OP_SET_POOLS: /* TMEM_RESTORE_NEW */
         ret = tmemc_set_pools(op->cli_id, op->u.pool, op->len);
         break;
+    case XEN_SYSCTL_TMEM_OP_SET_AUTH: /* TMEM_AUTH */
+        ret = tmemc_auth_pools(op->cli_id, op->u.pool, op->len);
+        break;
     default:
         ret = do_tmem_control(op);
         break;
diff --git a/xen/include/public/sysctl.h b/xen/include/public/sysctl.h
index c03d027..ee76a66 100644
--- a/xen/include/public/sysctl.h
+++ b/xen/include/public/sysctl.h
@@ -772,6 +772,7 @@ DEFINE_XEN_GUEST_HANDLE(xen_sysctl_psr_cat_op_t);
 #define XEN_SYSCTL_TMEM_OP_QUERY_FREEABLE_MB      8
 #define XEN_SYSCTL_TMEM_OP_SET_POOLS              9
 #define XEN_SYSCTL_TMEM_OP_SAVE_BEGIN             10
+#define XEN_SYSCTL_TMEM_OP_SET_AUTH               11
 #define XEN_SYSCTL_TMEM_OP_SAVE_GET_NEXT_PAGE     19
 #define XEN_SYSCTL_TMEM_OP_SAVE_GET_NEXT_INV      20
 #define XEN_SYSCTL_TMEM_OP_SAVE_END               21
@@ -813,12 +814,12 @@ typedef struct xen_tmem_client xen_tmem_client_t;
 DEFINE_XEN_GUEST_HANDLE(xen_tmem_client_t);
 
 /*
- * XEN_SYSCTL_TMEM_OP_[GET|SET]_POOLS uses the 'pool' array in
- * xen_sysctl_tmem_op with this structure.
+ * XEN_SYSCTL_TMEM_OP_[GET|SET]_POOLS or XEN_SYSCTL_TMEM_OP_SET_AUTH
+ * uses the 'pool' array in * xen_sysctl_tmem_op with this structure.
  * The XEN_SYSCTL_TMEM_OP_GET_POOLS hypercall will
  * return the number of entries in 'pool' or a negative value
  * if an error was encountered.
- * The XEN_SYSCTL_TMEM_OP_SET_POOLS will return the number of
+ * The XEN_SYSCTL_TMEM_OP_SET_[AUTH|POOLS] will return the number of
  * entries in 'pool' processed or a negative value if an error
  * was encountered.
  */
@@ -828,14 +829,15 @@ struct xen_tmem_pool_info {
         struct {
             uint32_t persist:1,    /* See TMEM_POOL_PERSIST. */
                      shared:1,     /* See TMEM_POOL_SHARED. */
-                     rsv:2,
+                     auth:1,       /* See TMEM_POOL_AUTH. */
+                     rsv1:1,
                      pagebits:8,   /* TMEM_POOL_PAGESIZE_[SHIFT,MASK]. */
                      rsv2:12,
                      version:8;    /* TMEM_POOL_VERSION_[SHIFT,MASK]. */
         } u;
     } flags;
     uint32_t id;                  /* Less than tmem_client.maxpools. */
-    uint64_t n_pages;             /* Zero on XEN_SYSCTL_TMEM_OP_SET_POOLS. */
+    uint64_t n_pages;             /* Zero on XEN_SYSCTL_TMEM_OP_SET_[AUTH|POOLS]. */
     uint64_aligned_t uuid[2];
 };
 typedef struct xen_tmem_pool_info xen_tmem_pool_info_t;
diff --git a/xen/include/public/tmem.h b/xen/include/public/tmem.h
index b9f3537..aa0aafa 100644
--- a/xen/include/public/tmem.h
+++ b/xen/include/public/tmem.h
@@ -51,10 +51,9 @@
 #define TMEM_XCHG                 10
 #endif
 
-/* Privileged commands to HYPERVISOR_tmem_op() */
-#define TMEM_AUTH                 101
-#define TMEM_RESTORE_NEW          102 /* Now called via XEN_SYSCTL_tmem_op as
-                                         XEN_SYSCTL_TMEM_OP_SET_POOL. */
+/* Privileged commands now called via XEN_SYSCTL_tmem_op. */
+#define TMEM_AUTH                 101 /* as XEN_SYSCTL_TMEM_OP_SET_AUTH. */
+#define TMEM_RESTORE_NEW          102 /* as XEN_SYSCTL_TMEM_OP_SET_POOL. */
 
 /* Bits for HYPERVISOR_tmem_op(TMEM_NEW_POOL) */
 #define TMEM_POOL_PERSIST          1
@@ -93,7 +92,7 @@ struct tmem_op {
             uint64_t uuid[2];
             uint32_t flags;
             uint32_t arg1;
-        } creat; /* for cmd == TMEM_NEW_POOL, TMEM_AUTH */
+        } creat; /* for cmd == TMEM_NEW_POOL. */
         struct {
 #if __XEN_INTERFACE_VERSION__ < 0x00040600
             uint64_t oid[3];
diff --git a/xen/include/xen/tmem_control.h b/xen/include/xen/tmem_control.h
index 91c185e..ad04cf7 100644
--- a/xen/include/xen/tmem_control.h
+++ b/xen/include/xen/tmem_control.h
@@ -22,6 +22,8 @@ struct client *client_create(domid_t cli_id);
 int do_tmem_new_pool(domid_t this_cli_id, uint32_t d_poolid, uint32_t flags,
                      uint64_t uuid_lo, uint64_t uuid_hi);
 
+int tmemc_shared_pool_auth(domid_t cli_id, uint64_t uuid_lo,
+                           uint64_t uuid_hi, bool auth);
 #endif /* CONFIG_TMEM */
 
 #endif /* __XEN_TMEM_CONTROL_H__ */
diff --git a/xen/include/xen/tmem_xen.h b/xen/include/xen/tmem_xen.h
index b6bd61b..70cc108 100644
--- a/xen/include/xen/tmem_xen.h
+++ b/xen/include/xen/tmem_xen.h
@@ -198,7 +198,6 @@ static inline int tmem_get_tmemop_from_client(tmem_op_t *op, tmem_cli_op_t uops)
         switch ( cop.cmd )
         {
         case TMEM_NEW_POOL:   u = XLAT_tmem_op_u_creat; break;
-        case TMEM_AUTH:       u = XLAT_tmem_op_u_creat; break;
         default:              u = XLAT_tmem_op_u_gen ;  break;
         }
         XLAT_tmem_op(op, &cop);
-- 
2.9.3



* [PATCH v2 3/5] tmem: By default to join a shared pool it must be authorized.
  2017-04-04 19:10 [PATCH v2] Tmem fixes for v4.9 Konrad Rzeszutek Wilk
  2017-04-04 19:10 ` [PATCH v2 1/5] xen/libxc/tmem: Replace TMEM_RESTORE_NEW with XEN_SYSCTL_TMEM_OP_SET_POOLS Konrad Rzeszutek Wilk
  2017-04-04 19:10 ` [PATCH v2 2/5] xen/libxc: Move TMEM_AUTH to XEN_SYSCTL_TMEM_OP_SET_AUTH Konrad Rzeszutek Wilk
@ 2017-04-04 19:10 ` Konrad Rzeszutek Wilk
  2017-04-05  9:36   ` Jan Beulich
  2017-04-04 19:10 ` [PATCH v2 4/5] tmem: Fix tmem-shared-auth 'auth' values Konrad Rzeszutek Wilk
  2017-04-04 19:10 ` [PATCH v2 5/5] tmem: Parse UUIDs correctly Konrad Rzeszutek Wilk
  4 siblings, 1 reply; 12+ messages in thread
From: Konrad Rzeszutek Wilk @ 2017-04-04 19:10 UTC (permalink / raw)
  To: xen-devel; +Cc: wei.liu2, jbeulich, Konrad Rzeszutek Wilk

Having an off-by-default option that allows guests to join
_any_ shared pool is not very secure.

Let's eliminate the tmem_shared_auth bootup option (which
was disabled by default) and have the code enforce authorization
by default.
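
The admin-visible effect, as a sketch (the UUID below is the example
one from the xl help text, <domid> is a placeholder): authorization is
now always enforced, so a guest can join a shared pool only after the
admin whitelists the pool's UUID for it, e.g.:

    xl tmem-shared-auth -u abcdef01-2345-6789-1234-567890abcdef -A 1 <domid>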

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 docs/misc/tmem-internals.html       | 6 ++----
 docs/misc/xen-command-line.markdown | 3 ---
 xen/common/tmem.c                   | 4 ++--
 xen/common/tmem_xen.c               | 3 ---
 xen/include/xen/tmem_xen.h          | 7 -------
 5 files changed, 4 insertions(+), 19 deletions(-)

diff --git a/docs/misc/tmem-internals.html b/docs/misc/tmem-internals.html
index 2d8635d..9b7e70e 100644
--- a/docs/misc/tmem-internals.html
+++ b/docs/misc/tmem-internals.html
@@ -715,11 +715,9 @@ Ocfs2 has only very limited security; it is assumed that anyone who can
 access the filesystem bits on the shared disk can mount the filesystem and use
 it.  But in a virtualized data center,
 higher isolation requirements may apply.
-As a result, a Xen boot option -- &quot;tmem_shared_auth&quot; -- was
-added.  The option defaults to disabled,
-but when it is enabled, management tools must explicitly authenticate (or may
+As a result, management tools must explicitly authenticate (or may
 explicitly deny) shared pool access to any client.
-On Xen, this is done with the &quot;xm
+On Xen, this is done with the &quot;xl
 tmem-shared-auth&quot; command.
 <P>
 <b><i>32-bit implementation</i>.</b>
diff --git a/docs/misc/xen-command-line.markdown b/docs/misc/xen-command-line.markdown
index 9eb85d6..9690512 100644
--- a/docs/misc/xen-command-line.markdown
+++ b/docs/misc/xen-command-line.markdown
@@ -1539,9 +1539,6 @@ pages) must also be specified via the tbuf\_size parameter.
 ### tmem\_compress
 > `= <boolean>`
 
-### tmem\_shared\_auth
-> `= <boolean>`
-
 ### tsc
 > `= unstable | skewed | stable:socket`
 
diff --git a/xen/common/tmem.c b/xen/common/tmem.c
index 158d660..4f77a8c 100644
--- a/xen/common/tmem.c
+++ b/xen/common/tmem.c
@@ -846,7 +846,6 @@ struct client *client_create(domid_t cli_id)
     client->info.version = TMEM_SPEC_VERSION;
     client->info.maxpools = MAX_POOLS_PER_DOMAIN;
     client->info.flags.u.compress = tmem_compression_enabled();
-    client->shared_auth_required = tmem_shared_auth();
     for ( i = 0; i < MAX_GLOBAL_SHARED_POOLS; i++)
         client->shared_auth_uuid[i][0] =
             client->shared_auth_uuid[i][1] = -1L;
@@ -1530,7 +1529,8 @@ int do_tmem_new_pool(domid_t this_cli_id,
             pool->shared = 0;
             goto out;
         }
-        if ( client->shared_auth_required && !tmem_global.shared_auth )
+        /* By default only join domains that are authorized by admin. */
+        if ( !tmem_global.shared_auth )
         {
             for ( i = 0; i < MAX_GLOBAL_SHARED_POOLS; i++)
                 if ( (client->shared_auth_uuid[i][0] == uuid_lo) &&
diff --git a/xen/common/tmem_xen.c b/xen/common/tmem_xen.c
index 7d60b71..06ce3ef 100644
--- a/xen/common/tmem_xen.c
+++ b/xen/common/tmem_xen.c
@@ -20,9 +20,6 @@ boolean_param("tmem", opt_tmem);
 bool_t __read_mostly opt_tmem_compress = 0;
 boolean_param("tmem_compress", opt_tmem_compress);
 
-bool_t __read_mostly opt_tmem_shared_auth = 0;
-boolean_param("tmem_shared_auth", opt_tmem_shared_auth);
-
 atomic_t freeable_page_count = ATOMIC_INIT(0);
 
 /* these are a concurrency bottleneck, could be percpu and dynamically
diff --git a/xen/include/xen/tmem_xen.h b/xen/include/xen/tmem_xen.h
index 70cc108..dc5888c 100644
--- a/xen/include/xen/tmem_xen.h
+++ b/xen/include/xen/tmem_xen.h
@@ -41,12 +41,6 @@ static inline bool_t tmem_compression_enabled(void)
     return opt_tmem_compress;
 }
 
-extern bool_t opt_tmem_shared_auth;
-static inline bool_t tmem_shared_auth(void)
-{
-    return opt_tmem_shared_auth;
-}
-
 #ifdef CONFIG_TMEM
 extern bool_t opt_tmem;
 static inline bool_t tmem_enabled(void)
@@ -291,7 +285,6 @@ struct client {
     long eph_count, eph_count_max;
     domid_t cli_id;
     xen_tmem_client_t info;
-    bool_t shared_auth_required;
     /* For save/restore/migration. */
     bool_t was_frozen;
     struct list_head persistent_invalidated_list;
-- 
2.9.3



* [PATCH v2 4/5] tmem: Fix tmem-shared-auth 'auth' values
  2017-04-04 19:10 [PATCH v2] Tmem fixes for v4.9 Konrad Rzeszutek Wilk
                   ` (2 preceding siblings ...)
  2017-04-04 19:10 ` [PATCH v2 3/5] tmem: By default to join a shared pool it must be authorized Konrad Rzeszutek Wilk
@ 2017-04-04 19:10 ` Konrad Rzeszutek Wilk
  2017-04-04 19:10 ` [PATCH v2 5/5] tmem: Parse UUIDs correctly Konrad Rzeszutek Wilk
  4 siblings, 0 replies; 12+ messages in thread
From: Konrad Rzeszutek Wilk @ 2017-04-04 19:10 UTC (permalink / raw)
  To: xen-devel; +Cc: Ian Jackson, wei.liu2, jbeulich, Konrad Rzeszutek Wilk

The hypervisor code (tmemc_shared_pool_auth) has since its inception
considered auth values of:
 0 - to disable (deauthorize) authentication,
 1 - to enable authentication for the given UUID.

The docs had it the other way around, so let's fix them.
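
With the help text corrected, the flag reads consistently with the
hypervisor behaviour; for example (UUID from the help text, domain 0
as in the patch 5 example below):

    xl tmem-shared-auth -u abcdef01-2345-6789-1234-567890abcdef -A 1 0   # authorize
    xl tmem-shared-auth -u abcdef01-2345-6789-1234-567890abcdef -A 0 0   # deauthorize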

Acked-by: Wei Liu <wei.liu2@citrix.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

---
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Wei Liu <wei.liu2@citrix.com>
--
v2: Added Wei's Ack.
---
 tools/xl/xl_cmdtable.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/xl/xl_cmdtable.c b/tools/xl/xl_cmdtable.c
index 7d97811..30eb93c 100644
--- a/tools/xl/xl_cmdtable.c
+++ b/tools/xl/xl_cmdtable.c
@@ -416,7 +416,7 @@ struct cmd_spec cmd_table[] = {
       "  -a                             Authenticate for all tmem pools\n"
       "  -u UUID                        Specify uuid\n"
       "                                 (abcdef01-2345-6789-1234-567890abcdef)\n"
-      "  -A AUTH                        0=auth,1=deauth",
+      "  -A AUTH                        0=deauth,1=auth",
     },
     { "tmem-freeable",
       &main_tmem_freeable, 0, 0,
-- 
2.9.3



* [PATCH v2 5/5] tmem: Parse UUIDs correctly.
  2017-04-04 19:10 [PATCH v2] Tmem fixes for v4.9 Konrad Rzeszutek Wilk
                   ` (3 preceding siblings ...)
  2017-04-04 19:10 ` [PATCH v2 4/5] tmem: Fix tmem-shared-auth 'auth' values Konrad Rzeszutek Wilk
@ 2017-04-04 19:10 ` Konrad Rzeszutek Wilk
  4 siblings, 0 replies; 12+ messages in thread
From: Konrad Rzeszutek Wilk @ 2017-04-04 19:10 UTC (permalink / raw)
  To: xen-devel; +Cc: Ian Jackson, wei.liu2, jbeulich, Konrad Rzeszutek Wilk

A simple
xl tmem-shared-auth -u 00000000-0000-000A-0000-000000000001 -A 0 0

resulted in uuid_low = 1 (correct) and uuid_high = 0 (umm?).

The issue was that for the hex digits 'A' (or 'a') and above we forgot
to add 10.
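
Worked example (a sketch of the fixed conversion, mirroring the hunk
below):

    /* Old code: digit = *p - 'A', i.e. 'A' parsed as 0 rather than 10. */
    if ( *p >= 'A' && *p <= 'F' )
        digit = *p - 'A' + 10;          /* 'A'..'F' -> 10..15 */
    *x = (*x << 4) | digit;             /* the trailing 'A' of "000A" now
                                           yields 0xA, so uuid_high ends
                                           up 0xA instead of 0 */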

Acked-by: Wei Liu <wei.liu2@citrix.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Wei Liu <wei.liu2@citrix.com>

v2:
 - Added Wei's Ack.
---
 tools/libxc/xc_tmem.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/tools/libxc/xc_tmem.c b/tools/libxc/xc_tmem.c
index 5f5e18f..9bf5cc3 100644
--- a/tools/libxc/xc_tmem.c
+++ b/tools/libxc/xc_tmem.c
@@ -138,9 +138,9 @@ static int xc_tmem_uuid_parse(char *uuid_str, uint64_t *uuid_lo, uint64_t *uuid_
         else if ( *p >= '0' && *p <= '9' )
             digit = *p - '0';
         else if ( *p >= 'A' && *p <= 'F' )
-            digit = *p - 'A';
+            digit = *p - 'A' + 10;
         else if ( *p >= 'a' && *p <= 'f' )
-            digit = *p - 'a';
+            digit = *p - 'a' + 10;
         else
             return -1;
         *x = (*x << 4) | digit;
-- 
2.9.3



* Re: [PATCH v2 1/5] xen/libxc/tmem: Replace TMEM_RESTORE_NEW with XEN_SYSCTL_TMEM_OP_SET_POOLS
  2017-04-04 19:10 ` [PATCH v2 1/5] xen/libxc/tmem: Replace TMEM_RESTORE_NEW with XEN_SYSCTL_TMEM_OP_SET_POOLS Konrad Rzeszutek Wilk
@ 2017-04-05  9:28   ` Jan Beulich
  0 siblings, 0 replies; 12+ messages in thread
From: Jan Beulich @ 2017-04-05  9:28 UTC (permalink / raw)
  To: Konrad Rzeszutek Wilk; +Cc: Ian Jackson, wei.liu2, xen-devel

>>> On 04.04.17 at 21:10, <konrad.wilk@oracle.com> wrote:
> This used to be done via TMEM_RESTORE_NEW, which was a hypercall
> accessible by the guest. However there are a couple of reasons
> not to do it that way:
>  - No checking of domid on TMEM_RESTORE_NEW which meant that
>    any guest could create TMEM pools for other guests.
>  - The guest can already create pools using TMEM_NEW_POOL
>    (which is limited to guest doing the hypercall)
>  - This functionality is only needed during migration - there
>    is no need for the guest to have this functionality.
> 
> However, to move this we also have to allocate the 'struct domain'
> ->tmem pointer. It is by default set to NULL and would be initialized
> via the guest do_tmem() hypercalls. Presumably that was the
> initial reason that TMEM_RESTORE_NEW was among the guest-accessible
> hypercalls.
> 
> Acked-by: Wei Liu <wei.liu2@citrix.com> [libxc change] [libxc change] [libxc change] [libxc change]

What's this fourfold repetition?

> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>




* Re: [PATCH v2 2/5] xen/libxc: Move TMEM_AUTH to XEN_SYSCTL_TMEM_OP_SET_AUTH
  2017-04-04 19:10 ` [PATCH v2 2/5] xen/libxc: Move TMEM_AUTH to XEN_SYSCTL_TMEM_OP_SET_AUTH Konrad Rzeszutek Wilk
@ 2017-04-05  9:30   ` Jan Beulich
  2017-04-05 10:02   ` Wei Liu
  1 sibling, 0 replies; 12+ messages in thread
From: Jan Beulich @ 2017-04-05  9:30 UTC (permalink / raw)
  To: Konrad Rzeszutek Wilk; +Cc: Ian Jackson, wei.liu2, xen-devel

>>> On 04.04.17 at 21:10, <konrad.wilk@oracle.com> wrote:
> which surprisingly (or maybe not) looks like
> XEN_SYSCTL_TMEM_OP_SET_POOLS.
> 
> This hypercall came about, as explained in docs/misc/tmem-internals.html:
> 
> "When tmem was first proposed to the linux kernel mailing list
> (LKML), there was concern expressed about security of shared ephemeral
> pools.  The initial tmem implementation only required a client to
> provide a 128-bit UUID to identify a shared pool, and the linux-side
> tmem implementation obtained this UUID from the superblock of the
> shared filesystem (in ocfs2).  It was pointed out on LKML that the
> UUID was essentially a security key and any malicious domain that
> guessed it would have access to any data from the shared filesystem
> that found its way into tmem.
> ..
> As a result, a Xen boot option -- "tmem_shared_auth" -- was
> added.  The option defaults to disabled, but when it is enabled,
> management tools must explicitly authenticate (or may explicitly
> deny) shared pool access to any client.
> On Xen, this is done with the xm tmem-shared-auth command."
> 
> However, the implementation has some rather large holes:
> 
> a) The hypercall could be invoked by any guest.
> 
> b) If the ->domain id value is 0xFFFF then one can toggle the
>    tmem_global.shared_auth knob on/off. That, combined with a),
>    made it pretty bad.
> 
> c) If one toggles tmem_global.shared_auth off, then the
>    'tmem_shared_auth=1' bootup parameter is ignored and
>    one can join any shared pool (if the UUID is known)!
> 
> d) If 'tmem_shared_auth=1' is set and tmem_global.shared_auth is
>    1, then one can only join a shared pool if the
>    UUID has been set by 'xl tmem-shared-auth'. Otherwise
>    the joining of a pool fails and a non-shared pool is
>    created (without errors to the guest). It is not exactly
>    clear whether the shared pool creation at that point
>    should error out or not.
> 
> e) If a guest is migrated, the policy values (which UUID
>    can be shared, whether tmem_global.shared_auth is set, etc.)
>    are completely ignored.
> 
> This patch only fixes a): it only allows the hypercall to
> be called by the control domain. Subsequent patches will
> fix the remaining issues.
> 
> We also have to call client_create, as the guest at this
> point may not have done any tmem hypercalls - and hence
> the '->tmem' in 'struct domain' is still NULL. Calling
> client_create fixes this.
> 
> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

Hypervisor side (restriction also applies on patch 1)
Reviewed-by: Jan Beulich <jbeulich@suse.com>
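
The client_create() point quoted above is the subtle part of moving the
operation: a sysctl may arrive before the guest has issued a single
tmem hypercall, so '->tmem' can still be NULL. A minimal sketch of the
on-demand initialization, with stand-in types and a stub allocator
(illustrative names, not the actual Xen code):

    #include <stddef.h>

    struct client { int cli_id; };    /* stand-in for tmem's client state */
    struct domain { int domain_id; struct client *tmem; };

    static struct client the_client;  /* stub storage; Xen allocates this */

    static struct client *client_create(int cli_id)
    {
        the_client.cli_id = cli_id;   /* stub: the real code allocates/inits */
        return &the_client;
    }

    /* The sysctl path must not assume d->tmem is populated: initialize
     * it on demand, as the guest-side do_tmem() path would have done. */
    static int ensure_tmem_client(struct domain *d)
    {
        if ( d->tmem == NULL )
            d->tmem = client_create(d->domain_id);
        return (d->tmem != NULL) ? 0 : -1;  /* -ENOMEM in the hypervisor proper */
    }

    int main(void)
    {
        struct domain d = { .domain_id = 7, .tmem = NULL };
        return ensure_tmem_client(&d);      /* 0 on success */
    }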




* Re: [PATCH v2 3/5] tmem: By default to join an shared pool it must be authorized.
  2017-04-04 19:10 ` [PATCH v2 3/5] tmem: By default to join an shared pool it must be authorized Konrad Rzeszutek Wilk
@ 2017-04-05  9:36   ` Jan Beulich
  2017-04-05 13:40     ` Konrad Rzeszutek Wilk
  0 siblings, 1 reply; 12+ messages in thread
From: Jan Beulich @ 2017-04-05  9:36 UTC (permalink / raw)
  To: Konrad Rzeszutek Wilk; +Cc: xen-devel, wei.liu2

>>> On 04.04.17 at 21:10, <konrad.wilk@oracle.com> wrote:
> @@ -1530,7 +1529,8 @@ int do_tmem_new_pool(domid_t this_cli_id,
>              pool->shared = 0;
>              goto out;
>          }
> -        if ( client->shared_auth_required && !tmem_global.shared_auth )
> +        /* By default only join domains that are authorized by admin. */
> +        if ( !tmem_global.shared_auth )

Why "by default"? Is this comment really useful here? Other than
that the patch looks okay, but I won't claim to understand enough
of tmem to know this is sufficiently backwards compatible, so I
won't claim to have reviewed it in full.

Jan



* Re: [PATCH v2 2/5] xen/libxc: Move TMEM_AUTH to XEN_SYSCTL_TMEM_OP_SET_AUTH
  2017-04-04 19:10 ` [PATCH v2 2/5] xen/libxc: Move TMEM_AUTH to XEN_SYSCTL_TMEM_OP_SET_AUTH Konrad Rzeszutek Wilk
  2017-04-05  9:30   ` Jan Beulich
@ 2017-04-05 10:02   ` Wei Liu
  1 sibling, 0 replies; 12+ messages in thread
From: Wei Liu @ 2017-04-05 10:02 UTC (permalink / raw)
  To: Konrad Rzeszutek Wilk; +Cc: xen-devel, wei.liu2, Ian Jackson, jbeulich

On Tue, Apr 04, 2017 at 03:10:14PM -0400, Konrad Rzeszutek Wilk wrote:
> which surprisingly (or maybe not) looks like
> XEN_SYSCTL_TMEM_OP_SET_POOLS.
> 
> This hypercall came about, as explained in docs/misc/tmem-internals.html:
> 
> "When tmem was first proposed to the linux kernel mailing list
> (LKML), there was concern expressed about security of shared ephemeral
> pools.  The initial tmem implementation only required a client to
> provide a 128-bit UUID to identify a shared pool, and the linux-side
> tmem implementation obtained this UUID from the superblock of the
> shared filesystem (in ocfs2).  It was pointed out on LKML that the
> UUID was essentially a security key and any malicious domain that
> guessed it would have access to any data from the shared filesystem
> that found its way into tmem.
> ..
> As a result, a Xen boot option -- "tmem_shared_auth" -- was
> added.  The option defaults to disabled, but when it is enabled,
> management tools must explicitly authenticate (or may explicitly
> deny) shared pool access to any client.
> On Xen, this is done with the xm tmem-shared-auth command."
> 
> However, the implementation has some rather large holes:
> 
> a) The hypercall could be invoked by any guest.
> 
> b) If the ->domain id value is 0xFFFF then one can toggle the
>    tmem_global.shared_auth knob on/off. That, combined with a),
>    made it pretty bad.
> 
> c) If one toggles tmem_global.shared_auth off, then the
>    'tmem_shared_auth=1' bootup parameter is ignored and
>    one can join any shared pool (if the UUID is known)!
> 
> d) If 'tmem_shared_auth=1' is set and tmem_global.shared_auth is
>    1, then one can only join a shared pool if the
>    UUID has been set by 'xl tmem-shared-auth'. Otherwise
>    the joining of a pool fails and a non-shared pool is
>    created (without errors to the guest). It is not exactly
>    clear whether the shared pool creation at that point
>    should error out or not.
> 
> e) If a guest is migrated, the policy values (which UUID
>    can be shared, whether tmem_global.shared_auth is set, etc.)
>    are completely ignored.
> 
> This patch only fixes a): it only allows the hypercall to
> be called by the control domain. Subsequent patches will
> fix the remaining issues.
> 
> We also have to call client_create, as the guest at this
> point may not have done any tmem hypercalls - and hence
> the '->tmem' in 'struct domain' is still NULL. Calling
> client_create fixes this.
> 
> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> 
> ---
> Cc: Ian Jackson <ian.jackson@eu.citrix.com>
> Cc: Wei Liu <wei.liu2@citrix.com>
> 
> v1: First posting
> v2: Drop 'idx' in tmemc_auth_pools
>     Check for 'n_pages' in tmemc_auth_pools.
>     Update comment for xen_tmem_pool_info_t.
> ---
>  tools/libxc/include/xenctrl.h  |  2 +-
>  tools/libxc/xc_tmem.c          | 52 +++++++++++++-----------------------------

Acked-by: Wei Liu <wei.liu2@citrix.com>


* Re: [PATCH v2 3/5] tmem: By default to join an shared pool it must be authorized.
  2017-04-05  9:36   ` Jan Beulich
@ 2017-04-05 13:40     ` Konrad Rzeszutek Wilk
  2017-04-05 13:46       ` Jan Beulich
  0 siblings, 1 reply; 12+ messages in thread
From: Konrad Rzeszutek Wilk @ 2017-04-05 13:40 UTC (permalink / raw)
  To: Jan Beulich; +Cc: xen-devel, wei.liu2

On Wed, Apr 05, 2017 at 03:36:51AM -0600, Jan Beulich wrote:
> >>> On 04.04.17 at 21:10, <konrad.wilk@oracle.com> wrote:
> > @@ -1530,7 +1529,8 @@ int do_tmem_new_pool(domid_t this_cli_id,
> >              pool->shared = 0;
> >              goto out;
> >          }
> > -        if ( client->shared_auth_required && !tmem_global.shared_auth )
> > +        /* By default only join domains that are authorized by admin. */
> > +        if ( !tmem_global.shared_auth )
> 
> Why "by default"? Is this comment really useful here? Other than

Took the comment out.
> that the patch looks okay, but I won't claim to understand enough
> of tmem to know this is sufficiently backwards compatible, so I
> won't claim to have reviewed it in full.

The old clients that used shared pools work just fine. That is, as long
as the system admin invokes:
        xl tmem-shared-auth  -u 00000000-0000-0000-0000-0000deadbeef -A 1 <domain>

beforehand (this is for UUID 0:deadbeef).
[And to be honest the API is a bit weird - if you can't join a shared
pool you still get to join a private pool, without any errors?!]


Before this change you didn't have to invoke tmem-shared-auth at all,
and any guest could join a shared pool, even a malicious one.
From that perspective I did break backwards compatibility, but fixed
a security hole.

But as said, the guest won't notice: if the system admin didn't invoke
tmem-shared-auth, the hypervisor will gladly create another pool
for them; it's just that it won't be shared.

> 
> Jan
> 
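
The degrade-to-private behaviour described here fits in a few lines. A
self-contained sketch, with the allow-list lookup and state variables
as illustrative stand-ins for the real tmem structures (the UUID
0:deadbeef mirrors the example above):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    static bool global_shared_auth;  /* the tmem_global.shared_auth knob */
    static const uint64_t auth_hi = 0x0, auth_lo = 0xdeadbeef;  /* one authorized UUID */

    static bool uuid_authorized(uint64_t hi, uint64_t lo)
    {
        return hi == auth_hi && lo == auth_lo;  /* stand-in for the per-client list */
    }

    /* A request for a shared pool silently degrades to a private pool
     * when the UUID was never authorized -- the guest sees no error. */
    static bool pool_is_shared(uint64_t hi, uint64_t lo)
    {
        if ( global_shared_auth )
            return true;                        /* admin opened sharing globally */
        return uuid_authorized(hi, lo);
    }

    int main(void)
    {
        printf("%d\n", pool_is_shared(0x0, 0xdeadbeef)); /* 1: on the allow-list */
        printf("%d\n", pool_is_shared(0x1, 0x2));        /* 0: falls back to private */
        return 0;
    }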


* Re: [PATCH v2 3/5] tmem: By default to join an shared pool it must be authorized.
  2017-04-05 13:40     ` Konrad Rzeszutek Wilk
@ 2017-04-05 13:46       ` Jan Beulich
  0 siblings, 0 replies; 12+ messages in thread
From: Jan Beulich @ 2017-04-05 13:46 UTC (permalink / raw)
  To: Konrad Rzeszutek Wilk; +Cc: xen-devel, wei.liu2

>>> On 05.04.17 at 15:40, <konrad.wilk@oracle.com> wrote:
> On Wed, Apr 05, 2017 at 03:36:51AM -0600, Jan Beulich wrote:
>> >>> On 04.04.17 at 21:10, <konrad.wilk@oracle.com> wrote:
>> > @@ -1530,7 +1529,8 @@ int do_tmem_new_pool(domid_t this_cli_id,
>> >              pool->shared = 0;
>> >              goto out;
>> >          }
>> > -        if ( client->shared_auth_required && !tmem_global.shared_auth )
>> > +        /* By default only join domains that are authorized by admin. */
>> > +        if ( !tmem_global.shared_auth )
>> 
>> Why "by default"? Is this comment really useful here? Other than
> 
> Took the comment out.
>> that the patch looks okay, but I won't claim to understand enough
>> of tmem to know this is sufficiently backwards compatible, so I
>> won't claim to have reviewed it in full.
> 
> The old clients that used shared pools work just fine. That is as long
> as the system admin invokes:
>         xl tmem-shared-auth  -u 00000000-0000-0000-0000-0000deadbeef -A 1 <domain>
> 
> before hand (this is for UUID 0:deadbeef).
> [And to be honest the API is a bit weird - if you can't join a shared
> pool then you still get to join a private pool without any errors?!]
> 
> 
> Before this change you didn't have to invoke this tmem-shared-auth
> and any guest could join a shared pool, even malicious ones.
> From that perspective I did break backwards compatibility, but fixed
> a security hole.
> 
> But as said - the guest won't notice - if the system admin didn't invoke
> the tmem-shared-auth - the hypervisor will gladly create another pool
> for them, it just that it won't be shared.

Oh, that's even better than I had expected.

Jan



end of thread

Thread overview: 12+ messages
2017-04-04 19:10 [PATCH v2] Tmem fixes for v4.9 Konrad Rzeszutek Wilk
2017-04-04 19:10 ` [PATCH v2 1/5] xen/libcx/tmem: Replace TMEM_RESTORE_NEW with XEN_SYSCTL_TMEM_OP_SET_POOLS Konrad Rzeszutek Wilk
2017-04-05  9:28   ` Jan Beulich
2017-04-04 19:10 ` [PATCH v2 2/5] xen/libxc: Move TMEM_AUTH to XEN_SYSCTL_TMEM_OP_SET_AUTH Konrad Rzeszutek Wilk
2017-04-05  9:30   ` Jan Beulich
2017-04-05 10:02   ` Wei Liu
2017-04-04 19:10 ` [PATCH v2 3/5] tmem: By default to join an shared pool it must be authorized Konrad Rzeszutek Wilk
2017-04-05  9:36   ` Jan Beulich
2017-04-05 13:40     ` Konrad Rzeszutek Wilk
2017-04-05 13:46       ` Jan Beulich
2017-04-04 19:10 ` [PATCH v2 4/5] tmem: Fix tmem-shared-auth 'auth' values Konrad Rzeszutek Wilk
2017-04-04 19:10 ` [PATCH v2 5/5] tmem: Parse UUIDs correctly Konrad Rzeszutek Wilk
