* [PATCH v1] Tmem cleanups/improvements for v4.8.
@ 2016-09-28  9:42 Konrad Rzeszutek Wilk
  2016-09-28  9:42 ` [PATCH v1 01/12] libxc/tmem/restore: Remove call to XEN_SYSCTL_TMEM_OP_SAVE_GET_VERSION Konrad Rzeszutek Wilk
                   ` (11 more replies)
  0 siblings, 12 replies; 37+ messages in thread
From: Konrad Rzeszutek Wilk @ 2016-09-28  9:42 UTC (permalink / raw)
  To: xen-devel, konrad

Hey!

This batch of fixes slowly marches toward ripping out the pieces
of code in tmem that are hard to maintain, and improves on the
code as it was originally developed.

I had hoped that I would have had the migration support all
working, but it took longer than I thought to get to this
point (and migration is still broken). And it may have become
a giant series too if I had it all worked out.

Anyhow, please take a peek at the patches. The first
couple of them should be fairly easy to review. The rest
mostly squash various subcommands into one.

Any advice, ideas, etc are more than welcome.

Thanks!

The git tree with these patches is:

 git://xenbits.xen.org/people/konradwilk/xen.git devel/tmem.v4.8.v1.r2

Konrad Rzeszutek Wilk (12):
      libxc/tmem/restore: Remove call to XEN_SYSCTL_TMEM_OP_SAVE_GET_VERSION
      tmem: Retire XEN_SYSCTL_TMEM_OP_[SET_CAP|SAVE_GET_CLIENT_CAP]
      tmem: Wrap tmem dedup code with CONFIG_TMEM_DEDUP
      tmem: Wrap tmem tze code with CONFIG_TMEM_TZE
      tmem: Delete deduplication (and tze) code.
      tmem: Move client weight,frozen,live_migrating, and compress
      tmem/libxc: Squash XEN_SYSCTL_TMEM_OP_[SET|SAVE]..
      tmem: Handle 'struct tmem_info' as a separate field in the
      tmem: Check version and maxpools when XEN_SYSCTL_TMEM_SET_CLIENT_INFO
      tmem: Unify XEN_SYSCTL_TMEM_OP_[[SAVE_[BEGIN|END]|RESTORE_BEGIN]
      tmem/xc_tmem_control: Rename 'arg1' to 'len' and 'arg2' to 'arg'.
      tmem: Batch and squash XEN_SYSCTL_TMEM_OP_SAVE_GET_POOL_[FLAGS,NPAGES,UUID]     in one sub-call: XEN_SYSCTL_TMEM_OP_GET_POOLS.

 docs/man/xl.pod.1.in                |   4 -
 docs/misc/xen-command-line.markdown |   6 -
 tools/libxc/include/xenctrl.h       |   4 +-
 tools/libxc/xc_tmem.c               | 272 ++++++++++++-------------
 tools/libxl/libxl.c                 |  30 ++-
 tools/libxl/xl_cmdtable.c           |   1 -
 tools/python/xen/lowlevel/xc/xc.c   |  19 +-
 xen/common/tmem.c                   | 384 ++++--------------------------------
 xen/common/tmem_control.c           | 214 ++++++++++----------
 xen/common/tmem_xen.c               |  28 ---
 xen/include/public/sysctl.h         |  75 +++++--
 xen/include/xen/tmem_xen.h          | 121 +-----------
 12 files changed, 390 insertions(+), 768 deletions(-)


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 37+ messages in thread

* [PATCH v1 01/12] libxc/tmem/restore: Remove call to XEN_SYSCTL_TMEM_OP_SAVE_GET_VERSION
  2016-09-28  9:42 [PATCH v1] Tmem cleanups/improvements for v4.8 Konrad Rzeszutek Wilk
@ 2016-09-28  9:42 ` Konrad Rzeszutek Wilk
  2016-09-28 11:00   ` Wei Liu
  2016-09-28  9:42 ` [PATCH v1 02/12] tmem: Retire XEN_SYSCTL_TMEM_OP_[SET_CAP|SAVE_GET_CLIENT_CAP] Konrad Rzeszutek Wilk
                   ` (10 subsequent siblings)
  11 siblings, 1 reply; 37+ messages in thread
From: Konrad Rzeszutek Wilk @ 2016-09-28  9:42 UTC (permalink / raw)
  To: xen-devel, konrad; +Cc: Wei Liu, Ian Jackson, Konrad Rzeszutek Wilk

The only thing this hypercall returns is TMEM_SPEC_VERSION.

The comment around it is also misleading - this call does not
perform any domain operation.

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

---
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Wei Liu <wei.liu2@citrix.com>

v1: Initial submission.
---
 tools/libxc/xc_tmem.c | 4 ----
 1 file changed, 4 deletions(-)

diff --git a/tools/libxc/xc_tmem.c b/tools/libxc/xc_tmem.c
index 4e5c278..31ae3f5 100644
--- a/tools/libxc/xc_tmem.c
+++ b/tools/libxc/xc_tmem.c
@@ -381,16 +381,12 @@ static int xc_tmem_restore_new_pool(
 
 int xc_tmem_restore(xc_interface *xch, int dom, int io_fd)
 {
-    uint32_t save_version;
     uint32_t this_max_pools, this_version;
     uint32_t pool_id;
     uint32_t minusone;
     uint32_t weight, cap, flags;
     int checksum = 0;
 
-    save_version = xc_tmem_control(xch,0,XEN_SYSCTL_TMEM_OP_SAVE_GET_VERSION,dom,0,0,NULL);
-    if ( save_version == -1 )
-        return -1; /* domain doesn't exist */
     if ( read_exact(io_fd, &this_version, sizeof(this_version)) )
         return -1;
     if ( read_exact(io_fd, &this_max_pools, sizeof(this_max_pools)) )
-- 
2.4.11



^ permalink raw reply related	[flat|nested] 37+ messages in thread

* [PATCH v1 02/12] tmem: Retire XEN_SYSCTL_TMEM_OP_[SET_CAP|SAVE_GET_CLIENT_CAP]
  2016-09-28  9:42 [PATCH v1] Tmem cleanups/improvements for v4.8 Konrad Rzeszutek Wilk
  2016-09-28  9:42 ` [PATCH v1 01/12] libxc/tmem/restore: Remove call to XEN_SYSCTL_TMEM_OP_SAVE_GET_VERSION Konrad Rzeszutek Wilk
@ 2016-09-28  9:42 ` Konrad Rzeszutek Wilk
  2016-09-28 11:00   ` Wei Liu
  2016-09-28 12:10   ` Jan Beulich
  2016-09-28  9:42 ` [PATCH v1 03/12] tmem: Wrap tmem dedup code with CONFIG_TMEM_DEDUP Konrad Rzeszutek Wilk
                   ` (9 subsequent siblings)
  11 siblings, 2 replies; 37+ messages in thread
From: Konrad Rzeszutek Wilk @ 2016-09-28  9:42 UTC (permalink / raw)
  To: xen-devel, konrad; +Cc: Wei Liu, Ian Jackson, Konrad Rzeszutek Wilk

Neither of these subops is used by anything.

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Wei Liu <wei.liu2@citrix.com>

v1: First submission
---
 docs/man/xl.pod.1.in              |  4 ----
 tools/libxc/xc_tmem.c             | 13 +++----------
 tools/libxl/libxl.c               |  4 +---
 tools/libxl/xl_cmdtable.c         |  1 -
 tools/python/xen/lowlevel/xc/xc.c |  1 -
 xen/common/tmem_control.c         | 16 ++--------------
 xen/include/public/sysctl.h       |  2 --
 7 files changed, 6 insertions(+), 35 deletions(-)

diff --git a/docs/man/xl.pod.1.in b/docs/man/xl.pod.1.in
index a2be541..9b44f25 100644
--- a/docs/man/xl.pod.1.in
+++ b/docs/man/xl.pod.1.in
@@ -1580,10 +1580,6 @@ B<OPTIONS>
 
 Weight (int)
 
-=item B<-c> I<CAP>
-
-Cap (int)
-
 =item B<-p> I<COMPRESS>
 
 Compress (int)
diff --git a/tools/libxc/xc_tmem.c b/tools/libxc/xc_tmem.c
index 31ae3f5..24c8b43 100644
--- a/tools/libxc/xc_tmem.c
+++ b/tools/libxc/xc_tmem.c
@@ -212,7 +212,7 @@ int xc_tmem_save(xc_interface *xch,
     int marker = field_marker;
     int i, j;
     uint32_t max_pools, version;
-    uint32_t weight, cap, flags;
+    uint32_t weight, flags;
     uint32_t pool_id;
     uint32_t minusone = -1;
     struct tmem_handle *h;
@@ -238,10 +238,7 @@ int xc_tmem_save(xc_interface *xch,
     weight = xc_tmem_control(xch,0,XEN_SYSCTL_TMEM_OP_SAVE_GET_CLIENT_WEIGHT,dom,0,0,NULL);
     if ( write_exact(io_fd, &weight, sizeof(weight)) )
         return -1;
-    cap = xc_tmem_control(xch,0,XEN_SYSCTL_TMEM_OP_SAVE_GET_CLIENT_CAP,dom,0,0,NULL);
-    if ( write_exact(io_fd, &cap, sizeof(cap)) )
-        return -1;
-    if ( flags == -1 || weight == -1 || cap == -1 )
+    if ( flags == -1 || weight == -1 )
         return -1;
     if ( write_exact(io_fd, &minusone, sizeof(minusone)) )
         return -1;
@@ -384,7 +381,7 @@ int xc_tmem_restore(xc_interface *xch, int dom, int io_fd)
     uint32_t this_max_pools, this_version;
     uint32_t pool_id;
     uint32_t minusone;
-    uint32_t weight, cap, flags;
+    uint32_t weight, flags;
     int checksum = 0;
 
     if ( read_exact(io_fd, &this_version, sizeof(this_version)) )
@@ -410,10 +407,6 @@ int xc_tmem_restore(xc_interface *xch, int dom, int io_fd)
         return -1;
     if ( xc_tmem_control(xch,0,XEN_SYSCTL_TMEM_OP_SET_WEIGHT,dom,0,0,NULL) < 0 )
         return -1;
-    if ( read_exact(io_fd, &cap, sizeof(cap)) )
-        return -1;
-    if ( xc_tmem_control(xch,0,XEN_SYSCTL_TMEM_OP_SET_CAP,dom,0,0,NULL) < 0 )
-        return -1;
     if ( read_exact(io_fd, &minusone, sizeof(minusone)) )
         return -1;
     while ( read_exact(io_fd, &pool_id, sizeof(pool_id)) == 0 && pool_id != -1 )
diff --git a/tools/libxl/libxl.c b/tools/libxl/libxl.c
index 997d94c..e3ee49c 100644
--- a/tools/libxl/libxl.c
+++ b/tools/libxl/libxl.c
@@ -6065,8 +6065,6 @@ static int32_t tmem_setop_from_string(char *set_name)
 {
     if (!strcmp(set_name, "weight"))
         return XEN_SYSCTL_TMEM_OP_SET_WEIGHT;
-    else if (!strcmp(set_name, "cap"))
-        return XEN_SYSCTL_TMEM_OP_SET_CAP;
     else if (!strcmp(set_name, "compress"))
         return XEN_SYSCTL_TMEM_OP_SET_COMPRESS;
     else
@@ -6080,7 +6078,7 @@ int libxl_tmem_set(libxl_ctx *ctx, uint32_t domid, char* name, uint32_t set)
     GC_INIT(ctx);
 
     if (subop == -1) {
-        LOGEV(ERROR, -1, "Invalid set, valid sets are <weight|cap|compress>");
+        LOGEV(ERROR, -1, "Invalid set, valid sets are <weight|compress>");
         rc = ERROR_INVAL;
         goto out;
     }
diff --git a/tools/libxl/xl_cmdtable.c b/tools/libxl/xl_cmdtable.c
index 78786fe..368bfeb 100644
--- a/tools/libxl/xl_cmdtable.c
+++ b/tools/libxl/xl_cmdtable.c
@@ -405,7 +405,6 @@ struct cmd_spec cmd_table[] = {
       "[<Domain>|-a] [-w[=WEIGHT]|-c[=CAP]|-p[=COMPRESS]]",
       "  -a                             Operate on all tmem\n"
       "  -w WEIGHT                      Weight (int)\n"
-      "  -c CAP                         Cap (int)\n"
       "  -p COMPRESS                    Compress (int)",
     },
     { "tmem-shared-auth",
diff --git a/tools/python/xen/lowlevel/xc/xc.c b/tools/python/xen/lowlevel/xc/xc.c
index 8411789..287f590 100644
--- a/tools/python/xen/lowlevel/xc/xc.c
+++ b/tools/python/xen/lowlevel/xc/xc.c
@@ -1641,7 +1641,6 @@ static PyObject *pyxc_tmem_control(XcObject *self,
         case XEN_SYSCTL_TMEM_OP_FREEZE:
         case XEN_SYSCTL_TMEM_OP_DESTROY:
         case XEN_SYSCTL_TMEM_OP_SET_WEIGHT:
-        case XEN_SYSCTL_TMEM_OP_SET_CAP:
         case XEN_SYSCTL_TMEM_OP_SET_COMPRESS:
         default:
             break;
diff --git a/xen/common/tmem_control.c b/xen/common/tmem_control.c
index 81a2414..ca34852 100644
--- a/xen/common/tmem_control.c
+++ b/xen/common/tmem_control.c
@@ -103,9 +103,9 @@ static int tmemc_list_client(struct client *c, tmem_cli_va_param_t buf,
     struct tmem_pool *p;
     bool_t s;
 
-    n = scnprintf(info,BSIZE,"C=CI:%d,ww:%d,ca:%d,co:%d,fr:%d,"
+    n = scnprintf(info,BSIZE,"C=CI:%d,ww:%d,co:%d,fr:%d,"
         "Tc:%"PRIu64",Ge:%ld,Pp:%ld,Gp:%ld%c",
-        c->cli_id, c->weight, c->cap, c->compress, c->frozen,
+        c->cli_id, c->weight, c->compress, c->frozen,
         c->total_cycles, c->succ_eph_gets, c->succ_pers_puts, c->succ_pers_gets,
         use_long ? ',' : '\n');
     if (use_long)
@@ -273,11 +273,6 @@ static int __tmemc_set_var(struct client *client, uint32_t subop, uint32_t arg1)
         atomic_sub(old_weight,&tmem_global.client_weight_total);
         atomic_add(client->weight,&tmem_global.client_weight_total);
         break;
-    case XEN_SYSCTL_TMEM_OP_SET_CAP:
-        client->cap = arg1;
-        tmem_client_info("tmem: cap set to %d for %s=%d\n",
-                        arg1, tmem_cli_id_str, cli_id);
-        break;
     case XEN_SYSCTL_TMEM_OP_SET_COMPRESS:
         if ( tmem_dedup_enabled() )
         {
@@ -341,11 +336,6 @@ static int tmemc_save_subop(int cli_id, uint32_t pool_id,
             break;
         rc = client->weight == -1 ? -2 : client->weight;
         break;
-    case XEN_SYSCTL_TMEM_OP_SAVE_GET_CLIENT_CAP:
-        if ( client == NULL )
-            break;
-        rc = client->cap == -1 ? -2 : client->cap;
-        break;
     case XEN_SYSCTL_TMEM_OP_SAVE_GET_CLIENT_FLAGS:
         if ( client == NULL )
             break;
@@ -404,7 +394,6 @@ int tmem_control(struct xen_sysctl_tmem_op *op)
                          guest_handle_cast(op->buf, char), op->arg1, op->arg2);
         break;
     case XEN_SYSCTL_TMEM_OP_SET_WEIGHT:
-    case XEN_SYSCTL_TMEM_OP_SET_CAP:
     case XEN_SYSCTL_TMEM_OP_SET_COMPRESS:
         ret = tmemc_set_var(op->cli_id, cmd, op->arg1);
         break;
@@ -414,7 +403,6 @@ int tmem_control(struct xen_sysctl_tmem_op *op)
     case XEN_SYSCTL_TMEM_OP_SAVE_GET_VERSION:
     case XEN_SYSCTL_TMEM_OP_SAVE_GET_MAXPOOLS:
     case XEN_SYSCTL_TMEM_OP_SAVE_GET_CLIENT_WEIGHT:
-    case XEN_SYSCTL_TMEM_OP_SAVE_GET_CLIENT_CAP:
     case XEN_SYSCTL_TMEM_OP_SAVE_GET_CLIENT_FLAGS:
     case XEN_SYSCTL_TMEM_OP_SAVE_GET_POOL_FLAGS:
     case XEN_SYSCTL_TMEM_OP_SAVE_GET_POOL_NPAGES:
diff --git a/xen/include/public/sysctl.h b/xen/include/public/sysctl.h
index 8197c14..847ec35 100644
--- a/xen/include/public/sysctl.h
+++ b/xen/include/public/sysctl.h
@@ -757,14 +757,12 @@ DEFINE_XEN_GUEST_HANDLE(xen_sysctl_psr_cat_op_t);
 #define XEN_SYSCTL_TMEM_OP_DESTROY                3
 #define XEN_SYSCTL_TMEM_OP_LIST                   4
 #define XEN_SYSCTL_TMEM_OP_SET_WEIGHT             5
-#define XEN_SYSCTL_TMEM_OP_SET_CAP                6
 #define XEN_SYSCTL_TMEM_OP_SET_COMPRESS           7
 #define XEN_SYSCTL_TMEM_OP_QUERY_FREEABLE_MB      8
 #define XEN_SYSCTL_TMEM_OP_SAVE_BEGIN             10
 #define XEN_SYSCTL_TMEM_OP_SAVE_GET_VERSION       11
 #define XEN_SYSCTL_TMEM_OP_SAVE_GET_MAXPOOLS      12
 #define XEN_SYSCTL_TMEM_OP_SAVE_GET_CLIENT_WEIGHT 13
-#define XEN_SYSCTL_TMEM_OP_SAVE_GET_CLIENT_CAP    14
 #define XEN_SYSCTL_TMEM_OP_SAVE_GET_CLIENT_FLAGS  15
 #define XEN_SYSCTL_TMEM_OP_SAVE_GET_POOL_FLAGS    16
 #define XEN_SYSCTL_TMEM_OP_SAVE_GET_POOL_NPAGES   17
-- 
2.4.11



^ permalink raw reply related	[flat|nested] 37+ messages in thread

* [PATCH v1 03/12] tmem: Wrap tmem dedup code with CONFIG_TMEM_DEDUP
  2016-09-28  9:42 [PATCH v1] Tmem cleanups/improvements for v4.8 Konrad Rzeszutek Wilk
  2016-09-28  9:42 ` [PATCH v1 01/12] libxc/tmem/restore: Remove call to XEN_SYSCTL_TMEM_OP_SAVE_GET_VERSION Konrad Rzeszutek Wilk
  2016-09-28  9:42 ` [PATCH v1 02/12] tmem: Retire XEN_SYSCTL_TMEM_OP_[SET_CAP|SAVE_GET_CLIENT_CAP] Konrad Rzeszutek Wilk
@ 2016-09-28  9:42 ` Konrad Rzeszutek Wilk
  2016-09-28 12:18   ` Jan Beulich
  2016-09-28  9:42 ` [PATCH v1 04/12] tmem: Wrap tmem tze code with CONFIG_TMEM_TZE Konrad Rzeszutek Wilk
                   ` (8 subsequent siblings)
  11 siblings, 1 reply; 37+ messages in thread
From: Konrad Rzeszutek Wilk @ 2016-09-28  9:42 UTC (permalink / raw)
  To: xen-devel, konrad; +Cc: Konrad Rzeszutek Wilk

This config option is not defined anywhere yet, but wrapping the
code with it makes it much easier to see which code needs to be
dealt with.

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
v1: First submission.
---
 xen/common/tmem.c          | 57 ++++++++++++++++++++++++++++++++++++++++++----
 xen/common/tmem_control.c  |  2 ++
 xen/common/tmem_xen.c      |  2 ++
 xen/include/xen/tmem_xen.h |  4 ++++
 4 files changed, 60 insertions(+), 5 deletions(-)

diff --git a/xen/common/tmem.c b/xen/common/tmem.c
index efaafc4..8327218 100644
--- a/xen/common/tmem.c
+++ b/xen/common/tmem.c
@@ -71,10 +71,14 @@ struct tmem_page_descriptor {
     pagesize_t size; /* 0 == PAGE_SIZE (pfp), -1 == data invalid,
                     else compressed data (cdata). */
     uint32_t index;
+#ifdef CONFIG_TMEM_DEDUP
     /* Must hold pcd_tree_rwlocks[firstbyte] to use pcd pointer/siblings. */
     uint16_t firstbyte; /* NON_SHAREABLE->pfp  otherwise->pcd. */
+#endif
     bool_t eviction_attempted;  /* CHANGE TO lifetimes? (settable). */
+#ifdef CONFIG_TMEM_DEDUP
     struct list_head pcd_siblings;
+#endif
     union {
         struct page_info *pfp;  /* Page frame pointer. */
         char *cdata; /* Compressed data. */
@@ -94,15 +98,20 @@ struct tmem_page_content_descriptor {
         char *cdata; /* If compression_enabled. */
         char *tze; /* If !compression_enabled, trailing zeroes eliminated. */
     };
+#ifdef CONFIG_TMEM_DEDUP
     struct list_head pgp_list;
     struct rb_node pcd_rb_tree_node;
     uint32_t pgp_ref_count;
+#endif
     pagesize_t size; /* If compression_enabled -> 0<size<PAGE_SIZE (*cdata)
                      * else if tze, 0<=size<PAGE_SIZE, rounded up to mult of 8
                      * else PAGE_SIZE -> *pfp. */
 };
+
+#ifdef CONFIG_TMEM_DEDUP
 struct rb_root pcd_tree_roots[256]; /* Choose based on first byte of page. */
 rwlock_t pcd_tree_rwlocks[256]; /* Poor man's concurrency for now. */
+#endif
 
 static int tmem_initialized = 0;
 
@@ -262,6 +271,7 @@ static void tmem_persistent_pool_page_put(void *page_va)
  */
 #define NOT_SHAREABLE ((uint16_t)-1UL)
 
+#ifdef CONFIG_TMEM_DEDUP
 static int pcd_copy_to_client(xen_pfn_t cmfn, struct tmem_page_descriptor *pgp)
 {
     uint8_t firstbyte = pgp->firstbyte;
@@ -485,6 +495,7 @@ unlock:
     write_unlock(&pcd_tree_rwlocks[firstbyte]);
     return ret;
 }
+#endif
 
 /************ PAGE DESCRIPTOR MANIPULATION ROUTINES *******************/
 
@@ -503,12 +514,14 @@ static struct tmem_page_descriptor *pgp_alloc(struct tmem_object_root *obj)
     INIT_LIST_HEAD(&pgp->global_eph_pages);
     INIT_LIST_HEAD(&pgp->us.client_eph_pages);
     pgp->pfp = NULL;
+#ifdef CONFIG_TMEM_DEDUP
     if ( tmem_dedup_enabled() )
     {
         pgp->firstbyte = NOT_SHAREABLE;
         pgp->eviction_attempted = 0;
         INIT_LIST_HEAD(&pgp->pcd_siblings);
     }
+#endif
     pgp->size = -1;
     pgp->index = -1;
     pgp->timestamp = get_cycles();
@@ -533,9 +546,12 @@ static void pgp_free_data(struct tmem_page_descriptor *pgp, struct tmem_pool *po
 
     if ( pgp->pfp == NULL )
         return;
+#ifdef CONFIG_TMEM_DEDUP
     if ( tmem_dedup_enabled() && pgp->firstbyte != NOT_SHAREABLE )
         pcd_disassociate(pgp,pool,0); /* pgp->size lost. */
-    else if ( pgp_size )
+    else
+#endif
+    if ( pgp_size )
         tmem_free(pgp->cdata, pool);
     else
         tmem_free_page(pgp->us.obj->pool,pgp->pfp);
@@ -1085,6 +1101,8 @@ static struct client *client_create(domid_t cli_id)
 
     client->cli_id = cli_id;
     client->compress = tmem_compression_enabled();
+#ifdef CONFIG_TMEM_DEDUP
+#endif
     client->shared_auth_required = tmem_shared_auth();
     for ( i = 0; i < MAX_GLOBAL_SHARED_POOLS; i++)
         client->shared_auth_uuid[i][0] =
@@ -1141,13 +1159,16 @@ static bool_t tmem_try_to_evict_pgp(struct tmem_page_descriptor *pgp, bool_t *ho
 {
     struct tmem_object_root *obj = pgp->us.obj;
     struct tmem_pool *pool = obj->pool;
+#ifdef CONFIG_TMEM_DEDUP
     struct client *client = pool->client;
     uint16_t firstbyte = pgp->firstbyte;
+#endif
 
     if ( pool->is_dying )
         return 0;
     if ( spin_trylock(&obj->obj_spinlock) )
     {
+#ifdef CONFIG_TMEM_DEDUP
         if ( tmem_dedup_enabled() )
         {
             firstbyte = pgp->firstbyte;
@@ -1166,6 +1187,7 @@ static bool_t tmem_try_to_evict_pgp(struct tmem_page_descriptor *pgp, bool_t *ho
                 goto pcd_unlock;
             }
         }
+#endif
         if ( obj->pgp_count > 1 )
             return 1;
         if ( write_trylock(&pool->pool_rwlock) )
@@ -1173,10 +1195,12 @@ static bool_t tmem_try_to_evict_pgp(struct tmem_page_descriptor *pgp, bool_t *ho
             *hold_pool_rwlock = 1;
             return 1;
         }
+#ifdef CONFIG_TMEM_DEDUP
 pcd_unlock:
         if ( tmem_dedup_enabled() )
             write_unlock(&pcd_tree_rwlocks[firstbyte]);
 obj_unlock:
+#endif
         spin_unlock(&obj->obj_spinlock);
     }
     return 0;
@@ -1232,11 +1256,13 @@ found:
     ASSERT_SPINLOCK(&obj->obj_spinlock);
     pgp_del = pgp_delete_from_obj(obj, pgp->index);
     ASSERT(pgp_del == pgp);
+#ifdef CONFIG_TMEM_DEDUP
     if ( tmem_dedup_enabled() && pgp->firstbyte != NOT_SHAREABLE )
     {
         ASSERT(pgp->pcd->pgp_ref_count == 1 || pgp->eviction_attempted);
         pcd_disassociate(pgp,pool,1);
     }
+#endif
 
     /* pgp already delist, so call pgp_free directly. */
     pgp_free(pgp);
@@ -1303,9 +1329,11 @@ static int do_tmem_put_compress(struct tmem_page_descriptor *pgp, xen_pfn_t cmfn
     else if ( (size == 0) || (size >= tmem_mempool_maxalloc) ) {
         ret = 0;
         goto out;
+#ifdef CONFIG_TMEM_DEDUP
     } else if ( tmem_dedup_enabled() && !is_persistent(pgp->us.obj->pool) ) {
         if ( (ret = pcd_associate(pgp,dst,size)) == -ENOMEM )
             goto out;
+#endif
     } else if ( (p = tmem_malloc(size,pgp->us.obj->pool)) == NULL ) {
         ret = -ENOMEM;
         goto out;
@@ -1365,11 +1393,13 @@ copy_uncompressed:
     ret = tmem_copy_from_client(pgp->pfp, cmfn, tmem_cli_buf_null);
     if ( ret < 0 )
         goto bad_copy;
+#ifdef CONFIG_TMEM_DEDUP
     if ( tmem_dedup_enabled() && !is_persistent(pool) )
     {
         if ( pcd_associate(pgp,NULL,0) == -ENOMEM )
             goto failed_dup;
     }
+#endif
 
 done:
     /* Successfully replaced data, clean up and return success. */
@@ -1506,6 +1536,7 @@ copy_uncompressed:
     if ( ret < 0 )
         goto bad_copy;
 
+#ifdef CONFIG_TMEM_DEDUP
     if ( tmem_dedup_enabled() && !is_persistent(pool) )
     {
         if ( pcd_associate(pgp, NULL, 0) == -ENOMEM )
@@ -1514,6 +1545,7 @@ copy_uncompressed:
             goto del_pgp_from_obj;
         }
     }
+#endif
 
 insert_page:
     if ( !is_persistent(pool) )
@@ -1601,10 +1633,13 @@ static int do_tmem_get(struct tmem_pool *pool,
         return 0;
     }
     ASSERT(pgp->size != -1);
+#ifdef CONFIG_TMEM_DEDUP
     if ( tmem_dedup_enabled() && !is_persistent(pool) &&
               pgp->firstbyte != NOT_SHAREABLE )
         rc = pcd_copy_to_client(cmfn, pgp);
-    else if ( pgp->size != 0 )
+    else
+#endif
+    if ( pgp->size != 0 )
     {
         rc = tmem_decompress_to_client(cmfn, pgp->cdata, pgp->size, clibuf);
     }
@@ -2362,29 +2397,41 @@ unsigned long tmem_freeable_pages(void)
 /* Called at hypervisor startup. */
 static int __init init_tmem(void)
 {
-    int i;
     if ( !tmem_enabled() )
         return 0;
 
+#ifdef CONFIG_TMEM_DEDUP
     if ( tmem_dedup_enabled() )
+    {
+        unsigned int i;
+
         for (i = 0; i < 256; i++ )
         {
             pcd_tree_roots[i] = RB_ROOT;
             rwlock_init(&pcd_tree_rwlocks[i]);
         }
-
+    }
+#endif
     if ( !tmem_mempool_init() )
         return 0;
 
     if ( tmem_init() )
     {
         printk("tmem: initialized comp=%d dedup=%d tze=%d\n",
-            tmem_compression_enabled(), tmem_dedup_enabled(), tmem_tze_enabled());
+               tmem_compression_enabled(),
+#ifdef CONFIG_TMEM_DEDUP
+               tmem_dedup_enabled(),
+#else
+               0,
+#endif
+               tmem_tze_enabled());
+#ifdef CONFIG_TMEM_DEDUP
         if ( tmem_dedup_enabled()&&tmem_compression_enabled()&&tmem_tze_enabled() )
         {
             tmem_tze_disable();
             printk("tmem: tze and compression not compatible, disabling tze\n");
         }
+#endif
         tmem_initialized = 1;
     }
     else
diff --git a/xen/common/tmem_control.c b/xen/common/tmem_control.c
index ca34852..565e50b 100644
--- a/xen/common/tmem_control.c
+++ b/xen/common/tmem_control.c
@@ -274,6 +274,7 @@ static int __tmemc_set_var(struct client *client, uint32_t subop, uint32_t arg1)
         atomic_add(client->weight,&tmem_global.client_weight_total);
         break;
     case XEN_SYSCTL_TMEM_OP_SET_COMPRESS:
+#ifdef CONFIG_TMEM_DEDUP
         if ( tmem_dedup_enabled() )
         {
             tmem_client_warn("tmem: compression %s for all %ss, cannot be changed when tmem_dedup is enabled\n",
@@ -281,6 +282,7 @@ static int __tmemc_set_var(struct client *client, uint32_t subop, uint32_t arg1)
                             tmem_client_str);
             return -1;
         }
+#endif
         client->compress = arg1 ? 1 : 0;
         tmem_client_info("tmem: compression %s for %s=%d\n",
             arg1 ? "enabled" : "disabled",tmem_cli_id_str,cli_id);
diff --git a/xen/common/tmem_xen.c b/xen/common/tmem_xen.c
index 71cb7d5..9a2bfed 100644
--- a/xen/common/tmem_xen.c
+++ b/xen/common/tmem_xen.c
@@ -20,8 +20,10 @@ boolean_param("tmem", opt_tmem);
 bool_t __read_mostly opt_tmem_compress = 0;
 boolean_param("tmem_compress", opt_tmem_compress);
 
+#ifdef CONFIG_TMEM_DEDUP
 bool_t __read_mostly opt_tmem_dedup = 0;
 boolean_param("tmem_dedup", opt_tmem_dedup);
+#endif
 
 bool_t __read_mostly opt_tmem_tze = 0;
 boolean_param("tmem_tze", opt_tmem_tze);
diff --git a/xen/include/xen/tmem_xen.h b/xen/include/xen/tmem_xen.h
index 582951a..52f87b1 100644
--- a/xen/include/xen/tmem_xen.h
+++ b/xen/include/xen/tmem_xen.h
@@ -41,11 +41,13 @@ static inline bool_t tmem_compression_enabled(void)
     return opt_tmem_compress;
 }
 
+#ifdef CONFIG_TMEM_DEDUP
 extern bool_t opt_tmem_dedup;
 static inline bool_t tmem_dedup_enabled(void)
 {
     return opt_tmem_dedup;
 }
+#endif
 
 extern bool_t opt_tmem_tze;
 static inline bool_t tmem_tze_enabled(void)
@@ -192,6 +194,7 @@ static inline struct client *tmem_client_from_cli_id(domid_t cli_id)
     return c;
 }
 
+#ifdef CONFIG_TMEM_DEDUP
 static inline uint8_t tmem_get_first_byte(struct page_info *pfp)
 {
     const uint8_t *p = __map_domain_page(pfp);
@@ -288,6 +291,7 @@ static inline void tmem_tze_copy_from_pfp(void *tva, struct page_info *pfp, page
 
     unmap_domain_page(p);
 }
+#endif
 
 /* these typedefs are in the public/tmem.h interface
 typedef XEN_GUEST_HANDLE(void) cli_mfn_t;
-- 
2.4.11



^ permalink raw reply related	[flat|nested] 37+ messages in thread

* [PATCH v1 04/12] tmem: Wrap tmem tze code with CONFIG_TMEM_TZE
  2016-09-28  9:42 [PATCH v1] Tmem cleanups/improvements for v4.8 Konrad Rzeszutek Wilk
                   ` (2 preceding siblings ...)
  2016-09-28  9:42 ` [PATCH v1 03/12] tmem: Wrap tmem dedup code with CONFIG_TMEM_DEDUP Konrad Rzeszutek Wilk
@ 2016-09-28  9:42 ` Konrad Rzeszutek Wilk
  2016-09-28 12:19   ` Jan Beulich
  2016-09-28  9:42 ` [PATCH v1 05/12] tmem: Delete deduplication (and tze) code Konrad Rzeszutek Wilk
                   ` (7 subsequent siblings)
  11 siblings, 1 reply; 37+ messages in thread
From: Konrad Rzeszutek Wilk @ 2016-09-28  9:42 UTC (permalink / raw)
  To: xen-devel, konrad; +Cc: Konrad Rzeszutek Wilk

.. which is actually dependent on CONFIG_TMEM_DEDUP.

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
v1: First submission.
---
 xen/common/tmem.c          | 12 ++++++++++--
 xen/common/tmem_xen.c      |  4 ++++
 xen/include/xen/tmem_xen.h |  4 ++++
 3 files changed, 18 insertions(+), 2 deletions(-)

diff --git a/xen/common/tmem.c b/xen/common/tmem.c
index 8327218..b673120 100644
--- a/xen/common/tmem.c
+++ b/xen/common/tmem.c
@@ -96,7 +96,9 @@ struct tmem_page_content_descriptor {
     union {
         struct page_info *pfp;  /* Page frame pointer. */
         char *cdata; /* If compression_enabled. */
+#ifdef CONFIG_TMEM_TZE
         char *tze; /* If !compression_enabled, trailing zeroes eliminated. */
+#endif
     };
 #ifdef CONFIG_TMEM_DEDUP
     struct list_head pgp_list;
@@ -2424,8 +2426,14 @@ static int __init init_tmem(void)
 #else
                0,
 #endif
-               tmem_tze_enabled());
-#ifdef CONFIG_TMEM_DEDUP
+#ifdef CONFIG_TMEM_TZE
+               tmem_tze_enabled()
+#else
+               0
+#endif
+            );
+
+#if defined(CONFIG_TMEM_DEDUP) && defined(CONFIG_TMEM_TZE)
         if ( tmem_dedup_enabled()&&tmem_compression_enabled()&&tmem_tze_enabled() )
         {
             tmem_tze_disable();
diff --git a/xen/common/tmem_xen.c b/xen/common/tmem_xen.c
index 9a2bfed..d42deef 100644
--- a/xen/common/tmem_xen.c
+++ b/xen/common/tmem_xen.c
@@ -25,8 +25,10 @@ bool_t __read_mostly opt_tmem_dedup = 0;
 boolean_param("tmem_dedup", opt_tmem_dedup);
 #endif
 
+#ifdef CONFIG_TMEM_TZE
 bool_t __read_mostly opt_tmem_tze = 0;
 boolean_param("tmem_tze", opt_tmem_tze);
+#endif
 
 bool_t __read_mostly opt_tmem_shared_auth = 0;
 boolean_param("tmem_shared_auth", opt_tmem_shared_auth);
@@ -218,6 +220,7 @@ int tmem_decompress_to_client(xen_pfn_t cmfn, void *tmem_va,
     return 1;
 }
 
+#ifdef CONFIG_TMEM_TZE
 int tmem_copy_tze_to_client(xen_pfn_t cmfn, void *tmem_va,
                                     pagesize_t len)
 {
@@ -239,6 +242,7 @@ int tmem_copy_tze_to_client(xen_pfn_t cmfn, void *tmem_va,
     smp_mb();
     return 1;
 }
+#endif
 
 /******************  XEN-SPECIFIC HOST INITIALIZATION ********************/
 static int dstmem_order, workmem_order;
diff --git a/xen/include/xen/tmem_xen.h b/xen/include/xen/tmem_xen.h
index 52f87b1..3be8001 100644
--- a/xen/include/xen/tmem_xen.h
+++ b/xen/include/xen/tmem_xen.h
@@ -49,6 +49,7 @@ static inline bool_t tmem_dedup_enabled(void)
 }
 #endif
 
+#ifdef CONFIG_TMEM_TZE
 extern bool_t opt_tmem_tze;
 static inline bool_t tmem_tze_enabled(void)
 {
@@ -59,6 +60,7 @@ static inline void tmem_tze_disable(void)
 {
     opt_tmem_tze = 0;
 }
+#endif
 
 extern bool_t opt_tmem_shared_auth;
 static inline bool_t tmem_shared_auth(void)
@@ -340,7 +342,9 @@ int tmem_compress_from_client(xen_pfn_t, void **, size_t *,
 
 int tmem_copy_from_client(struct page_info *, xen_pfn_t, tmem_cli_va_param_t);
 int tmem_copy_to_client(xen_pfn_t, struct page_info *, tmem_cli_va_param_t);
+#ifdef CONFIG_TMEM_TZE
 extern int tmem_copy_tze_to_client(xen_pfn_t cmfn, void *tmem_va, pagesize_t len);
+#endif
 
 #define tmem_client_err(fmt, args...)  printk(XENLOG_G_ERR fmt, ##args)
 #define tmem_client_warn(fmt, args...) printk(XENLOG_G_WARNING fmt, ##args)
-- 
2.4.11



^ permalink raw reply related	[flat|nested] 37+ messages in thread

* [PATCH v1 05/12] tmem: Delete deduplication (and tze) code.
  2016-09-28  9:42 [PATCH v1] Tmem cleanups/improvements for v4.8 Konrad Rzeszutek Wilk
                   ` (3 preceding siblings ...)
  2016-09-28  9:42 ` [PATCH v1 04/12] tmem: Wrap tmem tze code with CONFIG_TMEM_TZE Konrad Rzeszutek Wilk
@ 2016-09-28  9:42 ` Konrad Rzeszutek Wilk
  2016-09-28 12:34   ` Jan Beulich
  2016-09-28  9:42 ` [PATCH v1 06/12] tmem: Move client weight, frozen, live_migrating, and compress Konrad Rzeszutek Wilk
                   ` (6 subsequent siblings)
  11 siblings, 1 reply; 37+ messages in thread
From: Konrad Rzeszutek Wilk @ 2016-09-28  9:42 UTC (permalink / raw)
  To: xen-devel, konrad; +Cc: Konrad Rzeszutek Wilk

A couple of reasons:
 - It can lead to security issues (see row-hammer, KSM, and
   similar attacks).
 - The code is quite complex.
 - Deduplication helps only if pages are actually identical,
   which is hardly guaranteed.
 - We got some gains (when pages are deduped) but at the cost of
   making the code less maintainable.
 - tze depends on the deduplication code.

As such, delete it.

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
v1: First submission.
---
 docs/misc/xen-command-line.markdown |   6 -
 xen/common/tmem.c                   | 361 +-----------------------------------
 xen/common/tmem_control.c           |   9 -
 xen/common/tmem_xen.c               |  34 ----
 xen/include/xen/tmem_xen.h          | 123 ------------
 5 files changed, 1 insertion(+), 532 deletions(-)

diff --git a/docs/misc/xen-command-line.markdown b/docs/misc/xen-command-line.markdown
index 8ff57fa..cd9534b 100644
--- a/docs/misc/xen-command-line.markdown
+++ b/docs/misc/xen-command-line.markdown
@@ -1500,15 +1500,9 @@ pages) must also be specified via the tbuf\_size parameter.
 ### tmem\_compress
 > `= <boolean>`
 
-### tmem\_dedup
-> `= <boolean>`
-
 ### tmem\_shared\_auth
 > `= <boolean>`
 
-### tmem\_tze
-> `= <integer>`
-
 ### tsc
 > `= unstable | skewed | stable:socket`
 
diff --git a/xen/common/tmem.c b/xen/common/tmem.c
index b673120..cf5271c 100644
--- a/xen/common/tmem.c
+++ b/xen/common/tmem.c
@@ -71,14 +71,7 @@ struct tmem_page_descriptor {
     pagesize_t size; /* 0 == PAGE_SIZE (pfp), -1 == data invalid,
                     else compressed data (cdata). */
     uint32_t index;
-#ifdef CONFIG_TMEM_DEDUP
-    /* Must hold pcd_tree_rwlocks[firstbyte] to use pcd pointer/siblings. */
-    uint16_t firstbyte; /* NON_SHAREABLE->pfp  otherwise->pcd. */
-#endif
     bool_t eviction_attempted;  /* CHANGE TO lifetimes? (settable). */
-#ifdef CONFIG_TMEM_DEDUP
-    struct list_head pcd_siblings;
-#endif
     union {
         struct page_info *pfp;  /* Page frame pointer. */
         char *cdata; /* Compressed data. */
@@ -96,25 +89,12 @@ struct tmem_page_content_descriptor {
     union {
         struct page_info *pfp;  /* Page frame pointer. */
         char *cdata; /* If compression_enabled. */
-#ifdef CONFIG_TMEM_TZE
-        char *tze; /* If !compression_enabled, trailing zeroes eliminated. */
-#endif
     };
-#ifdef CONFIG_TMEM_DEDUP
-    struct list_head pgp_list;
-    struct rb_node pcd_rb_tree_node;
-    uint32_t pgp_ref_count;
-#endif
     pagesize_t size; /* If compression_enabled -> 0<size<PAGE_SIZE (*cdata)
                      * else if tze, 0<=size<PAGE_SIZE, rounded up to mult of 8
                      * else PAGE_SIZE -> *pfp. */
 };
 
-#ifdef CONFIG_TMEM_DEDUP
-struct rb_root pcd_tree_roots[256]; /* Choose based on first byte of page. */
-rwlock_t pcd_tree_rwlocks[256]; /* Poor man's concurrency for now. */
-#endif
-
 static int tmem_initialized = 0;
 
 struct xmem_pool *tmem_mempool = 0;
@@ -273,232 +253,6 @@ static void tmem_persistent_pool_page_put(void *page_va)
  */
 #define NOT_SHAREABLE ((uint16_t)-1UL)
 
-#ifdef CONFIG_TMEM_DEDUP
-static int pcd_copy_to_client(xen_pfn_t cmfn, struct tmem_page_descriptor *pgp)
-{
-    uint8_t firstbyte = pgp->firstbyte;
-    struct tmem_page_content_descriptor *pcd;
-    int ret;
-
-    ASSERT(tmem_dedup_enabled());
-    read_lock(&pcd_tree_rwlocks[firstbyte]);
-    pcd = pgp->pcd;
-    if ( pgp->size < PAGE_SIZE && pgp->size != 0 &&
-         pcd->size < PAGE_SIZE && pcd->size != 0 )
-        ret = tmem_decompress_to_client(cmfn, pcd->cdata, pcd->size,
-                                       tmem_cli_buf_null);
-    else if ( tmem_tze_enabled() && pcd->size < PAGE_SIZE )
-        ret = tmem_copy_tze_to_client(cmfn, pcd->tze, pcd->size);
-    else
-        ret = tmem_copy_to_client(cmfn, pcd->pfp, tmem_cli_buf_null);
-    read_unlock(&pcd_tree_rwlocks[firstbyte]);
-    return ret;
-}
-
-/*
- * Ensure pgp no longer points to pcd, nor vice-versa.
- * Take pcd rwlock unless have_pcd_rwlock is set, always unlock when done.
- */
-static void pcd_disassociate(struct tmem_page_descriptor *pgp, struct tmem_pool *pool, bool_t have_pcd_rwlock)
-{
-    struct tmem_page_content_descriptor *pcd = pgp->pcd;
-    struct page_info *pfp = pgp->pcd->pfp;
-    uint16_t firstbyte = pgp->firstbyte;
-    char *pcd_tze = pgp->pcd->tze;
-    pagesize_t pcd_size = pcd->size;
-    pagesize_t pgp_size = pgp->size;
-    char *pcd_cdata = pgp->pcd->cdata;
-    pagesize_t pcd_csize = pgp->pcd->size;
-
-    ASSERT(tmem_dedup_enabled());
-    ASSERT(firstbyte != NOT_SHAREABLE);
-    ASSERT(firstbyte < 256);
-
-    if ( have_pcd_rwlock )
-        ASSERT_WRITELOCK(&pcd_tree_rwlocks[firstbyte]);
-    else
-        write_lock(&pcd_tree_rwlocks[firstbyte]);
-    list_del_init(&pgp->pcd_siblings);
-    pgp->pcd = NULL;
-    pgp->firstbyte = NOT_SHAREABLE;
-    pgp->size = -1;
-    if ( --pcd->pgp_ref_count )
-    {
-        write_unlock(&pcd_tree_rwlocks[firstbyte]);
-        return;
-    }
-
-    /* No more references to this pcd, recycle it and the physical page. */
-    ASSERT(list_empty(&pcd->pgp_list));
-    pcd->pfp = NULL;
-    /* Remove pcd from rbtree. */
-    rb_erase(&pcd->pcd_rb_tree_node,&pcd_tree_roots[firstbyte]);
-    /* Reinit the struct for safety for now. */
-    RB_CLEAR_NODE(&pcd->pcd_rb_tree_node);
-    /* Now free up the pcd memory. */
-    tmem_free(pcd, NULL);
-    atomic_dec_and_assert(global_pcd_count);
-    if ( pgp_size != 0 && pcd_size < PAGE_SIZE )
-    {
-        /* Compressed data. */
-        tmem_free(pcd_cdata, pool);
-        tmem_stats.pcd_tot_csize -= pcd_csize;
-    }
-    else if ( pcd_size != PAGE_SIZE )
-    {
-        /* Trailing zero data. */
-        tmem_stats.pcd_tot_tze_size -= pcd_size;
-        if ( pcd_size )
-            tmem_free(pcd_tze, pool);
-    } else {
-        /* Real physical page. */
-        if ( tmem_tze_enabled() )
-            tmem_stats.pcd_tot_tze_size -= PAGE_SIZE;
-        if ( tmem_compression_enabled() )
-            tmem_stats.pcd_tot_csize -= PAGE_SIZE;
-        tmem_free_page(pool,pfp);
-    }
-    write_unlock(&pcd_tree_rwlocks[firstbyte]);
-}
-
-
-static int pcd_associate(struct tmem_page_descriptor *pgp, char *cdata, pagesize_t csize)
-{
-    struct rb_node **new, *parent = NULL;
-    struct rb_root *root;
-    struct tmem_page_content_descriptor *pcd;
-    int cmp;
-    pagesize_t pfp_size = 0;
-    uint8_t firstbyte = (cdata == NULL) ? tmem_get_first_byte(pgp->pfp) : *cdata;
-    int ret = 0;
-
-    if ( !tmem_dedup_enabled() )
-        return 0;
-    ASSERT(pgp->us.obj != NULL);
-    ASSERT(pgp->us.obj->pool != NULL);
-    ASSERT(!pgp->us.obj->pool->persistent);
-    if ( cdata == NULL )
-    {
-        ASSERT(pgp->pfp != NULL);
-        pfp_size = PAGE_SIZE;
-        if ( tmem_tze_enabled() )
-        {
-            pfp_size = tmem_tze_pfp_scan(pgp->pfp);
-            if ( pfp_size > PCD_TZE_MAX_SIZE )
-                pfp_size = PAGE_SIZE;
-        }
-        ASSERT(pfp_size <= PAGE_SIZE);
-        ASSERT(!(pfp_size & (sizeof(uint64_t)-1)));
-    }
-    write_lock(&pcd_tree_rwlocks[firstbyte]);
-
-    /* Look for page match. */
-    root = &pcd_tree_roots[firstbyte];
-    new = &(root->rb_node);
-    while ( *new )
-    {
-        pcd = container_of(*new, struct tmem_page_content_descriptor, pcd_rb_tree_node);
-        parent = *new;
-        /* Compare new entry and rbtree entry, set cmp accordingly. */
-        if ( cdata != NULL )
-        {
-            if ( pcd->size < PAGE_SIZE )
-                /* Both new entry and rbtree entry are compressed. */
-                cmp = tmem_pcd_cmp(cdata,csize,pcd->cdata,pcd->size);
-            else
-                /* New entry is compressed, rbtree entry is not. */
-                cmp = -1;
-        } else if ( pcd->size < PAGE_SIZE )
-            /* Rbtree entry is compressed, rbtree entry is not. */
-            cmp = 1;
-        else if ( tmem_tze_enabled() ) {
-            if ( pcd->size < PAGE_SIZE )
-                /* Both new entry and rbtree entry are trailing zero. */
-                cmp = tmem_tze_pfp_cmp(pgp->pfp,pfp_size,pcd->tze,pcd->size);
-            else
-                /* New entry is trailing zero, rbtree entry is not. */
-                cmp = tmem_tze_pfp_cmp(pgp->pfp,pfp_size,pcd->pfp,PAGE_SIZE);
-        } else  {
-            /* Both new entry and rbtree entry are full physical pages. */
-            ASSERT(pgp->pfp != NULL);
-            ASSERT(pcd->pfp != NULL);
-            cmp = tmem_page_cmp(pgp->pfp,pcd->pfp);
-        }
-
-        /* Walk tree or match depending on cmp. */
-        if ( cmp < 0 )
-            new = &((*new)->rb_left);
-        else if ( cmp > 0 )
-            new = &((*new)->rb_right);
-        else
-        {
-            /*
-             * Match! if not compressed, free the no-longer-needed page
-             * but if compressed, data is assumed static so don't free!
-             */
-            if ( cdata == NULL )
-                tmem_free_page(pgp->us.obj->pool,pgp->pfp);
-            tmem_stats.deduped_puts++;
-            goto match;
-        }
-    }
-
-    /* Exited while loop with no match, so alloc a pcd and put it in the tree. */
-    if ( (pcd = tmem_malloc(sizeof(struct tmem_page_content_descriptor), NULL)) == NULL )
-    {
-        ret = -ENOMEM;
-        goto unlock;
-    } else if ( cdata != NULL ) {
-        if ( (pcd->cdata = tmem_malloc(csize,pgp->us.obj->pool)) == NULL )
-        {
-            tmem_free(pcd, NULL);
-            ret = -ENOMEM;
-            goto unlock;
-        }
-    }
-    atomic_inc_and_max(global_pcd_count);
-    RB_CLEAR_NODE(&pcd->pcd_rb_tree_node);  /* Is this necessary? */
-    INIT_LIST_HEAD(&pcd->pgp_list);  /* Is this necessary? */
-    pcd->pgp_ref_count = 0;
-    if ( cdata != NULL )
-    {
-        memcpy(pcd->cdata,cdata,csize);
-        pcd->size = csize;
-        tmem_stats.pcd_tot_csize += csize;
-    } else if ( pfp_size == 0 ) {
-        ASSERT(tmem_tze_enabled());
-        pcd->size = 0;
-        pcd->tze = NULL;
-    } else if ( pfp_size < PAGE_SIZE &&
-         ((pcd->tze = tmem_malloc(pfp_size,pgp->us.obj->pool)) != NULL) ) {
-        tmem_tze_copy_from_pfp(pcd->tze,pgp->pfp,pfp_size);
-        pcd->size = pfp_size;
-        tmem_stats.pcd_tot_tze_size += pfp_size;
-        tmem_free_page(pgp->us.obj->pool,pgp->pfp);
-    } else {
-        pcd->pfp = pgp->pfp;
-        pcd->size = PAGE_SIZE;
-        if ( tmem_tze_enabled() )
-            tmem_stats.pcd_tot_tze_size += PAGE_SIZE;
-        if ( tmem_compression_enabled() )
-            tmem_stats.pcd_tot_csize += PAGE_SIZE;
-    }
-    rb_link_node(&pcd->pcd_rb_tree_node, parent, new);
-    rb_insert_color(&pcd->pcd_rb_tree_node, root);
-
-match:
-    pcd->pgp_ref_count++;
-    list_add(&pgp->pcd_siblings,&pcd->pgp_list);
-    pgp->firstbyte = firstbyte;
-    pgp->eviction_attempted = 0;
-    pgp->pcd = pcd;
-
-unlock:
-    write_unlock(&pcd_tree_rwlocks[firstbyte]);
-    return ret;
-}
-#endif
-
 /************ PAGE DESCRIPTOR MANIPULATION ROUTINES *******************/
 
 /* Allocate a struct tmem_page_descriptor and associate it with an object. */
@@ -516,14 +270,6 @@ static struct tmem_page_descriptor *pgp_alloc(struct tmem_object_root *obj)
     INIT_LIST_HEAD(&pgp->global_eph_pages);
     INIT_LIST_HEAD(&pgp->us.client_eph_pages);
     pgp->pfp = NULL;
-#ifdef CONFIG_TMEM_DEDUP
-    if ( tmem_dedup_enabled() )
-    {
-        pgp->firstbyte = NOT_SHAREABLE;
-        pgp->eviction_attempted = 0;
-        INIT_LIST_HEAD(&pgp->pcd_siblings);
-    }
-#endif
     pgp->size = -1;
     pgp->index = -1;
     pgp->timestamp = get_cycles();
@@ -548,11 +294,6 @@ static void pgp_free_data(struct tmem_page_descriptor *pgp, struct tmem_pool *po
 
     if ( pgp->pfp == NULL )
         return;
-#ifdef CONFIG_TMEM_DEDUP
-    if ( tmem_dedup_enabled() && pgp->firstbyte != NOT_SHAREABLE )
-        pcd_disassociate(pgp,pool,0); /* pgp->size lost. */
-    else
-#endif
     if ( pgp_size )
         tmem_free(pgp->cdata, pool);
     else
@@ -1103,8 +844,6 @@ static struct client *client_create(domid_t cli_id)
 
     client->cli_id = cli_id;
     client->compress = tmem_compression_enabled();
-#ifdef CONFIG_TMEM_DEDUP
-#endif
     client->shared_auth_required = tmem_shared_auth();
     for ( i = 0; i < MAX_GLOBAL_SHARED_POOLS; i++)
         client->shared_auth_uuid[i][0] =
@@ -1161,35 +900,11 @@ static bool_t tmem_try_to_evict_pgp(struct tmem_page_descriptor *pgp, bool_t *ho
 {
     struct tmem_object_root *obj = pgp->us.obj;
     struct tmem_pool *pool = obj->pool;
-#ifdef CONFIG_TMEM_DEDUP
-    struct client *client = pool->client;
-    uint16_t firstbyte = pgp->firstbyte;
-#endif
 
     if ( pool->is_dying )
         return 0;
     if ( spin_trylock(&obj->obj_spinlock) )
     {
-#ifdef CONFIG_TMEM_DEDUP
-        if ( tmem_dedup_enabled() )
-        {
-            firstbyte = pgp->firstbyte;
-            if ( firstbyte ==  NOT_SHAREABLE )
-                goto obj_unlock;
-            ASSERT(firstbyte < 256);
-            if ( !write_trylock(&pcd_tree_rwlocks[firstbyte]) )
-                goto obj_unlock;
-            if ( pgp->pcd->pgp_ref_count > 1 && !pgp->eviction_attempted )
-            {
-                pgp->eviction_attempted++;
-                list_del(&pgp->global_eph_pages);
-                list_add_tail(&pgp->global_eph_pages,&tmem_global.ephemeral_page_list);
-                list_del(&pgp->us.client_eph_pages);
-                list_add_tail(&pgp->us.client_eph_pages,&client->ephemeral_page_list);
-                goto pcd_unlock;
-            }
-        }
-#endif
         if ( obj->pgp_count > 1 )
             return 1;
         if ( write_trylock(&pool->pool_rwlock) )
@@ -1197,12 +912,6 @@ static bool_t tmem_try_to_evict_pgp(struct tmem_page_descriptor *pgp, bool_t *ho
             *hold_pool_rwlock = 1;
             return 1;
         }
-#ifdef CONFIG_TMEM_DEDUP
-pcd_unlock:
-        if ( tmem_dedup_enabled() )
-            write_unlock(&pcd_tree_rwlocks[firstbyte]);
-obj_unlock:
-#endif
         spin_unlock(&obj->obj_spinlock);
     }
     return 0;
@@ -1258,13 +967,6 @@ found:
     ASSERT_SPINLOCK(&obj->obj_spinlock);
     pgp_del = pgp_delete_from_obj(obj, pgp->index);
     ASSERT(pgp_del == pgp);
-#ifdef CONFIG_TMEM_DEDUP
-    if ( tmem_dedup_enabled() && pgp->firstbyte != NOT_SHAREABLE )
-    {
-        ASSERT(pgp->pcd->pgp_ref_count == 1 || pgp->eviction_attempted);
-        pcd_disassociate(pgp,pool,1);
-    }
-#endif
 
     /* pgp already delist, so call pgp_free directly. */
     pgp_free(pgp);
@@ -1331,11 +1033,6 @@ static int do_tmem_put_compress(struct tmem_page_descriptor *pgp, xen_pfn_t cmfn
     else if ( (size == 0) || (size >= tmem_mempool_maxalloc) ) {
         ret = 0;
         goto out;
-#ifdef CONFIG_TMEM_DEDUP
-    } else if ( tmem_dedup_enabled() && !is_persistent(pgp->us.obj->pool) ) {
-        if ( (ret = pcd_associate(pgp,dst,size)) == -ENOMEM )
-            goto out;
-#endif
     } else if ( (p = tmem_malloc(size,pgp->us.obj->pool)) == NULL ) {
         ret = -ENOMEM;
         goto out;
@@ -1395,13 +1092,6 @@ copy_uncompressed:
     ret = tmem_copy_from_client(pgp->pfp, cmfn, tmem_cli_buf_null);
     if ( ret < 0 )
         goto bad_copy;
-#ifdef CONFIG_PAGE_DEDUP
-    if ( tmem_dedup_enabled() && !is_persistent(pool) )
-    {
-        if ( pcd_associate(pgp,NULL,0) == -ENOMEM )
-            goto failed_dup;
-    }
-#endif
 
 done:
     /* Successfully replaced data, clean up and return success. */
@@ -1538,17 +1228,6 @@ copy_uncompressed:
     if ( ret < 0 )
         goto bad_copy;
 
-#ifdef CONFIG_TMEM_DEDUP
-    if ( tmem_dedup_enabled() && !is_persistent(pool) )
-    {
-        if ( pcd_associate(pgp, NULL, 0) == -ENOMEM )
-        {
-            ret = -ENOMEM;
-            goto del_pgp_from_obj;
-        }
-    }
-#endif
-
 insert_page:
     if ( !is_persistent(pool) )
     {
@@ -1635,12 +1314,6 @@ static int do_tmem_get(struct tmem_pool *pool,
         return 0;
     }
     ASSERT(pgp->size != -1);
-#ifdef CONFIG_TMEM_DEDUP
-    if ( tmem_dedup_enabled() && !is_persistent(pool) &&
-              pgp->firstbyte != NOT_SHAREABLE )
-        rc = pcd_copy_to_client(cmfn, pgp);
-    else
-#endif
     if ( pgp->size != 0 )
     {
         rc = tmem_decompress_to_client(cmfn, pgp->cdata, pgp->size, clibuf);
@@ -2402,44 +2075,12 @@ static int __init init_tmem(void)
     if ( !tmem_enabled() )
         return 0;
 
-#ifdef CONFIG_TMEM_DEDUP
-    if ( tmem_dedup_enabled() )
-    {
-        unsigned int i;
-
-        for (i = 0; i < 256; i++ )
-        {
-            pcd_tree_roots[i] = RB_ROOT;
-            rwlock_init(&pcd_tree_rwlocks[i]);
-        }
-    }
-#endif
     if ( !tmem_mempool_init() )
         return 0;
 
     if ( tmem_init() )
     {
-        printk("tmem: initialized comp=%d dedup=%d tze=%d\n",
-               tmem_compression_enabled(),
-#ifdef CONFIG_TMEM_DEDUP
-               tmem_dedup_enabled(),
-#else
-               0,
-#endif
-#ifdef CONFIG_TMEM_TZE
-               tmem_tze_enabled()
-#else
-               0
-#endif
-            );
-
-#if defined(CONFIG_TMEM_DEDUP) && defined(CONFIG_TMEM_TZE)
-        if ( tmem_dedup_enabled()&&tmem_compression_enabled()&&tmem_tze_enabled() )
-        {
-            tmem_tze_disable();
-            printk("tmem: tze and compression not compatible, disabling tze\n");
-        }
-#endif
+        printk("tmem: initialized comp=%d\n", tmem_compression_enabled());
         tmem_initialized = 1;
     }
     else
diff --git a/xen/common/tmem_control.c b/xen/common/tmem_control.c
index 565e50b..cda327b 100644
--- a/xen/common/tmem_control.c
+++ b/xen/common/tmem_control.c
@@ -274,15 +274,6 @@ static int __tmemc_set_var(struct client *client, uint32_t subop, uint32_t arg1)
         atomic_add(client->weight,&tmem_global.client_weight_total);
         break;
     case XEN_SYSCTL_TMEM_OP_SET_COMPRESS:
-#ifdef CONFIG_TMEM_DEDUP
-        if ( tmem_dedup_enabled() )
-        {
-            tmem_client_warn("tmem: compression %s for all %ss, cannot be changed when tmem_dedup is enabled\n",
-                            tmem_compression_enabled() ? "enabled" : "disabled",
-                            tmem_client_str);
-            return -1;
-        }
-#endif
         client->compress = arg1 ? 1 : 0;
         tmem_client_info("tmem: compression %s for %s=%d\n",
             arg1 ? "enabled" : "disabled",tmem_cli_id_str,cli_id);
diff --git a/xen/common/tmem_xen.c b/xen/common/tmem_xen.c
index d42deef..84ae7fd 100644
--- a/xen/common/tmem_xen.c
+++ b/xen/common/tmem_xen.c
@@ -20,16 +20,6 @@ boolean_param("tmem", opt_tmem);
 bool_t __read_mostly opt_tmem_compress = 0;
 boolean_param("tmem_compress", opt_tmem_compress);
 
-#ifdef CONFIG_TMEM_DEDUP
-bool_t __read_mostly opt_tmem_dedup = 0;
-boolean_param("tmem_dedup", opt_tmem_dedup);
-#endif
-
-#ifdef CONFIG_TMEM_TZE
-bool_t __read_mostly opt_tmem_tze = 0;
-boolean_param("tmem_tze", opt_tmem_tze);
-#endif
-
 bool_t __read_mostly opt_tmem_shared_auth = 0;
 boolean_param("tmem_shared_auth", opt_tmem_shared_auth);
 
@@ -220,30 +210,6 @@ int tmem_decompress_to_client(xen_pfn_t cmfn, void *tmem_va,
     return 1;
 }
 
-#ifdef CONFIG_TMEM_TZE
-int tmem_copy_tze_to_client(xen_pfn_t cmfn, void *tmem_va,
-                                    pagesize_t len)
-{
-    void *cli_va;
-    unsigned long cli_mfn;
-    struct page_info *cli_pfp = NULL;
-
-    ASSERT(!(len & (sizeof(uint64_t)-1)));
-    ASSERT(len <= PAGE_SIZE);
-    ASSERT(len > 0 || tmem_va == NULL);
-    cli_va = cli_get_page(cmfn, &cli_mfn, &cli_pfp, 1);
-    if ( cli_va == NULL )
-        return -EFAULT;
-    if ( len > 0 )
-        memcpy((char *)cli_va,(char *)tmem_va,len);
-    if ( len < PAGE_SIZE )
-        memset((char *)cli_va+len,0,PAGE_SIZE-len);
-    cli_put_page(cli_va, cli_pfp, cli_mfn, 1);
-    smp_mb();
-    return 1;
-}
-#endif
-
 /******************  XEN-SPECIFIC HOST INITIALIZATION ********************/
 static int dstmem_order, workmem_order;
 
diff --git a/xen/include/xen/tmem_xen.h b/xen/include/xen/tmem_xen.h
index 3be8001..7a1bb03 100644
--- a/xen/include/xen/tmem_xen.h
+++ b/xen/include/xen/tmem_xen.h
@@ -41,27 +41,6 @@ static inline bool_t tmem_compression_enabled(void)
     return opt_tmem_compress;
 }
 
-#ifdef CONFIG_TMEM_DEDUP
-extern bool_t opt_tmem_dedup;
-static inline bool_t tmem_dedup_enabled(void)
-{
-    return opt_tmem_dedup;
-}
-#endif
-
-#ifdef CONFIG_TMEM_TZE
-extern bool_t opt_tmem_tze;
-static inline bool_t tmem_tze_enabled(void)
-{
-    return opt_tmem_tze;
-}
-
-static inline void tmem_tze_disable(void)
-{
-    opt_tmem_tze = 0;
-}
-#endif
-
 extern bool_t opt_tmem_shared_auth;
 static inline bool_t tmem_shared_auth(void)
 {
@@ -196,105 +175,6 @@ static inline struct client *tmem_client_from_cli_id(domid_t cli_id)
     return c;
 }
 
-#ifdef CONFIG_TMEM_DEDUP
-static inline uint8_t tmem_get_first_byte(struct page_info *pfp)
-{
-    const uint8_t *p = __map_domain_page(pfp);
-    uint8_t byte = p[0];
-
-    unmap_domain_page(p);
-
-    return byte;
-}
-
-static inline int tmem_page_cmp(struct page_info *pfp1, struct page_info *pfp2)
-{
-    const uint64_t *p1 = __map_domain_page(pfp1);
-    const uint64_t *p2 = __map_domain_page(pfp2);
-    int rc = memcmp(p1, p2, PAGE_SIZE);
-
-    unmap_domain_page(p2);
-    unmap_domain_page(p1);
-
-    return rc;
-}
-
-static inline int tmem_pcd_cmp(void *va1, pagesize_t len1, void *va2, pagesize_t len2)
-{
-    const char *p1 = (char *)va1;
-    const char *p2 = (char *)va2;
-    pagesize_t i;
-
-    ASSERT(len1 <= PAGE_SIZE);
-    ASSERT(len2 <= PAGE_SIZE);
-    if ( len1 < len2 )
-        return -1;
-    if ( len1 > len2 )
-        return 1;
-    ASSERT(len1 == len2);
-    for ( i = len2; i && *p1 == *p2; i--, p1++, p2++ );
-    if ( !i )
-        return 0;
-    if ( *p1 < *p2 )
-        return -1;
-    return 1;
-}
-
-static inline int tmem_tze_pfp_cmp(struct page_info *pfp1, pagesize_t pfp_len,
-                                   void *tva, const pagesize_t tze_len)
-{
-    const uint64_t *p1 = __map_domain_page(pfp1);
-    const uint64_t *p2 = tze_len == PAGE_SIZE ?
-        __map_domain_page((struct page_info *)tva) : tva;
-    int rc;
-
-    ASSERT(pfp_len <= PAGE_SIZE);
-    ASSERT(!(pfp_len & (sizeof(uint64_t)-1)));
-    ASSERT(tze_len <= PAGE_SIZE);
-    ASSERT(!(tze_len & (sizeof(uint64_t)-1)));
-    if ( pfp_len < tze_len )
-        rc = -1;
-    else if ( pfp_len > tze_len )
-        rc = 1;
-    else
-        rc = memcmp(p1, p2, tze_len);
-
-    if ( tze_len == PAGE_SIZE )
-        unmap_domain_page(p2);
-    unmap_domain_page(p1);
-
-    return rc;
-}
-
-/* return the size of the data in the pfp, ignoring trailing zeroes and
- * rounded up to the nearest multiple of 8 */
-static inline pagesize_t tmem_tze_pfp_scan(struct page_info *pfp)
-{
-    const uint64_t *const page = __map_domain_page(pfp);
-    const uint64_t *p = page;
-    pagesize_t bytecount = PAGE_SIZE;
-    pagesize_t len = PAGE_SIZE/sizeof(uint64_t);
-
-    p += len;
-    while ( len-- && !*--p )
-        bytecount -= sizeof(uint64_t);
-
-    unmap_domain_page(page);
-
-    return bytecount;
-}
-
-static inline void tmem_tze_copy_from_pfp(void *tva, struct page_info *pfp, pagesize_t len)
-{
-    const uint64_t *p = __map_domain_page(pfp);
-
-    ASSERT(!(len & (sizeof(uint64_t)-1)));
-    memcpy(tva, p, len);
-
-    unmap_domain_page(p);
-}
-#endif
-
 /* these typedefs are in the public/tmem.h interface
 typedef XEN_GUEST_HANDLE(void) cli_mfn_t;
 typedef XEN_GUEST_HANDLE(char) cli_va_t;
@@ -342,9 +222,6 @@ int tmem_compress_from_client(xen_pfn_t, void **, size_t *,
 
 int tmem_copy_from_client(struct page_info *, xen_pfn_t, tmem_cli_va_param_t);
 int tmem_copy_to_client(xen_pfn_t, struct page_info *, tmem_cli_va_param_t);
-#ifdef CONFIG_TMEM_TZE
-extern int tmem_copy_tze_to_client(xen_pfn_t cmfn, void *tmem_va, pagesize_t len);
-#endif
 
 #define tmem_client_err(fmt, args...)  printk(XENLOG_G_ERR fmt, ##args)
 #define tmem_client_warn(fmt, args...) printk(XENLOG_G_WARNING fmt, ##args)
-- 
2.4.11


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


* [PATCH v1 06/12] tmem: Move client weight, frozen, live_migrating, and compress
  2016-09-28  9:42 [PATCH v1] Tmem cleanups/improvements for v4.8 Konrad Rzeszutek Wilk
                   ` (4 preceding siblings ...)
  2016-09-28  9:42 ` [PATCH v1 05/12] tmem: Delete deduplication (and tze) code Konrad Rzeszutek Wilk
@ 2016-09-28  9:42 ` Konrad Rzeszutek Wilk
  2016-09-28 12:39   ` Jan Beulich
  2016-09-28  9:42 ` [PATCH v1 07/12] tmem/libxc: Squash XEN_SYSCTL_TMEM_OP_[SET|SAVE] Konrad Rzeszutek Wilk
                   ` (5 subsequent siblings)
  11 siblings, 1 reply; 37+ messages in thread
From: Konrad Rzeszutek Wilk @ 2016-09-28  9:42 UTC (permalink / raw)
  To: xen-devel, konrad; +Cc: Konrad Rzeszutek Wilk

into their own structure. This paves the way to retrieving/setting
all of this information with a single hypercall instead of multiple
ones.

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
v1: First submission.
---
 xen/common/tmem.c           | 40 +++++++++++++++++++++-------------------
 xen/common/tmem_control.c   | 18 +++++++++---------
 xen/include/public/sysctl.h | 14 ++++++++++++++
 xen/include/xen/tmem_xen.h  |  6 +-----
 4 files changed, 45 insertions(+), 33 deletions(-)

diff --git a/xen/common/tmem.c b/xen/common/tmem.c
index cf5271c..27164ce 100644
--- a/xen/common/tmem.c
+++ b/xen/common/tmem.c
@@ -333,7 +333,7 @@ static void pgp_free(struct tmem_page_descriptor *pgp)
     atomic_dec(&pool->pgp_count);
     ASSERT(_atomic_read(pool->pgp_count) >= 0);
     pgp->size = -1;
-    if ( is_persistent(pool) && pool->client->live_migrating )
+    if ( is_persistent(pool) && pool->client->info.flags.u.migrating )
     {
         pgp->inv_oid = pgp->us.obj->oid;
         pgp->pool_id = pool->pool_id;
@@ -370,7 +370,7 @@ static void pgp_delist_free(struct tmem_page_descriptor *pgp)
     }
     else
     {
-        if ( client->live_migrating )
+        if ( client->info.flags.u.migrating )
         {
             spin_lock(&pers_lists_spinlock);
             list_add_tail(&pgp->client_inv_pages,
@@ -791,7 +791,7 @@ static void pool_flush(struct tmem_pool *pool, domid_t cli_id)
                     is_persistent(pool) ? "persistent" : "ephemeral" ,
                     is_shared(pool) ? "shared" : "private",
                     tmem_cli_id_str, pool->client->cli_id, pool->pool_id);
-    if ( pool->client->live_migrating )
+    if ( pool->client->info.flags.u.migrating )
     {
         tmem_client_warn("can't destroy pool while %s is live-migrating\n",
                     tmem_client_str);
@@ -843,7 +843,9 @@ static struct client *client_create(domid_t cli_id)
     rcu_unlock_domain(d);
 
     client->cli_id = cli_id;
-    client->compress = tmem_compression_enabled();
+    client->info.version = TMEM_SPEC_VERSION;
+    client->info.maxpools = MAX_POOLS_PER_DOMAIN;
+    client->info.flags.u.compress = tmem_compression_enabled();
     client->shared_auth_required = tmem_shared_auth();
     for ( i = 0; i < MAX_GLOBAL_SHARED_POOLS; i++)
         client->shared_auth_uuid[i][0] =
@@ -887,11 +889,11 @@ static bool_t client_over_quota(struct client *client)
     int total = _atomic_read(tmem_global.client_weight_total);
 
     ASSERT(client != NULL);
-    if ( (total == 0) || (client->weight == 0) ||
+    if ( (total == 0) || (client->info.weight == 0) ||
           (client->eph_count == 0) )
         return 0;
     return ( ((tmem_global.eph_count*100L) / client->eph_count ) >
-             ((total*100L) / client->weight) );
+             ((total*100L) / client->info.weight) );
 }
 
 /************ MEMORY REVOCATION ROUTINES *******************************/
@@ -1067,10 +1069,10 @@ static int do_tmem_dup_put(struct tmem_page_descriptor *pgp, xen_pfn_t cmfn,
     pool = obj->pool;
     ASSERT(pool != NULL);
     client = pool->client;
-    if ( client->live_migrating )
+    if ( client->info.flags.u.migrating )
         goto failed_dup; /* No dups allowed when migrating. */
     /* Can we successfully manipulate pgp to change out the data? */
-    if ( client->compress && pgp->size != 0 )
+    if ( client->info.flags.u.compress && pgp->size != 0 )
     {
         ret = do_tmem_put_compress(pgp, cmfn, clibuf);
         if ( ret == 1 )
@@ -1142,7 +1144,7 @@ static int do_tmem_put(struct tmem_pool *pool,
     ASSERT(pool != NULL);
     client = pool->client;
     ASSERT(client != NULL);
-    ret = client->frozen ? -EFROZEN : -ENOMEM;
+    ret = client->info.flags.u.frozen ? -EFROZEN : -ENOMEM;
     pool->puts++;
 
 refind:
@@ -1156,14 +1158,14 @@ refind:
         else
         {
             /* No puts allowed into a frozen pool (except dup puts). */
-            if ( client->frozen )
+            if ( client->info.flags.u.frozen )
                 goto unlock_obj;
         }
     }
     else
     {
         /* No puts allowed into a frozen pool (except dup puts). */
-        if ( client->frozen )
+        if ( client->info.flags.u.frozen )
             return ret;
         if ( (obj = obj_alloc(pool, oidp)) == NULL )
             return -ENOMEM;
@@ -1198,7 +1200,7 @@ refind:
     pgp->index = index;
     pgp->size = 0;
 
-    if ( client->compress )
+    if ( client->info.flags.u.compress )
     {
         ASSERT(pgp->pfp == NULL);
         ret = do_tmem_put_compress(pgp, cmfn, clibuf);
@@ -1390,7 +1392,7 @@ static int do_tmem_flush_page(struct tmem_pool *pool,
     pool->flushs_found++;
 
 out:
-    if ( pool->client->frozen )
+    if ( pool->client->info.flags.u.frozen )
         return -EFROZEN;
     else
         return 1;
@@ -1411,7 +1413,7 @@ static int do_tmem_flush_object(struct tmem_pool *pool,
     write_unlock(&pool->pool_rwlock);
 
 out:
-    if ( pool->client->frozen )
+    if ( pool->client->info.flags.u.frozen )
         return -EFROZEN;
     else
         return 1;
@@ -1669,10 +1671,10 @@ static int tmemc_save_subop(int cli_id, uint32_t pool_id,
             rc = 0;
             break;
         }
-        client->was_frozen = client->frozen;
-        client->frozen = 1;
+        client->was_frozen = client->info.flags.u.frozen;
+        client->info.flags.u.frozen = 1;
         if ( arg1 != 0 )
-            client->live_migrating = 1;
+            client->info.flags.u.migrating = 1;
         rc = 1;
         break;
     case XEN_SYSCTL_TMEM_OP_RESTORE_BEGIN:
@@ -1682,12 +1684,12 @@ static int tmemc_save_subop(int cli_id, uint32_t pool_id,
     case XEN_SYSCTL_TMEM_OP_SAVE_END:
         if ( client == NULL )
             break;
-        client->live_migrating = 0;
+        client->info.flags.u.migrating = 0;
         if ( !list_empty(&client->persistent_invalidated_list) )
             list_for_each_entry_safe(pgp,pgp2,
               &client->persistent_invalidated_list, client_inv_pages)
                 __pgp_free(pgp, client->pools[pgp->pool_id]);
-        client->frozen = client->was_frozen;
+        client->info.flags.u.frozen = client->was_frozen;
         rc = 0;
         break;
     }
diff --git a/xen/common/tmem_control.c b/xen/common/tmem_control.c
index cda327b..ba003a8 100644
--- a/xen/common/tmem_control.c
+++ b/xen/common/tmem_control.c
@@ -27,14 +27,14 @@ static int tmemc_freeze_pools(domid_t cli_id, int arg)
     if ( cli_id == TMEM_CLI_ID_NULL )
     {
         list_for_each_entry(client,&tmem_global.client_list,client_list)
-            client->frozen = freeze;
+            client->info.flags.u.frozen = freeze;
         tmem_client_info("tmem: all pools %s for all %ss\n", s, tmem_client_str);
     }
     else
     {
         if ( (client = tmem_client_from_cli_id(cli_id)) == NULL)
             return -1;
-        client->frozen = freeze;
+        client->info.flags.u.frozen = freeze;
         tmem_client_info("tmem: all pools %s for %s=%d\n",
                          s, tmem_cli_id_str, cli_id);
     }
@@ -105,7 +105,7 @@ static int tmemc_list_client(struct client *c, tmem_cli_va_param_t buf,
 
     n = scnprintf(info,BSIZE,"C=CI:%d,ww:%d,co:%d,fr:%d,"
         "Tc:%"PRIu64",Ge:%ld,Pp:%ld,Gp:%ld%c",
-        c->cli_id, c->weight, c->compress, c->frozen,
+        c->cli_id, c->info.weight, c->info.flags.u.compress, c->info.flags.u.frozen,
         c->total_cycles, c->succ_eph_gets, c->succ_pers_puts, c->succ_pers_gets,
         use_long ? ',' : '\n');
     if (use_long)
@@ -266,15 +266,15 @@ static int __tmemc_set_var(struct client *client, uint32_t subop, uint32_t arg1)
     switch (subop)
     {
     case XEN_SYSCTL_TMEM_OP_SET_WEIGHT:
-        old_weight = client->weight;
-        client->weight = arg1;
+        old_weight = client->info.weight;
+        client->info.weight = arg1;
         tmem_client_info("tmem: weight set to %d for %s=%d\n",
                         arg1, tmem_cli_id_str, cli_id);
         atomic_sub(old_weight,&tmem_global.client_weight_total);
-        atomic_add(client->weight,&tmem_global.client_weight_total);
+        atomic_add(client->info.weight,&tmem_global.client_weight_total);
         break;
     case XEN_SYSCTL_TMEM_OP_SET_COMPRESS:
-        client->compress = arg1 ? 1 : 0;
+        client->info.flags.u.compress = arg1 ? 1 : 0;
         tmem_client_info("tmem: compression %s for %s=%d\n",
             arg1 ? "enabled" : "disabled",tmem_cli_id_str,cli_id);
         break;
@@ -327,12 +327,12 @@ static int tmemc_save_subop(int cli_id, uint32_t pool_id,
     case XEN_SYSCTL_TMEM_OP_SAVE_GET_CLIENT_WEIGHT:
         if ( client == NULL )
             break;
-        rc = client->weight == -1 ? -2 : client->weight;
+        rc = client->info.weight == -1 ? -2 : client->info.weight;
         break;
     case XEN_SYSCTL_TMEM_OP_SAVE_GET_CLIENT_FLAGS:
         if ( client == NULL )
             break;
-        rc = (client->compress ? TMEM_CLIENT_COMPRESS : 0 ) |
+        rc = (client->info.flags.u.compress ? TMEM_CLIENT_COMPRESS : 0 ) |
              (client->was_frozen ? TMEM_CLIENT_FROZEN : 0 );
         break;
     case XEN_SYSCTL_TMEM_OP_SAVE_GET_POOL_FLAGS:
diff --git a/xen/include/public/sysctl.h b/xen/include/public/sysctl.h
index 847ec35..a29a8fa 100644
--- a/xen/include/public/sysctl.h
+++ b/xen/include/public/sysctl.h
@@ -785,6 +785,20 @@ struct tmem_handle {
     xen_tmem_oid_t oid;
 };
 
+struct tmem_client {
+    uint32_t version;
+    uint32_t maxpools;
+    union {             /* See TMEM_CLIENT_[COMPRESS,FROZEN] */
+        uint32_t raw;
+        struct {
+            uint8_t frozen:1,
+                    compress:1,
+                    migrating:1;
+        } u;
+    } flags;
+    uint32_t weight;
+};
+
 struct xen_sysctl_tmem_op {
     uint32_t cmd;       /* IN: XEN_SYSCTL_TMEM_OP_* . */
     int32_t pool_id;    /* IN: 0 by default unless _SAVE_*, RESTORE_* .*/
diff --git a/xen/include/xen/tmem_xen.h b/xen/include/xen/tmem_xen.h
index 7a1bb03..c0e776c 100644
--- a/xen/include/xen/tmem_xen.h
+++ b/xen/include/xen/tmem_xen.h
@@ -293,13 +293,9 @@ struct client {
     struct list_head ephemeral_page_list;
     long eph_count, eph_count_max;
     domid_t cli_id;
-    uint32_t weight;
-    uint32_t cap;
-    bool_t compress;
-    bool_t frozen;
+    struct tmem_client info;
     bool_t shared_auth_required;
     /* For save/restore/migration. */
-    bool_t live_migrating;
     bool_t was_frozen;
     struct list_head persistent_invalidated_list;
     struct tmem_page_descriptor *cur_pgp;
-- 
2.4.11


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply related	[flat|nested] 37+ messages in thread

* [PATCH v1 07/12] tmem/libxc: Squash XEN_SYSCTL_TMEM_OP_[SET|SAVE]..
  2016-09-28  9:42 [PATCH v1] Tmem cleanups/improvements for v4.8 Konrad Rzeszutek Wilk
                   ` (5 preceding siblings ...)
  2016-09-28  9:42 ` [PATCH v1 06/12] tmem: Move client weight, frozen, live_migrating, and compress Konrad Rzeszutek Wilk
@ 2016-09-28  9:42 ` Konrad Rzeszutek Wilk
  2016-09-28 11:06   ` Wei Liu
  2016-09-28 12:50   ` Jan Beulich
  2016-09-28  9:42 ` [PATCH v1 08/12] tmem: Handle 'struct tmem_info' as a seperate field in the Konrad Rzeszutek Wilk
                   ` (4 subsequent siblings)
  11 siblings, 2 replies; 37+ messages in thread
From: Konrad Rzeszutek Wilk @ 2016-09-28  9:42 UTC (permalink / raw)
  To: xen-devel, konrad; +Cc: Wei Liu, Ian Jackson, Konrad Rzeszutek Wilk

Specifically:

XEN_SYSCTL_TMEM_OP_SET_[WEIGHT,COMPRESS] are now done via:

 XEN_SYSCTL_TMEM_SET_CLIENT_INFO

and XEN_SYSCTL_TMEM_OP_SAVE_GET_[VERSION,MAXPOOLS,
CLIENT_WEIGHT, CLIENT_FLAGS] can now be retrieved via:

 XEN_SYSCTL_TMEM_GET_CLIENT_INFO

All this information is now in 'struct tmem_client' and
that is what we pass around.
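
For reference, the layout being passed around can be sketched in
standalone C (a mock mirroring the struct this patch adds to sysctl.h;
the *_SKETCH constants and the legacy_flags() helper are illustrative,
not the real ABI):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative stand-ins for the legacy flag bits; the real values
 * live in the Xen public headers. */
#define TMEM_CLIENT_COMPRESS_SKETCH 0x1
#define TMEM_CLIENT_FROZEN_SKETCH   0x2

/* Standalone mirror of the 'struct tmem_client' this patch passes
 * around: the flags word can be read raw or bit by bit. */
struct tmem_client_sketch {
    uint32_t version;
    uint32_t maxpools;
    union {
        uint32_t raw;
        struct {
            uint8_t frozen:1,
                    compress:1,
                    migrating:1;
        } u;
    } flags;
    uint32_t weight;
};

/* Re-derive the old SAVE_GET_CLIENT_FLAGS word from the new struct. */
static uint32_t legacy_flags(const struct tmem_client_sketch *c)
{
    return (c->flags.u.compress ? TMEM_CLIENT_COMPRESS_SKETCH : 0) |
           (c->flags.u.frozen   ? TMEM_CLIENT_FROZEN_SKETCH   : 0);
}
```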

On the hypervisor side we had to do a bit of casting. The
'struct xen_sysctl_tmem_op'->buf variable is a pointer to
char. Casting that with guest_handle_cast directly to a
structure (struct tmem_client) does not work. Hence we first
have to cast it to void and then to xen_sysctl_tmem_client_t.
This could be improved by using a union, and in fact the
patch titled
"tmem: Handle 'struct tmem_info' as a separate field in the"
does that. It could be squashed in here, but that would make
the patch harder to review.
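
The cast dance described above can be sketched outside of Xen like
this (a minimal mock in which guest handles are reduced to plain
pointers; it only shows why the intermediate void step satisfies the
compiler):

```c
#include <assert.h>
#include <stdint.h>

struct client_info_sketch {
    uint32_t version;
    uint32_t weight;
};

/* Stand-in for xen_sysctl_tmem_op: the payload is an untyped
 * byte buffer, like XEN_GUEST_HANDLE_64(char) buf. */
struct op_sketch {
    char *buf;
};

/* Mimics the two-step cast: widen the char handle to void first,
 * then narrow it to the typed handle. With real guest handles a
 * direct char-to-struct cast is rejected. */
static struct client_info_sketch *client_from_buf(struct op_sketch *op)
{
    void *untyped = op->buf;                  /* char* -> void* */
    return (struct client_info_sketch *)untyped; /* void* -> typed */
}
```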

In the toolstack, prior to this patch, xc_tmem_control
would use the bounce buffer only when arg1 was set and the cmd
was LIST. With 'XEN_SYSCTL_TMEM_OP_SET_[WEIGHT|COMPRESS]'
that made sense, as 'arg1' carried the value to set. However,
for the other subcommands (say XEN_SYSCTL_TMEM_OP_SAVE_GET_POOL_UUID)
'arg1' is the length of 'buf'. If this is confusing, don't
despair: the patch titled
"tmem/xc_tmem_control: Rename 'arg1' to 'len' and 'arg2' to arg."
takes care of that.

The astute reader of the toolstack code will discover that
we only used the bounce buffer for LIST, not for any other
subcommand that used 'buf'! Which means that the contents
of 'buf' would never be copied back to the caller's 'buf'!

I am not sure how this could possibly have worked; perhaps Xen 4.1
(when this was introduced) was more relaxed about the bounce buffer
being enabled. Anyhow, this patch fixes xc_tmem_control to bounce
the buffer for any subcommand that has 'arg1' set.
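
The data flow being fixed can be sketched with a toy bounce buffer
(mock functions standing in for xc_hypercall_bounce_pre/post; the
point is that the copy-back must happen for every subcommand that
passes a length, not only LIST):

```c
#include <assert.h>
#include <string.h>

/* Toy hypercall-safe buffer standing in for the bounce buffer that
 * libxc allocates in xc_hypercall_bounce_pre(). */
static char bounce[64];

static int bounce_pre(const void *caller_buf, size_t len)
{
    if (len > sizeof(bounce))
        return -1;
    memcpy(bounce, caller_buf, len);     /* copy in before the call */
    return 0;
}

static void bounce_post(void *caller_buf, size_t len)
{
    memcpy(caller_buf, bounce, len);     /* copy back after the call */
}

/* Fake "hypervisor" writing its reply into the safe buffer, as the
 * SAVE_GET_* subcommands do with 'buf'. */
static void fake_sysctl(void)
{
    memcpy(bounce, "reply", 6);
}
```

Skipping bounce_post(), as the old code did for non-LIST subcommands,
would leave the caller's buffer untouched.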

Lastly, some of the checks in xc_tmem_[restore|save] are removed
as they can never be reached (I am not even sure how they could
have been reached in the original submission). One of them
is the check of the weight against -1, when in fact the
hypervisor would never have provided that value.

Now the checks are simple, as the hypercall always returns
->version and ->maxpools (mirroring how it was done
prior to this patch). But if one wants to check whether a guest
has any tmem activity, the patch titled
"tmem: Batch and squash XEN_SYSCTL_TMEM_OP_SAVE_GET_POOL_
[FLAGS,NPAGES,UUID] in one sub-call: XEN_SYSCTL_TMEM_OP_GET_POOLS."
adds an ->nr_pools to check for that.

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Wei Liu <wei.liu2@citrix.com>

v1: New submission.
---
 tools/libxc/xc_tmem.c             | 72 ++++++++++++----------------
 tools/libxl/libxl.c               | 26 +++++++---
 tools/python/xen/lowlevel/xc/xc.c |  2 -
 xen/common/tmem_control.c         | 99 ++++++++++++++++++++++-----------------
 xen/include/public/sysctl.h       | 18 ++++---
 xen/include/xen/tmem_xen.h        |  2 +-
 6 files changed, 117 insertions(+), 102 deletions(-)

diff --git a/tools/libxc/xc_tmem.c b/tools/libxc/xc_tmem.c
index 24c8b43..74d2d85 100644
--- a/tools/libxc/xc_tmem.c
+++ b/tools/libxc/xc_tmem.c
@@ -18,6 +18,7 @@
  */
 
 #include "xc_private.h"
+#include <assert.h>
 #include <xen/tmem.h>
 
 static int do_tmem_op(xc_interface *xch, tmem_op_t *op)
@@ -67,7 +68,12 @@ int xc_tmem_control(xc_interface *xch,
     sysctl.u.tmem_op.oid.oid[1] = 0;
     sysctl.u.tmem_op.oid.oid[2] = 0;
 
-    if ( cmd == XEN_SYSCTL_TMEM_OP_LIST && arg1 != 0 )
+    if ( cmd == XEN_SYSCTL_TMEM_OP_SET_CLIENT_INFO )
+    {
+        HYPERCALL_BOUNCE_SET_DIR(buf, XC_HYPERCALL_BUFFER_BOUNCE_IN);
+    }
+
+    if ( arg1 )
     {
         if ( buf == NULL )
         {
@@ -85,8 +91,8 @@ int xc_tmem_control(xc_interface *xch,
 
     rc = do_sysctl(xch, &sysctl);
 
-    if ( cmd == XEN_SYSCTL_TMEM_OP_LIST && arg1 != 0 )
-            xc_hypercall_bounce_post(xch, buf);
+    if ( arg1 )
+        xc_hypercall_bounce_post(xch, buf);
 
     return rc;
 }
@@ -211,38 +217,29 @@ int xc_tmem_save(xc_interface *xch,
 {
     int marker = field_marker;
     int i, j;
-    uint32_t max_pools, version;
-    uint32_t weight, flags;
-    uint32_t pool_id;
+    uint32_t flags;
     uint32_t minusone = -1;
+    uint32_t pool_id;
     struct tmem_handle *h;
+    xen_sysctl_tmem_client_t info;
 
     if ( xc_tmem_control(xch,0,XEN_SYSCTL_TMEM_OP_SAVE_BEGIN,dom,live,0,NULL) <= 0 )
         return 0;
 
     if ( write_exact(io_fd, &marker, sizeof(marker)) )
         return -1;
-    version = xc_tmem_control(xch,0,XEN_SYSCTL_TMEM_OP_SAVE_GET_VERSION,0,0,0,NULL);
-    if ( write_exact(io_fd, &version, sizeof(version)) )
-        return -1;
-    max_pools = xc_tmem_control(xch,0,XEN_SYSCTL_TMEM_OP_SAVE_GET_MAXPOOLS,0,0,0,NULL);
-    if ( write_exact(io_fd, &max_pools, sizeof(max_pools)) )
-        return -1;
-    if ( version == -1 || max_pools == -1 )
-        return -1;
-    if ( write_exact(io_fd, &minusone, sizeof(minusone)) )
-        return -1;
-    flags = xc_tmem_control(xch,0,XEN_SYSCTL_TMEM_OP_SAVE_GET_CLIENT_FLAGS,dom,0,0,NULL);
-    if ( write_exact(io_fd, &flags, sizeof(flags)) )
-        return -1;
-    weight = xc_tmem_control(xch,0,XEN_SYSCTL_TMEM_OP_SAVE_GET_CLIENT_WEIGHT,dom,0,0,NULL);
-    if ( write_exact(io_fd, &weight, sizeof(weight)) )
+
+    if ( xc_tmem_control(xch, 0 /* pool_id */,
+                         XEN_SYSCTL_TMEM_OP_GET_CLIENT_INFO,
+                         dom /* cli_id */, sizeof(info) /* arg1 */, 0 /* arg2 */,
+                         &info) < 0 )
         return -1;
-    if ( flags == -1 || weight == -1 )
+
+    if ( write_exact(io_fd, &info, sizeof(info)) )
         return -1;
     if ( write_exact(io_fd, &minusone, sizeof(minusone)) )
         return -1;
-    for ( i = 0; i < max_pools; i++ )
+    for ( i = 0; i < info.maxpools; i++ )
     {
         uint64_t uuid[2];
         uint32_t n_pages;
@@ -378,35 +375,24 @@ static int xc_tmem_restore_new_pool(
 
 int xc_tmem_restore(xc_interface *xch, int dom, int io_fd)
 {
-    uint32_t this_max_pools, this_version;
     uint32_t pool_id;
     uint32_t minusone;
-    uint32_t weight, flags;
+    uint32_t flags;
+    xen_sysctl_tmem_client_t info;
     int checksum = 0;
 
-    if ( read_exact(io_fd, &this_version, sizeof(this_version)) )
-        return -1;
-    if ( read_exact(io_fd, &this_max_pools, sizeof(this_max_pools)) )
+    if ( read_exact(io_fd, &info, sizeof(info)) )
         return -1;
     /* FIXME check here to ensure no version mismatch or maxpools mismatch */
-    if ( read_exact(io_fd, &minusone, sizeof(minusone)) )
-        return -1;
-    if ( minusone != -1 )
-        return -1;
     if ( xc_tmem_control(xch,0,XEN_SYSCTL_TMEM_OP_RESTORE_BEGIN,dom,0,0,NULL) < 0 )
         return -1;
-    if ( read_exact(io_fd, &flags, sizeof(flags)) )
-        return -1;
-    if ( flags & TMEM_CLIENT_COMPRESS )
-        if ( xc_tmem_control(xch,0,XEN_SYSCTL_TMEM_OP_SET_COMPRESS,dom,1,0,NULL) < 0 )
-            return -1;
-    if ( flags & TMEM_CLIENT_FROZEN )
-        if ( xc_tmem_control(xch,0,XEN_SYSCTL_TMEM_OP_FREEZE,dom,0,0,NULL) < 0 )
-            return -1;
-    if ( read_exact(io_fd, &weight, sizeof(weight)) )
-        return -1;
-    if ( xc_tmem_control(xch,0,XEN_SYSCTL_TMEM_OP_SET_WEIGHT,dom,0,0,NULL) < 0 )
+
+    if ( xc_tmem_control(xch, 0 /* pool_id */,
+                         XEN_SYSCTL_TMEM_OP_SET_CLIENT_INFO,
+                         dom /* cli_id */, sizeof(info) /* arg1 */, 0 /* arg2 */,
+                         &info) < 0 )
         return -1;
+
     if ( read_exact(io_fd, &minusone, sizeof(minusone)) )
         return -1;
     while ( read_exact(io_fd, &pool_id, sizeof(pool_id)) == 0 && pool_id != -1 )
diff --git a/tools/libxl/libxl.c b/tools/libxl/libxl.c
index e3ee49c..2f50e61 100644
--- a/tools/libxl/libxl.c
+++ b/tools/libxl/libxl.c
@@ -6061,28 +6061,42 @@ out:
     return rc;
 }
 
-static int32_t tmem_setop_from_string(char *set_name)
+static int32_t tmem_setop_from_string(char *set_name, uint32_t val,
+                                      struct tmem_client *info)
 {
     if (!strcmp(set_name, "weight"))
-        return XEN_SYSCTL_TMEM_OP_SET_WEIGHT;
+        info->weight = val;
     else if (!strcmp(set_name, "compress"))
-        return XEN_SYSCTL_TMEM_OP_SET_COMPRESS;
+        info->flags.u.compress = val;
     else
         return -1;
+
+    return 0;
 }
 
 int libxl_tmem_set(libxl_ctx *ctx, uint32_t domid, char* name, uint32_t set)
 {
     int r, rc;
-    int32_t subop = tmem_setop_from_string(name);
+    struct tmem_client info;
     GC_INIT(ctx);
 
-    if (subop == -1) {
+    r = xc_tmem_control(ctx->xch, -1 /* pool_id */,
+                        XEN_SYSCTL_TMEM_OP_GET_CLIENT_INFO,
+                        domid, sizeof(info), 0 /* arg2 */, &info);
+    if (r < 0) {
+        LOGE(ERROR, "Can not get tmem data!");
+        rc = ERROR_FAIL;
+        goto out;
+    }
+    rc = tmem_setop_from_string(name, set, &info);
+    if (rc == -1) {
         LOGEV(ERROR, -1, "Invalid set, valid sets are <weight|compress>");
         rc = ERROR_INVAL;
         goto out;
     }
-    r = xc_tmem_control(ctx->xch, -1, subop, domid, set, 0, NULL);
+    r = xc_tmem_control(ctx->xch, -1 /* pool_id */,
+                        XEN_SYSCTL_TMEM_OP_SET_CLIENT_INFO,
+                        domid, sizeof(info), 0 /* arg2 */, &info);
     if (r < 0) {
         LOGE(ERROR, "Can not set tmem %s", name);
         rc = ERROR_FAIL;
diff --git a/tools/python/xen/lowlevel/xc/xc.c b/tools/python/xen/lowlevel/xc/xc.c
index 287f590..a117737 100644
--- a/tools/python/xen/lowlevel/xc/xc.c
+++ b/tools/python/xen/lowlevel/xc/xc.c
@@ -1640,8 +1640,6 @@ static PyObject *pyxc_tmem_control(XcObject *self,
         case XEN_SYSCTL_TMEM_OP_THAW:
         case XEN_SYSCTL_TMEM_OP_FREEZE:
         case XEN_SYSCTL_TMEM_OP_DESTROY:
-        case XEN_SYSCTL_TMEM_OP_SET_WEIGHT:
-        case XEN_SYSCTL_TMEM_OP_SET_COMPRESS:
         default:
             break;
     }
diff --git a/xen/common/tmem_control.c b/xen/common/tmem_control.c
index ba003a8..8bb9d48 100644
--- a/xen/common/tmem_control.c
+++ b/xen/common/tmem_control.c
@@ -258,43 +258,58 @@ static int tmemc_list(domid_t cli_id, tmem_cli_va_param_t buf, uint32_t len,
     return 0;
 }
 
-static int __tmemc_set_var(struct client *client, uint32_t subop, uint32_t arg1)
+static int __tmemc_set_var(struct client *client, uint32_t subop,
+                           XEN_GUEST_HANDLE_PARAM(xen_sysctl_tmem_client_t) buf)
 {
     domid_t cli_id = client->cli_id;
     uint32_t old_weight;
+    xen_sysctl_tmem_client_t info = { };
 
-    switch (subop)
+    ASSERT(client);
+
+    if ( subop != XEN_SYSCTL_TMEM_OP_SET_CLIENT_INFO )
+    {
+        tmem_client_warn("tmem: unknown subop %d for tmemc_set_var\n", subop);
+        return -1;
+    }
+    if ( copy_from_guest(&info, buf, 1) )
+        return -EFAULT;
+
+    if ( info.weight != client->info.weight )
     {
-    case XEN_SYSCTL_TMEM_OP_SET_WEIGHT:
         old_weight = client->info.weight;
-        client->info.weight = arg1;
+        client->info.weight = info.weight;
         tmem_client_info("tmem: weight set to %d for %s=%d\n",
-                        arg1, tmem_cli_id_str, cli_id);
+                         info.weight, tmem_cli_id_str, cli_id);
         atomic_sub(old_weight,&tmem_global.client_weight_total);
         atomic_add(client->info.weight,&tmem_global.client_weight_total);
-        break;
-    case XEN_SYSCTL_TMEM_OP_SET_COMPRESS:
-        client->info.flags.u.compress = arg1 ? 1 : 0;
+    }
+
+
+    if ( info.flags.u.compress != client->info.flags.u.compress )
+    {
+        client->info.flags.u.compress = info.flags.u.compress;
         tmem_client_info("tmem: compression %s for %s=%d\n",
-            arg1 ? "enabled" : "disabled",tmem_cli_id_str,cli_id);
-        break;
-    default:
-        tmem_client_warn("tmem: unknown subop %d for tmemc_set_var\n", subop);
-        return -1;
+                         info.flags.u.compress ? "enabled" : "disabled",
+                         tmem_cli_id_str,cli_id);
     }
     return 0;
 }
 
-static int tmemc_set_var(domid_t cli_id, uint32_t subop, uint32_t arg1)
+static int tmemc_set_var(domid_t cli_id, uint32_t subop,
+                         XEN_GUEST_HANDLE_PARAM(void) buf)
 {
     struct client *client;
-    int ret = -1;
+    int ret = -ENOENT;
+    XEN_GUEST_HANDLE_PARAM(xen_sysctl_tmem_client_t) info;
+
+    info = guest_handle_cast(buf, xen_sysctl_tmem_client_t);
 
     if ( cli_id == TMEM_CLI_ID_NULL )
     {
         list_for_each_entry(client,&tmem_global.client_list,client_list)
         {
-            ret =  __tmemc_set_var(client, subop, arg1);
+            ret =  __tmemc_set_var(client, subop, info);
             if (ret)
                 break;
         }
@@ -303,37 +318,38 @@ static int tmemc_set_var(domid_t cli_id, uint32_t subop, uint32_t arg1)
     {
         client = tmem_client_from_cli_id(cli_id);
         if ( client )
-            ret = __tmemc_set_var(client, subop, arg1);
+            ret = __tmemc_set_var(client, subop, info);
     }
     return ret;
 }
 
-static int tmemc_save_subop(int cli_id, uint32_t pool_id,
-                        uint32_t subop, tmem_cli_va_param_t buf, uint32_t arg1)
+static int tmemc_save_subop(int cli_id, uint32_t pool_id, uint32_t subop,
+                            XEN_GUEST_HANDLE_PARAM(void) buf, uint32_t arg1)
 {
     struct client *client = tmem_client_from_cli_id(cli_id);
     struct tmem_pool *pool = (client == NULL || pool_id >= MAX_POOLS_PER_DOMAIN)
                    ? NULL : client->pools[pool_id];
     int rc = -1;
+    XEN_GUEST_HANDLE_PARAM(xen_sysctl_tmem_client_t) info;
 
     switch(subop)
     {
-    case XEN_SYSCTL_TMEM_OP_SAVE_GET_VERSION:
-        rc = TMEM_SPEC_VERSION;
-        break;
-    case XEN_SYSCTL_TMEM_OP_SAVE_GET_MAXPOOLS:
-        rc = MAX_POOLS_PER_DOMAIN;
-        break;
-    case XEN_SYSCTL_TMEM_OP_SAVE_GET_CLIENT_WEIGHT:
-        if ( client == NULL )
-            break;
-        rc = client->info.weight == -1 ? -2 : client->info.weight;
-        break;
-    case XEN_SYSCTL_TMEM_OP_SAVE_GET_CLIENT_FLAGS:
-        if ( client == NULL )
-            break;
-        rc = (client->info.flags.u.compress ? TMEM_CLIENT_COMPRESS : 0 ) |
-             (client->was_frozen ? TMEM_CLIENT_FROZEN : 0 );
+    case XEN_SYSCTL_TMEM_OP_GET_CLIENT_INFO:
+        rc = 0;
+        info = guest_handle_cast(buf, xen_sysctl_tmem_client_t);
+        if ( client )
+        {
+            if ( copy_to_guest(info, &client->info, 1) )
+                rc = -EFAULT;
+        }
+        else
+        {
+            xen_sysctl_tmem_client_t generic = { .version = TMEM_SPEC_VERSION,
+                                                 .maxpools = MAX_POOLS_PER_DOMAIN };
+
+            if ( copy_to_guest(info, &generic, 1) )
+                rc = -EFAULT;
+        }
         break;
     case XEN_SYSCTL_TMEM_OP_SAVE_GET_POOL_FLAGS:
          if ( pool == NULL )
@@ -386,22 +402,19 @@ int tmem_control(struct xen_sysctl_tmem_op *op)
         ret = tmemc_list(op->cli_id,
                          guest_handle_cast(op->buf, char), op->arg1, op->arg2);
         break;
-    case XEN_SYSCTL_TMEM_OP_SET_WEIGHT:
-    case XEN_SYSCTL_TMEM_OP_SET_COMPRESS:
-        ret = tmemc_set_var(op->cli_id, cmd, op->arg1);
+    case XEN_SYSCTL_TMEM_OP_SET_CLIENT_INFO:
+        ret = tmemc_set_var(op->cli_id, cmd,
+                            guest_handle_cast(op->buf, void));
         break;
     case XEN_SYSCTL_TMEM_OP_QUERY_FREEABLE_MB:
         ret = tmem_freeable_pages() >> (20 - PAGE_SHIFT);
         break;
-    case XEN_SYSCTL_TMEM_OP_SAVE_GET_VERSION:
-    case XEN_SYSCTL_TMEM_OP_SAVE_GET_MAXPOOLS:
-    case XEN_SYSCTL_TMEM_OP_SAVE_GET_CLIENT_WEIGHT:
-    case XEN_SYSCTL_TMEM_OP_SAVE_GET_CLIENT_FLAGS:
+    case XEN_SYSCTL_TMEM_OP_GET_CLIENT_INFO:
     case XEN_SYSCTL_TMEM_OP_SAVE_GET_POOL_FLAGS:
     case XEN_SYSCTL_TMEM_OP_SAVE_GET_POOL_NPAGES:
     case XEN_SYSCTL_TMEM_OP_SAVE_GET_POOL_UUID:
         ret = tmemc_save_subop(op->cli_id, pool_id, cmd,
-                               guest_handle_cast(op->buf, char), op->arg1);
+                               guest_handle_cast(op->buf, void), op->arg1);
         break;
     default:
         ret = do_tmem_control(op);
diff --git a/xen/include/public/sysctl.h b/xen/include/public/sysctl.h
index a29a8fa..27e051b 100644
--- a/xen/include/public/sysctl.h
+++ b/xen/include/public/sysctl.h
@@ -756,14 +756,10 @@ DEFINE_XEN_GUEST_HANDLE(xen_sysctl_psr_cat_op_t);
 #define XEN_SYSCTL_TMEM_OP_FLUSH                  2
 #define XEN_SYSCTL_TMEM_OP_DESTROY                3
 #define XEN_SYSCTL_TMEM_OP_LIST                   4
-#define XEN_SYSCTL_TMEM_OP_SET_WEIGHT             5
-#define XEN_SYSCTL_TMEM_OP_SET_COMPRESS           7
 #define XEN_SYSCTL_TMEM_OP_QUERY_FREEABLE_MB      8
 #define XEN_SYSCTL_TMEM_OP_SAVE_BEGIN             10
-#define XEN_SYSCTL_TMEM_OP_SAVE_GET_VERSION       11
-#define XEN_SYSCTL_TMEM_OP_SAVE_GET_MAXPOOLS      12
-#define XEN_SYSCTL_TMEM_OP_SAVE_GET_CLIENT_WEIGHT 13
-#define XEN_SYSCTL_TMEM_OP_SAVE_GET_CLIENT_FLAGS  15
+#define XEN_SYSCTL_TMEM_OP_GET_CLIENT_INFO        11
+#define XEN_SYSCTL_TMEM_OP_SET_CLIENT_INFO        12
 #define XEN_SYSCTL_TMEM_OP_SAVE_GET_POOL_FLAGS    16
 #define XEN_SYSCTL_TMEM_OP_SAVE_GET_POOL_NPAGES   17
 #define XEN_SYSCTL_TMEM_OP_SAVE_GET_POOL_UUID     18
@@ -785,6 +781,11 @@ struct tmem_handle {
     xen_tmem_oid_t oid;
 };
 
+/*
+ * XEN_SYSCTL_TMEM_OP_[GET,SAVE]_CLIENT overrides the 'buf' in
+ * xen_sysctl_tmem_op with this structure - and contain data used
+ * during migration.
+ */
 struct tmem_client {
     uint32_t version;
     uint32_t maxpools;
@@ -798,6 +799,8 @@ struct tmem_client {
     } flags;
     uint32_t weight;
 };
+typedef struct tmem_client xen_sysctl_tmem_client_t;
+DEFINE_XEN_GUEST_HANDLE(xen_sysctl_tmem_client_t);
 
 struct xen_sysctl_tmem_op {
     uint32_t cmd;       /* IN: XEN_SYSCTL_TMEM_OP_* . */
@@ -809,7 +812,8 @@ struct xen_sysctl_tmem_op {
     uint32_t arg2;      /* IN: If not applicable to command use 0. */
     uint32_t pad;       /* Padding so structure is the same under 32 and 64. */
     xen_tmem_oid_t oid; /* IN: If not applicable to command use 0s. */
-    XEN_GUEST_HANDLE_64(char) buf; /* IN/OUT: Buffer to save and restore ops. */
+    XEN_GUEST_HANDLE_64(char) buf; /* IN/OUT: Buffer to save/restore,
+                                      and set/get ops. */
 };
 typedef struct xen_sysctl_tmem_op xen_sysctl_tmem_op_t;
 DEFINE_XEN_GUEST_HANDLE(xen_sysctl_tmem_op_t);
diff --git a/xen/include/xen/tmem_xen.h b/xen/include/xen/tmem_xen.h
index c0e776c..daf6c92 100644
--- a/xen/include/xen/tmem_xen.h
+++ b/xen/include/xen/tmem_xen.h
@@ -293,7 +293,7 @@ struct client {
     struct list_head ephemeral_page_list;
     long eph_count, eph_count_max;
     domid_t cli_id;
-    struct tmem_client info;
+    xen_sysctl_tmem_client_t info;
     bool_t shared_auth_required;
     /* For save/restore/migration. */
     bool_t was_frozen;
-- 
2.4.11




* [PATCH v1 08/12] tmem: Handle 'struct tmem_info' as a separate field in the
  2016-09-28  9:42 [PATCH v1] Tmem cleanups/improvements for v4.8 Konrad Rzeszutek Wilk
                   ` (6 preceding siblings ...)
  2016-09-28  9:42 ` [PATCH v1 07/12] tmem/libxc: Squash XEN_SYSCTL_TMEM_OP_[SET|SAVE] Konrad Rzeszutek Wilk
@ 2016-09-28  9:42 ` Konrad Rzeszutek Wilk
  2016-09-28 11:00   ` Wei Liu
  2016-09-28 12:56   ` Jan Beulich
  2016-09-28  9:42 ` [PATCH v1 09/12] tmem: Check version and maxpools when XEN_SYSCTL_TMEM_SET_CLIENT_INFO Konrad Rzeszutek Wilk
                   ` (3 subsequent siblings)
  11 siblings, 2 replies; 37+ messages in thread
From: Konrad Rzeszutek Wilk @ 2016-09-28  9:42 UTC (permalink / raw)
  To: xen-devel, konrad; +Cc: Wei Liu, Ian Jackson, Konrad Rzeszutek Wilk

the struct tmem_op.

No functional change, but it makes the code much easier
to read.

Note: we still have to do this awkward 'guest_handle_cast',
otherwise it will not compile on ARM, which defines _two_
of these macros (__guest_handle_64_xen_sysctl_tmem_client_t
and __guest_handle_xen_sysctl_tmem_client_t). If the cast is
not used, a compile error results because the wrong one is picked.
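
The union this patch grafts onto the op structure can be sketched
standalone (guest handles reduced to plain pointers; field names
follow the patch, but the layout here is illustrative, not the real
ABI):

```c
#include <assert.h>
#include <stdint.h>

struct tmem_client_sketch {
    uint32_t version;
};

/* Sketch of the union added to xen_sysctl_tmem_op: the raw byte
 * stream used by save/restore and the typed client handle used by
 * [GET|SET]_CLIENT_INFO share the same storage, so the structure
 * size does not change. */
struct op_sketch {
    uint32_t cmd;
    union {
        char *buf;                         /* save/restore payload */
        struct tmem_client_sketch *client; /* client info payload */
    } u;
};
```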

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Wei Liu <wei.liu2@citrix.com>

v1: First submission.
---
 tools/libxc/xc_tmem.c       |  4 +--
 xen/common/tmem.c           |  8 +++---
 xen/common/tmem_control.c   | 68 ++++++++++++++++++++++-----------------------
 xen/include/public/sysctl.h | 11 +++++---
 4 files changed, 46 insertions(+), 45 deletions(-)

diff --git a/tools/libxc/xc_tmem.c b/tools/libxc/xc_tmem.c
index 74d2d85..cb0dfa1 100644
--- a/tools/libxc/xc_tmem.c
+++ b/tools/libxc/xc_tmem.c
@@ -87,7 +87,7 @@ int xc_tmem_control(xc_interface *xch,
         }
     }
 
-    set_xen_guest_handle(sysctl.u.tmem_op.buf, buf);
+    set_xen_guest_handle(sysctl.u.tmem_op.u.buf, buf);
 
     rc = do_sysctl(xch, &sysctl);
 
@@ -133,7 +133,7 @@ int xc_tmem_control_oid(xc_interface *xch,
         }
     }
 
-    set_xen_guest_handle(sysctl.u.tmem_op.buf, buf);
+    set_xen_guest_handle(sysctl.u.tmem_op.u.buf, buf);
 
     rc = do_sysctl(xch, &sysctl);
 
diff --git a/xen/common/tmem.c b/xen/common/tmem.c
index 27164ce..ab354b6 100644
--- a/xen/common/tmem.c
+++ b/xen/common/tmem.c
@@ -1843,19 +1843,19 @@ int do_tmem_control(struct xen_sysctl_tmem_op *op)
     case XEN_SYSCTL_TMEM_OP_RESTORE_BEGIN:
     case XEN_SYSCTL_TMEM_OP_SAVE_END:
         ret = tmemc_save_subop(op->cli_id, pool_id, cmd,
-                               guest_handle_cast(op->buf, char), op->arg1);
+                               guest_handle_cast(op->u.buf, char), op->arg1);
         break;
     case XEN_SYSCTL_TMEM_OP_SAVE_GET_NEXT_PAGE:
         ret = tmemc_save_get_next_page(op->cli_id, pool_id,
-                                       guest_handle_cast(op->buf, char), op->arg1);
+                                       guest_handle_cast(op->u.buf, char), op->arg1);
         break;
     case XEN_SYSCTL_TMEM_OP_SAVE_GET_NEXT_INV:
         ret = tmemc_save_get_next_inv(op->cli_id,
-                                      guest_handle_cast(op->buf, char), op->arg1);
+                                      guest_handle_cast(op->u.buf, char), op->arg1);
         break;
     case XEN_SYSCTL_TMEM_OP_RESTORE_PUT_PAGE:
         ret = tmemc_restore_put_page(op->cli_id, pool_id, oidp, op->arg2,
-                                     guest_handle_cast(op->buf, char), op->arg1);
+                                     guest_handle_cast(op->u.buf, char), op->arg1);
         break;
     case XEN_SYSCTL_TMEM_OP_RESTORE_FLUSH_PAGE:
         ret = tmemc_restore_flush_page(op->cli_id, pool_id, oidp, op->arg2);
diff --git a/xen/common/tmem_control.c b/xen/common/tmem_control.c
index 8bb9d48..942d4c9 100644
--- a/xen/common/tmem_control.c
+++ b/xen/common/tmem_control.c
@@ -258,7 +258,7 @@ static int tmemc_list(domid_t cli_id, tmem_cli_va_param_t buf, uint32_t len,
     return 0;
 }
 
-static int __tmemc_set_var(struct client *client, uint32_t subop,
+static int __tmemc_set_var(struct client *client,
                            XEN_GUEST_HANDLE_PARAM(xen_sysctl_tmem_client_t) buf)
 {
     domid_t cli_id = client->cli_id;
@@ -267,11 +267,6 @@ static int __tmemc_set_var(struct client *client, uint32_t subop,
 
     ASSERT(client);
 
-    if ( subop != XEN_SYSCTL_TMEM_OP_SET_CLIENT_INFO )
-    {
-        tmem_client_warn("tmem: unknown subop %d for tmemc_set_var\n", subop);
-        return -1;
-    }
     if ( copy_from_guest(&info, buf, 1) )
         return -EFAULT;
 
@@ -296,20 +291,17 @@ static int __tmemc_set_var(struct client *client, uint32_t subop,
     return 0;
 }
 
-static int tmemc_set_var(domid_t cli_id, uint32_t subop,
-                         XEN_GUEST_HANDLE_PARAM(void) buf)
+static int tmemc_set_client_info(domid_t cli_id,
+                                 XEN_GUEST_HANDLE_PARAM(xen_sysctl_tmem_client_t) info)
 {
     struct client *client;
     int ret = -ENOENT;
-    XEN_GUEST_HANDLE_PARAM(xen_sysctl_tmem_client_t) info;
-
-    info = guest_handle_cast(buf, xen_sysctl_tmem_client_t);
 
     if ( cli_id == TMEM_CLI_ID_NULL )
     {
         list_for_each_entry(client,&tmem_global.client_list,client_list)
         {
-            ret =  __tmemc_set_var(client, subop, info);
+            ret =  __tmemc_set_var(client, info);
             if (ret)
                 break;
         }
@@ -318,11 +310,32 @@ static int tmemc_set_var(domid_t cli_id, uint32_t subop,
     {
         client = tmem_client_from_cli_id(cli_id);
         if ( client )
-            ret = __tmemc_set_var(client, subop, info);
+            ret = __tmemc_set_var(client, info);
     }
     return ret;
 }
 
+static int tmemc_get_client_info(int cli_id,
+                                 XEN_GUEST_HANDLE_PARAM(xen_sysctl_tmem_client_t) info)
+{
+    struct client *client = tmem_client_from_cli_id(cli_id);
+
+    if ( client )
+    {
+        if ( copy_to_guest(info, &client->info, 1) )
+            return  -EFAULT;
+    }
+    else
+    {
+            xen_sysctl_tmem_client_t generic = { .version = TMEM_SPEC_VERSION,
+                                                 .maxpools = MAX_POOLS_PER_DOMAIN };
+        if ( copy_to_guest(info, &generic, 1) )
+            return -EFAULT;
+    }
+
+    return 0;
+}
+
 static int tmemc_save_subop(int cli_id, uint32_t pool_id, uint32_t subop,
                             XEN_GUEST_HANDLE_PARAM(void) buf, uint32_t arg1)
 {
@@ -330,27 +343,9 @@ static int tmemc_save_subop(int cli_id, uint32_t pool_id, uint32_t subop,
     struct tmem_pool *pool = (client == NULL || pool_id >= MAX_POOLS_PER_DOMAIN)
                    ? NULL : client->pools[pool_id];
     int rc = -1;
-    XEN_GUEST_HANDLE_PARAM(xen_sysctl_tmem_client_t) info;
 
     switch(subop)
     {
-    case XEN_SYSCTL_TMEM_OP_GET_CLIENT_INFO:
-        rc = 0;
-        info = guest_handle_cast(buf, xen_sysctl_tmem_client_t);
-        if ( client )
-        {
-            if ( copy_to_guest(info, &client->info, 1) )
-                rc = -EFAULT;
-        }
-        else
-        {
-            xen_sysctl_tmem_client_t generic = { .version = TMEM_SPEC_VERSION,
-                                                 .maxpools = MAX_POOLS_PER_DOMAIN };
-
-            if ( copy_to_guest(info, &generic, 1) )
-                rc = -EFAULT;
-        }
-        break;
     case XEN_SYSCTL_TMEM_OP_SAVE_GET_POOL_FLAGS:
          if ( pool == NULL )
              break;
@@ -400,21 +395,24 @@ int tmem_control(struct xen_sysctl_tmem_op *op)
         break;
     case XEN_SYSCTL_TMEM_OP_LIST:
         ret = tmemc_list(op->cli_id,
-                         guest_handle_cast(op->buf, char), op->arg1, op->arg2);
+                         guest_handle_cast(op->u.buf, char), op->arg1, op->arg2);
         break;
     case XEN_SYSCTL_TMEM_OP_SET_CLIENT_INFO:
-        ret = tmemc_set_var(op->cli_id, cmd,
-                            guest_handle_cast(op->buf, void));
+        ret = tmemc_set_client_info(op->cli_id,
+                                    guest_handle_cast(op->u.client, xen_sysctl_tmem_client_t));
         break;
     case XEN_SYSCTL_TMEM_OP_QUERY_FREEABLE_MB:
         ret = tmem_freeable_pages() >> (20 - PAGE_SHIFT);
         break;
     case XEN_SYSCTL_TMEM_OP_GET_CLIENT_INFO:
+        ret = tmemc_get_client_info(op->cli_id,
+                                    guest_handle_cast(op->u.client, xen_sysctl_tmem_client_t));
+        break;
     case XEN_SYSCTL_TMEM_OP_SAVE_GET_POOL_FLAGS:
     case XEN_SYSCTL_TMEM_OP_SAVE_GET_POOL_NPAGES:
     case XEN_SYSCTL_TMEM_OP_SAVE_GET_POOL_UUID:
         ret = tmemc_save_subop(op->cli_id, pool_id, cmd,
-                               guest_handle_cast(op->buf, void), op->arg1);
+                               guest_handle_cast(op->u.buf, void), op->arg1);
         break;
     default:
         ret = do_tmem_control(op);
diff --git a/xen/include/public/sysctl.h b/xen/include/public/sysctl.h
index 27e051b..52898a7 100644
--- a/xen/include/public/sysctl.h
+++ b/xen/include/public/sysctl.h
@@ -782,8 +782,8 @@ struct tmem_handle {
 };
 
 /*
- * XEN_SYSCTL_TMEM_OP_[GET,SAVE]_CLIENT overrides the 'buf' in
- * xen_sysctl_tmem_op with this structure - and contain data used
+ * XEN_SYSCTL_TMEM_OP_[GET,SAVE]_CLIENT uses the 'client' in
+ * xen_sysctl_tmem_op with this structure - which is mostly used
  * during migration.
  */
 struct tmem_client {
@@ -812,8 +812,11 @@ struct xen_sysctl_tmem_op {
     uint32_t arg2;      /* IN: If not applicable to command use 0. */
     uint32_t pad;       /* Padding so structure is the same under 32 and 64. */
     xen_tmem_oid_t oid; /* IN: If not applicable to command use 0s. */
-    XEN_GUEST_HANDLE_64(char) buf; /* IN/OUT: Buffer to save/restore,
-                                      and set/get ops. */
+    union {
+        XEN_GUEST_HANDLE_64(char) buf; /* IN/OUT: Buffer to save/restore */
+        XEN_GUEST_HANDLE_64(xen_sysctl_tmem_client_t) client; /* IN/OUT for */
+                        /*  XEN_SYSCTL_TMEM_OP_[GET,SAVE]_CLIENT. */
+    } u;
 };
 typedef struct xen_sysctl_tmem_op xen_sysctl_tmem_op_t;
 DEFINE_XEN_GUEST_HANDLE(xen_sysctl_tmem_op_t);
-- 
2.4.11


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply related	[flat|nested] 37+ messages in thread

* [PATCH v1 09/12] tmem: Check version and maxpools when XEN_SYSCTL_TMEM_SET_CLIENT_INFO
  2016-09-28  9:42 [PATCH v1] Tmem cleanups/improvements for v4.8 Konrad Rzeszutek Wilk
                   ` (7 preceding siblings ...)
  2016-09-28  9:42 ` [PATCH v1 08/12] tmem: Handle 'struct tmem_info' as a separate field in the Konrad Rzeszutek Wilk
@ 2016-09-28  9:42 ` Konrad Rzeszutek Wilk
  2016-09-28 11:00   ` Wei Liu
  2016-09-28 12:58   ` Jan Beulich
  2016-09-28  9:42 ` [PATCH v1 10/12] tmem: Unify XEN_SYSCTL_TMEM_OP_[SAVE_[BEGIN|END]|RESTORE_BEGIN] Konrad Rzeszutek Wilk
                   ` (2 subsequent siblings)
  11 siblings, 2 replies; 37+ messages in thread
From: Konrad Rzeszutek Wilk @ 2016-09-28  9:42 UTC (permalink / raw)
  To: xen-devel, konrad; +Cc: Wei Liu, Ian Jackson, Konrad Rzeszutek Wilk

is called. If they differ from what the hypervisor can
support, the appropriate errors are returned.
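As a sketch of the new semantics (illustrative stand-in code, not part of
the patch; the constants here are hard-coded copies of the hypervisor-side
values, and the function is a made-up name):

```c
#include <assert.h>
#include <errno.h>

/* Stand-ins for the hypervisor-side constants. */
#define TMEM_SPEC_VERSION    1
#define MAX_POOLS_PER_DOMAIN 16

/* Sketch of the check added to __tmemc_set_var(): reject a mismatched
 * spec version or an over-large pool count before touching any client
 * state. */
static int check_client_info(unsigned int version, unsigned int maxpools)
{
    if (version != TMEM_SPEC_VERSION)
        return -EOPNOTSUPP; /* version the hypervisor cannot support */
    if (maxpools > MAX_POOLS_PER_DOMAIN)
        return -ERANGE;     /* more pools than the hypervisor allows */
    return 0;
}
```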

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

---
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Wei Liu <wei.liu2@citrix.com>

v1: First submission.
---
 tools/libxc/xc_tmem.c       | 1 -
 xen/common/tmem_control.c   | 6 ++++++
 xen/include/public/sysctl.h | 5 +++--
 3 files changed, 9 insertions(+), 3 deletions(-)

diff --git a/tools/libxc/xc_tmem.c b/tools/libxc/xc_tmem.c
index cb0dfa1..7873674 100644
--- a/tools/libxc/xc_tmem.c
+++ b/tools/libxc/xc_tmem.c
@@ -383,7 +383,6 @@ int xc_tmem_restore(xc_interface *xch, int dom, int io_fd)
 
     if ( read_exact(io_fd, &info, sizeof(info)) )
         return -1;
-    /* FIXME check here to ensure no version mismatch or maxpools mismatch */
     if ( xc_tmem_control(xch,0,XEN_SYSCTL_TMEM_OP_RESTORE_BEGIN,dom,0,0,NULL) < 0 )
         return -1;
 
diff --git a/xen/common/tmem_control.c b/xen/common/tmem_control.c
index 942d4c9..6f1de9d 100644
--- a/xen/common/tmem_control.c
+++ b/xen/common/tmem_control.c
@@ -270,6 +270,12 @@ static int __tmemc_set_var(struct client *client,
     if ( copy_from_guest(&info, buf, 1) )
         return -EFAULT;
 
+    if ( info.version != TMEM_SPEC_VERSION )
+        return -EOPNOTSUPP;
+
+    if ( info.maxpools > MAX_POOLS_PER_DOMAIN )
+        return -ERANGE;
+
     if ( info.weight != client->info.weight )
     {
         old_weight = client->info.weight;
diff --git a/xen/include/public/sysctl.h b/xen/include/public/sysctl.h
index 52898a7..36e2fd8 100644
--- a/xen/include/public/sysctl.h
+++ b/xen/include/public/sysctl.h
@@ -787,8 +787,9 @@ struct tmem_handle {
  * during migration.
  */
 struct tmem_client {
-    uint32_t version;
-    uint32_t maxpools;
+    uint32_t version;   /* If mismatched we will get XEN_EOPNOTSUPP. */
+    uint32_t maxpools;  /* If greater than what hypervisor supports, will get
+                           XEN_ERANGE. */
     union {             /* See TMEM_CLIENT_[COMPRESS,FROZEN] */
         uint32_t raw;
         struct {
-- 
2.4.11



* [PATCH v1 10/12] tmem: Unify XEN_SYSCTL_TMEM_OP_[SAVE_[BEGIN|END]|RESTORE_BEGIN]
  2016-09-28  9:42 [PATCH v1] Tmem cleanups/improvements for v4.8 Konrad Rzeszutek Wilk
                   ` (8 preceding siblings ...)
  2016-09-28  9:42 ` [PATCH v1 09/12] tmem: Check version and maxpools when XEN_SYSCTL_TMEM_SET_CLIENT_INFO Konrad Rzeszutek Wilk
@ 2016-09-28  9:42 ` Konrad Rzeszutek Wilk
  2016-09-28 11:06   ` Wei Liu
  2016-09-28 13:00   ` Jan Beulich
  2016-09-28  9:42 ` [PATCH v1 11/12] tmem/xc_tmem_control: Rename 'arg1' to 'len' and 'arg2' to arg Konrad Rzeszutek Wilk
  2016-09-28  9:42 ` [PATCH v1 12/12] tmem: Batch and squash XEN_SYSCTL_TMEM_OP_SAVE_GET_POOL_[FLAGS, NPAGES, UUID] in one sub-call: XEN_SYSCTL_TMEM_OP_GET_POOLS Konrad Rzeszutek Wilk
  11 siblings, 2 replies; 37+ messages in thread
From: Konrad Rzeszutek Wilk @ 2016-09-28  9:42 UTC (permalink / raw)
  To: xen-devel, konrad; +Cc: Wei Liu, Ian Jackson, Konrad Rzeszutek Wilk

return values. For success they used to be 1 ([SAVE,RESTORE]_BEGIN),
0 if guest did not have any tmem (but only for SAVE_BEGIN), and
-1 for any type of failure.

And SAVE_END (which you would expect to mirror SAVE_BEGIN)
had 0 for success and -1 if the guest did not have any tmem enabled.

This is confusing. Now the code returns 0 if the operation was
successful. Various XEN_EXX values are returned if tmem is not enabled
or the operation could not be performed.

The xc_tmem.c code only needs one place to check - where we use
SAVE_BEGIN. The place where RESTORE_BEGIN is used will have errno
set to the proper error value and the return will be -1, so it will
still fail properly.
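The caller-side logic can be sketched like so (illustrative only; the
function name is made up, and the real check goes through
xc_tmem_control() and errno):

```c
#include <assert.h>
#include <errno.h>

/* Sketch of the new handling in xc_tmem_save(): a zero return from
 * SAVE_BEGIN means "go ahead", while a failure with errno == ENOENT
 * means the guest simply has no tmem and there is nothing to save. */
static int save_begin_result(int rc, int err)
{
    if (rc == 0)
        return 1;  /* proceed with saving tmem state */
    if (err == ENOENT)
        return 0;  /* nothing to save - no tmem enabled */
    return -1;     /* genuine failure, errno already set by the sysctl */
}
```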

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

---
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Wei Liu <wei.liu2@citrix.com>

v1: First submission
---
 tools/libxc/xc_tmem.c | 14 +++++++++++---
 xen/common/tmem.c     | 15 ++++++++-------
 2 files changed, 19 insertions(+), 10 deletions(-)

diff --git a/tools/libxc/xc_tmem.c b/tools/libxc/xc_tmem.c
index 7873674..a843210 100644
--- a/tools/libxc/xc_tmem.c
+++ b/tools/libxc/xc_tmem.c
@@ -216,15 +216,23 @@ int xc_tmem_save(xc_interface *xch,
                  int dom, int io_fd, int live, int field_marker)
 {
     int marker = field_marker;
-    int i, j;
+    int i, j, rc;
     uint32_t flags;
     uint32_t minusone = -1;
     uint32_t pool_id;
     struct tmem_handle *h;
     xen_sysctl_tmem_client_t info;
 
-    if ( xc_tmem_control(xch,0,XEN_SYSCTL_TMEM_OP_SAVE_BEGIN,dom,live,0,NULL) <= 0 )
-        return 0;
+    rc = xc_tmem_control(xch, 0, XEN_SYSCTL_TMEM_OP_SAVE_BEGIN,
+                         dom, live, 0, NULL);
+    if ( rc )
+    {
+        /* Nothing to save - no tmem enabled. */
+        if ( errno == ENOENT )
+            return 0;
+
+        return rc;
+    }
 
     if ( write_exact(io_fd, &marker, sizeof(marker)) )
         return -1;
diff --git a/xen/common/tmem.c b/xen/common/tmem.c
index ab354b6..997f2b0 100644
--- a/xen/common/tmem.c
+++ b/xen/common/tmem.c
@@ -1656,30 +1656,31 @@ static int tmemc_save_subop(int cli_id, uint32_t pool_id,
     struct client *client = tmem_client_from_cli_id(cli_id);
     uint32_t p;
     struct tmem_page_descriptor *pgp, *pgp2;
-    int rc = -1;
+    int rc = -ENOENT;
 
     switch(subop)
     {
     case XEN_SYSCTL_TMEM_OP_SAVE_BEGIN:
         if ( client == NULL )
-            return 0;
+            break;
         for (p = 0; p < MAX_POOLS_PER_DOMAIN; p++)
             if ( client->pools[p] != NULL )
                 break;
+
         if ( p == MAX_POOLS_PER_DOMAIN )
-        {
-            rc = 0;
             break;
-        }
+
         client->was_frozen = client->info.flags.u.frozen;
         client->info.flags.u.frozen = 1;
         if ( arg1 != 0 )
             client->info.flags.u.migrating = 1;
-        rc = 1;
+        rc = 0;
         break;
     case XEN_SYSCTL_TMEM_OP_RESTORE_BEGIN:
         if ( client == NULL && (client = client_create(cli_id)) != NULL )
-            return 1;
+            rc = 0;
+        else
+            rc = -EEXIST;
         break;
     case XEN_SYSCTL_TMEM_OP_SAVE_END:
         if ( client == NULL )
-- 
2.4.11



* [PATCH v1 11/12] tmem/xc_tmem_control: Rename 'arg1' to 'len' and 'arg2' to arg.
  2016-09-28  9:42 [PATCH v1] Tmem cleanups/improvements for v4.8 Konrad Rzeszutek Wilk
                   ` (9 preceding siblings ...)
  2016-09-28  9:42 ` [PATCH v1 10/12] tmem: Unify XEN_SYSCTL_TMEM_OP_[SAVE_[BEGIN|END]|RESTORE_BEGIN] Konrad Rzeszutek Wilk
@ 2016-09-28  9:42 ` Konrad Rzeszutek Wilk
  2016-09-28 11:07   ` Wei Liu
  2016-09-28  9:42 ` [PATCH v1 12/12] tmem: Batch and squash XEN_SYSCTL_TMEM_OP_SAVE_GET_POOL_[FLAGS, NPAGES, UUID] in one sub-call: XEN_SYSCTL_TMEM_OP_GET_POOLS Konrad Rzeszutek Wilk
  11 siblings, 1 reply; 37+ messages in thread
From: Konrad Rzeszutek Wilk @ 2016-09-28  9:42 UTC (permalink / raw)
  To: xen-devel, konrad; +Cc: Wei Liu, Ian Jackson, Konrad Rzeszutek Wilk

That is what they are used for. Let's make it clearer.

Of all the various sub-commands, the only one that needed a
semantic change is XEN_SYSCTL_TMEM_OP_SAVE_BEGIN. In the
past it used 'arg1'; now it uses 'arg'.
Since that code is only used during migration, which is tied
to the toolstack, it is OK to change it.

While at it, also fix xc_tmem_control_oid to properly handle
the 'buf' and bounce it as appropriate.
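The simplified bounce decision can be sketched as (stand-in code with a
made-up name, not the real xc_tmem_control_oid):

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* After the rename, 'len' alone says whether 'buf' must be bounced,
 * instead of the old special case on XEN_SYSCTL_TMEM_OP_LIST && arg1.
 * Returns: 0 = no buffer involved, 1 = bounce it, -1 = caller error
 * (a non-zero length with a NULL buffer). */
static int bounce_mode(uint32_t len, const void *buf)
{
    if (len == 0)
        return 0;   /* no buffer involved in this sub-op */
    if (buf == NULL)
        return -1;  /* length given but nothing to bounce */
    return 1;       /* bounce 'buf' into a hypercall-safe buffer */
}
```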

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

---
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Wei Liu <wei.liu2@citrix.com>

v1: First submission.
---
 tools/libxc/include/xenctrl.h     |  4 ++--
 tools/libxc/xc_tmem.c             | 37 ++++++++++++++++++-------------------
 tools/libxl/libxl.c               |  4 ++--
 tools/python/xen/lowlevel/xc/xc.c | 16 ++++++++--------
 xen/common/tmem.c                 | 16 ++++++++--------
 xen/common/tmem_control.c         |  8 ++++----
 xen/include/public/sysctl.h       |  4 ++--
 7 files changed, 44 insertions(+), 45 deletions(-)

diff --git a/tools/libxc/include/xenctrl.h b/tools/libxc/include/xenctrl.h
index 5e685a6..da982a6 100644
--- a/tools/libxc/include/xenctrl.h
+++ b/tools/libxc/include/xenctrl.h
@@ -2061,11 +2061,11 @@ int xc_disable_turbo(xc_interface *xch, int cpuid);
  */
 
 int xc_tmem_control_oid(xc_interface *xch, int32_t pool_id, uint32_t subop,
-                        uint32_t cli_id, uint32_t arg1, uint32_t arg2,
+                        uint32_t cli_id, uint32_t len, uint32_t arg,
                         struct xen_tmem_oid oid, void *buf);
 int xc_tmem_control(xc_interface *xch,
                     int32_t pool_id, uint32_t subop, uint32_t cli_id,
-                    uint32_t arg1, uint32_t arg2, void *buf);
+                    uint32_t len, uint32_t arg, void *buf);
 int xc_tmem_auth(xc_interface *xch, int cli_id, char *uuid_str, int arg1);
 int xc_tmem_save(xc_interface *xch, int dom, int live, int fd, int field_marker);
 int xc_tmem_save_extra(xc_interface *xch, int dom, int fd, int field_marker);
diff --git a/tools/libxc/xc_tmem.c b/tools/libxc/xc_tmem.c
index a843210..07fa101 100644
--- a/tools/libxc/xc_tmem.c
+++ b/tools/libxc/xc_tmem.c
@@ -49,20 +49,20 @@ int xc_tmem_control(xc_interface *xch,
                     int32_t pool_id,
                     uint32_t cmd,
                     uint32_t cli_id,
-                    uint32_t arg1,
-                    uint32_t arg2,
+                    uint32_t len,
+                    uint32_t arg,
                     void *buf)
 {
     DECLARE_SYSCTL;
-    DECLARE_HYPERCALL_BOUNCE(buf, arg1, XC_HYPERCALL_BUFFER_BOUNCE_OUT);
+    DECLARE_HYPERCALL_BOUNCE(buf, len, XC_HYPERCALL_BUFFER_BOUNCE_OUT);
     int rc;
 
     sysctl.cmd = XEN_SYSCTL_tmem_op;
     sysctl.u.tmem_op.pool_id = pool_id;
     sysctl.u.tmem_op.cmd = cmd;
     sysctl.u.tmem_op.cli_id = cli_id;
-    sysctl.u.tmem_op.arg1 = arg1;
-    sysctl.u.tmem_op.arg2 = arg2;
+    sysctl.u.tmem_op.len = len;
+    sysctl.u.tmem_op.arg = arg;
     sysctl.u.tmem_op.pad = 0;
     sysctl.u.tmem_op.oid.oid[0] = 0;
     sysctl.u.tmem_op.oid.oid[1] = 0;
@@ -72,8 +72,7 @@ int xc_tmem_control(xc_interface *xch,
     {
         HYPERCALL_BOUNCE_SET_DIR(buf, XC_HYPERCALL_BUFFER_BOUNCE_IN);
     }
-
-    if ( arg1 )
+    if ( len )
     {
         if ( buf == NULL )
         {
@@ -91,7 +90,7 @@ int xc_tmem_control(xc_interface *xch,
 
     rc = do_sysctl(xch, &sysctl);
 
-    if ( arg1 )
+    if ( len )
         xc_hypercall_bounce_post(xch, buf);
 
     return rc;
@@ -101,25 +100,25 @@ int xc_tmem_control_oid(xc_interface *xch,
                         int32_t pool_id,
                         uint32_t cmd,
                         uint32_t cli_id,
-                        uint32_t arg1,
-                        uint32_t arg2,
+                        uint32_t len,
+                        uint32_t arg,
                         struct xen_tmem_oid oid,
                         void *buf)
 {
     DECLARE_SYSCTL;
-    DECLARE_HYPERCALL_BOUNCE(buf, arg1, XC_HYPERCALL_BUFFER_BOUNCE_OUT);
+    DECLARE_HYPERCALL_BOUNCE(buf, len, XC_HYPERCALL_BUFFER_BOUNCE_OUT);
     int rc;
 
     sysctl.cmd = XEN_SYSCTL_tmem_op;
     sysctl.u.tmem_op.pool_id = pool_id;
     sysctl.u.tmem_op.cmd = cmd;
     sysctl.u.tmem_op.cli_id = cli_id;
-    sysctl.u.tmem_op.arg1 = arg1;
-    sysctl.u.tmem_op.arg2 = arg2;
+    sysctl.u.tmem_op.len = len;
+    sysctl.u.tmem_op.arg = arg;
     sysctl.u.tmem_op.pad = 0;
     sysctl.u.tmem_op.oid = oid;
 
-    if ( cmd == XEN_SYSCTL_TMEM_OP_LIST && arg1 != 0 )
+    if ( len  )
     {
         if ( buf == NULL )
         {
@@ -137,8 +136,8 @@ int xc_tmem_control_oid(xc_interface *xch,
 
     rc = do_sysctl(xch, &sysctl);
 
-    if ( cmd == XEN_SYSCTL_TMEM_OP_LIST && arg1 != 0 )
-            xc_hypercall_bounce_post(xch, buf);
+    if ( len )
+        xc_hypercall_bounce_post(xch, buf);
 
     return rc;
 }
@@ -224,7 +223,7 @@ int xc_tmem_save(xc_interface *xch,
     xen_sysctl_tmem_client_t info;
 
     rc = xc_tmem_control(xch, 0, XEN_SYSCTL_TMEM_OP_SAVE_BEGIN,
-                         dom, live, 0, NULL);
+                         dom, 0 /* len*/ , live, NULL);
     if ( rc )
     {
         /* Nothing to save - no tmem enabled. */
@@ -239,7 +238,7 @@ int xc_tmem_save(xc_interface *xch,
 
     if ( xc_tmem_control(xch, 0 /* pool_id */,
                          XEN_SYSCTL_TMEM_OP_GET_CLIENT_INFO,
-                         dom /* cli_id */, sizeof(info) /* arg1 */, 0 /* arg2 */,
+                         dom /* cli_id */, sizeof(info), 0 /* arg */,
                          &info) < 0 )
         return -1;
 
@@ -396,7 +395,7 @@ int xc_tmem_restore(xc_interface *xch, int dom, int io_fd)
 
     if ( xc_tmem_control(xch, 0 /* pool_id */,
                          XEN_SYSCTL_TMEM_OP_SET_CLIENT_INFO,
-                         dom /* cli_id */, sizeof(info) /* arg1 */, 0 /* arg2 */,
+                         dom /* cli_id */, sizeof(info), 0 /* arg */,
                          &info) < 0 )
         return -1;
 
diff --git a/tools/libxl/libxl.c b/tools/libxl/libxl.c
index 2f50e61..25baca6 100644
--- a/tools/libxl/libxl.c
+++ b/tools/libxl/libxl.c
@@ -6082,7 +6082,7 @@ int libxl_tmem_set(libxl_ctx *ctx, uint32_t domid, char* name, uint32_t set)
 
     r = xc_tmem_control(ctx->xch, -1 /* pool_id */,
                         XEN_SYSCTL_TMEM_OP_GET_CLIENT_INFO,
-                        domid, sizeof(info), 0 /* arg2 */, &info);
+                        domid, sizeof(info), 0 /* arg */, &info);
     if (r < 0) {
         LOGE(ERROR, "Can not get tmem data!");
         rc = ERROR_FAIL;
@@ -6096,7 +6096,7 @@ int libxl_tmem_set(libxl_ctx *ctx, uint32_t domid, char* name, uint32_t set)
     }
     r = xc_tmem_control(ctx->xch, -1 /* pool_id */,
                         XEN_SYSCTL_TMEM_OP_SET_CLIENT_INFO,
-                        domid, sizeof(info), 0 /* arg2 */, &info);
+                        domid, sizeof(info), 0 /* arg */, &info);
     if (r < 0) {
         LOGE(ERROR, "Can not set tmem %s", name);
         rc = ERROR_FAIL;
diff --git a/tools/python/xen/lowlevel/xc/xc.c b/tools/python/xen/lowlevel/xc/xc.c
index a117737..39be1d5 100644
--- a/tools/python/xen/lowlevel/xc/xc.c
+++ b/tools/python/xen/lowlevel/xc/xc.c
@@ -1612,8 +1612,8 @@ static PyObject *pyxc_tmem_control(XcObject *self,
     int32_t pool_id;
     uint32_t subop;
     uint32_t cli_id;
-    uint32_t arg1;
-    uint32_t arg2;
+    uint32_t len;
+    uint32_t arg;
     char *buf;
     char _buffer[32768], *buffer = _buffer;
     int rc;
@@ -1621,13 +1621,13 @@ static PyObject *pyxc_tmem_control(XcObject *self,
     static char *kwd_list[] = { "pool_id", "subop", "cli_id", "arg1", "arg2", "buf", NULL };
 
     if ( !PyArg_ParseTupleAndKeywords(args, kwds, "iiiiis", kwd_list,
-                        &pool_id, &subop, &cli_id, &arg1, &arg2, &buf) )
+                        &pool_id, &subop, &cli_id, &len, &arg, &buf) )
         return NULL;
 
-    if ( (subop == XEN_SYSCTL_TMEM_OP_LIST) && (arg1 > 32768) )
-        arg1 = 32768;
+    if ( (subop == XEN_SYSCTL_TMEM_OP_LIST) && (len > 32768) )
+        len = 32768;
 
-    if ( (rc = xc_tmem_control(self->xc_handle, pool_id, subop, cli_id, arg1, arg2, buffer)) < 0 )
+    if ( (rc = xc_tmem_control(self->xc_handle, pool_id, subop, cli_id, len, arg, buffer)) < 0 )
         return Py_BuildValue("i", rc);
 
     switch (subop) {
@@ -2506,8 +2506,8 @@ static PyMethodDef pyxc_methods[] = {
       " pool_id [int]: Identifier of the tmem pool (-1 == all).\n"
       " subop [int]: Supplementary Operation.\n"
       " cli_id [int]: Client identifier (-1 == all).\n"
-      " arg1 [int]: Argument.\n"
-      " arg2 [int]: Argument.\n"
+      " len [int]: Length of 'buf'.\n"
+      " arg [int]: Argument.\n"
       " buf [str]: Buffer.\n\n"
       "Returns: [int] 0 or [str] tmem info on success; exception on error.\n" },
 
diff --git a/xen/common/tmem.c b/xen/common/tmem.c
index 997f2b0..294f0cd 100644
--- a/xen/common/tmem.c
+++ b/xen/common/tmem.c
@@ -1651,7 +1651,7 @@ static int tmemc_shared_pool_auth(domid_t cli_id, uint64_t uuid_lo,
 }
 
 static int tmemc_save_subop(int cli_id, uint32_t pool_id,
-                        uint32_t subop, tmem_cli_va_param_t buf, uint32_t arg1)
+                        uint32_t subop, tmem_cli_va_param_t buf, uint32_t arg)
 {
     struct client *client = tmem_client_from_cli_id(cli_id);
     uint32_t p;
@@ -1672,7 +1672,7 @@ static int tmemc_save_subop(int cli_id, uint32_t pool_id,
 
         client->was_frozen = client->info.flags.u.frozen;
         client->info.flags.u.frozen = 1;
-        if ( arg1 != 0 )
+        if ( arg != 0 )
             client->info.flags.u.migrating = 1;
         rc = 0;
         break;
@@ -1844,22 +1844,22 @@ int do_tmem_control(struct xen_sysctl_tmem_op *op)
     case XEN_SYSCTL_TMEM_OP_RESTORE_BEGIN:
     case XEN_SYSCTL_TMEM_OP_SAVE_END:
         ret = tmemc_save_subop(op->cli_id, pool_id, cmd,
-                               guest_handle_cast(op->u.buf, char), op->arg1);
+                               guest_handle_cast(op->u.buf, char), op->arg);
         break;
     case XEN_SYSCTL_TMEM_OP_SAVE_GET_NEXT_PAGE:
         ret = tmemc_save_get_next_page(op->cli_id, pool_id,
-                                       guest_handle_cast(op->u.buf, char), op->arg1);
+                                       guest_handle_cast(op->u.buf, char), op->len);
         break;
     case XEN_SYSCTL_TMEM_OP_SAVE_GET_NEXT_INV:
         ret = tmemc_save_get_next_inv(op->cli_id,
-                                      guest_handle_cast(op->u.buf, char), op->arg1);
+                                      guest_handle_cast(op->u.buf, char), op->len);
         break;
     case XEN_SYSCTL_TMEM_OP_RESTORE_PUT_PAGE:
-        ret = tmemc_restore_put_page(op->cli_id, pool_id, oidp, op->arg2,
-                                     guest_handle_cast(op->u.buf, char), op->arg1);
+        ret = tmemc_restore_put_page(op->cli_id, pool_id, oidp, op->arg,
+                                     guest_handle_cast(op->u.buf, char), op->len);
         break;
     case XEN_SYSCTL_TMEM_OP_RESTORE_FLUSH_PAGE:
-        ret = tmemc_restore_flush_page(op->cli_id, pool_id, oidp, op->arg2);
+        ret = tmemc_restore_flush_page(op->cli_id, pool_id, oidp, op->arg);
         break;
     default:
         ret = -1;
diff --git a/xen/common/tmem_control.c b/xen/common/tmem_control.c
index 6f1de9d..cfbc8b0 100644
--- a/xen/common/tmem_control.c
+++ b/xen/common/tmem_control.c
@@ -343,7 +343,7 @@ static int tmemc_get_client_info(int cli_id,
 }
 
 static int tmemc_save_subop(int cli_id, uint32_t pool_id, uint32_t subop,
-                            XEN_GUEST_HANDLE_PARAM(void) buf, uint32_t arg1)
+                            XEN_GUEST_HANDLE_PARAM(void) buf, uint32_t arg)
 {
     struct client *client = tmem_client_from_cli_id(cli_id);
     struct tmem_pool *pool = (client == NULL || pool_id >= MAX_POOLS_PER_DOMAIN)
@@ -397,11 +397,11 @@ int tmem_control(struct xen_sysctl_tmem_op *op)
         ret = tmemc_freeze_pools(op->cli_id, cmd);
         break;
     case XEN_SYSCTL_TMEM_OP_FLUSH:
-        ret = tmemc_flush_mem(op->cli_id,op->arg1);
+        ret = tmemc_flush_mem(op->cli_id, op->arg);
         break;
     case XEN_SYSCTL_TMEM_OP_LIST:
         ret = tmemc_list(op->cli_id,
-                         guest_handle_cast(op->u.buf, char), op->arg1, op->arg2);
+                         guest_handle_cast(op->u.buf, char), op->len, op->arg);
         break;
     case XEN_SYSCTL_TMEM_OP_SET_CLIENT_INFO:
         ret = tmemc_set_client_info(op->cli_id,
@@ -418,7 +418,7 @@ int tmem_control(struct xen_sysctl_tmem_op *op)
     case XEN_SYSCTL_TMEM_OP_SAVE_GET_POOL_NPAGES:
     case XEN_SYSCTL_TMEM_OP_SAVE_GET_POOL_UUID:
         ret = tmemc_save_subop(op->cli_id, pool_id, cmd,
-                               guest_handle_cast(op->u.buf, void), op->arg1);
+                               guest_handle_cast(op->u.buf, void), op->arg);
         break;
     default:
         ret = do_tmem_control(op);
diff --git a/xen/include/public/sysctl.h b/xen/include/public/sysctl.h
index 36e2fd8..a9f95f8 100644
--- a/xen/include/public/sysctl.h
+++ b/xen/include/public/sysctl.h
@@ -809,8 +809,8 @@ struct xen_sysctl_tmem_op {
     uint32_t cli_id;    /* IN: client id, 0 for XEN_SYSCTL_TMEM_QUERY_FREEABLE_MB
                            for all others can be the domain id or
                            XEN_SYSCTL_TMEM_OP_ALL_CLIENTS for all. */
-    uint32_t arg1;      /* IN: If not applicable to command use 0. */
-    uint32_t arg2;      /* IN: If not applicable to command use 0. */
+    uint32_t len;       /* IN: length of 'buf'. If not applicable to use 0. */
+    uint32_t arg;       /* IN: If not applicable to command use 0. */
     uint32_t pad;       /* Padding so structure is the same under 32 and 64. */
     xen_tmem_oid_t oid; /* IN: If not applicable to command use 0s. */
     union {
-- 
2.4.11



* [PATCH v1 12/12] tmem: Batch and squash XEN_SYSCTL_TMEM_OP_SAVE_GET_POOL_[FLAGS, NPAGES, UUID] in one sub-call: XEN_SYSCTL_TMEM_OP_GET_POOLS.
  2016-09-28  9:42 [PATCH v1] Tmem cleanups/improvements for v4.8 Konrad Rzeszutek Wilk
                   ` (10 preceding siblings ...)
  2016-09-28  9:42 ` [PATCH v1 11/12] tmem/xc_tmem_control: Rename 'arg1' to 'len' and 'arg2' to arg Konrad Rzeszutek Wilk
@ 2016-09-28  9:42 ` Konrad Rzeszutek Wilk
  2016-09-28 11:07   ` Wei Liu
  2016-09-28 13:11   ` Jan Beulich
  11 siblings, 2 replies; 37+ messages in thread
From: Konrad Rzeszutek Wilk @ 2016-09-28  9:42 UTC (permalink / raw)
  To: xen-devel, konrad; +Cc: Wei Liu, Ian Jackson, Konrad Rzeszutek Wilk

These operations are used during the save process of migration.
Instead of doing 64 hypercalls, let's do just one. We modify
the 'struct tmem_client' structure (used in
XEN_SYSCTL_TMEM_OP_[GET|SET]_CLIENT_INFO) to have an extra field
'nr_pools'. Armed with that the code slurping up pages from the
hypervisor can allocate a big enough structure (struct tmem_pool_info)
to contain all the active pools. And then just iterate over each
one and save it in the stream.

In xc_tmem_[save|restore] we also add proper memory handling
of 'buf' and 'pools'. Because of the loops, and to make the code
as easy as possible to review, we add a goto label and jump to it
for almost all error conditions.

The include for inttypes is required for the PRId64 macro to
work (which is needed to compile this code under 32-bit).
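Rough arithmetic on the batching win (illustrative only; the function
names are made up, and the exact number of calls saved depends on how
many pool slots are in use):

```c
#include <assert.h>

/* The old save path issued up to three hypercalls per possible pool
 * slot (SAVE_GET_POOL_FLAGS, _NPAGES, _UUID); the new
 * XEN_SYSCTL_TMEM_OP_GET_POOLS fetches every pool's metadata in a
 * single sysctl, sized via the new 'nr_pools' field. */
static unsigned int old_pool_hypercalls(unsigned int pool_slots)
{
    return pool_slots * 3; /* FLAGS + NPAGES + UUID, per slot */
}

static unsigned int new_pool_hypercalls(void)
{
    return 1; /* one GET_POOLS call covers all pools */
}
```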

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

---
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Wei Liu <wei.liu2@citrix.com>

v1: First submission.
---
 tools/libxc/xc_tmem.c       | 159 +++++++++++++++++++++++++-------------------
 xen/common/tmem.c           |   3 +
 xen/common/tmem_control.c   |  80 +++++++++++++---------
 xen/include/public/sysctl.h |  33 ++++++++-
 4 files changed, 174 insertions(+), 101 deletions(-)

diff --git a/tools/libxc/xc_tmem.c b/tools/libxc/xc_tmem.c
index 07fa101..cfa3fb6 100644
--- a/tools/libxc/xc_tmem.c
+++ b/tools/libxc/xc_tmem.c
@@ -18,6 +18,7 @@
  */
 
 #include "xc_private.h"
+#include <inttypes.h>
 #include <assert.h>
 #include <xen/tmem.h>
 
@@ -216,11 +217,11 @@ int xc_tmem_save(xc_interface *xch,
 {
     int marker = field_marker;
     int i, j, rc;
-    uint32_t flags;
     uint32_t minusone = -1;
-    uint32_t pool_id;
     struct tmem_handle *h;
     xen_sysctl_tmem_client_t info;
+    xen_sysctl_tmem_pool_info_t *pools;
+    char *buf = NULL;
 
     rc = xc_tmem_control(xch, 0, XEN_SYSCTL_TMEM_OP_SAVE_BEGIN,
                          dom, 0 /* len*/ , live, NULL);
@@ -233,72 +234,82 @@ int xc_tmem_save(xc_interface *xch,
         return rc;
     }
 
-    if ( write_exact(io_fd, &marker, sizeof(marker)) )
-        return -1;
-
     if ( xc_tmem_control(xch, 0 /* pool_id */,
                          XEN_SYSCTL_TMEM_OP_GET_CLIENT_INFO,
                          dom /* cli_id */, sizeof(info), 0 /* arg */,
                          &info) < 0 )
         return -1;
 
-    if ( write_exact(io_fd, &info, sizeof(info)) )
+    /* Nothing to do. */
+    if ( !info.nr_pools )
+        return 0;
+
+    pools = calloc(info.nr_pools, sizeof(*pools));
+    if ( !pools )
         return -1;
+
+    rc = xc_tmem_control(xch, 0 /* pool_id is ignored. */,
+                         XEN_SYSCTL_TMEM_OP_GET_POOLS,
+                         dom /* cli_id */, sizeof(*pools) * info.nr_pools,
+                         0 /* arg */, pools);
+
+    if ( rc < 0 || (uint32_t)rc > info.nr_pools )
+        goto out_memory;
+
+    /* Update it - we may have fewer pools between the two hypercalls. */
+    info.nr_pools = (uint32_t)rc;
+
+    if ( write_exact(io_fd, &marker, sizeof(marker)) )
+        goto out_memory;
+
+    if ( write_exact(io_fd, &info, sizeof(info)) )
+        goto out_memory;
+
     if ( write_exact(io_fd, &minusone, sizeof(minusone)) )
-        return -1;
-    for ( i = 0; i < info.maxpools; i++ )
+        goto out_memory;
+
+    for ( i = 0; i < info.nr_pools; i++ )
     {
-        uint64_t uuid[2];
-        uint32_t n_pages;
         uint32_t pagesize;
-        char *buf = NULL;
         int bufsize = 0;
         int checksum = 0;
+        xen_sysctl_tmem_pool_info_t *pool = &pools[i];
 
-        /* get pool id, flags, pagesize, n_pages, uuid */
-        flags = xc_tmem_control(xch,i,XEN_SYSCTL_TMEM_OP_SAVE_GET_POOL_FLAGS,dom,0,0,NULL);
-        if ( flags != -1 )
+        if ( pool->flags.raw != -1 )
         {
-            pool_id = i;
-            n_pages = xc_tmem_control(xch,i,XEN_SYSCTL_TMEM_OP_SAVE_GET_POOL_NPAGES,dom,0,0,NULL);
-            if ( !(flags & TMEM_POOL_PERSIST) )
-                n_pages = 0;
-            (void)xc_tmem_control(xch,i,XEN_SYSCTL_TMEM_OP_SAVE_GET_POOL_UUID,dom,sizeof(uuid),0,&uuid);
-            if ( write_exact(io_fd, &pool_id, sizeof(pool_id)) )
-                return -1;
-            if ( write_exact(io_fd, &flags, sizeof(flags)) )
-                return -1;
-            if ( write_exact(io_fd, &n_pages, sizeof(n_pages)) )
-                return -1;
-            if ( write_exact(io_fd, &uuid, sizeof(uuid)) )
-                return -1;
-            if ( n_pages == 0 )
+            if ( !pool->flags.u.persist )
+                pool->n_pages = 0;
+
+            if ( write_exact(io_fd, pool, sizeof(*pool)) )
+                goto out_memory;
+
+            if ( !pool->flags.u.persist )
                 continue;
 
-            pagesize = 1 << (((flags >> TMEM_POOL_PAGESIZE_SHIFT) &
-                              TMEM_POOL_PAGESIZE_MASK) + 12);
+            pagesize = 1 << (pool->flags.u.pagebits + 12);
             if ( pagesize > bufsize )
             {
                 bufsize = pagesize + sizeof(struct tmem_handle);
                 if ( (buf = realloc(buf,bufsize)) == NULL )
-                    return -1;
+                    goto out_memory;
             }
-            for ( j = n_pages; j > 0; j-- )
+            for ( j = pool->n_pages; j > 0; j-- )
             {
                 int ret;
-                if ( (ret = xc_tmem_control(xch, pool_id,
+                if ( (ret = xc_tmem_control(xch, pool->id,
                                             XEN_SYSCTL_TMEM_OP_SAVE_GET_NEXT_PAGE, dom,
                                             bufsize, 0, buf)) > 0 )
                 {
                     h = (struct tmem_handle *)buf;
                     if ( write_exact(io_fd, &h->oid, sizeof(h->oid)) )
-                        return -1;
+                        goto out_memory;
+
                     if ( write_exact(io_fd, &h->index, sizeof(h->index)) )
-                        return -1;
+                        goto out_memory;
                     h++;
                     checksum += *(char *)h;
                     if ( write_exact(io_fd, h, pagesize) )
-                        return -1;
+                        goto out_memory;
                 } else if ( ret == 0 ) {
                     continue;
                 } else {
@@ -306,14 +317,22 @@ int xc_tmem_save(xc_interface *xch,
                     h = (struct tmem_handle *)buf;
                     h->oid.oid[0] = h->oid.oid[1] = h->oid.oid[2] = -1L;
                     if ( write_exact(io_fd, &h->oid, sizeof(h->oid)) )
+                    {
+ out_memory:
+                        free(pools);
+                        free(buf);
                         return -1;
+                    }
                     break;
                 }
             }
-            DPRINTF("saved %d tmem pages for dom=%d pool=%d, checksum=%x\n",
-                         n_pages-j,dom,pool_id,checksum);
+            DPRINTF("saved %"PRId64" tmem pages for dom=%d pool=%d, checksum=%x\n",
+                    pool->n_pages - j, dom, pool->id, checksum);
         }
     }
+    free(pools);
+    free(buf);
+
     /* pool list terminator */
     minusone = -1;
     if ( write_exact(io_fd, &minusone, sizeof(minusone)) )
@@ -382,14 +401,19 @@ static int xc_tmem_restore_new_pool(
 
 int xc_tmem_restore(xc_interface *xch, int dom, int io_fd)
 {
-    uint32_t pool_id;
     uint32_t minusone;
-    uint32_t flags;
     xen_sysctl_tmem_client_t info;
     int checksum = 0;
+    unsigned int i;
+    char *buf = NULL;
 
     if ( read_exact(io_fd, &info, sizeof(info)) )
         return -1;
+
+    /* We would never save if there weren't any pools! */
+    if ( !info.nr_pools )
+        return -1;
+
     if ( xc_tmem_control(xch,0,XEN_SYSCTL_TMEM_OP_RESTORE_BEGIN,dom,0,0,NULL) < 0 )
         return -1;
 
@@ -401,62 +425,63 @@ int xc_tmem_restore(xc_interface *xch, int dom, int io_fd)
 
     if ( read_exact(io_fd, &minusone, sizeof(minusone)) )
         return -1;
-    while ( read_exact(io_fd, &pool_id, sizeof(pool_id)) == 0 && pool_id != -1 )
+
+    for ( i = 0; i < info.nr_pools; i++ )
     {
-        uint64_t uuid[2];
-        uint32_t n_pages;
-        char *buf = NULL;
         int bufsize = 0, pagesize;
         int j;
+        xen_sysctl_tmem_pool_info_t pool;
 
-        if ( read_exact(io_fd, &flags, sizeof(flags)) )
-            return -1;
-        if ( read_exact(io_fd, &n_pages, sizeof(n_pages)) )
-            return -1;
-        if ( read_exact(io_fd, &uuid, sizeof(uuid)) )
-            return -1;
-        if ( xc_tmem_restore_new_pool(xch, dom, pool_id,
-                                 flags, uuid[0], uuid[1]) < 0)
-            return -1;
-        if ( n_pages <= 0 )
+        if ( read_exact(io_fd, &pool, sizeof(pool)) )
+            goto out_memory;
+
+        if ( xc_tmem_restore_new_pool(xch, dom, pool.id, pool.flags.raw,
+                                      pool.uuid[0], pool.uuid[1]) < 0 )
+            goto out_memory;
+
+        if ( pool.n_pages <= 0 )
             continue;
 
-        pagesize = 1 << (((flags >> TMEM_POOL_PAGESIZE_SHIFT) &
-                              TMEM_POOL_PAGESIZE_MASK) + 12);
+        pagesize = 1 << (pool.flags.u.pagebits + 12);
         if ( pagesize > bufsize )
         {
             bufsize = pagesize;
             if ( (buf = realloc(buf,bufsize)) == NULL )
-                return -1;
+                goto out_memory;
         }
-        for ( j = n_pages; j > 0; j-- )
+        for ( j = pool.n_pages; j > 0; j-- )
         {
             struct xen_tmem_oid oid;
             uint32_t index;
             int rc;
+
             if ( read_exact(io_fd, &oid, sizeof(oid)) )
-                return -1;
+                goto out_memory;
+
             if ( oid.oid[0] == -1L && oid.oid[1] == -1L && oid.oid[2] == -1L )
                 break;
             if ( read_exact(io_fd, &index, sizeof(index)) )
-                return -1;
+                goto out_memory;
+
             if ( read_exact(io_fd, buf, pagesize) )
-                return -1;
+                goto out_memory;
+
             checksum += *buf;
-            if ( (rc = xc_tmem_control_oid(xch, pool_id,
+            if ( (rc = xc_tmem_control_oid(xch, pool.id,
                                            XEN_SYSCTL_TMEM_OP_RESTORE_PUT_PAGE, dom,
                                            bufsize, index, oid, buf)) <= 0 )
             {
                 DPRINTF("xc_tmem_restore: putting page failed, rc=%d\n",rc);
+ out_memory:
+                free(buf);
                 return -1;
             }
         }
-        if ( n_pages )
-            DPRINTF("restored %d tmem pages for dom=%d pool=%d, check=%x\n",
-                    n_pages-j,dom,pool_id,checksum);
+        if ( pool.n_pages )
+            DPRINTF("restored %"PRId64" tmem pages for dom=%d pool=%d, check=%x\n",
+                    pool.n_pages - j, dom, pool.id, checksum);
     }
-    if ( pool_id != -1 )
-        return -1;
+    free(buf);
 
     return 0;
 }
diff --git a/xen/common/tmem.c b/xen/common/tmem.c
index 294f0cd..79b853d 100644
--- a/xen/common/tmem.c
+++ b/xen/common/tmem.c
@@ -880,6 +880,7 @@ static void client_flush(struct client *client)
             continue;
         pool_flush(pool, client->cli_id);
         client->pools[i] = NULL;
+        client->info.nr_pools--;
     }
     client_free(client);
 }
@@ -1430,6 +1431,7 @@ static int do_tmem_destroy_pool(uint32_t pool_id)
         return 0;
     client->pools[pool_id] = NULL;
     pool_flush(pool, client->cli_id);
+    client->info.nr_pools--;
     return 1;
 }
 
@@ -1592,6 +1594,7 @@ static int do_tmem_new_pool(domid_t this_cli_id,
 
 out:
     tmem_client_info("pool_id=%d\n", d_poolid);
+    client->info.nr_pools ++;
     return d_poolid;
 
 fail:
diff --git a/xen/common/tmem_control.c b/xen/common/tmem_control.c
index cfbc8b0..a9960b7 100644
--- a/xen/common/tmem_control.c
+++ b/xen/common/tmem_control.c
@@ -276,6 +276,8 @@ static int __tmemc_set_var(struct client *client,
     if ( info.maxpools > MAX_POOLS_PER_DOMAIN )
         return -ERANGE;
 
+    /* Ignore info.nr_pools. */
+
     if ( info.weight != client->info.weight )
     {
         old_weight = client->info.weight;
@@ -342,46 +344,63 @@ static int tmemc_get_client_info(int cli_id,
     return 0;
 }
 
-static int tmemc_save_subop(int cli_id, uint32_t pool_id, uint32_t subop,
-                            XEN_GUEST_HANDLE_PARAM(void) buf, uint32_t arg)
+static int tmemc_get_pool(int cli_id,
+                          XEN_GUEST_HANDLE_PARAM(xen_sysctl_tmem_pool_info_t) pools,
+                          uint32_t len)
 {
     struct client *client = tmem_client_from_cli_id(cli_id);
-    struct tmem_pool *pool = (client == NULL || pool_id >= MAX_POOLS_PER_DOMAIN)
-                   ? NULL : client->pools[pool_id];
-    int rc = -1;
+    unsigned int i, idx;
+    int rc = 0;
+    unsigned int nr = len / sizeof(xen_sysctl_tmem_pool_info_t);
+
+    if ( len % sizeof(xen_sysctl_tmem_pool_info_t) )
+        return -EINVAL;
+
+    if ( nr >= MAX_POOLS_PER_DOMAIN )
+        return -E2BIG;
+
+    if ( !guest_handle_okay(pools, nr) )
+        return -EINVAL;
 
-    switch(subop)
+    if ( !client )
+        return -EINVAL;
+
+    for ( idx = 0, i = 0; i < MAX_POOLS_PER_DOMAIN; i++ )
     {
-    case XEN_SYSCTL_TMEM_OP_SAVE_GET_POOL_FLAGS:
-         if ( pool == NULL )
-             break;
-         rc = (pool->persistent ? TMEM_POOL_PERSIST : 0) |
+        struct tmem_pool *pool = client->pools[i];
+        xen_sysctl_tmem_pool_info_t out;
+
+        if ( pool == NULL )
+            continue;
+
+        out.flags.raw = (pool->persistent ? TMEM_POOL_PERSIST : 0) |
               (pool->shared ? TMEM_POOL_SHARED : 0) |
               (POOL_PAGESHIFT << TMEM_POOL_PAGESIZE_SHIFT) |
               (TMEM_SPEC_VERSION << TMEM_POOL_VERSION_SHIFT);
-        break;
-    case XEN_SYSCTL_TMEM_OP_SAVE_GET_POOL_NPAGES:
-         if ( pool == NULL )
-             break;
-        rc = _atomic_read(pool->pgp_count);
-        break;
-    case XEN_SYSCTL_TMEM_OP_SAVE_GET_POOL_UUID:
-         if ( pool == NULL )
-             break;
-        rc = 0;
-        if ( copy_to_guest(guest_handle_cast(buf, void), pool->uuid, 2) )
+        out.n_pages = _atomic_read(pool->pgp_count);
+        out.uuid[0] = pool->uuid[0];
+        out.uuid[1] = pool->uuid[1];
+        out.id = i;
+
+        /* N.B. 'idx' != 'i'. */
+        if ( __copy_to_guest_offset(pools, idx, &out, 1) )
+        {
             rc = -EFAULT;
-        break;
-    default:
-        rc = -1;
+            break;
+        }
+        idx ++;
+        /* Don't try to put more than what was requested. */
+        if ( idx >= nr )
+            break;
     }
-    return rc;
+
+    /* And how many we have processed. */
+    return rc ? : idx;
 }
 
 int tmem_control(struct xen_sysctl_tmem_op *op)
 {
     int ret;
-    uint32_t pool_id = op->pool_id;
     uint32_t cmd = op->cmd;
 
     if ( op->pad != 0 )
@@ -414,11 +433,10 @@ int tmem_control(struct xen_sysctl_tmem_op *op)
         ret = tmemc_get_client_info(op->cli_id,
                                     guest_handle_cast(op->u.client, xen_sysctl_tmem_client_t));
         break;
-    case XEN_SYSCTL_TMEM_OP_SAVE_GET_POOL_FLAGS:
-    case XEN_SYSCTL_TMEM_OP_SAVE_GET_POOL_NPAGES:
-    case XEN_SYSCTL_TMEM_OP_SAVE_GET_POOL_UUID:
-        ret = tmemc_save_subop(op->cli_id, pool_id, cmd,
-                               guest_handle_cast(op->u.buf, void), op->arg);
+    case XEN_SYSCTL_TMEM_OP_GET_POOLS:
+        ret = tmemc_get_pool(op->cli_id,
+                             guest_handle_cast(op->u.pool, xen_sysctl_tmem_pool_info_t),
+                             op->len);
         break;
     default:
         ret = do_tmem_control(op);
diff --git a/xen/include/public/sysctl.h b/xen/include/public/sysctl.h
index a9f95f8..8899c50 100644
--- a/xen/include/public/sysctl.h
+++ b/xen/include/public/sysctl.h
@@ -760,9 +760,7 @@ DEFINE_XEN_GUEST_HANDLE(xen_sysctl_psr_cat_op_t);
 #define XEN_SYSCTL_TMEM_OP_SAVE_BEGIN             10
 #define XEN_SYSCTL_TMEM_OP_GET_CLIENT_INFO        11
 #define XEN_SYSCTL_TMEM_OP_SET_CLIENT_INFO        12
-#define XEN_SYSCTL_TMEM_OP_SAVE_GET_POOL_FLAGS    16
-#define XEN_SYSCTL_TMEM_OP_SAVE_GET_POOL_NPAGES   17
-#define XEN_SYSCTL_TMEM_OP_SAVE_GET_POOL_UUID     18
+#define XEN_SYSCTL_TMEM_OP_GET_POOLS              16
 #define XEN_SYSCTL_TMEM_OP_SAVE_GET_NEXT_PAGE     19
 #define XEN_SYSCTL_TMEM_OP_SAVE_GET_NEXT_INV      20
 #define XEN_SYSCTL_TMEM_OP_SAVE_END               21
@@ -790,6 +788,7 @@ struct tmem_client {
     uint32_t version;   /* If mismatched we will get XEN_ENOSPC. */
     uint32_t maxpools;  /* If greater than what hypervisor supports, will get
                            XEN_ERANGE. */
+    uint32_t nr_pools;  /* Current amount of pools. Ignored on SET*/
     union {             /* See TMEM_CLIENT_[COMPRESS,FROZEN] */
         uint32_t raw;
         struct {
@@ -803,6 +802,31 @@ struct tmem_client {
 typedef struct tmem_client xen_sysctl_tmem_client_t;
 DEFINE_XEN_GUEST_HANDLE(xen_sysctl_tmem_client_t);
 
+/*
+ * XEN_SYSCTL_TMEM_OP_GET_POOLS uses the 'pool' array in
+ * xen_sysctl_tmem_op with this structure. The hypercall will
+ * return the number of entries in 'pool' or a negative value
+ * if an error was encountered.
+ */
+struct tmem_pool_info {
+    union {
+        uint32_t raw;
+        struct {
+            uint32_t persist:1,    /* See TMEM_POOL_PERSIST. */
+                     shared:1,     /* See TMEM_POOL_SHARED. */
+                     rsv:2,
+                     pagebits:8,   /* TMEM_POOL_PAGESIZE_[SHIFT,MASK]. */
+                     rsv2:12,
+                     version:8;    /* TMEM_POOL_VERSION_[SHIFT,MASK]. */
+        } u;
+    } flags;
+    uint32_t id;                  /* Less than tmem_client.maxpools. */
+    uint64_t n_pages;
+    uint64_t uuid[2];
+};
+typedef struct tmem_pool_info xen_sysctl_tmem_pool_info_t;
+DEFINE_XEN_GUEST_HANDLE(xen_sysctl_tmem_pool_info_t);
+
 struct xen_sysctl_tmem_op {
     uint32_t cmd;       /* IN: XEN_SYSCTL_TMEM_OP_* . */
     int32_t pool_id;    /* IN: 0 by default unless _SAVE_*, RESTORE_* .*/
@@ -817,6 +841,9 @@ struct xen_sysctl_tmem_op {
         XEN_GUEST_HANDLE_64(char) buf; /* IN/OUT: Buffer to save/restore */
         XEN_GUEST_HANDLE_64(xen_sysctl_tmem_client_t) client; /* IN/OUT for */
                         /*  XEN_SYSCTL_TMEM_OP_[GET,SAVE]_CLIENT. */
+        XEN_GUEST_HANDLE_64(xen_sysctl_tmem_pool_info_t) pool; /* OUT for */
+                        /* XEN_SYSCTL_TMEM_OP_GET_POOLS. Must have 'len' */
+                        /* of them. */
     } u;
 };
 typedef struct xen_sysctl_tmem_op xen_sysctl_tmem_op_t;
-- 
2.4.11


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply related	[flat|nested] 37+ messages in thread

* Re: [PATCH v1 09/12] tmem: Check version and maxpools when XEN_SYSCTL_TMEM_SET_CLIENT_INFO
  2016-09-28  9:42 ` [PATCH v1 09/12] tmem: Check version and maxpools when XEN_SYSCTL_TMEM_SET_CLIENT_INFO Konrad Rzeszutek Wilk
@ 2016-09-28 11:00   ` Wei Liu
  2016-09-28 12:58   ` Jan Beulich
  1 sibling, 0 replies; 37+ messages in thread
From: Wei Liu @ 2016-09-28 11:00 UTC (permalink / raw)
  To: Konrad Rzeszutek Wilk; +Cc: xen-devel, Ian Jackson, Wei Liu

On Wed, Sep 28, 2016 at 05:42:23AM -0400, Konrad Rzeszutek Wilk wrote:
> is called. If they are different from what the hypervisor
> can support we will get the appropriate errors.
> 
> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> 
> ---
> Cc: Ian Jackson <ian.jackson@eu.citrix.com>
> Cc: Wei Liu <wei.liu2@citrix.com>
> 
> v1: First submission.
> ---
>  tools/libxc/xc_tmem.c       | 1 -

Acked-by: Wei Liu <wei.liu2@citrix.com>
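
[Editorial sketch] The check this patch adds can be pictured with a small stand-alone model. Only the error mapping comes from the patch (the sysctl.h comments say a version mismatch yields XEN_ENOSPC and an oversized maxpools yields XEN_ERANGE); the constant values and the function name below are illustrative.

```c
#include <assert.h>
#include <stdint.h>

/*
 * Stand-alone sketch of the SET_CLIENT_INFO validation described above.
 * The constant values and helper name are illustrative; only the error
 * mapping (version mismatch -> ENOSPC, too-large maxpools -> ERANGE)
 * mirrors the comments on xen_sysctl_tmem_client_t.
 */
#define TMEM_SPEC_VERSION    1
#define MAX_POOLS_PER_DOMAIN 16
#define TMEM_ENOSPC 28
#define TMEM_ERANGE 34

static int validate_client_info(uint32_t version, uint32_t maxpools)
{
    if (version != TMEM_SPEC_VERSION)
        return -TMEM_ENOSPC;             /* caller sees XEN_ENOSPC */
    if (maxpools > MAX_POOLS_PER_DOMAIN)
        return -TMEM_ERANGE;             /* caller sees XEN_ERANGE */
    return 0;
}
```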


* Re: [PATCH v1 01/12] libxc/tmem/restore: Remove call to XEN_SYSCTL_TMEM_OP_SAVE_GET_VERSION
  2016-09-28  9:42 ` [PATCH v1 01/12] libxc/tmem/restore: Remove call to XEN_SYSCTL_TMEM_OP_SAVE_GET_VERSION Konrad Rzeszutek Wilk
@ 2016-09-28 11:00   ` Wei Liu
  0 siblings, 0 replies; 37+ messages in thread
From: Wei Liu @ 2016-09-28 11:00 UTC (permalink / raw)
  To: Konrad Rzeszutek Wilk; +Cc: xen-devel, Ian Jackson, Wei Liu

On Wed, Sep 28, 2016 at 05:42:15AM -0400, Konrad Rzeszutek Wilk wrote:
> The only thing this hypercall returns is TMEM_SPEC_VERSION.
> 
> The comment around is also misleading - this call does not
> do any domain operation.
> 
> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

Acked-by: Wei Liu <wei.liu2@citrix.com>


* Re: [PATCH v1 02/12] tmem: Retire XEN_SYSCTL_TMEM_OP_[SET_CAP|SAVE_GET_CLIENT_CAP]
  2016-09-28  9:42 ` [PATCH v1 02/12] tmem: Retire XEN_SYSCTL_TMEM_OP_[SET_CAP|SAVE_GET_CLIENT_CAP] Konrad Rzeszutek Wilk
@ 2016-09-28 11:00   ` Wei Liu
  2016-09-28 15:03     ` Konrad Rzeszutek Wilk
  2016-09-28 12:10   ` Jan Beulich
  1 sibling, 1 reply; 37+ messages in thread
From: Wei Liu @ 2016-09-28 11:00 UTC (permalink / raw)
  To: Konrad Rzeszutek Wilk; +Cc: xen-devel, Ian Jackson, Wei Liu

On Wed, Sep 28, 2016 at 05:42:16AM -0400, Konrad Rzeszutek Wilk wrote:
> It is not used by anything.
> 
> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

Acked-by: Wei Liu <wei.liu2@citrix.com>

> diff --git a/xen/include/public/sysctl.h b/xen/include/public/sysctl.h
> index 8197c14..847ec35 100644
> --- a/xen/include/public/sysctl.h
> +++ b/xen/include/public/sysctl.h
> @@ -757,14 +757,12 @@ DEFINE_XEN_GUEST_HANDLE(xen_sysctl_psr_cat_op_t);
>  #define XEN_SYSCTL_TMEM_OP_DESTROY                3
>  #define XEN_SYSCTL_TMEM_OP_LIST                   4
>  #define XEN_SYSCTL_TMEM_OP_SET_WEIGHT             5
> -#define XEN_SYSCTL_TMEM_OP_SET_CAP                6

I somehow have the impression that we should add a comment here saying
that this number is now reserved -- but the reality disagrees with me.
:-)

>  #define XEN_SYSCTL_TMEM_OP_SET_COMPRESS           7
>  #define XEN_SYSCTL_TMEM_OP_QUERY_FREEABLE_MB      8
>  #define XEN_SYSCTL_TMEM_OP_SAVE_BEGIN             10
>  #define XEN_SYSCTL_TMEM_OP_SAVE_GET_VERSION       11
>  #define XEN_SYSCTL_TMEM_OP_SAVE_GET_MAXPOOLS      12
>  #define XEN_SYSCTL_TMEM_OP_SAVE_GET_CLIENT_WEIGHT 13
> -#define XEN_SYSCTL_TMEM_OP_SAVE_GET_CLIENT_CAP    14
>  #define XEN_SYSCTL_TMEM_OP_SAVE_GET_CLIENT_FLAGS  15
>  #define XEN_SYSCTL_TMEM_OP_SAVE_GET_POOL_FLAGS    16
>  #define XEN_SYSCTL_TMEM_OP_SAVE_GET_POOL_NPAGES   17
> -- 
> 2.4.11
> 


* Re: [PATCH v1 08/12] tmem: Handle 'struct tmem_info' as a seperate field in the
  2016-09-28  9:42 ` [PATCH v1 08/12] tmem: Handle 'struct tmem_info' as a seperate field in the Konrad Rzeszutek Wilk
@ 2016-09-28 11:00   ` Wei Liu
  2016-09-28 12:56   ` Jan Beulich
  1 sibling, 0 replies; 37+ messages in thread
From: Wei Liu @ 2016-09-28 11:00 UTC (permalink / raw)
  To: Konrad Rzeszutek Wilk; +Cc: xen-devel, Ian Jackson, Wei Liu

On Wed, Sep 28, 2016 at 05:42:22AM -0400, Konrad Rzeszutek Wilk wrote:
> the struct tmem_op.
> 
> No functional change. But it makes the code so much easier
> to read.
> 
> Note: We still have to do this awkward 'guest_handle_cast'
> otherwise it will not compile on ARM - which defines _two_
> of these macros (__guest_handle_64_xen_sysctl_tmem_client_t
> and __guest_handle_xen_sysctl_tmem_client_t). And if cast is
> not used then a compile error comes up as we use the wrong one.
> 
> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> ---
> Cc: Ian Jackson <ian.jackson@eu.citrix.com>
> Cc: Wei Liu <wei.liu2@citrix.com>
> 
> v1: First submission.
> ---
>  tools/libxc/xc_tmem.c       |  4 +--

Acked-by: Wei Liu <wei.liu2@citrix.com>


* Re: [PATCH v1 07/12] tmem/libxc: Squash XEN_SYSCTL_TMEM_OP_[SET|SAVE]..
  2016-09-28  9:42 ` [PATCH v1 07/12] tmem/libxc: Squash XEN_SYSCTL_TMEM_OP_[SET|SAVE] Konrad Rzeszutek Wilk
@ 2016-09-28 11:06   ` Wei Liu
  2016-09-28 12:50   ` Jan Beulich
  1 sibling, 0 replies; 37+ messages in thread
From: Wei Liu @ 2016-09-28 11:06 UTC (permalink / raw)
  To: Konrad Rzeszutek Wilk; +Cc: xen-devel, Ian Jackson, Wei Liu

On Wed, Sep 28, 2016 at 05:42:21AM -0400, Konrad Rzeszutek Wilk wrote:
> Specifically:
> 
> XEN_SYSCTL_TMEM_OP_SET_[WEIGHT,COMPRESS] are now done via:
> 
>  XEN_SYSCTL_TMEM_SET_CLIENT_INFO
> 
> and XEN_SYSCTL_TMEM_OP_SAVE_GET_[VERSION,MAXPOOLS,
> CLIENT_WEIGHT, CLIENT_FLAGS] can now be retrieved via:
> 
>  XEN_SYSCTL_TMEM_GET_CLIENT_INFO
> 
> All this information is now in 'struct tmem_client' and
> that is what we pass around.
> 
> Hypervisor wise we had to do a bit of casting. The
> 'struct xen_sysctl_tmem_op'->buf variable is a pointer to
> char. Casting that using the guest_handle_cast to a structure
> (struct tmem_client) does not work. Hence we first have to
> cast it a void and then to xen_sysctl_tmem_client_t.
> This could be improved by having an union and in fact the
> patch titled:
> "tmem: Handle 'struct tmem_info' as a seperate field in the"
> does that. It could be squashed in here but that can make
> it harder to review.
> 
> On the toolstack, prior to this patch, the xc_tmem_control
> would use the bounce buffer only when arg1 was set and the cmd
> was to list. With the 'XEN_SYSCTL_TMEM_OP_SET_[WEIGHT|COMPRESS]'
> that made sense as the 'arg1' would have the value. However
> for the other ones (say XEN_SYSCTL_TMEM_OP_SAVE_GET_POOL_UUID)
> the 'arg1' would be the length of the 'buf'. If this
> confusing don't despair, patch patch titled:
> tmem/xc_tmem_control: Rename 'arg1' to 'len' and 'arg2' to arg.
> takes care of that.
> 
> The acute reader of the toolstack code will discover that
> we only used the bounce buffer for LIST, not for any other
> subcommands that used 'buf'!?! Which means that the contents
> of 'buf' would never be copied back to the caller's 'buf'!
> 
> I am not sure how this could possibly work, perhaps Xen 4.1
> (when this was introduced) was more relaxed about the bounce buffer
> being enabled. Anyhow this fixes xc_tmem_control to do it for
> any subcommand that has 'arg1'.
> 
> Lastly some of the checks in xc_tmem_[restore|save] are removed
> as they can't ever be reached (not even sure how they could
> have been reached in the original submission). One of them
> is the check for the weight against -1 when in fact the
> hypervisor would never have provided that value.
> 
> Now the checks are simple - as the hypercall always returns
> ->version and ->maxpools (which is mirroring how it was done
> prior to this patch). But if one wants to check whether a guest
> has any tmem activity then the patch titled
> "tmem: Batch and squash XEN_SYSCTL_TMEM_OP_SAVE_GET_POOL_
> [FLAGS,NPAGES,UUID] in one sub-call: XEN_SYSCTL_TMEM_OP_GET_POOLS."
> adds an ->nr_pools to check for that.
> 
> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> ---
> ---
> Cc: Ian Jackson <ian.jackson@eu.citrix.com>
> Cc: Wei Liu <wei.liu2@citrix.com>
> 
> v1: New submission.
> ---
>  tools/libxc/xc_tmem.c             | 72 ++++++++++++----------------
>  tools/libxl/libxl.c               | 26 +++++++---
>  tools/python/xen/lowlevel/xc/xc.c |  2 -

Acked-by: Wei Liu <wei.liu2@citrix.com>

I welcome the effort to clean up tmem but I don't have any knowledge how
it works, nor can I test your code, so I can only trust your judgement
on this.
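
[Editorial sketch] For readers unfamiliar with the bounce-buffer bug described in the commit message, here is a toy model of the pattern. This is NOT the real libxc API (that uses DECLARE_HYPERCALL_BOUNCE and xc_hypercall_bounce_pre/post); every name below is made up, purely to show why OUT data only reaches the caller if the bounce buffer is copied back after the hypercall.

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/*
 * Toy model of the bounce-buffer pattern discussed above. All names are
 * illustrative; the point is the copy-back step that xc_tmem_control was
 * missing for subcommands other than LIST.
 */
static void fake_hypercall(char *bounce, size_t len)
{
    memset(bounce, 'P', len);        /* pretend the hypervisor filled it */
}

static int toy_tmem_control(char *user_buf, size_t len)
{
    char *bounce = malloc(len);

    if (!bounce)
        return -1;
    memcpy(bounce, user_buf, len);   /* bounce "in" */
    fake_hypercall(bounce, len);
    memcpy(user_buf, bounce, len);   /* bounce "out": the missing step */
    free(bounce);
    return 0;
}
```

Without the second memcpy(), the hypervisor's output dies with the bounce buffer, which is exactly the symptom the commit message describes.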


* Re: [PATCH v1 10/12] tmem: Unify XEN_SYSCTL_TMEM_OP_[[SAVE_[BEGIN|END]|RESTORE_BEGIN]
  2016-09-28  9:42 ` [PATCH v1 10/12] tmem: Unify XEN_SYSCTL_TMEM_OP_[[SAVE_[BEGIN|END]|RESTORE_BEGIN] Konrad Rzeszutek Wilk
@ 2016-09-28 11:06   ` Wei Liu
  2016-09-28 13:00   ` Jan Beulich
  1 sibling, 0 replies; 37+ messages in thread
From: Wei Liu @ 2016-09-28 11:06 UTC (permalink / raw)
  To: Konrad Rzeszutek Wilk; +Cc: xen-devel, Ian Jackson, Wei Liu

On Wed, Sep 28, 2016 at 05:42:24AM -0400, Konrad Rzeszutek Wilk wrote:
> return values. For success they used to be 1 ([SAVE,RESTORE]_BEGIN),
> 0 if guest did not have any tmem (but only for SAVE_BEGIN), and
> -1 for any type of failure.
> 
> And SAVE_END (which you would think would mirror SAVE_BEGIN)
> had 0 for success and -1 if the guest did not have any tmem enabled.
> 
> This is confusing. Now the code will return 0 if the operation was
> successful.  Various XEN_EXX values are returned if tmem is not enabled
> or the operation could not be performed.
> 
> The xc_tmem.c code only needs one place to check - where we use
> SAVE_BEGIN. The place where RESTORE_BEGIN is used will have errno
> with the proper error value and return will be -1, so will still
> fail properly.
> 
> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> 
> ---
> Cc: Ian Jackson <ian.jackson@eu.citrix.com>
> Cc: Wei Liu <wei.liu2@citrix.com>
> 
> v1: First submission
> ---
>  tools/libxc/xc_tmem.c | 14 +++++++++++---

Acked-by: Wei Liu <wei.liu2@citrix.com>


* Re: [PATCH v1 11/12] tmem/xc_tmem_control: Rename 'arg1' to 'len' and 'arg2' to arg.
  2016-09-28  9:42 ` [PATCH v1 11/12] tmem/xc_tmem_control: Rename 'arg1' to 'len' and 'arg2' to arg Konrad Rzeszutek Wilk
@ 2016-09-28 11:07   ` Wei Liu
  0 siblings, 0 replies; 37+ messages in thread
From: Wei Liu @ 2016-09-28 11:07 UTC (permalink / raw)
  To: Konrad Rzeszutek Wilk; +Cc: xen-devel, Ian Jackson, Wei Liu

On Wed, Sep 28, 2016 at 05:42:25AM -0400, Konrad Rzeszutek Wilk wrote:
> That is what they are used for. Lets make it more clear.
> 
> Of all the various sub-commands, the only one that needed
> semantic change is XEN_SYSCTL_TMEM_OP_SAVE_BEGIN. That in the
> past used 'arg1', and now we are moving it to use 'arg'.
> Since that code is only used during migration which is tied
> to the toolstack it is OK to change it.
> 
> While at it, also fix xc_tmem_control_oid to properly handle
> the 'buf' and bounce it as appropriate.
> 
> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> 
> ---
> Cc: Ian Jackson <ian.jackson@eu.citrix.com>
> Cc: Wei Liu <wei.liu2@citrix.com>
> 
> v1: First submission.
> ---
>  tools/libxc/include/xenctrl.h     |  4 ++--
>  tools/libxc/xc_tmem.c             | 37 ++++++++++++++++++-------------------
>  tools/libxl/libxl.c               |  4 ++--
>  tools/python/xen/lowlevel/xc/xc.c | 16 ++++++++--------

Acked-by: Wei Liu <wei.liu2@citrix.com>


* Re: [PATCH v1 12/12] tmem: Batch and squash XEN_SYSCTL_TMEM_OP_SAVE_GET_POOL_[FLAGS, NPAGES, UUID] in one sub-call: XEN_SYSCTL_TMEM_OP_GET_POOLS.
  2016-09-28  9:42 ` [PATCH v1 12/12] tmem: Batch and squash XEN_SYSCTL_TMEM_OP_SAVE_GET_POOL_[FLAGS, NPAGES, UUID] in one sub-call: XEN_SYSCTL_TMEM_OP_GET_POOLS Konrad Rzeszutek Wilk
@ 2016-09-28 11:07   ` Wei Liu
  2016-09-28 13:11   ` Jan Beulich
  1 sibling, 0 replies; 37+ messages in thread
From: Wei Liu @ 2016-09-28 11:07 UTC (permalink / raw)
  To: Konrad Rzeszutek Wilk; +Cc: xen-devel, Ian Jackson, Wei Liu

On Wed, Sep 28, 2016 at 05:42:26AM -0400, Konrad Rzeszutek Wilk wrote:
> These operations are used during the save process of migration.
> Instead of doing 64 hypercalls lets do just one. We modify
> the 'struct tmem_client' structure (used in
> XEN_SYSCTL_TMEM_OP_[GET|SET]_CLIENT_INFO) to have an extra field
> 'nr_pools'. Armed with that the code slurping up pages from the
> hypervisor can allocate a big enough structure (struct tmem_pool_info)
> to contain all the active pools. And then just iterate over each
> one and save it in the stream.
> 
> In the xc_tmem_[save|restore] we also added proper memory handling
> of the 'buf' and 'pools'. Because of the loops and to make it as
> easy as possible to review we add a goto label and for almost
> all error conditions jump in it.
> 
> The include for inttypes is required for the PRId64 macro to
> work (which is needed to compile this code under 32-bit).
> 
> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> 
> ---
> Cc: Ian Jackson <ian.jackson@eu.citrix.com>
> Cc: Wei Liu <wei.liu2@citrix.com>
> 
> v1: First submission.
> ---
>  tools/libxc/xc_tmem.c       | 159 +++++++++++++++++++++++++-------------------

Acked-by: Wei Liu <wei.liu2@citrix.com>
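
[Editorial sketch] The hypervisor-side sanity checks of the new batched call are simple enough to model in isolation. The struct layout follows the tmem_pool_info added to sysctl.h in this patch; the value of MAX_POOLS_PER_DOMAIN and the helper name are illustrative assumptions.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/*
 * Sketch of the 'len' sanity checks in the new tmemc_get_pool():
 * 'len' must describe a whole number of records and may not ask for
 * more pools than a client can own. MAX_POOLS_PER_DOMAIN's value and
 * the helper name are illustrative.
 */
#define MAX_POOLS_PER_DOMAIN 16

struct pool_info {          /* layout as in the new tmem_pool_info */
    uint32_t flags_raw;
    uint32_t id;
    uint64_t n_pages;
    uint64_t uuid[2];
};

/* Number of records 'len' can hold, or a negative value on bad input. */
static int pool_records(uint32_t len)
{
    unsigned int nr = len / sizeof(struct pool_info);

    if (len % sizeof(struct pool_info))
        return -1;                   /* hypervisor returns -EINVAL */
    if (nr >= MAX_POOLS_PER_DOMAIN)
        return -2;                   /* hypervisor returns -E2BIG */
    return (int)nr;
}
```

The toolstack side then sizes a single buffer as nr_pools records (learned from GET_CLIENT_INFO) and makes one hypercall instead of three per pool.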


* Re: [PATCH v1 02/12] tmem: Retire XEN_SYSCTL_TMEM_OP_[SET_CAP|SAVE_GET_CLIENT_CAP]
  2016-09-28  9:42 ` [PATCH v1 02/12] tmem: Retire XEN_SYSCTL_TMEM_OP_[SET_CAP|SAVE_GET_CLIENT_CAP] Konrad Rzeszutek Wilk
  2016-09-28 11:00   ` Wei Liu
@ 2016-09-28 12:10   ` Jan Beulich
  2016-09-30 14:04     ` Konrad Rzeszutek Wilk
  1 sibling, 1 reply; 37+ messages in thread
From: Jan Beulich @ 2016-09-28 12:10 UTC (permalink / raw)
  To: konrad, Konrad Rzeszutek Wilk; +Cc: Ian Jackson, Wei Liu, xen-devel

>>> On 28.09.16 at 11:42, <konrad.wilk@oracle.com> wrote:
> It is not used by anything.

But that shouldn't be the only aspect. Are they also not useful for
anything?

> --- a/xen/common/tmem_control.c
> +++ b/xen/common/tmem_control.c
> @@ -103,9 +103,9 @@ static int tmemc_list_client(struct client *c, tmem_cli_va_param_t buf,
>      struct tmem_pool *p;
>      bool_t s;
>  
> -    n = scnprintf(info,BSIZE,"C=CI:%d,ww:%d,ca:%d,co:%d,fr:%d,"
> +    n = scnprintf(info,BSIZE,"C=CI:%d,ww:%d,co:%d,fr:%d,"
>          "Tc:%"PRIu64",Ge:%ld,Pp:%ld,Gp:%ld%c",
> -        c->cli_id, c->weight, c->cap, c->compress, c->frozen,
> +        c->cli_id, c->weight, c->compress, c->frozen,
>          c->total_cycles, c->succ_eph_gets, c->succ_pers_puts, c->succ_pers_gets,
>          use_long ? ',' : '\n');
>      if (use_long)
> @@ -273,11 +273,6 @@ static int __tmemc_set_var(struct client *client, uint32_t subop, uint32_t arg1)
>          atomic_sub(old_weight,&tmem_global.client_weight_total);
>          atomic_add(client->weight,&tmem_global.client_weight_total);
>          break;
> -    case XEN_SYSCTL_TMEM_OP_SET_CAP:
> -        client->cap = arg1;
> -        tmem_client_info("tmem: cap set to %d for %s=%d\n",
> -                        arg1, tmem_cli_id_str, cli_id);
> -        break;
>      case XEN_SYSCTL_TMEM_OP_SET_COMPRESS:
>          if ( tmem_dedup_enabled() )
>          {
> @@ -341,11 +336,6 @@ static int tmemc_save_subop(int cli_id, uint32_t pool_id,
>              break;
>          rc = client->weight == -1 ? -2 : client->weight;
>          break;
> -    case XEN_SYSCTL_TMEM_OP_SAVE_GET_CLIENT_CAP:
> -        if ( client == NULL )
> -            break;
> -        rc = client->cap == -1 ? -2 : client->cap;
> -        break;
>      case XEN_SYSCTL_TMEM_OP_SAVE_GET_CLIENT_FLAGS:
>          if ( client == NULL )
>              break;

It looks like you're removing all accesses to the cap field. That would
suggest that you now want to also remove the field itself.

> --- a/xen/include/public/sysctl.h
> +++ b/xen/include/public/sysctl.h
> @@ -757,14 +757,12 @@ DEFINE_XEN_GUEST_HANDLE(xen_sysctl_psr_cat_op_t);
>  #define XEN_SYSCTL_TMEM_OP_DESTROY                3
>  #define XEN_SYSCTL_TMEM_OP_LIST                   4
>  #define XEN_SYSCTL_TMEM_OP_SET_WEIGHT             5
> -#define XEN_SYSCTL_TMEM_OP_SET_CAP                6
>  #define XEN_SYSCTL_TMEM_OP_SET_COMPRESS           7
>  #define XEN_SYSCTL_TMEM_OP_QUERY_FREEABLE_MB      8
>  #define XEN_SYSCTL_TMEM_OP_SAVE_BEGIN             10
>  #define XEN_SYSCTL_TMEM_OP_SAVE_GET_VERSION       11
>  #define XEN_SYSCTL_TMEM_OP_SAVE_GET_MAXPOOLS      12
>  #define XEN_SYSCTL_TMEM_OP_SAVE_GET_CLIENT_WEIGHT 13
> -#define XEN_SYSCTL_TMEM_OP_SAVE_GET_CLIENT_CAP    14
>  #define XEN_SYSCTL_TMEM_OP_SAVE_GET_CLIENT_FLAGS  15
>  #define XEN_SYSCTL_TMEM_OP_SAVE_GET_POOL_FLAGS    16
>  #define XEN_SYSCTL_TMEM_OP_SAVE_GET_POOL_NPAGES   17

I think such removals should be accompanied by bumping
XEN_SYSCTL_INTERFACE_VERSION, albeit it's obviously not as
relevant as it would be when changing some structure's layout.

Jan



* Re: [PATCH v1 03/12] tmem: Wrap tmem dedup code with CONFIG_TMEM_DEDUP
  2016-09-28  9:42 ` [PATCH v1 03/12] tmem: Wrap tmem dedup code with CONFIG_TMEM_DEDUP Konrad Rzeszutek Wilk
@ 2016-09-28 12:18   ` Jan Beulich
  0 siblings, 0 replies; 37+ messages in thread
From: Jan Beulich @ 2016-09-28 12:18 UTC (permalink / raw)
  To: konrad, Konrad Rzeszutek Wilk; +Cc: xen-devel

>>> On 28.09.16 at 11:42, <konrad.wilk@oracle.com> wrote:
> This config is not defined anywhere but it makes it way
> easier to figure out what code to deal with.

And the functionality thus removed is
- dead code,
- broken,
- anything else?

> --- a/xen/common/tmem.c
> +++ b/xen/common/tmem.c
> @@ -71,10 +71,14 @@ struct tmem_page_descriptor {
>      pagesize_t size; /* 0 == PAGE_SIZE (pfp), -1 == data invalid,
>                      else compressed data (cdata). */
>      uint32_t index;
> +#ifdef CONFIG_TMEM_DEDUP
>      /* Must hold pcd_tree_rwlocks[firstbyte] to use pcd pointer/siblings. */
>      uint16_t firstbyte; /* NON_SHAREABLE->pfp  otherwise->pcd. */
> +#endif
>      bool_t eviction_attempted;  /* CHANGE TO lifetimes? (settable). */
> +#ifdef CONFIG_TMEM_DEDUP
>      struct list_head pcd_siblings;
> +#endif

Perhaps worth moving the former field down, so you end up with
just one #ifdef?

> @@ -1085,6 +1101,8 @@ static struct client *client_create(domid_t cli_id)
>  
>      client->cli_id = cli_id;
>      client->compress = tmem_compression_enabled();
> +#ifdef CONFIG_TMEM_DEDUP
> +#endif

???

> @@ -2362,29 +2397,41 @@ unsigned long tmem_freeable_pages(void)
>  /* Called at hypervisor startup. */
>  static int __init init_tmem(void)
>  {
> -    int i;
>      if ( !tmem_enabled() )
>          return 0;
>  
> +#ifdef CONFIG_TMEM_DEDUP
>      if ( tmem_dedup_enabled() )
> +    {
> +        unsigned int i;
> +
>          for (i = 0; i < 256; i++ )
>          {
>              pcd_tree_roots[i] = RB_ROOT;
>              rwlock_init(&pcd_tree_rwlocks[i]);
>          }
> -
> +    }
> +#endif
>      if ( !tmem_mempool_init() )
>          return 0;
>  
>      if ( tmem_init() )
>      {
>          printk("tmem: initialized comp=%d dedup=%d tze=%d\n",
> -            tmem_compression_enabled(), tmem_dedup_enabled(), tmem_tze_enabled());
> +               tmem_compression_enabled(),
> +#ifdef CONFIG_TMEM_DEDUP
> +               tmem_dedup_enabled(),
> +#else
> +               0,
> +#endif

Why can't tmem_dedup_enabled() simply return constant zero when
CONFIG_TMEM_DEDUP is undefined?

> --- a/xen/common/tmem_xen.c
> +++ b/xen/common/tmem_xen.c
> @@ -20,8 +20,10 @@ boolean_param("tmem", opt_tmem);
>  bool_t __read_mostly opt_tmem_compress = 0;
>  boolean_param("tmem_compress", opt_tmem_compress);
>  
> +#ifdef CONFIG_TMEM_DEDUP
>  bool_t __read_mostly opt_tmem_dedup = 0;
>  boolean_param("tmem_dedup", opt_tmem_dedup);
> +#endif

This option's documentation entry would seem to need some
adjustment then too, albeit it's not really clear what would be best
here.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [PATCH v1 04/12] tmem: Wrap tmem tze code with CONFIG_TMEM_TZE
  2016-09-28  9:42 ` [PATCH v1 04/12] tmem: Wrap tmem tze code with CONFIG_TMEM_TZE Konrad Rzeszutek Wilk
@ 2016-09-28 12:19   ` Jan Beulich
  0 siblings, 0 replies; 37+ messages in thread
From: Jan Beulich @ 2016-09-28 12:19 UTC (permalink / raw)
  To: konrad, Konrad Rzeszutek Wilk; +Cc: xen-devel

>>> On 28.09.16 at 11:42, <konrad.wilk@oracle.com> wrote:
> . which is actually dependent on CONFIG_TMEM_DEDUP

Same comments as for patch 3.

Jan



* Re: [PATCH v1 05/12] tmem: Delete deduplication (and tze) code.
  2016-09-28  9:42 ` [PATCH v1 05/12] tmem: Delete deduplication (and tze) code Konrad Rzeszutek Wilk
@ 2016-09-28 12:34   ` Jan Beulich
  2016-09-28 15:05     ` Konrad Rzeszutek Wilk
  0 siblings, 1 reply; 37+ messages in thread
From: Jan Beulich @ 2016-09-28 12:34 UTC (permalink / raw)
  To: konrad, Konrad Rzeszutek Wilk; +Cc: xen-devel

>>> On 28.09.16 at 11:42, <konrad.wilk@oracle.com> wrote:
> Couple of reasons:
>  - It can lead to security issues (see row-hammer, KSM and such
>    attacks).
>  - Code is quite complex.
>  - Deduplication is good if the pages themselves are the same
>    but that is hardly guaranteed.
>  - We got some gains (if pages are deduped) but at the cost of
>    making code less maintainable.
>  - tze depends on deduplication code.
> 
> As such, deleting it.

So why the prior two patches then? That way you've got more to
delete here than without.

Jan



* Re: [PATCH v1 06/12] tmem: Move client weight, frozen, live_migrating, and compress
  2016-09-28  9:42 ` [PATCH v1 06/12] tmem: Move client weight, frozen, live_migrating, and compress Konrad Rzeszutek Wilk
@ 2016-09-28 12:39   ` Jan Beulich
  0 siblings, 0 replies; 37+ messages in thread
From: Jan Beulich @ 2016-09-28 12:39 UTC (permalink / raw)
  To: konrad, Konrad Rzeszutek Wilk; +Cc: xen-devel

>>> On 28.09.16 at 11:42, <konrad.wilk@oracle.com> wrote:
> --- a/xen/include/public/sysctl.h
> +++ b/xen/include/public/sysctl.h
> @@ -785,6 +785,20 @@ struct tmem_handle {
>      xen_tmem_oid_t oid;
>  };
>  
> +struct tmem_client {

I guess there's a reason for this to get added to the public header,
but if it really is meant to go here the structure tag should gain a
xen_ prefix.

> +    uint32_t version;
> +    uint32_t maxpools;

Except for their initialization in client_create() these fields are
unused. What's the deal?

Jan



* Re: [PATCH v1 07/12] tmem/libxc: Squash XEN_SYSCTL_TMEM_OP_[SET|SAVE]..
  2016-09-28  9:42 ` [PATCH v1 07/12] tmem/libxc: Squash XEN_SYSCTL_TMEM_OP_[SET|SAVE] Konrad Rzeszutek Wilk
  2016-09-28 11:06   ` Wei Liu
@ 2016-09-28 12:50   ` Jan Beulich
  1 sibling, 0 replies; 37+ messages in thread
From: Jan Beulich @ 2016-09-28 12:50 UTC (permalink / raw)
  To: konrad, Konrad Rzeszutek Wilk; +Cc: Ian Jackson, Wei Liu, xen-devel

>>> On 28.09.16 at 11:42, <konrad.wilk@oracle.com> wrote:
> Hypervisor wise we had to do a bit of casting. The
> 'struct xen_sysctl_tmem_op'->buf variable is a pointer to
> char. Casting that using the guest_handle_cast to a structure
> (struct tmem_client) does not work. Hence we first have to
> cast it to void and then to xen_sysctl_tmem_client_t.
> This could be improved by having an union and in fact the
> patch titled:
> "tmem: Handle 'struct tmem_info' as a seperate field in the"
> does that. It could be squashed in here but that can make
> it harder to review.

Perhaps appropriate parts of that should then be made a prereq
patch for this one?

> --- a/xen/common/tmem_control.c
> +++ b/xen/common/tmem_control.c
> @@ -258,43 +258,58 @@ static int tmemc_list(domid_t cli_id, tmem_cli_va_param_t buf, uint32_t len,
>      return 0;
>  }
>  
> -static int __tmemc_set_var(struct client *client, uint32_t subop, uint32_t arg1)
> +static int __tmemc_set_var(struct client *client, uint32_t subop,
> +                           XEN_GUEST_HANDLE_PARAM(xen_sysctl_tmem_client_t) buf)
>  {
>      domid_t cli_id = client->cli_id;
>      uint32_t old_weight;
> +    xen_sysctl_tmem_client_t info = { };
>  
> -    switch (subop)
> +    ASSERT(client);
> +
> +    if ( subop != XEN_SYSCTL_TMEM_OP_SET_CLIENT_INFO )
> +    {
> +        tmem_client_warn("tmem: unknown subop %d for tmemc_set_var\n", subop);
> +        return -1;
> +    }
> +    if ( copy_from_guest(&info, buf, 1) )
> +        return -EFAULT;
> +
> +    if ( info.weight != client->info.weight )

Ahead of this - no need to check at least the version field (which I
think would have been more obvious if the field got added in this
patch rather than in the earlier one)?

> +    case XEN_SYSCTL_TMEM_OP_GET_CLIENT_INFO:
> +        rc = 0;
> +        info = guest_handle_cast(buf, xen_sysctl_tmem_client_t);
> +        if ( client )
> +        {
> +            if ( copy_to_guest(info, &client->info, 1) )
> +                rc = -EFAULT;
> +        }
> +        else
> +        {
> +            xen_sysctl_tmem_client_t generic = { .version = TMEM_SPEC_VERSION,
> +                                                 .maxpools = MAX_POOLS_PER_DOMAIN };

static const?

> +typedef struct tmem_client xen_sysctl_tmem_client_t;

Please have typedef name and structure tag match (except
obviously for the trailing _t); in particular I'm not sure about
the need for the _sysctl infix.

Jan



* Re: [PATCH v1 08/12] tmem: Handle 'struct tmem_info' as a seperate field in the
  2016-09-28  9:42 ` [PATCH v1 08/12] tmem: Handle 'struct tmem_info' as a seperate field in the Konrad Rzeszutek Wilk
  2016-09-28 11:00   ` Wei Liu
@ 2016-09-28 12:56   ` Jan Beulich
  2016-09-30 14:36     ` Konrad Rzeszutek Wilk
  1 sibling, 1 reply; 37+ messages in thread
From: Jan Beulich @ 2016-09-28 12:56 UTC (permalink / raw)
  To: konrad, Konrad Rzeszutek Wilk; +Cc: Ian Jackson, Wei Liu, xen-devel

>>> On 28.09.16 at 11:42, <konrad.wilk@oracle.com> wrote:
> Note: We still have to do this awkward 'guest_handle_cast'
> otherwise it will not compile on ARM - which defines _two_
> of these macros (__guest_handle_64_xen_sysctl_tmem_client_t
> and __guest_handle_xen_sysctl_tmem_client_t). And if cast is
> not used then a compile error comes up as we use the wrong one.

This seems suspicious, but it's hard to judge without knowing what
exactly the errors were.

> --- a/xen/common/tmem_control.c
> +++ b/xen/common/tmem_control.c
> @@ -258,7 +258,7 @@ static int tmemc_list(domid_t cli_id, tmem_cli_va_param_t buf, uint32_t len,
>      return 0;
>  }
>  
> -static int __tmemc_set_var(struct client *client, uint32_t subop,
> +static int __tmemc_set_var(struct client *client,
>                             XEN_GUEST_HANDLE_PARAM(xen_sysctl_tmem_client_t) buf)
>  {
>      domid_t cli_id = client->cli_id;
> @@ -267,11 +267,6 @@ static int __tmemc_set_var(struct client *client, uint32_t subop,
>  
>      ASSERT(client);
>  
> -    if ( subop != XEN_SYSCTL_TMEM_OP_SET_CLIENT_INFO )
> -    {
> -        tmem_client_warn("tmem: unknown subop %d for tmemc_set_var\n", subop);
> -        return -1;
> -    }
>      if ( copy_from_guest(&info, buf, 1) )
>          return -EFAULT;

The adjustments above look pretty unrelated to the purpose of the
patch, but well - you're the maintainer of this code.

Jan



* Re: [PATCH v1 09/12] tmem: Check version and maxpools when XEN_SYSCTL_TMEM_SET_CLIENT_INFO
  2016-09-28  9:42 ` [PATCH v1 09/12] tmem: Check version and maxpools when XEN_SYSCTL_TMEM_SET_CLIENT_INFO Konrad Rzeszutek Wilk
  2016-09-28 11:00   ` Wei Liu
@ 2016-09-28 12:58   ` Jan Beulich
  1 sibling, 0 replies; 37+ messages in thread
From: Jan Beulich @ 2016-09-28 12:58 UTC (permalink / raw)
  To: konrad, Konrad Rzeszutek Wilk; +Cc: Ian Jackson, Wei Liu, xen-devel

>>> On 28.09.16 at 11:42, <konrad.wilk@oracle.com> wrote:
> --- a/xen/common/tmem_control.c
> +++ b/xen/common/tmem_control.c
> @@ -270,6 +270,12 @@ static int __tmemc_set_var(struct client *client,
>      if ( copy_from_guest(&info, buf, 1) )
>          return -EFAULT;
>  
> +    if ( info.version != TMEM_SPEC_VERSION )
> +        return -EOPNOTSUPP;
> +
> +    if ( info.maxpools > MAX_POOLS_PER_DOMAIN )
> +        return -ERANGE;

Ah, here we go. But ...

> --- a/xen/include/public/sysctl.h
> +++ b/xen/include/public/sysctl.h
> @@ -787,8 +787,9 @@ struct tmem_handle {
>   * during migration.
>   */
>  struct tmem_client {
> -    uint32_t version;
> -    uint32_t maxpools;
> +    uint32_t version;   /* If mismatched we will get XEN_ENOSPC. */
> +    uint32_t maxpools;  /* If greater than what hypervisor supports, will get
> +                           XEN_ERANGE. */

... while the latter comment matches the code, the former doesn't.

Jan



* Re: [PATCH v1 10/12] tmem: Unify XEN_SYSCTL_TMEM_OP_[[SAVE_[BEGIN|END]|RESTORE_BEGIN]
  2016-09-28  9:42 ` [PATCH v1 10/12] tmem: Unify XEN_SYSCTL_TMEM_OP_[[SAVE_[BEGIN|END]|RESTORE_BEGIN] Konrad Rzeszutek Wilk
  2016-09-28 11:06   ` Wei Liu
@ 2016-09-28 13:00   ` Jan Beulich
  1 sibling, 0 replies; 37+ messages in thread
From: Jan Beulich @ 2016-09-28 13:00 UTC (permalink / raw)
  To: konrad, Konrad Rzeszutek Wilk; +Cc: Ian Jackson, Wei Liu, xen-devel

>>> On 28.09.16 at 11:42, <konrad.wilk@oracle.com> wrote:
>      case XEN_SYSCTL_TMEM_OP_RESTORE_BEGIN:
>          if ( client == NULL && (client = client_create(cli_id)) != NULL )
> -            return 1;
> +            rc = 0;
> +        else
> +            rc = -EEXIST;
>          break;

I don't think the same error code should be used for both a
pre-existing client and creation failure.

Jan



* Re: [PATCH v1 12/12] tmem: Batch and squash XEN_SYSCTL_TMEM_OP_SAVE_GET_POOL_[FLAGS, NPAGES, UUID] in one sub-call: XEN_SYSCTL_TMEM_OP_GET_POOLS.
  2016-09-28  9:42 ` [PATCH v1 12/12] tmem: Batch and squash XEN_SYSCTL_TMEM_OP_SAVE_GET_POOL_[FLAGS, NPAGES, UUID] in one sub-call: XEN_SYSCTL_TMEM_OP_GET_POOLS Konrad Rzeszutek Wilk
  2016-09-28 11:07   ` Wei Liu
@ 2016-09-28 13:11   ` Jan Beulich
  1 sibling, 0 replies; 37+ messages in thread
From: Jan Beulich @ 2016-09-28 13:11 UTC (permalink / raw)
  To: konrad, Konrad Rzeszutek Wilk; +Cc: Ian Jackson, Wei Liu, xen-devel

>>> On 28.09.16 at 11:42, <konrad.wilk@oracle.com> wrote:
> @@ -1592,6 +1594,7 @@ static int do_tmem_new_pool(domid_t this_cli_id,
>  
>  out:
>      tmem_client_info("pool_id=%d\n", d_poolid);
> +    client->info.nr_pools ++;

Stray blank?

> --- a/xen/common/tmem_control.c
> +++ b/xen/common/tmem_control.c
> @@ -276,6 +276,8 @@ static int __tmemc_set_var(struct client *client,
>      if ( info.maxpools > MAX_POOLS_PER_DOMAIN )
>          return -ERANGE;
>  
> +    /* Ignore info.nr_pools. */
> +
>      if ( info.weight != client->info.weight )
>      {
>          old_weight = client->info.weight;
> @@ -342,46 +344,63 @@ static int tmemc_get_client_info(int cli_id,
>      return 0;
>  }
>  
> -static int tmemc_save_subop(int cli_id, uint32_t pool_id, uint32_t subop,
> -                            XEN_GUEST_HANDLE_PARAM(void) buf, uint32_t arg)
> +static int tmemc_get_pool(int cli_id,
> +                          XEN_GUEST_HANDLE_PARAM(xen_sysctl_tmem_pool_info_t) pools,
> +                          uint32_t len)
>  {
>      struct client *client = tmem_client_from_cli_id(cli_id);
> -    struct tmem_pool *pool = (client == NULL || pool_id >= MAX_POOLS_PER_DOMAIN)
> -                   ? NULL : client->pools[pool_id];
> -    int rc = -1;
> +    unsigned int i, idx;
> +    int rc = 0;
> +    unsigned int nr = len / sizeof(xen_sysctl_tmem_pool_info_t);
> +
> +    if ( len % sizeof(xen_sysctl_tmem_pool_info_t) )
> +        return -EINVAL;
> +
> +    if ( nr >= MAX_POOLS_PER_DOMAIN )
> +        return -E2BIG;

>= seems one off here.

> +    if ( !guest_handle_okay(pools, nr) )
> +        return -EINVAL;
>  
> -    switch(subop)
> +    if ( !client )
> +        return -EINVAL;
> +
> +    for ( idx = 0, i = 0; i < MAX_POOLS_PER_DOMAIN; i++ )
>      {
> -    case XEN_SYSCTL_TMEM_OP_SAVE_GET_POOL_FLAGS:
> -         if ( pool == NULL )
> -             break;
> -         rc = (pool->persistent ? TMEM_POOL_PERSIST : 0) |
> +        struct tmem_pool *pool = client->pools[i];
> +        xen_sysctl_tmem_pool_info_t out;
> +
> +        if ( pool == NULL )
> +            continue;
> +
> +        out.flags.raw = (pool->persistent ? TMEM_POOL_PERSIST : 0) |
>                (pool->shared ? TMEM_POOL_SHARED : 0) |
>                (POOL_PAGESHIFT << TMEM_POOL_PAGESIZE_SHIFT) |
>                (TMEM_SPEC_VERSION << TMEM_POOL_VERSION_SHIFT);
> -        break;
> -    case XEN_SYSCTL_TMEM_OP_SAVE_GET_POOL_NPAGES:
> -         if ( pool == NULL )
> -             break;
> -        rc = _atomic_read(pool->pgp_count);
> -        break;
> -    case XEN_SYSCTL_TMEM_OP_SAVE_GET_POOL_UUID:
> -         if ( pool == NULL )
> -             break;
> -        rc = 0;
> -        if ( copy_to_guest(guest_handle_cast(buf, void), pool->uuid, 2) )
> +        out.n_pages = _atomic_read(pool->pgp_count);
> +        out.uuid[0] = pool->uuid[0];
> +        out.uuid[1] = pool->uuid[1];
> +        out.id = i;
> +
> +        /* N.B. 'idx' != 'i'. */
> +        if ( __copy_to_guest_offset(pools, idx, &out, 1) )
> +        {
>              rc = -EFAULT;
> -        break;
> -    default:
> -        rc = -1;
> +            break;
> +        }
> +        idx ++;

Stray blank again.

> --- a/xen/include/public/sysctl.h
> +++ b/xen/include/public/sysctl.h
> @@ -760,9 +760,7 @@ DEFINE_XEN_GUEST_HANDLE(xen_sysctl_psr_cat_op_t);
>  #define XEN_SYSCTL_TMEM_OP_SAVE_BEGIN             10
>  #define XEN_SYSCTL_TMEM_OP_GET_CLIENT_INFO        11
>  #define XEN_SYSCTL_TMEM_OP_SET_CLIENT_INFO        12
> -#define XEN_SYSCTL_TMEM_OP_SAVE_GET_POOL_FLAGS    16
> -#define XEN_SYSCTL_TMEM_OP_SAVE_GET_POOL_NPAGES   17
> -#define XEN_SYSCTL_TMEM_OP_SAVE_GET_POOL_UUID     18
> +#define XEN_SYSCTL_TMEM_OP_GET_POOLS              16

The latest here - when you re-assign meaning - bumping
XEN_SYSCTL_INTERFACE_VERSION becomes unavoidable.

> @@ -803,6 +802,31 @@ struct tmem_client {
>  typedef struct tmem_client xen_sysctl_tmem_client_t;
>  DEFINE_XEN_GUEST_HANDLE(xen_sysctl_tmem_client_t);
>  
> +/*
> + * XEN_SYSCTL_TMEM_OP_GET_POOLS uses the 'pool' array in
> + * xen_sysctl_tmem_op with this structure. The hypercall will
> + * return the number of entries in 'pool' or a negative value
> + * if an error was encountered.
> + */
> +struct tmem_pool_info {
> +    union {
> +        uint32_t raw;
> +        struct {
> +            uint32_t persist:1,    /* See TMEM_POOL_PERSIST. */
> +                     shared:1,     /* See TMEM_POOL_SHARED. */
> +                     rsv:2,
> +                     pagebits:8,   /* TMEM_POOL_PAGESIZE_[SHIFT,MASK]. */
> +                     rsv2:12,
> +                     version:8;    /* TMEM_POOL_VERSION_[SHIFT,MASK]. */
> +        } u;
> +    } flags;
> +    uint32_t id;                  /* Less than tmem_client.maxpools. */
> +    uint64_t n_pages;
> +    uint64_t uuid[2];

uint64_aligned_t

Jan


* Re: [PATCH v1 02/12] tmem: Retire XEN_SYSCTL_TMEM_OP_[SET_CAP|SAVE_GET_CLIENT_CAP]
  2016-09-28 11:00   ` Wei Liu
@ 2016-09-28 15:03     ` Konrad Rzeszutek Wilk
  0 siblings, 0 replies; 37+ messages in thread
From: Konrad Rzeszutek Wilk @ 2016-09-28 15:03 UTC (permalink / raw)
  To: Wei Liu; +Cc: xen-devel, Ian Jackson

On Wed, Sep 28, 2016 at 12:00:14PM +0100, Wei Liu wrote:
> On Wed, Sep 28, 2016 at 05:42:16AM -0400, Konrad Rzeszutek Wilk wrote:
> > It is not used by anything.
> > 
> > Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> 
> Acked-by: Wei Liu <wei.liu2@citrix.com>
> 
> > diff --git a/xen/include/public/sysctl.h b/xen/include/public/sysctl.h
> > index 8197c14..847ec35 100644
> > --- a/xen/include/public/sysctl.h
> > +++ b/xen/include/public/sysctl.h
> > @@ -757,14 +757,12 @@ DEFINE_XEN_GUEST_HANDLE(xen_sysctl_psr_cat_op_t);
> >  #define XEN_SYSCTL_TMEM_OP_DESTROY                3
> >  #define XEN_SYSCTL_TMEM_OP_LIST                   4
> >  #define XEN_SYSCTL_TMEM_OP_SET_WEIGHT             5
> > -#define XEN_SYSCTL_TMEM_OP_SET_CAP                6
> 
> I somehow have the impression that we should add a comment here saying
> that this number is now reserved -- but the reality disagrees with me.
> :-)

I was thinking that initially too, but then these subcommands are in
domctl and they are not in the tmem public API so we are free to muck
around.

So I figured I would reuse them as well and such.
> 
> >  #define XEN_SYSCTL_TMEM_OP_SET_COMPRESS           7
> >  #define XEN_SYSCTL_TMEM_OP_QUERY_FREEABLE_MB      8
> >  #define XEN_SYSCTL_TMEM_OP_SAVE_BEGIN             10
> >  #define XEN_SYSCTL_TMEM_OP_SAVE_GET_VERSION       11
> >  #define XEN_SYSCTL_TMEM_OP_SAVE_GET_MAXPOOLS      12
> >  #define XEN_SYSCTL_TMEM_OP_SAVE_GET_CLIENT_WEIGHT 13
> > -#define XEN_SYSCTL_TMEM_OP_SAVE_GET_CLIENT_CAP    14
> >  #define XEN_SYSCTL_TMEM_OP_SAVE_GET_CLIENT_FLAGS  15
> >  #define XEN_SYSCTL_TMEM_OP_SAVE_GET_POOL_FLAGS    16
> >  #define XEN_SYSCTL_TMEM_OP_SAVE_GET_POOL_NPAGES   17
> > -- 
> > 2.4.11
> > 


* Re: [PATCH v1 05/12] tmem: Delete deduplication (and tze) code.
  2016-09-28 12:34   ` Jan Beulich
@ 2016-09-28 15:05     ` Konrad Rzeszutek Wilk
  0 siblings, 0 replies; 37+ messages in thread
From: Konrad Rzeszutek Wilk @ 2016-09-28 15:05 UTC (permalink / raw)
  To: Jan Beulich; +Cc: xen-devel

On Wed, Sep 28, 2016 at 06:34:19AM -0600, Jan Beulich wrote:
> >>> On 28.09.16 at 11:42, <konrad.wilk@oracle.com> wrote:
> > Couple of reasons:
> >  - It can lead to security issues (see row-hammer, KSM and such
> >    attacks).
> >  - Code is quite complex.
> >  - Deduplication is good if the pages themselves are the same
> >    but that is hardly guaranteed.
> >  - We got some gains (if pages are deduped) but at the cost of
> >    making code less maintainable.
> >  - tze depends on deduplication code.
> > 
> > As such, deleting it.
> 
> So why the prior two patches then? That way you've got more to
> delete here than without.

Um. I thought it would be easier to understand which parts of
the code are affected by first putting a bunch of #ifdefs around them.

But the end result is the same - code gets removed. I will squash
those two patches into this one.

> 
> Jan
> 


* Re: [PATCH v1 02/12] tmem: Retire XEN_SYSCTL_TMEM_OP_[SET_CAP|SAVE_GET_CLIENT_CAP]
  2016-09-28 12:10   ` Jan Beulich
@ 2016-09-30 14:04     ` Konrad Rzeszutek Wilk
  0 siblings, 0 replies; 37+ messages in thread
From: Konrad Rzeszutek Wilk @ 2016-09-30 14:04 UTC (permalink / raw)
  To: Jan Beulich; +Cc: Ian Jackson, Wei Liu, xen-devel

On Wed, Sep 28, 2016 at 06:10:58AM -0600, Jan Beulich wrote:
> >>> On 28.09.16 at 11:42, <konrad.wilk@oracle.com> wrote:
> > It is not used by anything.
> 
> But that shouldn't be the only aspect. Are they also not useful for
> anything?

As far as I can see it was meant to complement the 'weight'. But the
code just hangs around. I can re-introduce it if there is a need for it
and make it visible via the  XEN_SYSCTL_TMEM_SET_CLIENT_INFO (introduced
in patch #7).

> 
> > --- a/xen/common/tmem_control.c
> > +++ b/xen/common/tmem_control.c
> > @@ -103,9 +103,9 @@ static int tmemc_list_client(struct client *c, tmem_cli_va_param_t buf,
> >      struct tmem_pool *p;
> >      bool_t s;
> >  
> > -    n = scnprintf(info,BSIZE,"C=CI:%d,ww:%d,ca:%d,co:%d,fr:%d,"
> > +    n = scnprintf(info,BSIZE,"C=CI:%d,ww:%d,co:%d,fr:%d,"
> >          "Tc:%"PRIu64",Ge:%ld,Pp:%ld,Gp:%ld%c",
> > -        c->cli_id, c->weight, c->cap, c->compress, c->frozen,
> > +        c->cli_id, c->weight, c->compress, c->frozen,
> >          c->total_cycles, c->succ_eph_gets, c->succ_pers_puts, c->succ_pers_gets,
> >          use_long ? ',' : '\n');
> >      if (use_long)
> > @@ -273,11 +273,6 @@ static int __tmemc_set_var(struct client *client, uint32_t subop, uint32_t arg1)
> >          atomic_sub(old_weight,&tmem_global.client_weight_total);
> >          atomic_add(client->weight,&tmem_global.client_weight_total);
> >          break;
> > -    case XEN_SYSCTL_TMEM_OP_SET_CAP:
> > -        client->cap = arg1;
> > -        tmem_client_info("tmem: cap set to %d for %s=%d\n",
> > -                        arg1, tmem_cli_id_str, cli_id);
> > -        break;
> >      case XEN_SYSCTL_TMEM_OP_SET_COMPRESS:
> >          if ( tmem_dedup_enabled() )
> >          {
> > @@ -341,11 +336,6 @@ static int tmemc_save_subop(int cli_id, uint32_t pool_id,
> >              break;
> >          rc = client->weight == -1 ? -2 : client->weight;
> >          break;
> > -    case XEN_SYSCTL_TMEM_OP_SAVE_GET_CLIENT_CAP:
> > -        if ( client == NULL )
> > -            break;
> > -        rc = client->cap == -1 ? -2 : client->cap;
> > -        break;
> >      case XEN_SYSCTL_TMEM_OP_SAVE_GET_CLIENT_FLAGS:
> >          if ( client == NULL )
> >              break;
> 
> It looks like you're removing all accesses to the cap field. That would
> suggest that you now want to also remove the field itself.

Yes! The patch titled "06/12] tmem: Move client
weight,frozen,live_migrating, and compress" did it, but it should have
been done here. Thanks!
> 
> > --- a/xen/include/public/sysctl.h
> > +++ b/xen/include/public/sysctl.h
> > @@ -757,14 +757,12 @@ DEFINE_XEN_GUEST_HANDLE(xen_sysctl_psr_cat_op_t);
> >  #define XEN_SYSCTL_TMEM_OP_DESTROY                3
> >  #define XEN_SYSCTL_TMEM_OP_LIST                   4
> >  #define XEN_SYSCTL_TMEM_OP_SET_WEIGHT             5
> > -#define XEN_SYSCTL_TMEM_OP_SET_CAP                6
> >  #define XEN_SYSCTL_TMEM_OP_SET_COMPRESS           7
> >  #define XEN_SYSCTL_TMEM_OP_QUERY_FREEABLE_MB      8
> >  #define XEN_SYSCTL_TMEM_OP_SAVE_BEGIN             10
> >  #define XEN_SYSCTL_TMEM_OP_SAVE_GET_VERSION       11
> >  #define XEN_SYSCTL_TMEM_OP_SAVE_GET_MAXPOOLS      12
> >  #define XEN_SYSCTL_TMEM_OP_SAVE_GET_CLIENT_WEIGHT 13
> > -#define XEN_SYSCTL_TMEM_OP_SAVE_GET_CLIENT_CAP    14
> >  #define XEN_SYSCTL_TMEM_OP_SAVE_GET_CLIENT_FLAGS  15
> >  #define XEN_SYSCTL_TMEM_OP_SAVE_GET_POOL_FLAGS    16
> >  #define XEN_SYSCTL_TMEM_OP_SAVE_GET_POOL_NPAGES   17
> 
> I think such removals should be accompanied by bumping
> XEN_SYSCTL_INTERFACE_VERSION, albeit it's obviously not as
> relevant as it would be when changing some structure's layout.

The patch series actually does not alter the layout per se. I think
it would be most justified in "11/12] tmem/xc_tmem_control: Rename
'arg1' to 'len' and 'arg2' to arg." as that:

"Of all the various sub-commands, the only one that needed
semantic change is XEN_SYSCTL_TMEM_OP_SAVE_BEGIN. That in the
past used 'arg1', and now we are moving it to use 'arg'."


* Re: [PATCH v1 08/12] tmem: Handle 'struct tmem_info' as a seperate field in the
  2016-09-28 12:56   ` Jan Beulich
@ 2016-09-30 14:36     ` Konrad Rzeszutek Wilk
  2016-09-30 14:56       ` Jan Beulich
  0 siblings, 1 reply; 37+ messages in thread
From: Konrad Rzeszutek Wilk @ 2016-09-30 14:36 UTC (permalink / raw)
  To: Jan Beulich; +Cc: Ian Jackson, Wei Liu, xen-devel

On Wed, Sep 28, 2016 at 06:56:40AM -0600, Jan Beulich wrote:
> >>> On 28.09.16 at 11:42, <konrad.wilk@oracle.com> wrote:
> > Note: We still have to do this awkward 'guest_handle_cast'
> > otherwise it will not compile on ARM - which defines _two_
> > of these macros (__guest_handle_64_xen_sysctl_tmem_client_t
> > and __guest_handle_xen_sysctl_tmem_client_t). And if cast is
> > not used then a compile error comes up as we use the wrong one.
> 
> This seems suspicious, but it's hard to judge without knowing what
> exactly the errors were.

tmem_control.c: In function ‘tmem_control’:
tmem_control.c:426:9: error: incompatible type for argument 2 of
‘tmemc_set_client_info’
         ret = tmemc_set_client_info(op->cli_id, op->u.client);
         ^
tmem_control.c:302:12: note: expected
‘__guest_handle_xen_sysctl_tmem_client_t’ but argument is of type
‘__guest_handle_64_xen_sysctl_tmem_client_t’
 static int tmemc_set_client_info(domid_t cli_id,
            ^
tmem_control.c:432:9: error: incompatible type for argument 2 of
‘tmemc_get_client_info’
         ret = tmemc_get_client_info(op->cli_id, op->u.client);
         ^

> 
> > --- a/xen/common/tmem_control.c
> > +++ b/xen/common/tmem_control.c
> > @@ -258,7 +258,7 @@ static int tmemc_list(domid_t cli_id, tmem_cli_va_param_t buf, uint32_t len,
> >      return 0;
> >  }
> >  
> > -static int __tmemc_set_var(struct client *client, uint32_t subop,
> > +static int __tmemc_set_var(struct client *client,
> >                             XEN_GUEST_HANDLE_PARAM(xen_sysctl_tmem_client_t) buf)
> >  {
> >      domid_t cli_id = client->cli_id;
> > @@ -267,11 +267,6 @@ static int __tmemc_set_var(struct client *client, uint32_t subop,
> >  
> >      ASSERT(client);
> >  
> > -    if ( subop != XEN_SYSCTL_TMEM_OP_SET_CLIENT_INFO )
> > -    {
> > -        tmem_client_warn("tmem: unknown subop %d for tmemc_set_var\n", subop);
> > -        return -1;
> > -    }
> >      if ( copy_from_guest(&info, buf, 1) )
> >          return -EFAULT;
> 
> The adjustments above look pretty unrelated to the purpose of the
> patch, but well - you're the maintainer of this code.

<smacks himself in the head> That is indeed wrong. This should have
been done in the previous patch: "tmem/libxc: Squash
XEN_SYSCTL_TMEM_OP_[SET|SAVE].."


> 
> Jan
> 


* Re: [PATCH v1 08/12] tmem: Handle 'struct tmem_info' as a seperate field in the
  2016-09-30 14:36     ` Konrad Rzeszutek Wilk
@ 2016-09-30 14:56       ` Jan Beulich
  2016-09-30 16:51         ` Konrad Rzeszutek Wilk
  0 siblings, 1 reply; 37+ messages in thread
From: Jan Beulich @ 2016-09-30 14:56 UTC (permalink / raw)
  To: Konrad Rzeszutek Wilk; +Cc: Ian Jackson, Wei Liu, xen-devel

>>> On 30.09.16 at 16:36, <konrad@kernel.org> wrote:
> On Wed, Sep 28, 2016 at 06:56:40AM -0600, Jan Beulich wrote:
>> >>> On 28.09.16 at 11:42, <konrad.wilk@oracle.com> wrote:
>> > Note: We still have to do this awkward 'guest_handle_cast'
>> > otherwise it will not compile on ARM - which defines _two_
>> > of these macros (__guest_handle_64_xen_sysctl_tmem_client_t
>> > and __guest_handle_xen_sysctl_tmem_client_t). And if cast is
>> > not used then a compile error comes up as we use the wrong one.
>> 
>> This seems suspicious, but it's hard to judge without knowing what
>> exactly the errors were.
> 
> tmem_control.c: In function ‘tmem_control’:
> tmem_control.c:426:9: error: incompatible type for argument 2 of
> ‘tmemc_set_client_info’
>          ret = tmemc_set_client_info(op->cli_id, op->u.client);
>          ^
> tmem_control.c:302:12: note: expected
> ‘__guest_handle_xen_sysctl_tmem_client_t’ but argument is of type
> ‘__guest_handle_64_xen_sysctl_tmem_client_t’
>  static int tmemc_set_client_info(domid_t cli_id,
>             ^

Looks like you want to pass around a 64-bit handle then?

>> > --- a/xen/common/tmem_control.c
>> > +++ b/xen/common/tmem_control.c
>> > @@ -258,7 +258,7 @@ static int tmemc_list(domid_t cli_id, tmem_cli_va_param_t buf, uint32_t len,
>> >      return 0;
>> >  }
>> >  
>> > -static int __tmemc_set_var(struct client *client, uint32_t subop,
>> > +static int __tmemc_set_var(struct client *client,
>> >                             XEN_GUEST_HANDLE_PARAM(xen_sysctl_tmem_client_t) buf)
>> >  {
>> >      domid_t cli_id = client->cli_id;
>> > @@ -267,11 +267,6 @@ static int __tmemc_set_var(struct client *client, uint32_t subop,
>> >  
>> >      ASSERT(client);
>> >  
>> > -    if ( subop != XEN_SYSCTL_TMEM_OP_SET_CLIENT_INFO )
>> > -    {
>> > -        tmem_client_warn("tmem: unknown subop %d for tmemc_set_var\n", subop);
>> > -        return -1;
>> > -    }
>> >      if ( copy_from_guest(&info, buf, 1) )
>> >          return -EFAULT;
>> 
>> The adjustments above look pretty unrelated to the purpose of the
>> patch, but well - you're the maintainer of this code.
> 
> <smacks himself in the head>

Don't be that harsh to yourself.

Jan


* Re: [PATCH v1 08/12] tmem: Handle 'struct tmem_info' as a seperate field in the
  2016-09-30 14:56       ` Jan Beulich
@ 2016-09-30 16:51         ` Konrad Rzeszutek Wilk
  0 siblings, 0 replies; 37+ messages in thread
From: Konrad Rzeszutek Wilk @ 2016-09-30 16:51 UTC (permalink / raw)
  To: Jan Beulich; +Cc: Ian Jackson, Wei Liu, xen-devel

On Fri, Sep 30, 2016 at 08:56:03AM -0600, Jan Beulich wrote:
> >>> On 30.09.16 at 16:36, <konrad@kernel.org> wrote:
> > On Wed, Sep 28, 2016 at 06:56:40AM -0600, Jan Beulich wrote:
> >> >>> On 28.09.16 at 11:42, <konrad.wilk@oracle.com> wrote:
> >> > Note: We still have to do this awkward 'guest_handle_cast'
> >> > otherwise it will not compile on ARM - which defines _two_
> >> > of these macros (__guest_handle_64_xen_sysctl_tmem_client_t
> >> > and __guest_handle_xen_sysctl_tmem_client_t). And if cast is
> >> > not used then a compile error comes up as we use the wrong one.
> >> 
> >> This seems suspicious, but it's hard to judge without knowing what
> >> exactly the errors were.
> > 
> > tmem_control.c: In function ‘tmem_control’:
> > tmem_control.c:426:9: error: incompatible type for argument 2 of
> > ‘tmemc_set_client_info’
> >          ret = tmemc_set_client_info(op->cli_id, op->u.client);
> >          ^
> > tmem_control.c:302:12: note: expected
> > ‘__guest_handle_xen_sysctl_tmem_client_t’ but argument is of type
> > ‘__guest_handle_64_xen_sysctl_tmem_client_t’
> >  static int tmemc_set_client_info(domid_t cli_id,
> >             ^
> 
> Looks like you want to pass around a 64-bit handle then?

<nods> Which I found is XEN_GUEST_HANDLE, not XEN_GUEST_HANDLE_PARAM.

.. snip..
> >> >      if ( copy_from_guest(&info, buf, 1) )
> >> >          return -EFAULT;
> >> 
> >> The adjustments above look pretty unrelated to the purpose of the
> >> patch, but well - you're the maintainer of this code.
> > 
> > <smacks himself in the head>
> 
> Don't be that harsh to yourself.

:-)

> 
> Jan



Thread overview: 37+ messages
-- links below jump to the message on this page --
2016-09-28  9:42 [PATCH v1] Tmem cleanups/improvements for v4.8 Konrad Rzeszutek Wilk
2016-09-28  9:42 ` [PATCH v1 01/12] libxc/tmem/restore: Remove call to XEN_SYSCTL_TMEM_OP_SAVE_GET_VERSION Konrad Rzeszutek Wilk
2016-09-28 11:00   ` Wei Liu
2016-09-28  9:42 ` [PATCH v1 02/12] tmem: Retire XEN_SYSCTL_TMEM_OP_[SET_CAP|SAVE_GET_CLIENT_CAP] Konrad Rzeszutek Wilk
2016-09-28 11:00   ` Wei Liu
2016-09-28 15:03     ` Konrad Rzeszutek Wilk
2016-09-28 12:10   ` Jan Beulich
2016-09-30 14:04     ` Konrad Rzeszutek Wilk
2016-09-28  9:42 ` [PATCH v1 03/12] tmem: Wrap tmem dedup code with CONFIG_TMEM_DEDUP Konrad Rzeszutek Wilk
2016-09-28 12:18   ` Jan Beulich
2016-09-28  9:42 ` [PATCH v1 04/12] tmem: Wrap tmem tze code with CONFIG_TMEM_TZE Konrad Rzeszutek Wilk
2016-09-28 12:19   ` Jan Beulich
2016-09-28  9:42 ` [PATCH v1 05/12] tmem: Delete deduplication (and tze) code Konrad Rzeszutek Wilk
2016-09-28 12:34   ` Jan Beulich
2016-09-28 15:05     ` Konrad Rzeszutek Wilk
2016-09-28  9:42 ` [PATCH v1 06/12] tmem: Move client weight, frozen, live_migrating, and compress Konrad Rzeszutek Wilk
2016-09-28 12:39   ` Jan Beulich
2016-09-28  9:42 ` [PATCH v1 07/12] tmem/libxc: Squash XEN_SYSCTL_TMEM_OP_[SET|SAVE] Konrad Rzeszutek Wilk
2016-09-28 11:06   ` Wei Liu
2016-09-28 12:50   ` Jan Beulich
2016-09-28  9:42 ` [PATCH v1 08/12] tmem: Handle 'struct tmem_info' as a seperate field in the Konrad Rzeszutek Wilk
2016-09-28 11:00   ` Wei Liu
2016-09-28 12:56   ` Jan Beulich
2016-09-30 14:36     ` Konrad Rzeszutek Wilk
2016-09-30 14:56       ` Jan Beulich
2016-09-30 16:51         ` Konrad Rzeszutek Wilk
2016-09-28  9:42 ` [PATCH v1 09/12] tmem: Check version and maxpools when XEN_SYSCTL_TMEM_SET_CLIENT_INFO Konrad Rzeszutek Wilk
2016-09-28 11:00   ` Wei Liu
2016-09-28 12:58   ` Jan Beulich
2016-09-28  9:42 ` [PATCH v1 10/12] tmem: Unify XEN_SYSCTL_TMEM_OP_[[SAVE_[BEGIN|END]|RESTORE_BEGIN] Konrad Rzeszutek Wilk
2016-09-28 11:06   ` Wei Liu
2016-09-28 13:00   ` Jan Beulich
2016-09-28  9:42 ` [PATCH v1 11/12] tmem/xc_tmem_control: Rename 'arg1' to 'len' and 'arg2' to arg Konrad Rzeszutek Wilk
2016-09-28 11:07   ` Wei Liu
2016-09-28  9:42 ` [PATCH v1 12/12] tmem: Batch and squash XEN_SYSCTL_TMEM_OP_SAVE_GET_POOL_[FLAGS, NPAGES, UUID] in one sub-call: XEN_SYSCTL_TMEM_OP_GET_POOLS Konrad Rzeszutek Wilk
2016-09-28 11:07   ` Wei Liu
2016-09-28 13:11   ` Jan Beulich
