* [PATCH v8 0/8] Allow setting up shared memory areas between VMs from xl config files
@ 2018-10-08 23:37 Stefano Stabellini
  2018-10-08 23:37 ` [PATCH v8 1/8] xen: xsm: flask: introduce XENMAPSPACE_gmfn_share for memory sharing Stefano Stabellini
                   ` (8 more replies)
  0 siblings, 9 replies; 46+ messages in thread
From: Stefano Stabellini @ 2018-10-08 23:37 UTC (permalink / raw)
  To: xen-devel; +Cc: ian.jackson, julien.grall, sstabellini, wei.liu2, blackskygg

Hi,

This series implements a new xl config entry. Users can use the new
config entry to statically set up shared memory areas among VMs that
don't have grant table support, so that they can communicate with each
other through the shared memory areas.
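
For reference, with the whole series applied a pair of guest config files
can set up one shared area roughly as follows (illustrative id and
addresses; the exact syntax is described by the xl-static-shm-configuration
man page added later in the series):

    # in the config file of the VM owning the memory (the "master")
    static_shm = [ "id=ID1, begin=0x100000000, size=0x200000, role=master" ]

    # in the config file of the VM mapping that memory (a "slave")
    static_shm = [ "id=ID1, begin=0x300000000, size=0x200000, offset=0, role=slave, prot=rw" ]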

It was originally developed by Zhongze; I am just addressing the last few
issues that were raised during the last round of reviews in January. I
tested the feature on ARM and it works fine.

Cheers,

Stefano


Main changes in v8:
- remove "add xen,dmabuf nodes" patch
- add "map reserved-memory regions as normal memory in dom0" patch
- send new device tree binding request to devicetree.org



The following changes since commit 85b00385827e4e061b2ff38b549c03d0f1e66b6a:

  xen/sched: Drop set_current_state() (2018-10-08 18:34:55 +0100)

are available in the git repository at:

  http://xenbits.xenproject.org/git-http/people/sstabellini/xen-unstable.git share_mem-v8

for you to fetch changes up to fc00e34ce9924b5915deacd881ae7cd6888510ba:

  xen/arm: map reserved-memory regions as normal memory in dom0 (2018-10-08 16:34:09 -0700)

----------------------------------------------------------------
Stefano Stabellini (2):
      xen/arm: export shared memory regions as reserved-memory on device tree
      xen/arm: map reserved-memory regions as normal memory in dom0

Zhongze Liu (6):
      xen: xsm: flask: introduce XENMAPSPACE_gmfn_share for memory sharing
      libxl: introduce a new structure to represent static shared memory regions
      libxl: support mapping static shared memory areas during domain creation
      libxl: support unmapping static shared memory areas during domain destruction
      libxl:xl: add parsing code to parse "libxl_static_sshm" from xl config files
      docs: documentation about static shared memory regions

 docs/man/xl-static-shm-configuration.pod.5 | 264 +++++++++++++++
 docs/man/xl.cfg.pod.5.in                   |   8 +
 docs/misc/xenstore-paths.markdown          |  47 +++
 tools/flask/policy/modules/xen.if          |   2 +
 tools/libxl/Makefile                       |   5 +-
 tools/libxl/libxl.h                        |   8 +
 tools/libxl/libxl_arch.h                   |   8 +-
 tools/libxl/libxl_arm.c                    |  76 ++++-
 tools/libxl/libxl_create.c                 |  27 ++
 tools/libxl/libxl_dom.c                    |   2 +-
 tools/libxl/libxl_domain.c                 |   5 +
 tools/libxl/libxl_internal.h               |  16 +
 tools/libxl/libxl_sshm.c                   | 512 +++++++++++++++++++++++++++++
 tools/libxl/libxl_types.idl                |  32 +-
 tools/libxl/libxl_x86.c                    |  21 +-
 tools/libxl/libxlu_sshm.c                  | 206 ++++++++++++
 tools/libxl/libxlutil.h                    |   6 +
 tools/xl/xl_parse.c                        |  25 +-
 xen/arch/arm/domain_build.c                |   7 +
 xen/arch/arm/mm.c                          |   7 +-
 xen/include/public/memory.h                |   8 +
 xen/include/xsm/dummy.h                    |  14 +
 xen/include/xsm/xsm.h                      |   6 +
 xen/xsm/dummy.c                            |   1 +
 xen/xsm/flask/hooks.c                      |   9 +
 xen/xsm/flask/policy/access_vectors        |   5 +
 26 files changed, 1315 insertions(+), 12 deletions(-)
 create mode 100644 docs/man/xl-static-shm-configuration.pod.5
 create mode 100644 tools/libxl/libxl_sshm.c
 create mode 100644 tools/libxl/libxlu_sshm.c

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel


* [PATCH v8 1/8] xen: xsm: flask: introduce XENMAPSPACE_gmfn_share for memory sharing
  2018-10-08 23:37 [PATCH v8 0/8] Allow setting up shared memory areas between VMs from xl config files Stefano Stabellini
@ 2018-10-08 23:37 ` Stefano Stabellini
  2018-10-09  7:12   ` Jan Beulich
  2018-10-08 23:37 ` [PATCH v8 2/8] libxl: introduce a new structure to represent static shared memory regions Stefano Stabellini
                   ` (7 subsequent siblings)
  8 siblings, 1 reply; 46+ messages in thread
From: Stefano Stabellini @ 2018-10-08 23:37 UTC (permalink / raw)
  To: xen-devel
  Cc: Stefano Stabellini, sstabellini, wei.liu2, blackskygg,
	Konrad Rzeszutek Wilk, George Dunlap, Andrew Cooper, ian.jackson,
	Tim Deegan, julien.grall, Jan Beulich, Daniel De Graaf

From: Zhongze Liu <blackskygg@gmail.com>

Author: Zhongze Liu <blackskygg@gmail.com>

The existing XENMAPSPACE_gmfn_foreign subop of XENMEM_add_to_physmap forbids
Dom0 from mapping memory pages of one DomU into another, which rules out some
useful yet not dangerous use cases -- such as sharing pages among DomUs so
that they can do shm-based communication.

This patch introduces XENMAPSPACE_gmfn_share to address this inconvenience,
which is mostly the same as XENMAPSPACE_gmfn_foreign but has its own xsm check.

Specifically, the patch:

* Introduces a new av permission MMU__SHARE_MEM to denote whether two domains
  can share memory through the new subop;
* Introduces xsm_map_gmfn_share() to check that (current) has proper mapping
  permission over (d) and (t) AND that MMU__SHARE_MEM is allowed between (d)
  and (t);
* Modifies the default policy (xen.if) so that domains which are allowed
  grant mapping/event channels are also allowed MMU__SHARE_MEM.

The new subop is marked unsupported for x86 because calling p2m_add_foreign
on two DomUs is currently not supported on x86.

This is for the proposal "Allow setting up shared memory areas between VMs
from xl config file" (see [1]).

[1] https://lists.xen.org/archives/html/xen-devel/2017-08/msg03242.html
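
For illustration, once this subop is in place a dom0 toolstack can request
such a mapping through the existing libxc wrapper, along the lines of the
following sketch (not part of this patch; domids and frame numbers are made
up, and error handling is minimal):

    #include <xenctrl.h>

    /* Map the master's page at master_gfn into slave domain sid at slave_gfn. */
    static int map_one_shared_page(xc_interface *xch, uint32_t sid, uint32_t mid,
                                   xen_ulong_t master_gfn, xen_pfn_t slave_gfn)
    {
        int err = 0;

        return xc_domain_add_to_physmap_batch(xch, sid, mid,
                                              XENMAPSPACE_gmfn_share,
                                              1 /* one page */,
                                              &master_gfn, &slave_gfn, &err)
               ?: err;
    }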

Signed-off-by: Zhongze Liu <blackskygg@gmail.com>
Signed-off-by: Stefano Stabellini <stefanos@xilinx.com>
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Wei Liu <wei.liu2@citrix.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Julien Grall <julien.grall@arm.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <George.Dunlap@eu.citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Tim Deegan <tim@xen.org>
Cc: xen-devel@lists.xen.org
---
Changes in v8:
- typo

Changes in v7:
- add additional checks
- update comments to reflect that

Changes in v5:
- fix coding style
- remove useless x86 hypervisor message for the unimplemented op
- code style
- improve/add comments
---
 tools/flask/policy/modules/xen.if   |  2 ++
 xen/arch/arm/mm.c                   |  7 ++++++-
 xen/include/public/memory.h         |  8 ++++++++
 xen/include/xsm/dummy.h             | 14 ++++++++++++++
 xen/include/xsm/xsm.h               |  6 ++++++
 xen/xsm/dummy.c                     |  1 +
 xen/xsm/flask/hooks.c               |  9 +++++++++
 xen/xsm/flask/policy/access_vectors |  5 +++++
 8 files changed, 51 insertions(+), 1 deletion(-)

diff --git a/tools/flask/policy/modules/xen.if b/tools/flask/policy/modules/xen.if
index 4e06cfc..b0ab089 100644
--- a/tools/flask/policy/modules/xen.if
+++ b/tools/flask/policy/modules/xen.if
@@ -128,6 +128,8 @@ define(`domain_comms', `
 	domain_event_comms($1, $2)
 	allow $1 $2:grant { map_read map_write copy unmap };
 	allow $2 $1:grant { map_read map_write copy unmap };
+	allow $1 $2:mmu share_mem;
+	allow $2 $1:mmu share_mem;
 ')
 
 # domain_self_comms(domain)
diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 7a06a33..7b7869f 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -1245,6 +1245,7 @@ int xenmem_add_to_physmap_one(
 
         break;
     case XENMAPSPACE_gmfn_foreign:
+    case XENMAPSPACE_gmfn_share:
     {
         struct domain *od;
         p2m_type_t p2mt;
@@ -1259,7 +1260,11 @@ int xenmem_add_to_physmap_one(
             return -EINVAL;
         }
 
-        rc = xsm_map_gmfn_foreign(XSM_TARGET, d, od);
+        if ( space == XENMAPSPACE_gmfn_foreign )
+            rc = xsm_map_gmfn_foreign(XSM_TARGET, d, od);
+        else
+            rc = xsm_map_gmfn_share(XSM_TARGET, d, od);
+
         if ( rc )
         {
             rcu_unlock_domain(od);
diff --git a/xen/include/public/memory.h b/xen/include/public/memory.h
index 8fc27ce..631d10e 100644
--- a/xen/include/public/memory.h
+++ b/xen/include/public/memory.h
@@ -227,6 +227,14 @@ DEFINE_XEN_GUEST_HANDLE(xen_machphys_mapping_t);
                                       Stage-2 using the Normal Memory
                                       Inner/Outer Write-Back Cacheable
                                       memory attribute. */
+#define XENMAPSPACE_gmfn_share   6 /* GMFN from another dom,
+                                      XENMEM_add_to_physmap_batch (and
+                                      currently ARM) only. Unlike
+                                      XENMAPSPACE_gmfn_foreign, it
+                                      requires current to have mapping
+                                      privileges instead of the
+                                      destination domain. */
+
 /* ` } */
 
 /*
diff --git a/xen/include/xsm/dummy.h b/xen/include/xsm/dummy.h
index b0ac1f6..dd99593 100644
--- a/xen/include/xsm/dummy.h
+++ b/xen/include/xsm/dummy.h
@@ -535,6 +535,20 @@ static XSM_INLINE int xsm_map_gmfn_foreign(XSM_DEFAULT_ARG struct domain *d, str
     return xsm_default_action(action, d, t);
 }
 
+/*
+ * Be aware that this is not an exact default equivalent of its flask
+ * variant, which also checks whether @d and @t "are allowed to share
+ * memory pages"; for now, we don't have a proper default equivalent of
+ * such a check.
+ */
+static XSM_INLINE int xsm_map_gmfn_share(XSM_DEFAULT_ARG struct domain *d,
+                                         struct domain *t)
+{
+    XSM_ASSERT_ACTION(XSM_TARGET);
+    return xsm_default_action(XSM_TARGET, current->domain, d) ?:
+           xsm_default_action(action, current->domain, t);
+}
+
 static XSM_INLINE int xsm_hvm_param(XSM_DEFAULT_ARG struct domain *d, unsigned long op)
 {
     XSM_ASSERT_ACTION(XSM_TARGET);
diff --git a/xen/include/xsm/xsm.h b/xen/include/xsm/xsm.h
index 3d67962..96edfeb 100644
--- a/xen/include/xsm/xsm.h
+++ b/xen/include/xsm/xsm.h
@@ -86,6 +86,7 @@ struct xsm_operations {
     int (*add_to_physmap) (struct domain *d1, struct domain *d2);
     int (*remove_from_physmap) (struct domain *d1, struct domain *d2);
     int (*map_gmfn_foreign) (struct domain *d, struct domain *t);
+    int (*map_gmfn_share) (struct domain *d, struct domain *t);
     int (*claim_pages) (struct domain *d);
 
     int (*console_io) (struct domain *d, int cmd);
@@ -376,6 +377,11 @@ static inline int xsm_map_gmfn_foreign (xsm_default_t def, struct domain *d, str
     return xsm_ops->map_gmfn_foreign(d, t);
 }
 
+static inline int xsm_map_gmfn_share (xsm_default_t def, struct domain *d, struct domain *t)
+{
+    return xsm_ops->map_gmfn_share(d, t);
+}
+
 static inline int xsm_claim_pages(xsm_default_t def, struct domain *d)
 {
     return xsm_ops->claim_pages(d);
diff --git a/xen/xsm/dummy.c b/xen/xsm/dummy.c
index 3290d04..db5f3ec 100644
--- a/xen/xsm/dummy.c
+++ b/xen/xsm/dummy.c
@@ -124,6 +124,7 @@ void __init xsm_fixup_ops (struct xsm_operations *ops)
     set_to_dummy_if_null(ops, add_to_physmap);
     set_to_dummy_if_null(ops, remove_from_physmap);
     set_to_dummy_if_null(ops, map_gmfn_foreign);
+    set_to_dummy_if_null(ops, map_gmfn_share);
 
     set_to_dummy_if_null(ops, vm_event_control);
 
diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
index 6da2773..bff48f8 100644
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -1192,6 +1192,14 @@ static int flask_map_gmfn_foreign(struct domain *d, struct domain *t)
     return domain_has_perm(d, t, SECCLASS_MMU, MMU__MAP_READ | MMU__MAP_WRITE);
 }
 
+static int flask_map_gmfn_share(struct domain *d, struct domain *t)
+{
+    if ( current_has_perm(d, SECCLASS_MMU, MMU__MAP_READ | MMU__MAP_WRITE) )
+        return -EACCES;
+    return current_has_perm(t, SECCLASS_MMU, MMU__MAP_READ | MMU__MAP_WRITE) ?:
+           domain_has_perm(d, t, SECCLASS_MMU, MMU__SHARE_MEM);
+}
+
 static int flask_hvm_param(struct domain *d, unsigned long op)
 {
     u32 perm;
@@ -1816,6 +1824,7 @@ static struct xsm_operations flask_ops = {
     .add_to_physmap = flask_add_to_physmap,
     .remove_from_physmap = flask_remove_from_physmap,
     .map_gmfn_foreign = flask_map_gmfn_foreign,
+    .map_gmfn_share = flask_map_gmfn_share,
 
 #if defined(CONFIG_HAS_PASSTHROUGH) && defined(CONFIG_HAS_PCI)
     .get_device_group = flask_get_device_group,
diff --git a/xen/xsm/flask/policy/access_vectors b/xen/xsm/flask/policy/access_vectors
index d01a7a0..6fe4c8e 100644
--- a/xen/xsm/flask/policy/access_vectors
+++ b/xen/xsm/flask/policy/access_vectors
@@ -381,6 +381,11 @@ class mmu
 # Allow a privileged domain to install a map of a page it does not own.  Used
 # for stub domain device models with the PV framebuffer.
     target_hack
+# Checked when using XENMEM_add_to_physmap with XENMAPSPACE_gmfn_share
+# to share memory between two domains:
+#  source = domain whose memory is being shared
+#  target = client domain
+    share_mem
 }
 
 # control of the paging_domctl split by subop
-- 
1.9.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel


* [PATCH v8 2/8] libxl: introduce a new structure to represent static shared memory regions
  2018-10-08 23:37 [PATCH v8 0/8] Allow setting up shared memory areas between VMs from xl config files Stefano Stabellini
  2018-10-08 23:37 ` [PATCH v8 1/8] xen: xsm: flask: introduce XENMAPSPACE_gmfn_share for memory sharing Stefano Stabellini
@ 2018-10-08 23:37 ` Stefano Stabellini
  2018-10-30 15:20   ` Ian Jackson
  2018-10-08 23:37 ` [PATCH v8 3/8] libxl: support mapping static shared memory areas during domain creation Stefano Stabellini
                   ` (6 subsequent siblings)
  8 siblings, 1 reply; 46+ messages in thread
From: Stefano Stabellini @ 2018-10-08 23:37 UTC (permalink / raw)
  To: xen-devel
  Cc: Stefano Stabellini, sstabellini, wei.liu2, blackskygg,
	ian.jackson, julien.grall

From: Zhongze Liu <blackskygg@gmail.com>

Author: Zhongze Liu <blackskygg@gmail.com>

Add a new structure to the IDL family to represent static shared memory
regions, as proposed in "Allow setting up shared memory areas between VMs
from xl config file" (see [1]).

Also remove some trailing whitespace.

[1] https://lists.xen.org/archives/html/xen-devel/2017-08/msg03242.html
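
As a rough illustration of the resulting API (not part of this patch), a
libxl caller could describe a master region like this once the IDL change
is in place (field values are made up; the struct is then hung off the new
sshms array in libxl_domain_config):

    #include <string.h>
    #include <libxl.h>

    /* Fill in an illustrative master region. */
    static void fill_example_sshm(libxl_static_shm *sshm)
    {
        libxl_static_shm_init(sshm);
        sshm->id    = strdup("ID1");
        sshm->begin = 0x100000000ULL;   /* guest physical start address */
        sshm->size  = 0x200000;         /* must be a multiple of the page size */
        sshm->role  = LIBXL_SSHM_ROLE_MASTER;
        sshm->prot  = LIBXL_SSHM_PROT_RW;
        /* cache_policy is left UNKNOWN so that libxl picks the arch default */
    }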

Signed-off-by: Zhongze Liu <blackskygg@gmail.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
Acked-by: Wei Liu <wei.liu2@citrix.com>
Signed-off-by: Stefano Stabellini <stefanos@xilinx.com>

Cc: Wei Liu <wei.liu2@citrix.com>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Julien Grall <julien.grall@arm.com>
Cc: xen-devel@lists.xen.org
---
Changes in v8:
- move LIBXL_HAVE_SSHM up and add comment
- fix typo
- remove LIBXL_SSHM_ID_MAXLEN

Changes in v5:
- fix typos
- add LIBXL_HAVE_SSHM
- replace end with size
---
 tools/libxl/libxl.h         |  8 ++++++++
 tools/libxl/libxl_types.idl | 32 ++++++++++++++++++++++++++++++--
 2 files changed, 38 insertions(+), 2 deletions(-)

diff --git a/tools/libxl/libxl.h b/tools/libxl/libxl.h
index 2cfc1b0..18b29d7 100644
--- a/tools/libxl/libxl.h
+++ b/tools/libxl/libxl.h
@@ -377,6 +377,11 @@
 #define LIBXL_HAVE_EXTENDED_VKB 1
 
 /*
+ * LIBXL_HAVE_SSHM indicates that libxl supports static shared memory regions.
+ */
+#define LIBXL_HAVE_SSHM 1
+
+/*
  * libxl ABI compatibility
  *
  * The only guarantee which libxl makes regarding ABI compatibility
@@ -2461,6 +2466,9 @@ int libxl_fd_set_nonblock(libxl_ctx *ctx, int fd, int nonblock);
 int libxl_qemu_monitor_command(libxl_ctx *ctx, uint32_t domid,
                                const char *command_line, char **output);
 
+/* Constant for libxl_static_shm */
+#define LIBXL_SSHM_RANGE_UNKNOWN UINT64_MAX
+
 #include <libxl_event.h>
 
 #endif /* LIBXL_H */
diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
index 3b8f967..c3498ad 100644
--- a/tools/libxl/libxl_types.idl
+++ b/tools/libxl/libxl_types.idl
@@ -565,10 +565,10 @@ libxl_domain_build_info = Struct("domain_build_info",[
                                        ("keymap",           string),
                                        ("sdl",              libxl_sdl_info),
                                        ("spice",            libxl_spice_info),
-                                       
+
                                        ("gfx_passthru",     libxl_defbool),
                                        ("gfx_passthru_kind", libxl_gfx_passthru_kind),
-                                       
+
                                        ("serial",           string),
                                        ("boot",             string),
                                        ("usb",              libxl_defbool),
@@ -903,6 +903,33 @@ libxl_device_vsnd = Struct("device_vsnd", [
     ("pcms", Array(libxl_vsnd_pcm, "num_vsnd_pcms"))
     ])
 
+libxl_sshm_cachepolicy = Enumeration("sshm_cachepolicy", [
+    (-1, "UNKNOWN"),
+    (0,  "ARM_NORMAL"),  # ARM policies should be < 32
+    (32,  "X86_NORMAL"), # X86 policies should be >= 32
+    ], init_val = "LIBXL_SSHM_CACHE_POLICY_UNKNOWN")
+
+libxl_sshm_prot = Enumeration("sshm_prot", [
+    (-1, "UNKNOWN"),
+    (3,  "RW"),
+    ], init_val = "LIBXL_SSHM_PROT_UNKNOWN")
+
+libxl_sshm_role = Enumeration("sshm_role", [
+    (-1, "UNKNOWN"),
+    (0,  "MASTER"),
+    (1,  "SLAVE"),
+    ], init_val = "LIBXL_SSHM_ROLE_UNKNOWN")
+
+libxl_static_shm = Struct("static_shm", [
+    ("id", string),
+    ("offset", uint64, {'init_val': 'LIBXL_SSHM_RANGE_UNKNOWN'}),
+    ("begin", uint64, {'init_val': 'LIBXL_SSHM_RANGE_UNKNOWN'}),
+    ("size", uint64, {'init_val': 'LIBXL_SSHM_RANGE_UNKNOWN'}),
+    ("prot", libxl_sshm_prot, {'init_val': 'LIBXL_SSHM_PROT_UNKNOWN'}),
+    ("cache_policy", libxl_sshm_cachepolicy, {'init_val': 'LIBXL_SSHM_CACHEPOLICY_UNKNOWN'}),
+    ("role", libxl_sshm_role, {'init_val': 'LIBXL_SSHM_ROLE_UNKNOWN'}),
+])
+
 libxl_domain_config = Struct("domain_config", [
     ("c_info", libxl_domain_create_info),
     ("b_info", libxl_domain_build_info),
@@ -924,6 +951,7 @@ libxl_domain_config = Struct("domain_config", [
     ("channels", Array(libxl_device_channel, "num_channels")),
     ("usbctrls", Array(libxl_device_usbctrl, "num_usbctrls")),
     ("usbdevs", Array(libxl_device_usbdev, "num_usbdevs")),
+    ("sshms", Array(libxl_static_shm, "num_sshms")),
 
     ("on_poweroff", libxl_action_on_shutdown),
     ("on_reboot", libxl_action_on_shutdown),
-- 
1.9.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel


* [PATCH v8 3/8] libxl: support mapping static shared memory areas during domain creation
  2018-10-08 23:37 [PATCH v8 0/8] Allow setting up shared memory areas between VMs from xl config files Stefano Stabellini
  2018-10-08 23:37 ` [PATCH v8 1/8] xen: xsm: flask: introduce XENMAPSPACE_gmfn_share for memory sharing Stefano Stabellini
  2018-10-08 23:37 ` [PATCH v8 2/8] libxl: introduce a new structure to represent static shared memory regions Stefano Stabellini
@ 2018-10-08 23:37 ` Stefano Stabellini
  2018-10-24 13:26   ` Oleksandr Tyshchenko
  2018-10-30 15:36   ` Ian Jackson
  2018-10-08 23:37 ` [PATCH v8 4/8] libxl: support unmapping static shared memory areas during domain destruction Stefano Stabellini
                   ` (5 subsequent siblings)
  8 siblings, 2 replies; 46+ messages in thread
From: Stefano Stabellini @ 2018-10-08 23:37 UTC (permalink / raw)
  To: xen-devel
  Cc: Stefano Stabellini, sstabellini, wei.liu2, blackskygg,
	ian.jackson, julien.grall

From: Zhongze Liu <blackskygg@gmail.com>

Author: Zhongze Liu <blackskygg@gmail.com>

Add libxl__sshm_add to map shared pages from one DomU to another. The
mapping process involves the following steps:

  * Set defaults and check for errors in the static_shm configs:
    overlapping areas, invalid ranges, duplicated or missing master domain,
    non-page-aligned addresses, etc.
  * Use xc_domain_add_to_physmap_batch to map the shared pages into the
    slaves.
  * When some of the pages can't be successfully mapped, roll back any
    successfully mapped pages so that the system stays in a consistent state.
  * Write information about static shared memory areas into the appropriate
    xenstore paths and set the refcount of the shared region accordingly.

Temporarily mark this as unsupported on x86 because calling p2m_add_foreign
on two DomUs is currently not allowed on x86 (see the comments in
x86/mm/p2m.c:p2m_add_foreign for more details).

This is for the proposal "Allow setting up shared memory areas between VMs
from xl config file" (see [1]).

[1] https://lists.xen.org/archives/html/xen-devel/2017-08/msg03242.html
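
For reference, the xenstore layout created by the code below looks roughly
like this (reconstructed from the code; <id> and <domid> are placeholders):

    /libxl/static_shm/<id>/
        master       = <domid of the master>
        begin        = <start address in the master, hex>
        size         = <size in bytes, hex>
        prot         = <permission, e.g. "rw">
        cache_policy = <cache policy of the master mapping>
        usercnt      = <number of domains currently using the region>
        slaves/<domid>/
            begin, size, offset, prot

    /local/domain/<domid>/static_shm/<id>/role = "master" or "slave"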

Signed-off-by: Zhongze Liu <blackskygg@gmail.com>
Signed-off-by: Stefano Stabellini <stefanos@xilinx.com>

Cc: Wei Liu <wei.liu2@citrix.com>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Julien Grall <julien.grall@arm.com>
Cc: xen-devel@lists.xen.org
---
Changes in v5:
- fix typos
- add comments
- add value checks (including alignment checks) in sshm_setdefaults
---
 tools/libxl/Makefile         |   3 +-
 tools/libxl/libxl_arch.h     |   6 +
 tools/libxl/libxl_arm.c      |  15 ++
 tools/libxl/libxl_create.c   |  27 +++
 tools/libxl/libxl_internal.h |  14 ++
 tools/libxl/libxl_sshm.c     | 405 +++++++++++++++++++++++++++++++++++++++++++
 tools/libxl/libxl_x86.c      |  19 ++
 7 files changed, 488 insertions(+), 1 deletion(-)
 create mode 100644 tools/libxl/libxl_sshm.c

diff --git a/tools/libxl/Makefile b/tools/libxl/Makefile
index 6da342e..53af186 100644
--- a/tools/libxl/Makefile
+++ b/tools/libxl/Makefile
@@ -140,7 +140,8 @@ LIBXL_OBJS = flexarray.o libxl.o libxl_create.o libxl_dm.o libxl_pci.o \
 			libxl_vtpm.o libxl_nic.o libxl_disk.o libxl_console.o \
 			libxl_cpupool.o libxl_mem.o libxl_sched.o libxl_tmem.o \
 			libxl_9pfs.o libxl_domain.o libxl_vdispl.o \
-			libxl_pvcalls.o libxl_vsnd.o libxl_vkb.o $(LIBXL_OBJS-y)
+			libxl_pvcalls.o libxl_vsnd.o libxl_vkb.o libxl_sshm.o \
+			$(LIBXL_OBJS-y)
 LIBXL_OBJS += libxl_genid.o
 LIBXL_OBJS += _libxl_types.o libxl_flask.o _libxl_types_internal.o
 
diff --git a/tools/libxl/libxl_arch.h b/tools/libxl/libxl_arch.h
index 930570e..63c26cc 100644
--- a/tools/libxl/libxl_arch.h
+++ b/tools/libxl/libxl_arch.h
@@ -73,6 +73,12 @@ int libxl__arch_extra_memory(libxl__gc *gc,
                              const libxl_domain_build_info *info,
                              uint64_t *out);
 
+_hidden
+bool libxl__arch_domain_support_sshm(const libxl_domain_build_info *b_info);
+
+_hidden
+int libxl__arch_domain_sshm_cachepolicy_setdefault(libxl_static_shm *sshm);
+
 #if defined(__i386__) || defined(__x86_64__)
 
 #define LAPIC_BASE_ADDRESS  0xfee00000
diff --git a/tools/libxl/libxl_arm.c b/tools/libxl/libxl_arm.c
index 25dc3de..054ad58 100644
--- a/tools/libxl/libxl_arm.c
+++ b/tools/libxl/libxl_arm.c
@@ -1138,6 +1138,21 @@ void libxl__arch_domain_build_info_setdefault(libxl__gc *gc,
     libxl_domain_build_info_init_type(b_info, LIBXL_DOMAIN_TYPE_PVH);
 }
 
+bool libxl__arch_domain_support_sshm(const libxl_domain_build_info *b_info)
+{
+    return true;
+}
+
+int libxl__arch_domain_sshm_cachepolicy_setdefault(libxl_static_shm *sshm)
+{
+    if (sshm->cache_policy == LIBXL_SSHM_CACHEPOLICY_UNKNOWN)
+        sshm->cache_policy = LIBXL_SSHM_CACHEPOLICY_ARM_NORMAL;
+    if (sshm->cache_policy >= LIBXL_SSHM_CACHEPOLICY_X86_NORMAL)
+        return ERROR_INVAL;
+
+    return 0;
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
index 320dbed..45ae9e4 100644
--- a/tools/libxl/libxl_create.c
+++ b/tools/libxl/libxl_create.c
@@ -513,6 +513,14 @@ int libxl__domain_build(libxl__gc *gc,
         ret = ERROR_INVAL;
         goto out;
     }
+
+    /* The p2m has been set up; we can map the static shared memory now. */
+    ret = libxl__sshm_add(gc, domid, d_config->sshms, d_config->num_sshms);
+    if (ret != 0) {
+        LOG(ERROR, "failed to map static shared memory");
+        goto out;
+    }
+
     ret = libxl__build_post(gc, domid, info, state, vments, localents);
 out:
     return ret;
@@ -949,6 +957,25 @@ static void initiate_domain_create(libxl__egc *egc,
         goto error_out;
     }
 
+    if (d_config->num_sshms != 0 &&
+        !libxl__arch_domain_support_sshm(&d_config->b_info)) {
+        LOGD(ERROR, domid, "static_shm is not supported by this domain type.");
+        ret = ERROR_INVAL;
+        goto error_out;
+    }
+
+    for (i = 0; i < d_config->num_sshms; ++i) {
+        ret = libxl__sshm_setdefault(gc, domid, &d_config->sshms[i]);
+        if (ret) {
+            LOGD(ERROR, domid, "Unable to set defaults for static shm");
+            goto error_out;
+        }
+    }
+
+    ret = libxl__sshm_check_overlap(gc, domid,
+                                    d_config->sshms, d_config->num_sshms);
+    if (ret) goto error_out;
+
     ret = libxl__domain_make(gc, d_config, &dcs->build_state, &domid);
     if (ret) {
         LOGD(ERROR, domid, "cannot make domain: %d", ret);
diff --git a/tools/libxl/libxl_internal.h b/tools/libxl/libxl_internal.h
index 43947b1..6f31a3d 100644
--- a/tools/libxl/libxl_internal.h
+++ b/tools/libxl/libxl_internal.h
@@ -4434,6 +4434,20 @@ static inline const char *libxl__qemu_qmp_path(libxl__gc *gc, int domid)
 #endif
 
 /*
+ * Set up static shared ram pages for HVM domains to communicate
+ *
+ * This function should only be called after the memory map is constructed
+ * and before any further memory access.
+ */
+_hidden int libxl__sshm_add(libxl__gc *gc, uint32_t domid,
+                            libxl_static_shm *sshm, int len);
+
+_hidden int libxl__sshm_check_overlap(libxl__gc *gc, uint32_t domid,
+                                      libxl_static_shm *sshms, int len);
+_hidden int libxl__sshm_setdefault(libxl__gc *gc, uint32_t domid,
+                                   libxl_static_shm *sshm);
+
+/*
  * Local variables:
  * mode: C
  * c-basic-offset: 4
diff --git a/tools/libxl/libxl_sshm.c b/tools/libxl/libxl_sshm.c
new file mode 100644
index 0000000..f61b80c
--- /dev/null
+++ b/tools/libxl/libxl_sshm.c
@@ -0,0 +1,405 @@
+#include "libxl_osdeps.h"
+#include "libxl_internal.h"
+#include "libxl_arch.h"
+
+#define SSHM_PATH(id) GCSPRINTF("/libxl/static_shm/%s", id)
+
+#define SSHM_ERROR(domid, sshmid, f, ...)                               \
+    LOGD(ERROR, domid, "static_shm id = %s: " f, sshmid, ##__VA_ARGS__)
+
+
+/* Set default values for libxl_static_shm */
+int libxl__sshm_setdefault(libxl__gc *gc, uint32_t domid,
+                           libxl_static_shm *sshm)
+{
+    int rc;
+
+    if (sshm->role != LIBXL_SSHM_ROLE_SLAVE &&
+        sshm->role != LIBXL_SSHM_ROLE_MASTER)
+        return ERROR_INVAL;
+    if (sshm->begin & ~XC_PAGE_MASK ||
+        sshm->size & ~XC_PAGE_MASK ||
+        (sshm->offset != LIBXL_SSHM_RANGE_UNKNOWN &&
+        sshm->offset & ~XC_PAGE_MASK)) {
+        SSHM_ERROR(domid, sshm->id,
+                   "begin/size/offset is not a multiple of 4K");
+        return ERROR_INVAL;
+    }
+
+    /* role-specific checks */
+    if (sshm->role == LIBXL_SSHM_ROLE_SLAVE) {
+        if (sshm->offset == LIBXL_SSHM_RANGE_UNKNOWN)
+            sshm->offset = 0;
+        if (sshm->cache_policy != LIBXL_SSHM_CACHEPOLICY_UNKNOWN) {
+            SSHM_ERROR(domid, sshm->id,
+                       "cache_policy is only applicable to master domains");
+            rc = ERROR_INVAL;
+            goto out;
+        }
+    } else {
+        if (sshm->offset != LIBXL_SSHM_RANGE_UNKNOWN) {
+            SSHM_ERROR(domid, sshm->id,
+                       "offset is only applicable to slave domains");
+            rc = ERROR_INVAL;
+            goto out;
+        }
+
+        rc = libxl__arch_domain_sshm_cachepolicy_setdefault(sshm);
+        if (rc) {
+            SSHM_ERROR(domid, sshm->id,
+                       "cache policy not supported on this platform");
+            goto out;
+        }
+    }
+
+    rc = 0;
+out:
+    return rc;
+}
+
+/* Comparator for sorting sshm ranges by sshm->begin */
+static int sshm_range_cmp(const void *a, const void *b)
+{
+    libxl_static_shm *const *sshma = a, *const *sshmb = b;
+    return (*sshma)->begin > (*sshmb)->begin ? 1 : -1;
+}
+
+/* Check if the sshm slave configs in @sshm overlap */
+int libxl__sshm_check_overlap(libxl__gc *gc, uint32_t domid,
+                                     libxl_static_shm *sshms, int len)
+{
+
+    const libxl_static_shm **slave_sshms = NULL;
+    int num_slaves;
+    int i;
+
+    if (!len) return 0;
+
+    slave_sshms = libxl__calloc(gc, len, sizeof(slave_sshms[0]));
+    num_slaves = 0;
+    for (i = 0; i < len; ++i) {
+        if (sshms[i].role == LIBXL_SSHM_ROLE_SLAVE)
+            slave_sshms[num_slaves++] = sshms + i;
+    }
+    qsort(slave_sshms, num_slaves, sizeof(slave_sshms[0]), sshm_range_cmp);
+
+    for (i = 0; i < num_slaves - 1; ++i) {
+        if (slave_sshms[i+1]->begin <
+            slave_sshms[i]->begin + slave_sshms[i]->size) {
+            SSHM_ERROR(domid, slave_sshms[i+1]->id, "slave ranges overlap.");
+            return ERROR_INVAL;
+        }
+    }
+
+    return 0;
+}
+
+/*   libxl__sshm_do_map -- map pages into slave's physmap
+ *
+ *   This function maps
+ *     master gfn: [@msshm->begin + @sshm->offset,
+ *                  @msshm->begin + @msshm->size + @sshm->offset)
+ *   into
+ *     slave gfn: [@sshm->begin, @sshm->begin + @sshm->size)
+ *
+ *   The gfns of the pages that are successfully mapped will be stored
+ *   in @mapped, and the number of the gfns will be stored in @nmapped.
+ *
+ *   The caller has to guarantee that all the values are page-aligned.
+ */
+static int libxl__sshm_do_map(libxl__gc *gc, uint32_t mid, uint32_t sid,
+                              libxl_static_shm *sshm, libxl_static_shm *msshm,
+                              xen_pfn_t *mapped, unsigned int *nmapped)
+{
+    int rc;
+    int i;
+    xen_pfn_t num_mpages, num_spages, num_success, offset;
+    int *errs;
+    xen_ulong_t *idxs;
+    xen_pfn_t *gpfns;
+
+    num_mpages = (msshm->size) >> XC_PAGE_SHIFT;
+    num_spages = (sshm->size) >> XC_PAGE_SHIFT;
+    offset = sshm->offset >> XC_PAGE_SHIFT;
+
+    /* Check range. Test offset < mpages first to avoid overflow */
+    if ((offset >= num_mpages) || (num_mpages - offset < num_spages)) {
+        SSHM_ERROR(sid, sshm->id, "exceeds master's address space.");
+        rc = ERROR_INVAL;
+        goto out;
+    }
+
+    /* fill out the gfn's and do the mapping */
+    errs = libxl__calloc(gc, num_spages, sizeof(int));
+    idxs = libxl__calloc(gc, num_spages, sizeof(xen_ulong_t));
+    gpfns = libxl__calloc(gc, num_spages, sizeof(xen_pfn_t));
+    for (i = 0; i < num_spages; i++) {
+        idxs[i] = (msshm->begin >> XC_PAGE_SHIFT) + offset + i;
+        gpfns[i]= (sshm->begin >> XC_PAGE_SHIFT) + i;
+    }
+    rc = xc_domain_add_to_physmap_batch(CTX->xch,
+                                        sid, mid,
+                                        XENMAPSPACE_gmfn_share,
+                                        num_spages,
+                                        idxs, gpfns, errs);
+
+    num_success = 0;
+    for (i = 0; i < num_spages; i++) {
+        if (errs[i]) {
+            SSHM_ERROR(sid, sshm->id,
+                       "can't map at address 0x%"PRIx64".",
+                       gpfns[i] << XC_PAGE_SHIFT);
+            rc = ERROR_FAIL;
+        } else {
+            mapped[num_success++] = gpfns[i];
+        }
+    }
+    *nmapped = num_success;
+    if (rc) goto out;
+
+    rc = 0;
+out:
+    return rc;
+}
+
+/* Xenstore ops are protected by a transaction */
+static int libxl__sshm_incref(libxl__gc *gc, xs_transaction_t xt,
+                              const char *sshm_path)
+{
+    int rc, count;
+    const char *count_path, *count_string;
+
+    count_path = GCSPRINTF("%s/usercnt", sshm_path);
+    rc = libxl__xs_read_checked(gc, xt, count_path, &count_string);
+    if (rc) goto out;
+    count = atoi(count_string);
+
+    count_string = GCSPRINTF("%d", count+1);
+    rc = libxl__xs_write_checked(gc, xt, count_path, count_string);
+    if (rc) goto out;
+
+    rc = 0;
+out:
+    return rc;
+}
+
+static int libxl__sshm_add_slave(libxl__gc *gc, uint32_t domid,
+                                 libxl_static_shm *sshm)
+{
+    int rc, i;
+    const char *sshm_path, *slave_path;
+    const char *dom_path, *dom_sshm_path, *dom_role_path;
+    const char *xs_value;
+    char *ents[9];
+    libxl_static_shm master_sshm;
+    uint32_t master_domid;
+    xen_pfn_t *mapped;
+    unsigned int nmapped = 0;
+    xs_transaction_t xt = XBT_NULL;
+    bool isretry;
+
+    sshm_path = SSHM_PATH(sshm->id);
+    slave_path = GCSPRINTF("%s/slaves/%"PRIu32, sshm_path, domid);
+    dom_path = libxl__xs_get_dompath(gc, domid);
+    /* the domain should be in xenstore by now */
+    assert(dom_path);
+    dom_sshm_path = GCSPRINTF("%s/static_shm/%s", dom_path, sshm->id);
+    dom_role_path = GCSPRINTF("%s/role", dom_sshm_path);
+
+    /* prepare the slave xenstore entries */
+    ents[0] = "begin";
+    ents[1] = GCSPRINTF("0x%"PRIx64, sshm->begin);
+    ents[2] = "size";
+    ents[3] = GCSPRINTF("0x%"PRIx64, sshm->size);
+    ents[4] = "offset";
+    ents[5] = GCSPRINTF("0x%"PRIx64, sshm->offset);
+    ents[6] = "prot";
+    ents[7] = libxl__strdup(gc, libxl_sshm_prot_to_string(sshm->prot));
+    ents[8] = NULL;
+
+    mapped = libxl__calloc(gc, sshm->size >> XC_PAGE_SHIFT, sizeof(xen_pfn_t));
+
+    isretry = false;
+    for (;;) {
+        rc = libxl__xs_transaction_start(gc, &xt);
+        if (rc) goto out;
+
+        if (!libxl__xs_read(gc, xt, sshm_path)) {
+            SSHM_ERROR(domid, sshm->id, "no master found.");
+            rc = ERROR_FAIL;
+            goto out;
+        }
+
+        /* every ID can appear in each domain at most once */
+        if (libxl__xs_read(gc, xt, dom_sshm_path)) {
+            SSHM_ERROR(domid, sshm->id,
+                       "domain tried to map the same ID twice.");
+            rc = ERROR_FAIL;
+            goto out;
+        }
+
+        /* look at the master info and see if we could do the mapping */
+        rc = libxl__xs_read_checked(gc, xt,
+                                    GCSPRINTF("%s/prot", sshm_path),
+                                    &xs_value);
+        if (rc) goto out;
+        libxl_sshm_prot_from_string(xs_value, &master_sshm.prot);
+
+        rc = libxl__xs_read_checked(gc, xt,
+                                    GCSPRINTF("%s/begin", sshm_path),
+                                    &xs_value);
+        if (rc) goto out;
+        master_sshm.begin = strtoull(xs_value, NULL, 16);
+
+        rc = libxl__xs_read_checked(gc, xt,
+                                    GCSPRINTF("%s/size", sshm_path),
+                                    &xs_value);
+        if (rc) goto out;
+        master_sshm.size = strtoull(xs_value, NULL, 16);
+
+        rc = libxl__xs_read_checked(gc, xt,
+                                    GCSPRINTF("%s/master", sshm_path),
+                                    &xs_value);
+        if (rc) goto out;
+        master_domid = strtoull(xs_value, NULL, 10);
+
+        if (sshm->prot == LIBXL_SSHM_PROT_UNKNOWN)
+            sshm->prot = master_sshm.prot;
+
+        /* check if the slave is asking too much permission */
+        if (master_sshm.prot < sshm->prot) {
+            SSHM_ERROR(domid, sshm->id, "slave is asking too much permission.");
+            rc = ERROR_INVAL;
+            goto out;
+        }
+
+        /* all checks passed, do the job */
+        if (!isretry) {
+            rc = libxl__sshm_do_map(gc, master_domid, domid,
+                                    sshm, &master_sshm,
+                                    mapped, &nmapped);
+            if (rc) goto out;
+        }
+
+        /* write the result to xenstore and commit */
+        rc = libxl__xs_write_checked(gc, xt, dom_role_path, "slave");
+        if (rc) goto out;
+        rc = libxl__xs_writev(gc, xt, slave_path, ents);
+        if (rc) goto out;
+        rc = libxl__sshm_incref(gc, xt, sshm_path);
+        if (rc) goto out;
+
+        rc = libxl__xs_transaction_commit(gc, &xt);
+        if (!rc) break;
+        if (rc < 0) goto out;
+        isretry = true;
+    }
+
+    rc = 0;
+out:
+    if (rc) {
+        /* roll back successfully mapped pages */
+        SSHM_ERROR(domid, sshm->id, "failed to map some pages, cancelling.");
+        for (i = 0; i < nmapped; i++) {
+            xc_domain_remove_from_physmap(CTX->xch, domid, mapped[i]);
+        }
+    }
+
+    libxl__xs_transaction_abort(gc, &xt);
+
+    return rc;
+}
+
+static int libxl__sshm_add_master(libxl__gc *gc, uint32_t domid,
+                                  libxl_static_shm *sshm)
+{
+    int rc;
+    const char *sshm_path, *dom_path, *dom_role_path;
+    char *ents[13];
+    struct xs_permissions noperm;
+    xs_transaction_t xt = XBT_NULL;
+
+    sshm_path = SSHM_PATH(sshm->id);
+    dom_path = libxl__xs_get_dompath(gc, domid);
+    /* the domain should be in xenstore by now */
+    assert(dom_path);
+    dom_role_path = GCSPRINTF("%s/static_shm/%s/role", dom_path, sshm->id);
+
+    /* prepare the xenstore entries */
+    ents[0] = "master";
+    ents[1] = GCSPRINTF("%"PRIu32, domid);
+    ents[2] = "begin";
+    ents[3] = GCSPRINTF("0x%"PRIx64, sshm->begin);
+    ents[4] = "size";
+    ents[5] = GCSPRINTF("0x%"PRIx64, sshm->size);
+    ents[6] = "prot";
+    ents[7] = libxl__strdup(gc, libxl_sshm_prot_to_string(sshm->prot));
+    ents[8] = "cache_policy";
+    ents[9] = libxl__strdup(gc,
+                            libxl_sshm_cachepolicy_to_string(sshm->cache_policy));
+    ents[10] = "usercnt";
+    ents[11] = "1";
+    ents[12] = NULL;
+
+    /* could only be accessed by Dom0 */
+    noperm.id = 0;
+    noperm.perms = XS_PERM_NONE;
+
+    for (;;) {
+        rc = libxl__xs_transaction_start(gc, &xt);
+        if (rc) goto out;
+
+        if (!libxl__xs_read(gc, xt, sshm_path)) {
+            /* every ID can appear in each domain at most once */
+            if (libxl__xs_read(gc, xt, dom_role_path)) {
+                SSHM_ERROR(domid, sshm->id,
+                           "domain tried to map the same ID twice.");
+                rc = ERROR_FAIL;
+                goto out;
+            }
+            rc = libxl__xs_write_checked(gc, xt, dom_role_path, "master");
+            if (rc) goto out;
+
+            libxl__xs_mknod(gc, xt, sshm_path, &noperm, 1);
+            libxl__xs_writev(gc, xt, sshm_path, ents);
+        } else {
+            SSHM_ERROR(domid, sshm->id, "can only have one master.");
+            rc = ERROR_FAIL;
+            goto out;
+        }
+
+        rc = libxl__xs_transaction_commit(gc, &xt);
+        if (!rc) break;
+        if (rc < 0) goto out;
+    }
+
+    rc = 0;
+out:
+    libxl__xs_transaction_abort(gc, &xt);
+    return rc;
+}
+
+int libxl__sshm_add(libxl__gc *gc,  uint32_t domid,
+                    libxl_static_shm *sshms, int len)
+{
+    int rc, i;
+
+    for (i = 0; i < len; ++i) {
+        if (sshms[i].role == LIBXL_SSHM_ROLE_SLAVE) {
+            rc = libxl__sshm_add_slave(gc, domid, sshms+i);
+        } else {
+            rc = libxl__sshm_add_master(gc, domid, sshms+i);
+        }
+        if (rc)  return rc;
+    }
+
+    return 0;
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/tools/libxl/libxl_x86.c b/tools/libxl/libxl_x86.c
index 8b6759c..3e70a69 100644
--- a/tools/libxl/libxl_x86.c
+++ b/tools/libxl/libxl_x86.c
@@ -619,6 +619,25 @@ void libxl__arch_domain_build_info_setdefault(libxl__gc *gc,
     libxl_defbool_setdefault(&b_info->acpi, true);
 }
 
+bool libxl__arch_domain_support_sshm(const libxl_domain_build_info *b_info)
+{
+    /* FIXME: Mark this as unsupported because calling p2m_add_foreign on
+     * two DomUs is currently not allowed on x86; see the comments in
+     * x86/mm/p2m.c:p2m_add_foreign.
+     */
+     return false;
+}
+
+int libxl__arch_domain_sshm_cachepolicy_setdefault(libxl_static_shm *sshm)
+{
+    if (sshm->cache_policy == LIBXL_SSHM_CACHEPOLICY_UNKNOWN)
+        sshm->cache_policy = LIBXL_SSHM_CACHEPOLICY_X86_NORMAL;
+    if (sshm->cache_policy < LIBXL_SSHM_CACHEPOLICY_X86_NORMAL)
+        return ERROR_INVAL;
+
+    return 0;
+}
+
 /*
  * Local variables:
  * mode: C
-- 
1.9.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel


* [PATCH v8 4/8] libxl: support unmapping static shared memory areas during domain destruction
  2018-10-08 23:37 [PATCH v8 0/8] Allow setting up shared memory areas between VMs from xl config files Stefano Stabellini
                   ` (2 preceding siblings ...)
  2018-10-08 23:37 ` [PATCH v8 3/8] libxl: support mapping static shared memory areas during domain creation Stefano Stabellini
@ 2018-10-08 23:37 ` Stefano Stabellini
  2018-10-24 11:35   ` Oleksandr Tyshchenko
                     ` (2 more replies)
  2018-10-08 23:37 ` [PATCH v8 5/8] libxl:xl: add parsing code to parse "libxl_static_sshm" from xl config files Stefano Stabellini
                   ` (4 subsequent siblings)
  8 siblings, 3 replies; 46+ messages in thread
From: Stefano Stabellini @ 2018-10-08 23:37 UTC (permalink / raw)
  To: xen-devel
  Cc: Stefano Stabellini, sstabellini, wei.liu2, blackskygg,
	ian.jackson, julien.grall

From: Zhongze Liu <blackskygg@gmail.com>

Author: Zhongze Liu <blackskygg@gmail.com>

Add libxl__sshm_del to unmap static shared memory areas mapped by
libxl__sshm_add during domain creation. The unmapping process is:

* For a master: decrease the refcount of the sshm region; if the refcount
  reaches 0, clean up the whole sshm path.

* For a slave:
  1) Unmap the shared pages and clean up the related xenstore entries. If
     the system is working normally, all the shared pages will be unmapped,
     so there won't be page leaks. In case of errors, the unmapping process
     will carry on and unmap all the other pages that can be unmapped, so
     those pages won't be leaked either.
  2) Decrease the refcount of the sshm region; if the refcount reaches 0,
     clean up the whole sshm path.

This is for the proposal "Allow setting up shared memory areas between VMs
from xl config file" (see [1]).

[1] https://lists.xen.org/archives/html/xen-devel/2017-08/msg03242.html

Signed-off-by: Zhongze Liu <blackskygg@gmail.com>
Signed-off-by: Stefano Stabellini <stefanos@xilinx.com>

Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Wei Liu <wei.liu2@citrix.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Julien Grall <julien.grall@arm.com>
Cc: xen-devel@lists.xen.org

---
Changes in v5:
- fix typos
- add comments
- cannot move unmap before xenstore transaction because it needs to
  retrieve begin/size values from xenstore
---
 tools/libxl/libxl_domain.c   |   5 ++
 tools/libxl/libxl_internal.h |   2 +
 tools/libxl/libxl_sshm.c     | 107 +++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 114 insertions(+)

diff --git a/tools/libxl/libxl_domain.c b/tools/libxl/libxl_domain.c
index 3377bba..3f7ffa6 100644
--- a/tools/libxl/libxl_domain.c
+++ b/tools/libxl/libxl_domain.c
@@ -1060,6 +1060,11 @@ void libxl__destroy_domid(libxl__egc *egc, libxl__destroy_domid_state *dis)
         goto out;
     }
 
+    rc = libxl__sshm_del(gc, domid);
+    if (rc) {
+        LOGD(ERROR, domid, "Deleting static shm failed.");
+    }
+
     if (libxl__device_pci_destroy_all(gc, domid) < 0)
         LOGD(ERROR, domid, "Pci shutdown failed");
     rc = xc_domain_pause(ctx->xch, domid);
diff --git a/tools/libxl/libxl_internal.h b/tools/libxl/libxl_internal.h
index 6f31a3d..e86d356 100644
--- a/tools/libxl/libxl_internal.h
+++ b/tools/libxl/libxl_internal.h
@@ -4442,6 +4442,8 @@ static inline const char *libxl__qemu_qmp_path(libxl__gc *gc, int domid)
 _hidden int libxl__sshm_add(libxl__gc *gc, uint32_t domid,
                             libxl_static_shm *sshm, int len);
 
+_hidden int libxl__sshm_del(libxl__gc *gc, uint32_t domid);
+
 _hidden int libxl__sshm_check_overlap(libxl__gc *gc, uint32_t domid,
                                       libxl_static_shm *sshms, int len);
 _hidden int libxl__sshm_setdefault(libxl__gc *gc, uint32_t domid,
diff --git a/tools/libxl/libxl_sshm.c b/tools/libxl/libxl_sshm.c
index f61b80c..9672056 100644
--- a/tools/libxl/libxl_sshm.c
+++ b/tools/libxl/libxl_sshm.c
@@ -94,6 +94,113 @@ int libxl__sshm_check_overlap(libxl__gc *gc, uint32_t domid,
     return 0;
 }
 
+/*
+ * Decrease the refcount of an sshm. When refcount reaches 0,
+ * clean up the whole sshm path.
+ * Xenstore operations are done within the same transaction.
+ */
+static void libxl__sshm_decref(libxl__gc *gc, xs_transaction_t xt,
+                               const char *sshm_path)
+{
+    int count;
+    const char *count_path, *count_string;
+
+    count_path = GCSPRINTF("%s/usercnt", sshm_path);
+    if (libxl__xs_read_checked(gc, xt, count_path, &count_string))
+        return;
+    count = atoi(count_string);
+
+    if (--count == 0) {
+        libxl__xs_path_cleanup(gc, xt, sshm_path);
+        return;
+    }
+
+    count_string = GCSPRINTF("%d", count);
+    libxl__xs_write_checked(gc, xt, count_path, count_string);
+
+    return;
+}
+
+static void libxl__sshm_do_unmap(libxl__gc *gc, uint32_t domid, const char *id,
+                                 uint64_t begin, uint64_t size)
+{
+    uint64_t end;
+    begin >>= XC_PAGE_SHIFT;
+    size >>= XC_PAGE_SHIFT;
+    end = begin + size;
+    for (; begin < end; ++begin) {
+        if (xc_domain_remove_from_physmap(CTX->xch, domid, begin)) {
+            SSHM_ERROR(domid, id,
+                       "unable to unmap shared page at 0x%"PRIx64".",
+                       begin);
+        }
+    }
+}
+
+static void libxl__sshm_del_slave(libxl__gc *gc, xs_transaction_t xt,
+                                  uint32_t domid, const char *id, bool isretry)
+{
+    const char *slave_path, *begin_str, *size_str;
+    uint64_t begin, size;
+
+    slave_path = GCSPRINTF("%s/slaves/%"PRIu32, SSHM_PATH(id), domid);
+
+    begin_str = libxl__xs_read(gc, xt, GCSPRINTF("%s/begin", slave_path));
+    size_str = libxl__xs_read(gc, xt, GCSPRINTF("%s/size", slave_path));
+    begin = strtoull(begin_str, NULL, 16);
+    size = strtoull(size_str, NULL, 16);
+
+    libxl__sshm_do_unmap(gc, domid, id, begin, size);
+    libxl__xs_path_cleanup(gc, xt, slave_path);
+}
+
+/* Delete static_shm entries in the xenstore. */
+int libxl__sshm_del(libxl__gc *gc,  uint32_t domid)
+{
+    int rc, i;
+    bool isretry;
+    xs_transaction_t xt = XBT_NULL;
+    const char *dom_path, *dom_sshm_path, *role;
+    char **sshm_ents;
+    unsigned int sshm_num;
+
+    dom_path = libxl__xs_get_dompath(gc, domid);
+    dom_sshm_path = GCSPRINTF("%s/static_shm", dom_path);
+
+    isretry = false;
+    for (;;) {
+        rc = libxl__xs_transaction_start(gc, &xt);
+        if (rc) goto out;
+
+        if (libxl__xs_read(gc, xt, dom_sshm_path)) {
+            sshm_ents = libxl__xs_directory(gc, xt, dom_sshm_path, &sshm_num);
+            if (!sshm_ents) continue;
+
+            for (i = 0; i < sshm_num; ++i) {
+                role = libxl__xs_read(gc, xt,
+                                      GCSPRINTF("%s/%s/role",
+                                                dom_sshm_path,
+                                                sshm_ents[i]));
+                assert(role);
+                if (!strncmp(role, "slave", 5))
+                    libxl__sshm_del_slave(gc, xt, domid, sshm_ents[i], isretry);
+
+                libxl__sshm_decref(gc, xt, SSHM_PATH(sshm_ents[i]));
+            }
+        }
+
+        rc = libxl__xs_transaction_commit(gc, &xt);
+        if (!rc) break;
+        if (rc < 0) goto out;
+        isretry = true;
+    }
+
+    rc = 0;
+out:
+    libxl__xs_transaction_abort(gc, &xt);
+    return rc;
+}
+
 /*   libxl__sshm_do_map -- map pages into slave's physmap
  *
  *   This function maps
-- 
1.9.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel


* [PATCH v8 5/8] libxl:xl: add parsing code to parse "libxl_static_sshm" from xl config files
  2018-10-08 23:37 [PATCH v8 0/8] Allow setting up shared memory areas between VMs from xl config files Stefano Stabellini
                   ` (3 preceding siblings ...)
  2018-10-08 23:37 ` [PATCH v8 4/8] libxl: support unmapping static shared memory areas during domain destruction Stefano Stabellini
@ 2018-10-08 23:37 ` Stefano Stabellini
  2018-10-08 23:37 ` [PATCH v8 6/8] docs: documentation about static shared memory regions Stefano Stabellini
                   ` (3 subsequent siblings)
  8 siblings, 0 replies; 46+ messages in thread
From: Stefano Stabellini @ 2018-10-08 23:37 UTC (permalink / raw)
  To: xen-devel
  Cc: Stefano Stabellini, sstabellini, wei.liu2, blackskygg,
	ian.jackson, julien.grall

From: Zhongze Liu <blackskygg@gmail.com>

Author: Zhongze Liu <blackskygg@gmail.com>

Add the parsing utilities for the newly introduced libxl_static_shm struct
to the libxl/libxlu_* family, and add related parsing code in xl to parse
the struct from xl config files. This is for the proposal "Allow setting
up shared memory areas between VMs from xl config file" (see [1]).

[1] https://lists.xen.org/archives/html/xen-devel/2017-08/msg03242.html
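
As an illustration of the new libxlu entry point (not part of this patch),
a standalone caller could parse a single static_shm item along these lines
(the spec string is made up):

    #include <stdio.h>
    #include <inttypes.h>
    #include <libxl.h>
    #include <libxlutil.h>

    int main(void)
    {
        XLU_Config *cfg = xlu_cfg_init(stderr, "<example>");
        libxl_static_shm sshm;
        int rc;

        libxl_static_shm_init(&sshm);
        rc = xlu_sshm_parse(cfg, "id=ID1, begin=0x100000000, size=0x200000, role=master",
                            &sshm);
        if (!rc)
            printf("parsed id=%s begin=0x%"PRIx64" size=0x%"PRIx64"\n",
                   sshm.id, sshm.begin, sshm.size);

        libxl_static_shm_dispose(&sshm);
        xlu_cfg_destroy(cfg);
        return rc ? 1 : 0;
    }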

Signed-off-by: Zhongze Liu <blackskygg@gmail.com>
Signed-off-by: Stefano Stabellini <stefanos@xilinx.com>

Cc: Wei Liu <wei.liu2@citrix.com>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Julien Grall <julien.grall@arm.com>
Cc: xen-devel@lists.xen.org
---
Changes in v5:
- remove alignment checks, they were moved to libxl
---
 tools/libxl/Makefile      |   2 +-
 tools/libxl/libxlu_sshm.c | 206 ++++++++++++++++++++++++++++++++++++++++++++++
 tools/libxl/libxlutil.h   |   6 ++
 tools/xl/xl_parse.c       |  25 +++++-
 4 files changed, 237 insertions(+), 2 deletions(-)
 create mode 100644 tools/libxl/libxlu_sshm.c

diff --git a/tools/libxl/Makefile b/tools/libxl/Makefile
index 53af186..f3de189 100644
--- a/tools/libxl/Makefile
+++ b/tools/libxl/Makefile
@@ -177,7 +177,7 @@ AUTOINCS= libxlu_cfg_y.h libxlu_cfg_l.h _libxl_list.h _paths.h \
 AUTOSRCS= libxlu_cfg_y.c libxlu_cfg_l.c
 AUTOSRCS += _libxl_save_msgs_callout.c _libxl_save_msgs_helper.c
 LIBXLU_OBJS = libxlu_cfg_y.o libxlu_cfg_l.o libxlu_cfg.o \
-	libxlu_disk_l.o libxlu_disk.o libxlu_vif.o libxlu_pci.o
+	libxlu_disk_l.o libxlu_disk.o libxlu_vif.o libxlu_pci.o libxlu_sshm.o
 $(LIBXLU_OBJS): CFLAGS += $(CFLAGS_libxenctrl) # For xentoollog.h
 
 $(TEST_PROG_OBJS) _libxl.api-for-check: CFLAGS += $(CFLAGS_libxentoollog) $(CFLAGS_libxentoolcore)
diff --git a/tools/libxl/libxlu_sshm.c b/tools/libxl/libxlu_sshm.c
new file mode 100644
index 0000000..7c3e512
--- /dev/null
+++ b/tools/libxl/libxlu_sshm.c
@@ -0,0 +1,206 @@
+#include "libxl_osdeps.h" /* must come before any other headers */
+#include "libxlu_internal.h"
+#include "xenctrl.h"
+
+#include <ctype.h>
+
+#define PARAM_RE(EXPR) "^\\s*" EXPR "\\s*(,|$)"
+#define WORD_RE         "([_a-zA-Z0-9]+)"
+#define EQU_RE         PARAM_RE(WORD_RE "\\s*=\\s*" WORD_RE)
+
+#define RET_INVAL(msg, curr_str)  do {              \
+        xlu__sshm_err(cfg, msg, curr_str);          \
+        rc = EINVAL;                                \
+        goto out;                                   \
+    } while(0)
+
+/* Set a member in libxl_static_shm and report an error if it's respecified;
+ * @curr_str indicates the head of the remaining string. */
+#define SET_VAL(var, name, type, value, curr_str)  do {                 \
+        if ((var) != LIBXL_SSHM_##type##_UNKNOWN && (var) != value) {   \
+            RET_INVAL("\"" name "\" respecified", curr_str);            \
+        }                                                               \
+        (var) = value;                                                  \
+    } while(0)
+
+
+static void xlu__sshm_err(XLU_Config *cfg, const char *msg,
+                          const char *curr_str) {
+    fprintf(cfg->report,
+            "%s: config parsing error in shared_memory: %s at '%s'\n",
+            cfg->config_source, msg, curr_str);
+}
+
+static int parse_prot(XLU_Config *cfg, char *str, libxl_sshm_prot *prot)
+{
+    int rc;
+    libxl_sshm_prot new_prot;
+
+    if (!strcmp(str, "rw")) {
+        new_prot = LIBXL_SSHM_PROT_RW;
+    } else {
+        RET_INVAL("invalid permission flags", str);
+    }
+
+    SET_VAL(*prot, "permission flags", PROT, new_prot, str);
+
+    rc = 0;
+
+ out:
+    return rc;
+}
+
+static int parse_cachepolicy(XLU_Config *cfg, char *str,
+                             libxl_sshm_cachepolicy *policy)
+{
+    int rc;
+    libxl_sshm_cachepolicy new_policy;
+
+    if (!strcmp(str, "ARM_normal")) {
+        new_policy = LIBXL_SSHM_CACHEPOLICY_ARM_NORMAL;
+    } else if (!strcmp(str, "x86_normal")) {
+        new_policy = LIBXL_SSHM_CACHEPOLICY_X86_NORMAL;
+    } else {
+        RET_INVAL("invalid cache policy", str);
+    }
+
+    SET_VAL(*policy, "cache policy", CACHEPOLICY, new_policy, str);
+    rc = 0;
+
+ out:
+    return rc;
+}
+
+/* handle key = value pairs */
+static int handle_equ(XLU_Config *cfg, char *key, char *val,
+                      libxl_static_shm *sshm)
+{
+    int rc;
+
+    if (!strcmp(key, "id")) {
+        if (sshm->id && !strcmp(sshm->id, val)) {
+            RET_INVAL("id respecified", val);
+        }
+
+        sshm->id = strdup(val);
+        if (!sshm->id) {
+            fprintf(stderr, "sshm parser out of memory\n");
+            rc = ENOMEM;
+            goto out;
+        }
+    } else if (!strcmp(key, "role")) {
+        libxl_sshm_role new_role;
+
+        if (!strcmp("master", val)) {
+            new_role = LIBXL_SSHM_ROLE_MASTER;
+        } else if (!strcmp("slave", val)) {
+            new_role = LIBXL_SSHM_ROLE_SLAVE;
+        } else {
+            RET_INVAL("invalid role", val);
+        }
+
+        SET_VAL(sshm->role, "role", ROLE, new_role, val);
+    } else if (!strcmp(key, "begin") ||
+               !strcmp(key, "size") ||
+               !strcmp(key, "offset")) {
+        char *endptr;
+        int base = 10;
+        uint64_t new_addr;
+
+        /* Could be in hex form. Note that we don't need to check the length here,
+         * for val[] is NULL-terminated */
+        if (val[0] == '0' && val[1] == 'x') { base = 16; }
+        new_addr = strtoull(val, &endptr, base);
+        if (errno == ERANGE || *endptr)
+            RET_INVAL("invalid begin/size/offset", val);
+
+        if (key[0] == 'b') {
+            SET_VAL(sshm->begin, "beginning address", RANGE, new_addr, val);
+        } else if(key[0] == 's'){
+            SET_VAL(sshm->size, "size", RANGE, new_addr, val);
+        } else {
+            SET_VAL(sshm->offset, "offset", RANGE, new_addr, val);
+        }
+    } else if (!strcmp(key, "prot")) {
+        rc = parse_prot(cfg, val, &sshm->prot);
+        if (rc) { goto out; }
+    } else if (!strcmp(key, "cache_policy")) {
+        rc = parse_cachepolicy(cfg, val, &sshm->cache_policy);
+        if (rc) { goto out; }
+    } else {
+        RET_INVAL("invalid option", key);
+    }
+
+    rc = 0;
+
+ out:
+    return rc;
+}
+
+int xlu_sshm_parse(XLU_Config *cfg, const char *spec,
+                   libxl_static_shm *sshm)
+{
+    int rc;
+    regex_t equ_rec;
+    char *buf2 = NULL, *ptr = NULL;
+    regmatch_t pmatch[3];
+
+    rc = regcomp(&equ_rec, EQU_RE, REG_EXTENDED);
+    if (rc) {
+        fprintf(stderr, "sshm parser failed to initialize\n");
+        goto out;
+    }
+
+    buf2 = ptr = strdup(spec);
+    if (!buf2) {
+        fprintf(stderr, "sshm parser out of memory\n");
+        rc = ENOMEM;
+        goto out;
+    }
+
+    /* main parsing loop */
+    while (true) {
+        if (!*ptr) { break; }
+        if (regexec(&equ_rec, ptr, 3, pmatch, 0))
+            RET_INVAL("unrecognized token", ptr);
+
+        ptr[pmatch[1].rm_eo] = '\0';
+        ptr[pmatch[2].rm_eo] = '\0';
+        rc = handle_equ(cfg, ptr + pmatch[1].rm_so,
+                        ptr + pmatch[2].rm_so, sshm);
+        if (rc) { goto out; }
+
+        ptr += pmatch[0].rm_eo;
+    }
+
+    if (*ptr) { RET_INVAL("invalid syntax", ptr); }
+
+    /* do some early checks */
+    if (!sshm->id) {
+        RET_INVAL("id not specified", spec);
+    }
+    if (sshm->begin == LIBXL_SSHM_RANGE_UNKNOWN) {
+        RET_INVAL("begin address not specified", spec);
+    }
+    if (sshm->size == LIBXL_SSHM_RANGE_UNKNOWN) {
+        RET_INVAL("size not specified", spec);
+    }
+    if (sshm->prot == LIBXL_SSHM_PROT_UNKNOWN) {
+        sshm->prot = LIBXL_SSHM_PROT_RW;
+    }
+
+    rc = 0;
+
+ out:
+    if (buf2) { free(buf2); }
+    regfree(&equ_rec);
+    return rc;
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/tools/libxl/libxlutil.h b/tools/libxl/libxlutil.h
index e81b644..ee39cb5 100644
--- a/tools/libxl/libxlutil.h
+++ b/tools/libxl/libxlutil.h
@@ -118,6 +118,12 @@ int xlu_rdm_parse(XLU_Config *cfg, libxl_rdm_reserve *rdm, const char *str);
 int xlu_vif_parse_rate(XLU_Config *cfg, const char *rate,
                        libxl_device_nic *nic);
 
+/*
+ * static shared memory specification parsing
+ */
+int xlu_sshm_parse(XLU_Config *cfg, const char *spec,
+                   libxl_static_shm *sshm);
+
 #endif /* LIBXLUTIL_H */
 
 /*
diff --git a/tools/xl/xl_parse.c b/tools/xl/xl_parse.c
index 0bda281..a23edec 100644
--- a/tools/xl/xl_parse.c
+++ b/tools/xl/xl_parse.c
@@ -1209,7 +1209,8 @@ void parse_config_data(const char *config_source,
     long l, vcpus = 0;
     XLU_Config *config;
     XLU_ConfigList *cpus, *vbds, *nics, *pcis, *cvfbs, *cpuids, *vtpms,
-                   *usbctrls, *usbdevs, *p9devs, *vdispls, *pvcallsifs_devs;
+                   *usbctrls, *usbdevs, *p9devs, *vdispls, *pvcallsifs_devs,
+                   *sshms;
     XLU_ConfigList *channels, *ioports, *irqs, *iomem, *viridian, *dtdevs,
                    *mca_caps;
     int num_ioports, num_irqs, num_iomem, num_cpus, num_viridian, num_mca_caps;
@@ -1919,6 +1920,28 @@ void parse_config_data(const char *config_source,
         }
     }
 
+    if (!xlu_cfg_get_list (config, "static_shm", &sshms, 0, 0)) {
+        d_config->num_sshms = 0;
+        d_config->sshms = NULL;
+        while ((buf = xlu_cfg_get_listitem (sshms, d_config->num_sshms)) != NULL) {
+            libxl_static_shm *sshm;
+            char *buf2 = strdup(buf);
+            int ret;
+
+            sshm = ARRAY_EXTEND_INIT_NODEVID(d_config->sshms,
+                                             d_config->num_sshms,
+                                             libxl_static_shm_init);
+            ret = xlu_sshm_parse(config, buf2, sshm);
+            if (ret) {
+                fprintf(stderr,
+                        "xl: Invalid argument for static_shm: %s", buf2);
+                exit(EXIT_FAILURE);
+            }
+
+            free(buf2);
+        }
+    }
+
     if (!xlu_cfg_get_list(config, "p9", &p9devs, 0, 0)) {
         libxl_device_p9 *p9;
         char *security_model = NULL;
-- 
1.9.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply related	[flat|nested] 46+ messages in thread

* [PATCH v8 6/8] docs: documentation about static shared memory regions
  2018-10-08 23:37 [PATCH v8 0/8] Allow setting up shared memory areas between VMs from xl config files Stefano Stabellini
                   ` (4 preceding siblings ...)
  2018-10-08 23:37 ` [PATCH v8 5/8] libxl:xl: add parsing code to parse "libxl_static_sshm" from xl config files Stefano Stabellini
@ 2018-10-08 23:37 ` Stefano Stabellini
  2018-10-30 15:17   ` Ian Jackson
  2018-10-08 23:37 ` [PATCH v8 7/8] xen/arm: export shared memory regions as reserved-memory on device tree Stefano Stabellini
                   ` (2 subsequent siblings)
  8 siblings, 1 reply; 46+ messages in thread
From: Stefano Stabellini @ 2018-10-08 23:37 UTC (permalink / raw)
  To: xen-devel
  Cc: Stefano Stabellini, sstabellini, wei.liu2, blackskygg,
	ian.jackson, julien.grall

From: Zhongze Liu <blackskygg@gmail.com>

Author: Zhongze Liu <blackskygg@gmail.com>

Add docs to document the motivation, usage, use cases and other
relevant information about the static shared memory feature.

This is for the proposal "Allow setting up shared memory areas between VMs
from xl config file". See:

  https://lists.xen.org/archives/html/xen-devel/2017-08/msg03242.html

Signed-off-by: Zhongze Liu <blackskygg@gmail.com>
Signed-off-by: Stefano Stabellini <stefanos@xilinx.com>

Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Wei Liu <wei.liu2@citrix.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Julien Grall <julien.grall@arm.com>
Cc: xen-devel@lists.xen.org

---
Changes in v6:
- add clarifications on memory allocation

Changes in v5:
- fix typos
---
 docs/man/xl-static-shm-configuration.pod.5 | 264 +++++++++++++++++++++++++++++
 docs/man/xl.cfg.pod.5.in                   |   8 +
 docs/misc/xenstore-paths.markdown          |  47 +++++
 3 files changed, 319 insertions(+)
 create mode 100644 docs/man/xl-static-shm-configuration.pod.5

diff --git a/docs/man/xl-static-shm-configuration.pod.5 b/docs/man/xl-static-shm-configuration.pod.5
new file mode 100644
index 0000000..81ff3f1
--- /dev/null
+++ b/docs/man/xl-static-shm-configuration.pod.5
@@ -0,0 +1,264 @@
+=head1 NAME
+
+xl-static-shm-configuration - XL Static Shared Memory Configuration Syntax
+
+
+(B<NOTE>: This is currently only available to ARM guests.)
+
+=head1 DESCRIPTION
+
+The static_shm option allows users to statically set up shared memory regions
+among a group of VMs, enabling guests without grant table support to do
+shm-based communication.
+
+Every shared region is:
+
+=over 4
+
+* Uniquely identified by a string that is no longer than 128 characters, which
+is called an B<identifier> in this document.
+
+* Backed by exactly one domain, which is called a B<master> domain, and all
+the other domains that are also sharing this region are called B<slave>s.
+
+=back
+
+=head1 SYNTAX
+
+This document specifies the syntax of the static shared memory configuration in
+the xl config file. It has the following form:
+
+    static_shm = [ "SSHM_SPEC", "SSHM_SPEC", ... ]
+
+where each C<SSHM_SPEC> is in this form:
+
+    [<key>=<value>,]*
+
+Valid examples of C<SSHM_SPEC> are:
+
+    id=ID1, begin=0x100000, size=0x100000, role=master, cache_policy=x86_normal
+    id=ID1, offset = 0, begin=0x500000, size=0x100000, role=slave, prot=rw
+    id=ID2, begin=0x300000, size=0x100000, role=master
+    id=ID2, offset = 0x10000, begin=0x690000, size=0x110000, role=slave
+    id=ID2, offset = 0x10000, begin=0x690000, size=0x110000, role=slave
+
+These might be specified in the domain config file like this:
+
+    static_shm = ["id=ID2, offset = 0x10000, begin=0x690000, size=0x110000,
+role=slave"]
+
+
+More formally, the string is a series of comma-separated keyword/value
+pairs. Each parameter may be specified at most once. Default values apply if
+the parameter is not specified.
+
+=head1 Parameters
+
+=over 4
+
+=item B<id>
+
+=over 4
+
+=item Description
+
+The unique identifier of the shared memory region.
+
+An identifier may appear only once in each xl config file.
+
+=item Supported values
+
+A string that contains alphanumerics and "_"s, and is no longer than 128
+characters.
+
+=item Default value
+
+None, this parameter is mandatory.
+
+=back
+
+=item B<begin>/B<size>
+
+=over 4
+
+=item Description
+
+The boundaries of the shared memory area.
+
+=item Supported values
+
+Same as for B<offset>.
+
+=item Default Value
+
+None, this parameter is mandatory.
+
+=back
+
+=item B<offset>
+
+=over 4
+
+=item Description
+
+Can only appear when B<role> = slave. If set, the address mapping will not
+start from the beginning of the backing memory region, but from the middle
+(B<offset> bytes away from the beginning) of it. See the graph below:
+
+With B<offset> = 0, the mapping will look like:
+
+  backing memory region:  #########################################
+                          |                       |
+                          |                       |
+                          |                       |
+                          V                       V
+  slave's shared region:  #########################
+
+With B<offset> > 0:
+
+  backing memory region:   #########################################
+                           |<-- offset -->||                       |
+                                           |                       |
+                                           |                       |
+                                           V                       V
+  slave's memory region:                   #########################
+
+=item Supported values
+
+Decimals or hexadecimals with a prefix "0x", and should be a multiple of the
+hypervisor page granularity (currently 4K on both ARM and x86).
+
+=item Default value
+
+0x0
+
+=back
+
+=item B<role>
+
+=over 4
+
+=item Description
+
+The backing area would be taken from one domain, which we will mark as
+the "master domain", and this domain should be created prior to any
+slave domains that depend on it. The master's shared memory range
+is NOT allocated in addition to its regular memory. Hence, it is usually
+a good idea to choose a subrange of the regular guest memory allocation,
+which starts at GUEST_RAM0_BASE, see xen/include/public/arch-arm.h.
+
+The "slave domain" maps the memory of the master. The address of said
+mapping should not overlap with the normal memory allocation of
+the slave domain.
+
+This argument specifies the role of this domain.
+
+=item Supported values
+
+master, slave
+
+=item Default value
+
+slave
+
+=back
+
+=item B<prot>
+
+=over 4
+
+=item Description
+
+When B<role> = master, this means the largest set of stage-2 permission flags
+that can be granted to the slave domains. When B<role> = slave, this means the
+stage-2 permission flags of the shared memory area.
+
+=item Supported values
+
+Currently only 'rw' is supported.
+
+=item Default value
+
+rw
+
+=back
+
+=item B<cache_policy>
+
+=over 4
+
+=item Description
+
+The stage-2 cacheability/shareability attributes of the shared memory area.
+This can only appear when B<role> = master.
+
+=item Supported values
+
+Currently, only the following policy is supported:
+
+=over 4
+
+=item B<ARM_normal>
+
+Only applicable to ARM guests. This would mean Inner and Outer Write-Back
+Cacheable, and Inner Shareable.
+
+=back
+
+=item Default value
+
+ARM_normal
+
+=back
+
+=back
+
+=head1 TYPICAL USAGE
+
+A typical procedure for setting up a shared memory region among several VMs is:
+
+=over 4
+
+1. Add a static_shm option to the master domain's xl config file, assign an
+B<ID> to it and mark its B<role> as master, and set up the boundaries, prot
+flag, and B<cache_policy> appropriately.
+
+2. Add a static_shm option to every slave domain's xl config file, set
+their B<ID> to the same value as the master's, and set up the B<offset>,
+boundaries and prot flag appropriately.
+
+3. Create the master domain.
+
+4. Create the slaves.
+
+=back
+
+Remember that the master domain must be created before any slave domains can
+be created, because the slaves depend on memory pages backed by their master.
+
+=head1 Example
+
+Suppose that we have three domains, vm1 to vm3, and we want to set up two
+shared regions, ID1 and ID2, among them, with the following address
+mapping:
+
+   ID1: (vm1 : 0x100000~0x200000) <=====> (vm2 : 0x500000~0x600000)
+   ID2: (vm1 : 0x310000~0x400000) <=====> (vm3 : 0x690000~0x800000)
+
+According to the syntax defined above, the xl config files of the three domains
+should contain the following content:
+
+In xl config file of vm1:
+  static_shm = [ "id=ID1, begin=0x100000, size=0x100000, role=master,
+cache_policy=x86_normal, prot=rw",
+"id=ID2, begin=0x300000, size=0x100000, role=master" ]
+
+In xl config file of vm2:
+  static_shm = [ "id=ID1, offset=0, begin=0x500000, size=0x100000,
+role=slave, prot=rw" ]
+
+In xl config file of vm3:
+  static_shm = [ "id=ID2, offset=0x10000, begin=0x690000,
+size=0x110000, role=slave" ]
+
+After that, just create vm1 first, and then create vm2 and vm3 in any order.
diff --git a/docs/man/xl.cfg.pod.5.in b/docs/man/xl.cfg.pod.5.in
index b1c0be1..360237f 100644
--- a/docs/man/xl.cfg.pod.5.in
+++ b/docs/man/xl.cfg.pod.5.in
@@ -278,6 +278,14 @@ memory=8096 will report significantly less memory available for use
 than a system with maxmem=8096 memory=8096 due to the memory overhead
 of having to track the unused pages.
 
+=item B<static_shm=[ "SSHM_SPEC", "SSHM_SPEC", ... ]>
+
+Specifies the static shared memory regions of this guest. Static shared
+memory regions enable guests to communicate with each other through
+one or more shared memory regions, even without grant table support.
+Currently, this only works on ARM guests.
+See L<xl-static-shm-configuration(5)> for more details.
+
 =back
 
 =head3 Guest Virtual NUMA Configuration
diff --git a/docs/misc/xenstore-paths.markdown b/docs/misc/xenstore-paths.markdown
index 33d2819..6ebc324 100644
--- a/docs/misc/xenstore-paths.markdown
+++ b/docs/misc/xenstore-paths.markdown
@@ -174,6 +174,14 @@ than this amount of RAM.
 
 The size of the video RAM this domain is configured with.
 
+#### ~/static_shm/[_a-zA-Z0-9]+/role = ("master"|"slave") []
+
+(Note: Currently, this will only appear on ARM guests.)
+
+The role of this domain in the static shared memory region whose id matches
+the `[_a-zA-Z0-9]+` part in the path. (Described in the manpage
+**xl-static-shm-configuration(5)**).
+
 #### ~/device/suspend/event-channel = ""|EVTCHN [w]
 
 The domain's suspend event channel. The toolstack will create this
@@ -556,6 +564,45 @@ type. The name of each backend directory is the same as the backend type
 
 Contains the PIDs of the device models running on the domain.
 
+#### /libxl/static_shm/[_a-zA-Z0-9]+/* []
+
+(Note: Currently, this will only appear on ARM guests.)
+
+The following paths contain backing memory parameters of a static shared memory
+region whose id matches the `[_a-zA-Z0-9]+` part in the path. Their formats and
+meanings are the same as those in an xl config file, described in the manpage
+**xl-static-shm-configuration(5)**.
+
+* begin/size: the boundary of the backing memory region.
+* prot: the largest set of stage-2 permission flags that can be granted to
+  the slave domains.
+* cache_policy: the stage-2 cacheability/shareability attributes of the backing
+  memory region.
+
+The following paths contain run-time information about the static shared memory
+region.
+
+* master: the domid of the backing domain.
+* slaves: information about the slaves that are sharing the region, see
+  ** /libxl/static_shm/[_a-zA-Z0-9]+/slaves/$DOMID/* ** below.
+* usercnt: An integer. This is the reference count of the backing memory region,
+  including the master domain itself. When this value reachies 0, the backing
+  memory region will be freed.
+
+#### /libxl/static_shm/[_a-zA-Z0-9]+/slaves/$DOMID/* []
+
+(Note: Currently, this will only appear on ARM guests.)
+
+The following paths contain static shared memory region parameters of a slave
+domain. Their formats and meanings are the same as those in xl config files,
+described in the manpage **xl-static-shm-configuration(5)**.
+
+* begin/size: the boundary of the shared memory region.
+* prot: the stage-2 permission flags of the shared memory area.
+* offset: when mapping the backing memory region to the slave's memory space,
+  the mapping will start from offset bytes after the beginning of the backing
+  memory region.
+
 ## Virtual Machine Paths
 
 The /vm/$UUID namespace is used by toolstacks to store various
-- 
1.9.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply related	[flat|nested] 46+ messages in thread

* [PATCH v8 7/8] xen/arm: export shared memory regions as reserved-memory on device tree
  2018-10-08 23:37 [PATCH v8 0/8] Allow setting up shared memory areas between VMs from xl config files Stefano Stabellini
                   ` (5 preceding siblings ...)
  2018-10-08 23:37 ` [PATCH v8 6/8] docs: documentation about static shared memory regions Stefano Stabellini
@ 2018-10-08 23:37 ` Stefano Stabellini
  2018-10-24 11:57   ` Oleksandr Andrushchenko
  2018-10-30 15:58   ` Ian Jackson
  2018-10-08 23:37 ` [PATCH v8 8/8] xen/arm: map reserved-memory regions as normal memory in dom0 Stefano Stabellini
  2018-10-19 21:06 ` [PATCH v8 0/8] Allow setting up shared memory areas between VMs from xl config files Stefano Stabellini
  8 siblings, 2 replies; 46+ messages in thread
From: Stefano Stabellini @ 2018-10-08 23:37 UTC (permalink / raw)
  To: xen-devel
  Cc: Stefano Stabellini, sstabellini, wei.liu2, blackskygg,
	ian.jackson, julien.grall

Shared memory regions need to be advertised to the guest. Fortunately, a
device tree binding for special memory regions already exists:
reserved-memory.

Add a reserved-memory node for each shared memory region, for both
masters and slaves.

Signed-off-by: Stefano Stabellini <stefanos@xilinx.com>
---
Changes in v8:
- code style
- id is added to device tree

Changes in v7:
- change node name to xen-shmem
- add compatible property
- add id property
---
 tools/libxl/libxl_arch.h |  2 +-
 tools/libxl/libxl_arm.c  | 61 +++++++++++++++++++++++++++++++++++++++++++++---
 tools/libxl/libxl_dom.c  |  2 +-
 tools/libxl/libxl_x86.c  |  2 +-
 4 files changed, 61 insertions(+), 6 deletions(-)

diff --git a/tools/libxl/libxl_arch.h b/tools/libxl/libxl_arch.h
index 63c26cc..417e710 100644
--- a/tools/libxl/libxl_arch.h
+++ b/tools/libxl/libxl_arch.h
@@ -36,7 +36,7 @@ int libxl__arch_domain_create(libxl__gc *gc, libxl_domain_config *d_config,
 /* setup arch specific hardware description, i.e. DTB on ARM */
 _hidden
 int libxl__arch_domain_init_hw_description(libxl__gc *gc,
-                                           libxl_domain_build_info *info,
+                                           libxl_domain_config *d_config,
                                            libxl__domain_build_state *state,
                                            struct xc_dom_image *dom);
 /* finalize arch specific hardware description. */
diff --git a/tools/libxl/libxl_arm.c b/tools/libxl/libxl_arm.c
index 054ad58..c1624c0 100644
--- a/tools/libxl/libxl_arm.c
+++ b/tools/libxl/libxl_arm.c
@@ -436,6 +436,58 @@ static int make_memory_nodes(libxl__gc *gc, void *fdt,
     return 0;
 }
 
+static int make_reserved_nodes(libxl__gc *gc, void *fdt,
+                               libxl_domain_config *d_config)
+{
+    int res, i;
+    const char *name;
+
+    if (d_config->num_sshms == 0)
+        return 0;
+
+    res = fdt_begin_node(fdt, "reserved-memory");
+    if (res) return res;
+
+    res = fdt_property_cell(fdt, "#address-cells", GUEST_ROOT_ADDRESS_CELLS);
+    if (res) return res;
+
+    res = fdt_property_cell(fdt, "#size-cells", GUEST_ROOT_SIZE_CELLS);
+    if (res) return res;
+
+    res = fdt_property(fdt, "ranges", NULL, 0);
+    if (res) return res;
+
+    for (i = 0; i < d_config->num_sshms; i++) {
+        uint64_t start = d_config->sshms[i].begin;
+
+        if (d_config->sshms[i].role == LIBXL_SSHM_ROLE_SLAVE)
+            start += d_config->sshms[i].offset;
+        name = GCSPRINTF("xen-shmem@%"PRIx64, start);
+
+        res = fdt_begin_node(fdt, name);
+        if (res) return res;
+
+        res = fdt_property_regs(gc, fdt, GUEST_ROOT_ADDRESS_CELLS,
+                                GUEST_ROOT_SIZE_CELLS, 1, start,
+                                d_config->sshms[i].size);
+        if (res) return res;
+
+        res = fdt_property_compat(gc, fdt, 1, "xen,shared-memory");
+        if (res) return res;
+
+        res = fdt_property_string(fdt, "id", d_config->sshms[i].id);
+        if (res) return res;
+
+        res = fdt_end_node(fdt);
+        if (res) return res;
+    }
+
+    res = fdt_end_node(fdt);
+    if (res) return res;
+
+    return 0;
+}
+
 static int make_gicv2_node(libxl__gc *gc, void *fdt,
                            uint64_t gicd_base, uint64_t gicd_size,
                            uint64_t gicc_base, uint64_t gicc_size)
@@ -811,10 +863,11 @@ static int copy_partial_fdt(libxl__gc *gc, void *fdt, void *pfdt)
 
 #define FDT_MAX_SIZE (1<<20)
 
-static int libxl__prepare_dtb(libxl__gc *gc, libxl_domain_build_info *info,
+static int libxl__prepare_dtb(libxl__gc *gc, libxl_domain_config *d_config,
                               libxl__domain_build_state *state,
                               struct xc_dom_image *dom)
 {
+    libxl_domain_build_info *info = &d_config->b_info;
     void *fdt = NULL;
     void *pfdt = NULL;
     int rc, res;
@@ -897,6 +950,7 @@ next_resize:
         FDT( make_psci_node(gc, fdt) );
 
         FDT( make_memory_nodes(gc, fdt, dom) );
+        FDT( make_reserved_nodes(gc, fdt, d_config) );
 
         switch (info->arch_arm.gic_version) {
         case LIBXL_GIC_VERSION_V2:
@@ -946,12 +1000,13 @@ out:
 }
 
 int libxl__arch_domain_init_hw_description(libxl__gc *gc,
-                                           libxl_domain_build_info *info,
+                                           libxl_domain_config *d_config,
                                            libxl__domain_build_state *state,
                                            struct xc_dom_image *dom)
 {
     int rc;
     uint64_t val;
+    libxl_domain_build_info *info = &d_config->b_info;
 
     if (info->type != LIBXL_DOMAIN_TYPE_PVH) {
         LOG(ERROR, "Unsupported Arm guest type %s",
@@ -971,7 +1026,7 @@ int libxl__arch_domain_init_hw_description(libxl__gc *gc,
     if (rc)
         return rc;
 
-    rc = libxl__prepare_dtb(gc, info, state, dom);
+    rc = libxl__prepare_dtb(gc, d_config, state, dom);
     if (rc) goto out;
 
     if (!libxl_defbool_val(info->acpi)) {
diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
index 8a8a32c..2dc7696 100644
--- a/tools/libxl/libxl_dom.c
+++ b/tools/libxl/libxl_dom.c
@@ -706,7 +706,7 @@ static int libxl__build_dom(libxl__gc *gc, uint32_t domid,
         LOG(ERROR, "xc_dom_parse_image failed");
         goto out;
     }
-    if ( (ret = libxl__arch_domain_init_hw_description(gc, info, state, dom)) != 0 ) {
+    if ( (ret = libxl__arch_domain_init_hw_description(gc, d_config, state, dom)) != 0 ) {
         LOGE(ERROR, "libxl__arch_domain_init_hw_description failed");
         goto out;
     }
diff --git a/tools/libxl/libxl_x86.c b/tools/libxl/libxl_x86.c
index 3e70a69..2b7e86a 100644
--- a/tools/libxl/libxl_x86.c
+++ b/tools/libxl/libxl_x86.c
@@ -367,7 +367,7 @@ int libxl__arch_extra_memory(libxl__gc *gc,
 }
 
 int libxl__arch_domain_init_hw_description(libxl__gc *gc,
-                                           libxl_domain_build_info *info,
+                                           libxl_domain_config *d_config,
                                            libxl__domain_build_state *state,
                                            struct xc_dom_image *dom)
 {
-- 
1.9.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply related	[flat|nested] 46+ messages in thread

* [PATCH v8 8/8] xen/arm: map reserved-memory regions as normal memory in dom0
  2018-10-08 23:37 [PATCH v8 0/8] Allow setting up shared memory areas between VMs from xl config files Stefano Stabellini
                   ` (6 preceding siblings ...)
  2018-10-08 23:37 ` [PATCH v8 7/8] xen/arm: export shared memory regions as reserved-memory on device tree Stefano Stabellini
@ 2018-10-08 23:37 ` Stefano Stabellini
  2018-10-22  9:57   ` Julien Grall
  2018-10-19 21:06 ` [PATCH v8 0/8] Allow setting up shared memory areas between VMs from xl config files Stefano Stabellini
  8 siblings, 1 reply; 46+ messages in thread
From: Stefano Stabellini @ 2018-10-08 23:37 UTC (permalink / raw)
  To: xen-devel
  Cc: Stefano Stabellini, sstabellini, wei.liu2, blackskygg,
	ian.jackson, julien.grall

reserved-memory regions should be mapped as normal memory. At the
moment, they get remapped as device memory because Xen doesn't know
better. Add an explicit check for it.

Signed-off-by: Stefano Stabellini <stefanos@xilinx.com>
---
 xen/arch/arm/domain_build.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index f552154..c7df4cf 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -1307,6 +1307,13 @@ static int __init handle_node(struct domain *d, struct kernel_info *kinfo,
                "WARNING: Path %s is reserved, skip the node as we may re-use the path.\n",
                path);
 
+    /*
+     * reserved-memory ranges should be mapped as normal memory in the
+     * p2m.
+     */
+    if ( !strcmp(dt_node_name(node), "reserved-memory") )
+        p2mt = p2m_ram_rw;
+
     res = handle_device(d, node, p2mt);
     if ( res)
         return res;
-- 
1.9.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply related	[flat|nested] 46+ messages in thread

* Re: [PATCH v8 1/8] xen: xsm: flask: introduce XENMAPSPACE_gmfn_share for memory sharing
  2018-10-08 23:37 ` [PATCH v8 1/8] xen: xsm: flask: introduce XENMAPSPACE_gmfn_share for memory sharing Stefano Stabellini
@ 2018-10-09  7:12   ` Jan Beulich
  2018-11-06 22:42     ` Stefano Stabellini
  0 siblings, 1 reply; 46+ messages in thread
From: Jan Beulich @ 2018-10-09  7:12 UTC (permalink / raw)
  To: Stefano Stabellini
  Cc: Tim Deegan, Wei Liu, Sky Liu, Konrad Rzeszutek Wilk,
	George Dunlap, Andrew Cooper, Stefano Stabellini, Ian Jackson,
	xen-devel, Julien Grall, Daniel de Graaf

>>> On 09.10.18 at 01:37, <sstabellini@kernel.org> wrote:
> --- a/xen/include/xsm/dummy.h
> +++ b/xen/include/xsm/dummy.h
> @@ -535,6 +535,20 @@ static XSM_INLINE int xsm_map_gmfn_foreign(XSM_DEFAULT_ARG struct domain *d, str
>      return xsm_default_action(action, d, t);
>  }
>  
> +/*
> + * Be aware that this is not an exact default equivalence of its flask
> + * variant which also checks if @d and @t "are allowed to share memory
> + * pages", for now, we don't have a proper default equivalence of such a
> + * check.
> + */
> +static XSM_INLINE int xsm_map_gmfn_share(XSM_DEFAULT_ARG struct domain *d,
> +                                         struct domain *t)
> +{
> +    XSM_ASSERT_ACTION(XSM_TARGET);
> +    return xsm_default_action(XSM_TARGET, current->domain, d) ?:
> +           xsm_default_action(action, current->domain, t);
> +}

Does this (specifically xsm/dummy.c) build with XSM enabled?
Afaict "action" is going to be an undefined symbol in that case.

> --- a/xen/xsm/flask/hooks.c
> +++ b/xen/xsm/flask/hooks.c
> @@ -1192,6 +1192,14 @@ static int flask_map_gmfn_foreign(struct domain *d, struct domain *t)
>      return domain_has_perm(d, t, SECCLASS_MMU, MMU__MAP_READ | MMU__MAP_WRITE);
>  }
>  
> +static int flask_map_gmfn_share(struct domain *d, struct domain *t)
> +{
> +    if ( current_has_perm(d, SECCLASS_MMU, MMU__MAP_READ | MMU__MAP_WRITE) )
> +        return rc;
> +    return current_has_perm(t, SECCLASS_MMU, MMU__MAP_READ | MMU__MAP_WRITE) ?:
> +           domain_has_perm(d, t, SECCLASS_MMU, MMU__SHARE_MEM);
> +}
> +

Same here, for "rc". It looks to me as if the first two lines of the function
body were wrongly left in place anyway. But please could you at least
build-test with XSM enabled when you make changes to XSM code?
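
For reference, one way the two hooks could be reworked to address both points
might look like the untested sketch below. It only reuses helpers and
permission constants that already appear in the patch; whether to keep the
XSM_TARGET default for both checks is an assumption, not something this
series states.

static XSM_INLINE int xsm_map_gmfn_share(XSM_DEFAULT_ARG struct domain *d,
                                         struct domain *t)
{
    XSM_ASSERT_ACTION(XSM_TARGET);
    /* Use the default action value directly, so no undeclared 'action'
     * identifier is referenced when this is built with XSM enabled. */
    return xsm_default_action(XSM_TARGET, current->domain, d) ?:
           xsm_default_action(XSM_TARGET, current->domain, t);
}

static int flask_map_gmfn_share(struct domain *d, struct domain *t)
{
    /* Chain all three checks instead of the dangling 'return rc;'. */
    return current_has_perm(d, SECCLASS_MMU, MMU__MAP_READ | MMU__MAP_WRITE) ?:
           current_has_perm(t, SECCLASS_MMU, MMU__MAP_READ | MMU__MAP_WRITE) ?:
           domain_has_perm(d, t, SECCLASS_MMU, MMU__SHARE_MEM);
}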

Jan



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH v8 0/8] Allow setting up shared memory areas between VMs from xl config files
  2018-10-08 23:37 [PATCH v8 0/8] Allow setting up shared memory areas between VMs from xl config files Stefano Stabellini
                   ` (7 preceding siblings ...)
  2018-10-08 23:37 ` [PATCH v8 8/8] xen/arm: map reserved-memory regions as normal memory in dom0 Stefano Stabellini
@ 2018-10-19 21:06 ` Stefano Stabellini
  2018-10-24 14:15   ` Oleksandr Tyshchenko
  8 siblings, 1 reply; 46+ messages in thread
From: Stefano Stabellini @ 2018-10-19 21:06 UTC (permalink / raw)
  Cc: sstabellini, wei.liu2, blackskygg, ian.jackson, xen-devel, julien.grall

Hi Wei,

Any chance you can review this series soon? I would love for it to be
merged before the 4.12 code freeze.

FYI I am making progress upstreaming the device tree binding; they asked
for a very minor change, which I am happy to include in the next version
of the series:

https://marc.info/?l=devicetree&m=153989826510707&w=2


On Mon, 8 Oct 2018, Stefano Stabellini wrote:
> Hi,
> 
> This series implements a new xl config entry. Users can use the new
> config entry to statically setup shared memory areas among VMs that
> don't have grant table support so that they could communicate with each
> other through the static shared memory areas.
> 
> It was originally developed by Zhongze, I am just updating the last few
> issued that were address during the last round of reviews in January. I
> tested the feature on ARM and works fine.
> 
> Cheers,
> 
> Stefano
> 
> 
> Main changes in v8:
> - remove "add xen,dmabuf nodes" patch
> - add "map reserved-memory regions as normal memory in dom0" patch
> - send new device tree binding request to devicetree.org
> 
> 
> 
> The following changes since commit 85b00385827e4e061b2ff38b549c03d0f1e66b6a:
> 
>   xen/sched: Drop set_current_state() (2018-10-08 18:34:55 +0100)
> 
> are available in the git repository at:
> 
>   http://xenbits.xenproject.org/git-http/people/sstabellini/xen-unstable.git share_mem-v8
> 
> for you to fetch changes up to fc00e34ce9924b5915deacd881ae7cd6888510ba:
> 
>   xen/arm: map reserved-memory regions as normal memory in dom0 (2018-10-08 16:34:09 -0700)
> 
> ----------------------------------------------------------------
> Stefano Stabellini (2):
>       xen/arm: export shared memory regions as reserved-memory on device tree
>       xen/arm: map reserved-memory regions as normal memory in dom0
> 
> Zhongze Liu (6):
>       xen: xsm: flask: introduce XENMAPSPACE_gmfn_share for memory sharing
>       libxl: introduce a new structure to represent static shared memory regions
>       libxl: support mapping static shared memory areas during domain creation
>       libxl: support unmapping static shared memory areas during domain destruction
>       libxl:xl: add parsing code to parse "libxl_static_sshm" from xl config files
>       docs: documentation about static shared memory regions
> 
>  docs/man/xl-static-shm-configuration.pod.5 | 264 +++++++++++++++
>  docs/man/xl.cfg.pod.5.in                   |   8 +
>  docs/misc/xenstore-paths.markdown          |  47 +++
>  tools/flask/policy/modules/xen.if          |   2 +
>  tools/libxl/Makefile                       |   5 +-
>  tools/libxl/libxl.h                        |   8 +
>  tools/libxl/libxl_arch.h                   |   8 +-
>  tools/libxl/libxl_arm.c                    |  76 ++++-
>  tools/libxl/libxl_create.c                 |  27 ++
>  tools/libxl/libxl_dom.c                    |   2 +-
>  tools/libxl/libxl_domain.c                 |   5 +
>  tools/libxl/libxl_internal.h               |  16 +
>  tools/libxl/libxl_sshm.c                   | 512 +++++++++++++++++++++++++++++
>  tools/libxl/libxl_types.idl                |  32 +-
>  tools/libxl/libxl_x86.c                    |  21 +-
>  tools/libxl/libxlu_sshm.c                  | 206 ++++++++++++
>  tools/libxl/libxlutil.h                    |   6 +
>  tools/xl/xl_parse.c                        |  25 +-
>  xen/arch/arm/domain_build.c                |   7 +
>  xen/arch/arm/mm.c                          |   7 +-
>  xen/include/public/memory.h                |   8 +
>  xen/include/xsm/dummy.h                    |  14 +
>  xen/include/xsm/xsm.h                      |   6 +
>  xen/xsm/dummy.c                            |   1 +
>  xen/xsm/flask/hooks.c                      |   9 +
>  xen/xsm/flask/policy/access_vectors        |   5 +
>  26 files changed, 1315 insertions(+), 12 deletions(-)
>  create mode 100644 docs/man/xl-static-shm-configuration.pod.5
>  create mode 100644 tools/libxl/libxl_sshm.c
>  create mode 100644 tools/libxl/libxlu_sshm.c
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH v8 8/8] xen/arm: map reserved-memory regions as normal memory in dom0
  2018-10-08 23:37 ` [PATCH v8 8/8] xen/arm: map reserved-memory regions as normal memory in dom0 Stefano Stabellini
@ 2018-10-22  9:57   ` Julien Grall
  2018-11-07  0:32     ` Stefano Stabellini
  0 siblings, 1 reply; 46+ messages in thread
From: Julien Grall @ 2018-10-22  9:57 UTC (permalink / raw)
  To: Stefano Stabellini, xen-devel
  Cc: ian.jackson, wei.liu2, blackskygg, Stefano Stabellini

Hi,

On 09/10/2018 00:37, Stefano Stabellini wrote:
> reserved-memory regions should be mapped as normal memory.

This is already the case with p2m_mmio_direct_c. The hardware domain 
should have full control on the resulting attributes via its stage-1 
mappings. So what's wrong with that p2m type?

Cheers,

>  At the
> moment, they get remapped as device memory because Xen doesn't know
> better. Add an explicit check for it.
> 
> Signed-off-by: Stefano Stabellini <stefanos@xilinx.com>
> ---
>   xen/arch/arm/domain_build.c | 7 +++++++
>   1 file changed, 7 insertions(+)
> 
> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> index f552154..c7df4cf 100644
> --- a/xen/arch/arm/domain_build.c
> +++ b/xen/arch/arm/domain_build.c
> @@ -1307,6 +1307,13 @@ static int __init handle_node(struct domain *d, struct kernel_info *kinfo,
>                  "WARNING: Path %s is reserved, skip the node as we may re-use the path.\n",
>                  path);
>   
> +    /*
> +     * reserved-memory ranges should be mapped as normal memory in the
> +     * p2m.
> +     */
> +    if ( !strcmp(dt_node_name(node), "reserved-memory") )
> +        p2mt = p2m_ram_rw;
> +
>       res = handle_device(d, node, p2mt);
>       if ( res)
>           return res;
> 

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH v8 4/8] libxl: support unmapping static shared memory areas during domain destruction
  2018-10-08 23:37 ` [PATCH v8 4/8] libxl: support unmapping static shared memory areas during domain destruction Stefano Stabellini
@ 2018-10-24 11:35   ` Oleksandr Tyshchenko
  2018-10-24 11:48   ` Oleksandr Andrushchenko
  2018-10-30 15:55   ` Ian Jackson
  2 siblings, 0 replies; 46+ messages in thread
From: Oleksandr Tyshchenko @ 2018-10-24 11:35 UTC (permalink / raw)
  To: Stefano Stabellini
  Cc: stefanos, Wei Liu, blackskygg, Ian Jackson, Xen Devel, Julien Grall

Hi, Stefano

On Tue, Oct 9, 2018 at 2:39 AM Stefano Stabellini
<sstabellini@kernel.org> wrote:
>
> From: Zhongze Liu <blackskygg@gmail.com>
>
> Author: Zhongze Liu <blackskygg@gmail.com>
>
> Add libxl__sshm_del to unmap static shared memory areas mapped by
> libxl__sshm_add during domain creation. The unmapping process is:
>
> * For a master: decrease the refcount of the sshm region, if the refcount
>   reaches 0, cleanup the whole sshm path.
>
> * For a slave:
>   1) unmap the shared pages, and cleanup related xs entries. If the
>      system works normally, all the shared pages will be unmapped, so there
>      won't be page leaks. In case of errors, the unmapping process will go
>      on and unmap all the other pages that can be unmapped, so the other
>      pages won't be leaked, either.
>   2) Decrease the refcount of the sshm region, if the refcount reaches
>      0, cleanup the whole sshm path.
>
> This is for the proposal "Allow setting up shared memory areas between VMs
> from xl config file" (see [1]).
>
> [1] https://lists.xen.org/archives/html/xen-devel/2017-08/msg03242.html
>
> Signed-off-by: Zhongze Liu <blackskygg@gmail.com>
> Signed-off-by: Stefano Stabellini <stefanos@xilinx.com>
>
> Cc: Ian Jackson <ian.jackson@eu.citrix.com>
> Cc: Wei Liu <wei.liu2@citrix.com>
> Cc: Stefano Stabellini <sstabellini@kernel.org>
> Cc: Julien Grall <julien.grall@arm.com>
> Cc: xen-devel@lists.xen.org
>
> ---
> Changes in v5:
> - fix typos
> - add comments
> - cannot move unmap before xenstore transaction because it needs to
>   retrieve begin/size values from xenstore
> ---
>  tools/libxl/libxl_domain.c   |   5 ++
>  tools/libxl/libxl_internal.h |   2 +
>  tools/libxl/libxl_sshm.c     | 107 +++++++++++++++++++++++++++++++++++++++++++
>  3 files changed, 114 insertions(+)
>
> diff --git a/tools/libxl/libxl_domain.c b/tools/libxl/libxl_domain.c
> index 3377bba..3f7ffa6 100644
> --- a/tools/libxl/libxl_domain.c
> +++ b/tools/libxl/libxl_domain.c
> @@ -1060,6 +1060,11 @@ void libxl__destroy_domid(libxl__egc *egc, libxl__destroy_domid_state *dis)
>          goto out;
>      }
>
> +    rc = libxl__sshm_del(gc, domid);
> +    if (rc) {
> +        LOGD(ERROR, domid, "Deleting static shm failed.");
> +    }
> +
>      if (libxl__device_pci_destroy_all(gc, domid) < 0)
>          LOGD(ERROR, domid, "Pci shutdown failed");
>      rc = xc_domain_pause(ctx->xch, domid);
> diff --git a/tools/libxl/libxl_internal.h b/tools/libxl/libxl_internal.h
> index 6f31a3d..e86d356 100644
> --- a/tools/libxl/libxl_internal.h
> +++ b/tools/libxl/libxl_internal.h
> @@ -4442,6 +4442,8 @@ static inline const char *libxl__qemu_qmp_path(libxl__gc *gc, int domid)
>  _hidden int libxl__sshm_add(libxl__gc *gc, uint32_t domid,
>                              libxl_static_shm *sshm, int len);
>
> +_hidden int libxl__sshm_del(libxl__gc *gc, uint32_t domid);
> +
>  _hidden int libxl__sshm_check_overlap(libxl__gc *gc, uint32_t domid,
>                                        libxl_static_shm *sshms, int len);
>  _hidden int libxl__sshm_setdefault(libxl__gc *gc, uint32_t domid,
> diff --git a/tools/libxl/libxl_sshm.c b/tools/libxl/libxl_sshm.c
> index f61b80c..9672056 100644
> --- a/tools/libxl/libxl_sshm.c
> +++ b/tools/libxl/libxl_sshm.c
> @@ -94,6 +94,113 @@ int libxl__sshm_check_overlap(libxl__gc *gc, uint32_t domid,
>      return 0;
>  }
>
> +/*
> + * Decrease the refcount of an sshm. When refcount reaches 0,
> + * clean up the whole sshm path.
> + * Xenstore operations are done within the same transaction.
> + */
> +static void libxl__sshm_decref(libxl__gc *gc, xs_transaction_t xt,
> +                               const char *sshm_path)
> +{
> +    int count;
> +    const char *count_path, *count_string;
> +
> +    count_path = GCSPRINTF("%s/usercnt", sshm_path);
> +    if (libxl__xs_read_checked(gc, xt, count_path, &count_string))
> +        return;
> +    count = atoi(count_string);
> +
> +    if (--count == 0) {
> +        libxl__xs_path_cleanup(gc, xt, sshm_path);
> +        return;
> +    }
> +
> +    count_string = GCSPRINTF("%d", count);
> +    libxl__xs_write_checked(gc, xt, count_path, count_string);
> +
> +    return;
> +}
> +
> +static void libxl__sshm_do_unmap(libxl__gc *gc, uint32_t domid, const char *id,
> +                                 uint64_t begin, uint64_t size)
> +{
> +    uint64_t end;
> +    begin >>= XC_PAGE_SHIFT;
> +    size >>= XC_PAGE_SHIFT;
> +    end = begin + size;
> +    for (; begin < end; ++begin) {
> +        if (xc_domain_remove_from_physmap(CTX->xch, domid, begin)) {
> +            SSHM_ERROR(domid, id,
> +                       "unable to unmap shared page at 0x%"PRIx64".",
> +                       begin);
> +        }
> +    }
> +}
> +
> +static void libxl__sshm_del_slave(libxl__gc *gc, xs_transaction_t xt,
> +                                  uint32_t domid, const char *id, bool isretry)

What is the purpose of passing "isretry" here? I don't see it being
used, unlike in libxl__sshm_add_slave().
If this flag is not needed it can also be removed from libxl__sshm_del().

> +{
> +    const char *slave_path, *begin_str, *size_str;
> +    uint64_t begin, size;
> +
> +    slave_path = GCSPRINTF("%s/slaves/%"PRIu32, SSHM_PATH(id), domid);
> +
> +    begin_str = libxl__xs_read(gc, xt, GCSPRINTF("%s/begin", slave_path));
> +    size_str = libxl__xs_read(gc, xt, GCSPRINTF("%s/size", slave_path));
> +    begin = strtoull(begin_str, NULL, 16);
> +    size = strtoull(size_str, NULL, 16);
> +
> +    libxl__sshm_do_unmap(gc, domid, id, begin, size);
> +    libxl__xs_path_cleanup(gc, xt, slave_path);
> +}
> +
> +/* Delete static_shm entries in the xensotre. */
> +int libxl__sshm_del(libxl__gc *gc,  uint32_t domid)
> +{
> +    int rc, i;
> +    bool isretry;
> +    xs_transaction_t xt = XBT_NULL;
> +    const char *dom_path, *dom_sshm_path, *role;
> +    char **sshm_ents;
> +    unsigned int sshm_num;
> +
> +    dom_path = libxl__xs_get_dompath(gc, domid);
> +    dom_sshm_path = GCSPRINTF("%s/static_shm", dom_path);
> +
> +    isretry = false;
> +    for (;;) {
> +        rc = libxl__xs_transaction_start(gc, &xt);
> +        if (rc) goto out;
> +
> +        if (libxl__xs_read(gc, xt, dom_sshm_path)) {
> +            sshm_ents = libxl__xs_directory(gc, xt, dom_sshm_path, &sshm_num);
> +            if (!sshm_ents) continue;
> +
> +            for (i = 0; i < sshm_num; ++i) {
> +                role = libxl__xs_read(gc, xt,
> +                                      GCSPRINTF("%s/%s/role",
> +                                                dom_sshm_path,
> +                                                sshm_ents[i]));
> +                assert(role);
> +                if (!strncmp(role, "slave", 5))
> +                    libxl__sshm_del_slave(gc, xt, domid, sshm_ents[i], isretry);
> +
> +                libxl__sshm_decref(gc, xt, SSHM_PATH(sshm_ents[i]));
> +            }
> +        }
> +
> +        rc = libxl__xs_transaction_commit(gc, &xt);
> +        if (!rc) break;
> +        if (rc < 0) goto out;
> +        isretry = true;
> +    }
> +
> +    rc = 0;
> +out:
> +    libxl__xs_transaction_abort(gc, &xt);
> +    return rc;
> +}
> +
>  /*   libxl__sshm_do_map -- map pages into slave's physmap
>   *
>   *   This functions maps
> --
> 1.9.1
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xenproject.org
> https://lists.xenproject.org/mailman/listinfo/xen-devel



-- 
Regards,

Oleksandr Tyshchenko

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH v8 4/8] libxl: support unmapping static shared memory areas during domain destruction
  2018-10-08 23:37 ` [PATCH v8 4/8] libxl: support unmapping static shared memory areas during domain destruction Stefano Stabellini
  2018-10-24 11:35   ` Oleksandr Tyshchenko
@ 2018-10-24 11:48   ` Oleksandr Andrushchenko
  2018-10-24 13:32     ` Oleksandr Tyshchenko
  2018-10-30 15:55   ` Ian Jackson
  2 siblings, 1 reply; 46+ messages in thread
From: Oleksandr Andrushchenko @ 2018-10-24 11:48 UTC (permalink / raw)
  To: Stefano Stabellini, xen-devel
  Cc: ian.jackson, Stefano Stabellini, julien.grall, wei.liu2, blackskygg

On 10/09/2018 02:37 AM, Stefano Stabellini wrote:
> From: Zhongze Liu <blackskygg@gmail.com>
>
> Author: Zhongze Liu <blackskygg@gmail.com>
>
> Add libxl__sshm_del to unmap static shared memory areas mapped by
> libxl__sshm_add during domain creation. The unmapping process is:
>
> * For a master: decrease the refcount of the sshm region, if the refcount
>    reaches 0, cleanup the whole sshm path.
>
> * For a slave:
>    1) unmap the shared pages, and cleanup related xs entries. If the
>       system works normally, all the shared pages will be unmapped, so there
>       won't be page leaks. In case of errors, the unmapping process will go
>       on and unmap all the other pages that can be unmapped, so the other
>       pages won't be leaked, either.
>    2) Decrease the refcount of the sshm region, if the refcount reaches
>       0, cleanup the whole sshm path.
>
> This is for the proposal "Allow setting up shared memory areas between VMs
> from xl config file" (see [1]).
>
> [1] https://lists.xen.org/archives/html/xen-devel/2017-08/msg03242.html
>
> Signed-off-by: Zhongze Liu <blackskygg@gmail.com>
> Signed-off-by: Stefano Stabellini <stefanos@xilinx.com>
>
> Cc: Ian Jackson <ian.jackson@eu.citrix.com>
> Cc: Wei Liu <wei.liu2@citrix.com>
> Cc: Stefano Stabellini <sstabellini@kernel.org>
> Cc: Julien Grall <julien.grall@arm.com>
> Cc: xen-devel@lists.xen.org
>
> ---
> Changes in v5:
> - fix typos
> - add comments
> - cannot move unmap before xenstore transaction because it needs to
>    retrieve begin/size values from xenstore
> ---
>   tools/libxl/libxl_domain.c   |   5 ++
>   tools/libxl/libxl_internal.h |   2 +
>   tools/libxl/libxl_sshm.c     | 107 +++++++++++++++++++++++++++++++++++++++++++
>   3 files changed, 114 insertions(+)
>
> diff --git a/tools/libxl/libxl_domain.c b/tools/libxl/libxl_domain.c
> index 3377bba..3f7ffa6 100644
> --- a/tools/libxl/libxl_domain.c
> +++ b/tools/libxl/libxl_domain.c
> @@ -1060,6 +1060,11 @@ void libxl__destroy_domid(libxl__egc *egc, libxl__destroy_domid_state *dis)
>           goto out;
>       }
>   
> +    rc = libxl__sshm_del(gc, domid);
> +    if (rc) {
> +        LOGD(ERROR, domid, "Deleting static shm failed.");
> +    }
Do you need brackets here?
> +
>       if (libxl__device_pci_destroy_all(gc, domid) < 0)
>           LOGD(ERROR, domid, "Pci shutdown failed");
>       rc = xc_domain_pause(ctx->xch, domid);
> diff --git a/tools/libxl/libxl_internal.h b/tools/libxl/libxl_internal.h
> index 6f31a3d..e86d356 100644
> --- a/tools/libxl/libxl_internal.h
> +++ b/tools/libxl/libxl_internal.h
> @@ -4442,6 +4442,8 @@ static inline const char *libxl__qemu_qmp_path(libxl__gc *gc, int domid)
>   _hidden int libxl__sshm_add(libxl__gc *gc, uint32_t domid,
>                               libxl_static_shm *sshm, int len);
>   
> +_hidden int libxl__sshm_del(libxl__gc *gc, uint32_t domid);
> +
>   _hidden int libxl__sshm_check_overlap(libxl__gc *gc, uint32_t domid,
>                                         libxl_static_shm *sshms, int len);
>   _hidden int libxl__sshm_setdefault(libxl__gc *gc, uint32_t domid,
> diff --git a/tools/libxl/libxl_sshm.c b/tools/libxl/libxl_sshm.c
> index f61b80c..9672056 100644
> --- a/tools/libxl/libxl_sshm.c
> +++ b/tools/libxl/libxl_sshm.c
> @@ -94,6 +94,113 @@ int libxl__sshm_check_overlap(libxl__gc *gc, uint32_t domid,
>       return 0;
>   }
>   
> +/*
> + * Decrease the refcount of an sshm. When refcount reaches 0,
> + * clean up the whole sshm path.
> + * Xenstore operations are done within the same transaction.
> + */
> +static void libxl__sshm_decref(libxl__gc *gc, xs_transaction_t xt,
> +                               const char *sshm_path)
> +{
> +    int count;
> +    const char *count_path, *count_string;
> +
> +    count_path = GCSPRINTF("%s/usercnt", sshm_path);
> +    if (libxl__xs_read_checked(gc, xt, count_path, &count_string))
> +        return;
> +    count = atoi(count_string);
> +
> +    if (--count == 0) {
> +        libxl__xs_path_cleanup(gc, xt, sshm_path);
> +        return;
> +    }
> +
> +    count_string = GCSPRINTF("%d", count);
> +    libxl__xs_write_checked(gc, xt, count_path, count_string);
> +
> +    return;
> +}
> +
> +static void libxl__sshm_do_unmap(libxl__gc *gc, uint32_t domid, const char *id,
> +                                 uint64_t begin, uint64_t size)
> +{
> +    uint64_t end;
> +    begin >>= XC_PAGE_SHIFT;
> +    size >>= XC_PAGE_SHIFT;
> +    end = begin + size;
> +    for (; begin < end; ++begin) {
> +        if (xc_domain_remove_from_physmap(CTX->xch, domid, begin)) {
> +            SSHM_ERROR(domid, id,
> +                       "unable to unmap shared page at 0x%"PRIx64".",
> +                       begin);
> +        }
> +    }
> +}
> +
> +static void libxl__sshm_del_slave(libxl__gc *gc, xs_transaction_t xt,
> +                                  uint32_t domid, const char *id, bool isretry)
isretry is not used, please remove
> +{
> +    const char *slave_path, *begin_str, *size_str;
> +    uint64_t begin, size;
> +
> +    slave_path = GCSPRINTF("%s/slaves/%"PRIu32, SSHM_PATH(id), domid);
> +
> +    begin_str = libxl__xs_read(gc, xt, GCSPRINTF("%s/begin", slave_path));
> +    size_str = libxl__xs_read(gc, xt, GCSPRINTF("%s/size", slave_path));
> +    begin = strtoull(begin_str, NULL, 16);
> +    size = strtoull(size_str, NULL, 16);
> +
> +    libxl__sshm_do_unmap(gc, domid, id, begin, size);
> +    libxl__xs_path_cleanup(gc, xt, slave_path);
> +}
> +
> +/* Delete static_shm entries in the xensotre. */
> +int libxl__sshm_del(libxl__gc *gc,  uint32_t domid)
> +{
> +    int rc, i;
> +    bool isretry;
> +    xs_transaction_t xt = XBT_NULL;
> +    const char *dom_path, *dom_sshm_path, *role;
> +    char **sshm_ents;
> +    unsigned int sshm_num;
> +
> +    dom_path = libxl__xs_get_dompath(gc, domid);
> +    dom_sshm_path = GCSPRINTF("%s/static_shm", dom_path);
> +
> +    isretry = false;
see above, isretry is not used
> +    for (;;) {
> +        rc = libxl__xs_transaction_start(gc, &xt);
> +        if (rc) goto out;
> +
> +        if (libxl__xs_read(gc, xt, dom_sshm_path)) {
> +            sshm_ents = libxl__xs_directory(gc, xt, dom_sshm_path, &sshm_num);
> +            if (!sshm_ents) continue;
> +
> +            for (i = 0; i < sshm_num; ++i) {
> +                role = libxl__xs_read(gc, xt,
> +                                      GCSPRINTF("%s/%s/role",
> +                                                dom_sshm_path,
> +                                                sshm_ents[i]));
> +                assert(role);
> +                if (!strncmp(role, "slave", 5))
> +                    libxl__sshm_del_slave(gc, xt, domid, sshm_ents[i], isretry);
> +
> +                libxl__sshm_decref(gc, xt, SSHM_PATH(sshm_ents[i]));
> +            }
> +        }
> +
> +        rc = libxl__xs_transaction_commit(gc, &xt);
> +        if (!rc) break;
> +        if (rc < 0) goto out;
> +        isretry = true;
> +    }
> +
> +    rc = 0;
> +out:
> +    libxl__xs_transaction_abort(gc, &xt);
> +    return rc;
> +}
> +
>   /*   libxl__sshm_do_map -- map pages into slave's physmap
>    *
>    *   This functions maps


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH v8 7/8] xen/arm: export shared memory regions as reserved-memory on device tree
  2018-10-08 23:37 ` [PATCH v8 7/8] xen/arm: export shared memory regions as reserved-memory on device tree Stefano Stabellini
@ 2018-10-24 11:57   ` Oleksandr Andrushchenko
  2018-10-30 15:58   ` Ian Jackson
  1 sibling, 0 replies; 46+ messages in thread
From: Oleksandr Andrushchenko @ 2018-10-24 11:57 UTC (permalink / raw)
  To: Stefano Stabellini, xen-devel
  Cc: ian.jackson, Stefano Stabellini, julien.grall, wei.liu2, blackskygg

On 10/09/2018 02:37 AM, Stefano Stabellini wrote:
> Shared memory regions need to be advertised to the guest. Fortunately, a
> device tree binding for special memory regions already exist:
> reserved-memory.
>
> Add a reserved-memory node for each shared memory region, for both
> masters and slaves.
>
> Signed-off-by: Stefano Stabellini <stefanos@xilinx.com>
> ---
> Changes in v8:
> - code style
> - id is added to device tree
>
> Changes in v7:
> - change node name to xen-shmem
> - add compatible property
> - add id property
> ---
>   tools/libxl/libxl_arch.h |  2 +-
>   tools/libxl/libxl_arm.c  | 61 +++++++++++++++++++++++++++++++++++++++++++++---
>   tools/libxl/libxl_dom.c  |  2 +-
>   tools/libxl/libxl_x86.c  |  2 +-
>   4 files changed, 61 insertions(+), 6 deletions(-)
>
> diff --git a/tools/libxl/libxl_arch.h b/tools/libxl/libxl_arch.h
> index 63c26cc..417e710 100644
> --- a/tools/libxl/libxl_arch.h
> +++ b/tools/libxl/libxl_arch.h
> @@ -36,7 +36,7 @@ int libxl__arch_domain_create(libxl__gc *gc, libxl_domain_config *d_config,
>   /* setup arch specific hardware description, i.e. DTB on ARM */
>   _hidden
>   int libxl__arch_domain_init_hw_description(libxl__gc *gc,
> -                                           libxl_domain_build_info *info,
> +                                           libxl_domain_config *d_config,
>                                              libxl__domain_build_state *state,
>                                              struct xc_dom_image *dom);
>   /* finalize arch specific hardware description. */
> diff --git a/tools/libxl/libxl_arm.c b/tools/libxl/libxl_arm.c
> index 054ad58..c1624c0 100644
> --- a/tools/libxl/libxl_arm.c
> +++ b/tools/libxl/libxl_arm.c
> @@ -436,6 +436,58 @@ static int make_memory_nodes(libxl__gc *gc, void *fdt,
>       return 0;
>   }
>   
> +static int make_reserved_nodes(libxl__gc *gc, void *fdt,
> +                               libxl_domain_config *d_config)
> +{
> +    int res, i;
> +    const char *name;
> +
> +    if (d_config->num_sshms == 0)
> +        return 0;
> +
> +    res = fdt_begin_node(fdt, "reserved-memory");
> +    if (res) return res;
> +
> +    res = fdt_property_cell(fdt, "#address-cells", GUEST_ROOT_ADDRESS_CELLS);
> +    if (res) return res;
> +
> +    res = fdt_property_cell(fdt, "#size-cells", GUEST_ROOT_SIZE_CELLS);
> +    if (res) return res;
> +
> +    res = fdt_property(fdt, "ranges", NULL, 0);
> +    if (res) return res;
> +
> +    for (i = 0; i < d_config->num_sshms; i++) {
> +        uint64_t start = d_config->sshms[i].begin;
> +
> +        if (d_config->sshms[i].role == LIBXL_SSHM_ROLE_SLAVE)
> +            start += d_config->sshms[i].offset;
> +        name = GCSPRINTF("xen-shmem@%"PRIx64, start);
> +
> +        res = fdt_begin_node(fdt, name);
> +        if (res) return res;
> +
> +        res = fdt_property_regs(gc, fdt, GUEST_ROOT_ADDRESS_CELLS,
> +                                GUEST_ROOT_SIZE_CELLS, 1, start,
> +                                d_config->sshms[i].size);
> +        if (res) return res;
> +
> +        res = fdt_property_compat(gc, fdt, 1, "xen,shared-memory");
Shouldn't this be "shared-memory-v1" as you document it in [1]?
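/* Sketch only, not code from this series: if the binding referenced in [1]
 * settles on a versioned compatible string, the call above would presumably
 * become the line below. The exact value "xen,shared-memory-v1" is an
 * assumption pending what devicetree.org accepts.
 */
        res = fdt_property_compat(gc, fdt, 1, "xen,shared-memory-v1");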
> +        if (res) return res;
> +
> +        res = fdt_property_string(fdt, "id", d_config->sshms[i].id);
> +        if (res) return res;
> +
> +        res = fdt_end_node(fdt);
> +        if (res) return res;
> +    }
> +
> +    res = fdt_end_node(fdt);
> +    if (res) return res;
> +
> +    return 0;
> +}
> +
>   static int make_gicv2_node(libxl__gc *gc, void *fdt,
>                              uint64_t gicd_base, uint64_t gicd_size,
>                              uint64_t gicc_base, uint64_t gicc_size)
> @@ -811,10 +863,11 @@ static int copy_partial_fdt(libxl__gc *gc, void *fdt, void *pfdt)
>   
>   #define FDT_MAX_SIZE (1<<20)
>   
> -static int libxl__prepare_dtb(libxl__gc *gc, libxl_domain_build_info *info,
> +static int libxl__prepare_dtb(libxl__gc *gc, libxl_domain_config *d_config,
>                                 libxl__domain_build_state *state,
>                                 struct xc_dom_image *dom)
>   {
> +    libxl_domain_build_info *info = &d_config->b_info;
>       void *fdt = NULL;
>       void *pfdt = NULL;
>       int rc, res;
> @@ -897,6 +950,7 @@ next_resize:
>           FDT( make_psci_node(gc, fdt) );
>   
>           FDT( make_memory_nodes(gc, fdt, dom) );
> +        FDT( make_reserved_nodes(gc, fdt, d_config) );
>   
>           switch (info->arch_arm.gic_version) {
>           case LIBXL_GIC_VERSION_V2:
> @@ -946,12 +1000,13 @@ out:
>   }
>   
>   int libxl__arch_domain_init_hw_description(libxl__gc *gc,
> -                                           libxl_domain_build_info *info,
> +                                           libxl_domain_config *d_config,
>                                              libxl__domain_build_state *state,
>                                              struct xc_dom_image *dom)
>   {
>       int rc;
>       uint64_t val;
> +    libxl_domain_build_info *info = &d_config->b_info;
>   
>       if (info->type != LIBXL_DOMAIN_TYPE_PVH) {
>           LOG(ERROR, "Unsupported Arm guest type %s",
> @@ -971,7 +1026,7 @@ int libxl__arch_domain_init_hw_description(libxl__gc *gc,
>       if (rc)
>           return rc;
>   
> -    rc = libxl__prepare_dtb(gc, info, state, dom);
> +    rc = libxl__prepare_dtb(gc, d_config, state, dom);
>       if (rc) goto out;
>   
>       if (!libxl_defbool_val(info->acpi)) {
> diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
> index 8a8a32c..2dc7696 100644
> --- a/tools/libxl/libxl_dom.c
> +++ b/tools/libxl/libxl_dom.c
> @@ -706,7 +706,7 @@ static int libxl__build_dom(libxl__gc *gc, uint32_t domid,
>           LOG(ERROR, "xc_dom_parse_image failed");
>           goto out;
>       }
> -    if ( (ret = libxl__arch_domain_init_hw_description(gc, info, state, dom)) != 0 ) {
> +    if ( (ret = libxl__arch_domain_init_hw_description(gc, d_config, state, dom)) != 0 ) {
>           LOGE(ERROR, "libxl__arch_domain_init_hw_description failed");
>           goto out;
>       }
> diff --git a/tools/libxl/libxl_x86.c b/tools/libxl/libxl_x86.c
> index 3e70a69..2b7e86a 100644
> --- a/tools/libxl/libxl_x86.c
> +++ b/tools/libxl/libxl_x86.c
> @@ -367,7 +367,7 @@ int libxl__arch_extra_memory(libxl__gc *gc,
>   }
>   
>   int libxl__arch_domain_init_hw_description(libxl__gc *gc,
> -                                           libxl_domain_build_info *info,
> +                                           libxl_domain_config *d_config,
>                                              libxl__domain_build_state *state,
>                                              struct xc_dom_image *dom)
>   {
[1] 
https://www.mail-archive.com/xen-devel@lists.xenproject.org/msg28000.html

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH v8 3/8] libxl: support mapping static shared memory areas during domain creation
  2018-10-08 23:37 ` [PATCH v8 3/8] libxl: support mapping static shared memory areas during domain creation Stefano Stabellini
@ 2018-10-24 13:26   ` Oleksandr Tyshchenko
  2018-10-30 15:36   ` Ian Jackson
  1 sibling, 0 replies; 46+ messages in thread
From: Oleksandr Tyshchenko @ 2018-10-24 13:26 UTC (permalink / raw)
  To: Stefano Stabellini
  Cc: stefanos, Wei Liu, blackskygg, Ian Jackson, Xen Devel, Julien Grall

Hi, Stefano

On Tue, Oct 9, 2018 at 2:39 AM Stefano Stabellini
<sstabellini@kernel.org> wrote:
>
> From: Zhongze Liu <blackskygg@gmail.com>
>
> Author: Zhongze Liu <blackskygg@gmail.com>
>
> Add libxl__sshm_add to map shared pages from one DomU to another, The mapping
> process involves the following steps:
>
>   * Set defaults and check for further errors in the static_shm configs:
>     overlapping areas, invalid ranges, duplicated master domain,
>     not page aligned, no master domain etc.
>   * Use xc_domain_add_to_physmap_batch to map the shared pages to slaves
>   * When some of the pages can't be successfully mapped, roll back any
>     successfully mapped pages so that the system stays in a consistent state.
>   * Write information about static shared memory areas into the appropriate
>     xenstore paths and set the refcount of the shared region accordingly.
>
> Temporarily mark this as unsupported on x86 because calling p2m_add_foreign on
> two domU's is currently not allowd on x86 (see the comments in
> x86/mm/p2m.c:p2m_add_foreign for more details).
>
> This is for the proposal "Allow setting up shared memory areas between VMs
> from xl config file" (see [1]).
>
> [1] https://lists.xen.org/archives/html/xen-devel/2017-08/msg03242.html
>
> Signed-off-by: Zhongze Liu <blackskygg@gmail.com>
> Signed-off-by: Stefano Stabellini <stefanos@xilinx.com>
>
> Cc: Wei Liu <wei.liu2@citrix.com>
> Cc: Ian Jackson <ian.jackson@eu.citrix.com>
> Cc: Stefano Stabellini <sstabellini@kernel.org>
> Cc: Julien Grall <julien.grall@arm.com>
> Cc: xen-devel@lists.xen.org
> ---
> Changes in v5:
> - fix typos
> - add comments
> - add value checks (including alignment checks) in sshm_setdefaults
> ---
>  tools/libxl/Makefile         |   3 +-
>  tools/libxl/libxl_arch.h     |   6 +
>  tools/libxl/libxl_arm.c      |  15 ++
>  tools/libxl/libxl_create.c   |  27 +++
>  tools/libxl/libxl_internal.h |  14 ++
>  tools/libxl/libxl_sshm.c     | 405 +++++++++++++++++++++++++++++++++++++++++++
>  tools/libxl/libxl_x86.c      |  19 ++
>  7 files changed, 488 insertions(+), 1 deletion(-)
>  create mode 100644 tools/libxl/libxl_sshm.c
>
> diff --git a/tools/libxl/Makefile b/tools/libxl/Makefile
> index 6da342e..53af186 100644
> --- a/tools/libxl/Makefile
> +++ b/tools/libxl/Makefile
> @@ -140,7 +140,8 @@ LIBXL_OBJS = flexarray.o libxl.o libxl_create.o libxl_dm.o libxl_pci.o \
>                         libxl_vtpm.o libxl_nic.o libxl_disk.o libxl_console.o \
>                         libxl_cpupool.o libxl_mem.o libxl_sched.o libxl_tmem.o \
>                         libxl_9pfs.o libxl_domain.o libxl_vdispl.o \
> -                       libxl_pvcalls.o libxl_vsnd.o libxl_vkb.o $(LIBXL_OBJS-y)
> +                       libxl_pvcalls.o libxl_vsnd.o libxl_vkb.o libxl_sshm.o \
> +                       $(LIBXL_OBJS-y)
>  LIBXL_OBJS += libxl_genid.o
>  LIBXL_OBJS += _libxl_types.o libxl_flask.o _libxl_types_internal.o
>
> diff --git a/tools/libxl/libxl_arch.h b/tools/libxl/libxl_arch.h
> index 930570e..63c26cc 100644
> --- a/tools/libxl/libxl_arch.h
> +++ b/tools/libxl/libxl_arch.h
> @@ -73,6 +73,12 @@ int libxl__arch_extra_memory(libxl__gc *gc,
>                               const libxl_domain_build_info *info,
>                               uint64_t *out);
>
> +_hidden
> +bool libxl__arch_domain_support_sshm(const libxl_domain_build_info *b_info);
> +
> +_hidden
> +int libxl__arch_domain_sshm_cachepolicy_setdefault(libxl_static_shm *sshm);
> +
>  #if defined(__i386__) || defined(__x86_64__)
>
>  #define LAPIC_BASE_ADDRESS  0xfee00000
> diff --git a/tools/libxl/libxl_arm.c b/tools/libxl/libxl_arm.c
> index 25dc3de..054ad58 100644
> --- a/tools/libxl/libxl_arm.c
> +++ b/tools/libxl/libxl_arm.c
> @@ -1138,6 +1138,21 @@ void libxl__arch_domain_build_info_setdefault(libxl__gc *gc,
>      libxl_domain_build_info_init_type(b_info, LIBXL_DOMAIN_TYPE_PVH);
>  }
>
> +bool libxl__arch_domain_support_sshm(const libxl_domain_build_info *b_info)
> +{
> +    return true;
> +}
> +
> +int libxl__arch_domain_sshm_cachepolicy_setdefault(libxl_static_shm *sshm)
> +{
> +    if (sshm->cache_policy == LIBXL_SSHM_CACHEPOLICY_UNKNOWN)
> +        sshm->cache_policy = LIBXL_SSHM_CACHEPOLICY_ARM_NORMAL;
> +    if (sshm->cache_policy >= LIBXL_SSHM_CACHEPOLICY_X86_NORMAL)
> +        return ERROR_INVAL;
> +
> +    return 0;
> +}
> +
>  /*
>   * Local variables:
>   * mode: C
> diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
> index 320dbed..45ae9e4 100644
> --- a/tools/libxl/libxl_create.c
> +++ b/tools/libxl/libxl_create.c
> @@ -513,6 +513,14 @@ int libxl__domain_build(libxl__gc *gc,
>          ret = ERROR_INVAL;
>          goto out;
>      }
> +
> +    /* The p2m has been setup, we could map the static shared memory now. */
> +    ret = libxl__sshm_add(gc, domid, d_config->sshms, d_config->num_sshms);
> +    if (ret != 0) {
> +        LOG(ERROR, "failed to map static shared memory");
> +        goto out;
> +    }
> +
>      ret = libxl__build_post(gc, domid, info, state, vments, localents);
>  out:
>      return ret;
> @@ -949,6 +957,25 @@ static void initiate_domain_create(libxl__egc *egc,
>          goto error_out;
>      }
>
> +    if (d_config->num_sshms != 0 &&
> +        !libxl__arch_domain_support_sshm(&d_config->b_info)) {
> +        LOGD(ERROR, domid, "static_shm is not supported by this domain type.");
> +        ret = ERROR_INVAL;
> +        goto error_out;
> +    }
> +
> +    for (i = 0; i < d_config->num_sshms; ++i) {
> +        ret = libxl__sshm_setdefault(gc, domid, &d_config->sshms[i]);
> +        if (ret) {
> +            LOGD(ERROR, domid, "Unable to set defaults for static shm");
> +            goto error_out;
> +        }
> +    }
> +
> +    ret = libxl__sshm_check_overlap(gc, domid,
> +                                    d_config->sshms, d_config->num_sshms);
> +    if (ret) goto error_out;
> +
>      ret = libxl__domain_make(gc, d_config, &dcs->build_state, &domid);
>      if (ret) {
>          LOGD(ERROR, domid, "cannot make domain: %d", ret);
> diff --git a/tools/libxl/libxl_internal.h b/tools/libxl/libxl_internal.h
> index 43947b1..6f31a3d 100644
> --- a/tools/libxl/libxl_internal.h
> +++ b/tools/libxl/libxl_internal.h
> @@ -4434,6 +4434,20 @@ static inline const char *libxl__qemu_qmp_path(libxl__gc *gc, int domid)
>  #endif
>
>  /*
> + * Set up static shared ram pages for HVM domains to communicate
> + *
> + * This function should only be called after the memory map is constructed
> + * and before any further memory access.
> + */
> +_hidden int libxl__sshm_add(libxl__gc *gc, uint32_t domid,
> +                            libxl_static_shm *sshm, int len);
> +
> +_hidden int libxl__sshm_check_overlap(libxl__gc *gc, uint32_t domid,
> +                                      libxl_static_shm *sshms, int len);
> +_hidden int libxl__sshm_setdefault(libxl__gc *gc, uint32_t domid,
> +                                   libxl_static_shm *sshm);
> +
> +/*
>   * Local variables:
>   * mode: C
>   * c-basic-offset: 4
> diff --git a/tools/libxl/libxl_sshm.c b/tools/libxl/libxl_sshm.c
> new file mode 100644
> index 0000000..f61b80c
> --- /dev/null
> +++ b/tools/libxl/libxl_sshm.c
> @@ -0,0 +1,405 @@
> +#include "libxl_osdeps.h"
> +#include "libxl_internal.h"
> +#include "libxl_arch.h"
> +
> +#define SSHM_PATH(id) GCSPRINTF("/libxl/static_shm/%s", id)
> +
> +#define SSHM_ERROR(domid, sshmid, f, ...)                               \
> +    LOGD(ERROR, domid, "static_shm id = %s: " f, sshmid, ##__VA_ARGS__)
> +
> +
> +/* Set default values for libxl_static_shm */
> +int libxl__sshm_setdefault(libxl__gc *gc, uint32_t domid,
> +                           libxl_static_shm *sshm)
> +{
> +    int rc;
> +
> +    if (sshm->role != LIBXL_SSHM_ROLE_SLAVE &&
> +        sshm->role != LIBXL_SSHM_ROLE_MASTER)
> +        return ERROR_INVAL;
> +    if (sshm->begin & ~XC_PAGE_MASK ||
> +        sshm->size & ~XC_PAGE_MASK ||
> +        (sshm->offset != LIBXL_SSHM_RANGE_UNKNOWN &&
> +        sshm->offset & ~XC_PAGE_MASK)) {
> +        SSHM_ERROR(domid, sshm->id,
> +                   "begin/size/offset is not a multiple of 4K");
> +        return ERROR_INVAL;
> +    }
> +
> +    /* role-specific checks */
> +    if (sshm->role == LIBXL_SSHM_ROLE_SLAVE) {
> +        if (sshm->offset == LIBXL_SSHM_RANGE_UNKNOWN)
> +            sshm->offset = 0;
> +        if (sshm->cache_policy != LIBXL_SSHM_CACHEPOLICY_UNKNOWN) {
> +            SSHM_ERROR(domid, sshm->id,
> +                       "cache_policy is only applicable to master domains");
> +            rc = ERROR_INVAL;
> +            goto out;
> +        }
> +    } else {
> +        if (sshm->offset != LIBXL_SSHM_RANGE_UNKNOWN) {
> +            SSHM_ERROR(domid, sshm->id,
> +                       "offset is only applicable to slave domains");
> +            rc = ERROR_INVAL;
> +            goto out;
> +        }
> +
> +        rc = libxl__arch_domain_sshm_cachepolicy_setdefault(sshm);
> +        if (rc) {
> +            SSHM_ERROR(domid, sshm->id,
> +                       "cache policy not supported on this platform");
> +            goto out;
> +        }
> +    }
> +
> +    rc = 0;
> +out:
> +    return rc;
> +}
> +
> +/* Comparator for sorting sshm ranges by sshm->begin */
> +static int sshm_range_cmp(const void *a, const void *b)
> +{
> +    libxl_static_shm *const *sshma = a, *const *sshmb = b;
> +    return (*sshma)->begin > (*sshmb)->begin ? 1 : -1;
> +}
> +
> +/* Check if the sshm slave configs in @sshm overlap */
> +int libxl__sshm_check_overlap(libxl__gc *gc, uint32_t domid,
> +                                     libxl_static_shm *sshms, int len)
> +{
> +
> +    const libxl_static_shm **slave_sshms = NULL;
> +    int num_slaves;
> +    int i;
> +
> +    if (!len) return 0;
> +
> +    slave_sshms = libxl__calloc(gc, len, sizeof(slave_sshms[0]));
> +    num_slaves = 0;
> +    for (i = 0; i < len; ++i) {
> +        if (sshms[i].role == LIBXL_SSHM_ROLE_SLAVE)
> +            slave_sshms[num_slaves++] = sshms + i;
> +    }
> +    qsort(slave_sshms, num_slaves, sizeof(slave_sshms[0]), sshm_range_cmp);
> +
> +    for (i = 0; i < num_slaves - 1; ++i) {
> +        if (slave_sshms[i+1]->begin <
> +            slave_sshms[i]->begin + slave_sshms[i]->size) {
> +            SSHM_ERROR(domid, slave_sshms[i+1]->id, "slave ranges overlap.");
> +            return ERROR_INVAL;
> +        }
> +    }
> +
> +    return 0;
> +}
> +
> +/*   libxl__sshm_do_map -- map pages into slave's physmap
> + *
> + *   This functions maps
> + *     master gfn: [@msshm->begin + @sshm->offset,
> + *                  @msshm->begin + @msshm->size + @sshm->offset)
> + *   into
> + *     slave gfn: [@sshm->begin, @sshm->begin + @sshm->size)
> + *
> + *   The gfns of the pages that are successfully mapped will be stored
> + *   in @mapped, and the number of the gfns will be stored in @nmapped.
> + *
> + *   The caller has to guarantee that all the values are page-aligned.
> + */
> +static int libxl__sshm_do_map(libxl__gc *gc, uint32_t mid, uint32_t sid,
> +                              libxl_static_shm *sshm, libxl_static_shm *msshm,
> +                              xen_pfn_t *mapped, unsigned int *nmapped)
> +{
> +    int rc;
> +    int i;
> +    xen_pfn_t num_mpages, num_spages, num_success, offset;
> +    int *errs;
> +    xen_ulong_t *idxs;
> +    xen_pfn_t *gpfns;
> +
> +    num_mpages = (msshm->size) >> XC_PAGE_SHIFT;
> +    num_spages = (sshm->size) >> XC_PAGE_SHIFT;
> +    offset = sshm->offset >> XC_PAGE_SHIFT;
> +
> +    /* Check range. Test offset < mpages first to avoid overflow */
> +    if ((offset >= num_mpages) || (num_mpages - offset < num_spages)) {
> +        SSHM_ERROR(sid, sshm->id, "exceeds master's address space.");
> +        rc = ERROR_INVAL;
> +        goto out;
> +    }
> +
> +    /* fill out the gfn's and do the mapping */
> +    errs = libxl__calloc(gc, num_spages, sizeof(int));
> +    idxs = libxl__calloc(gc, num_spages, sizeof(xen_ulong_t));
> +    gpfns = libxl__calloc(gc, num_spages, sizeof(xen_pfn_t));
> +    for (i = 0; i < num_spages; i++) {
> +        idxs[i] = (msshm->begin >> XC_PAGE_SHIFT) + offset + i;
> +        gpfns[i]= (sshm->begin >> XC_PAGE_SHIFT) + i;
> +    }
> +    rc = xc_domain_add_to_physmap_batch(CTX->xch,
> +                                        sid, mid,
> +                                        XENMAPSPACE_gmfn_share,
> +                                        num_spages,
> +                                        idxs, gpfns, errs);
> +
> +    num_success = 0;
> +    for (i = 0; i < num_spages; i++) {
> +        if (errs[i]) {
> +            SSHM_ERROR(sid, sshm->id,
> +                       "can't map at address 0x%"PRIx64".",
> +                       gpfns[i] << XC_PAGE_SHIFT);
> +            rc = ERROR_FAIL;
> +        } else {
> +            mapped[num_success++] = gpfns[i];
> +        }
> +    }
> +    *nmapped = num_success;
> +    if (rc) goto out;
> +
> +    rc = 0;
> +out:
> +    return rc;
> +}
> +
> +/* Xenstore ops are protected by a transaction */
> +static int libxl__sshm_incref(libxl__gc *gc, xs_transaction_t xt,
> +                              const char *sshm_path)
> +{
> +    int rc, count;
> +    const char *count_path, *count_string;
> +
> +    count_path = GCSPRINTF("%s/usercnt", sshm_path);
> +    rc = libxl__xs_read_checked(gc, xt, count_path, &count_string);
> +    if (rc) goto out;
> +    count = atoi(count_string);
> +
> +    count_string = GCSPRINTF("%d", count+1);
> +    rc = libxl__xs_write_checked(gc, xt, count_path, count_string);
> +    if (rc) goto out;
> +
> +    rc = 0;
> +out:
> +    return rc;
> +}
> +
> +static int libxl__sshm_add_slave(libxl__gc *gc, uint32_t domid,
> +                                 libxl_static_shm *sshm)
> +{
> +    int rc, i;
> +    const char *sshm_path, *slave_path;
> +    const char *dom_path, *dom_sshm_path, *dom_role_path;
> +    const char *xs_value;
> +    char *ents[9];
> +    libxl_static_shm master_sshm;
> +    uint32_t master_domid;
> +    xen_pfn_t *mapped;
> +    unsigned int nmapped = 0;
> +    xs_transaction_t xt = XBT_NULL;
> +    bool isretry;
> +
> +    sshm_path = SSHM_PATH(sshm->id);
> +    slave_path = GCSPRINTF("%s/slaves/%"PRIu32, sshm_path, domid);
> +    dom_path = libxl__xs_get_dompath(gc, domid);
> +    /* the domain should be in xenstore by now */
> +    assert(dom_path);
> +    dom_sshm_path = GCSPRINTF("%s/static_shm/%s", dom_path, sshm->id);
> +    dom_role_path = GCSPRINTF("%s/role", dom_sshm_path);
> +
> +    /* prepare the slave xenstore entries */
> +    ents[0] = "begin";
> +    ents[1] = GCSPRINTF("0x%"PRIx64, sshm->begin);
> +    ents[2] = "size";
> +    ents[3] = GCSPRINTF("0x%"PRIx64, sshm->size);
> +    ents[4] = "offset";
> +    ents[5] = GCSPRINTF("0x%"PRIx64, sshm->offset);
> +    ents[6] = "prot";
> +    ents[7] = libxl__strdup(gc, libxl_sshm_prot_to_string(sshm->prot));
> +    ents[8] = NULL;
> +
> +    mapped = libxl__calloc(gc, sshm->size >> XC_PAGE_SHIFT, sizeof(xen_pfn_t));
> +
> +    isretry = false;
> +    for (;;) {
> +        rc = libxl__xs_transaction_start(gc, &xt);
> +        if (rc) goto out;
> +
> +        if (!libxl__xs_read(gc, xt, sshm_path)) {
> +            SSHM_ERROR(domid, sshm->id, "no master found.");
> +            rc = ERROR_FAIL;
> +            goto out;
> +        }
> +
> +        /* every ID can appear in each domain at most once */
> +        if (libxl__xs_read(gc, xt, dom_sshm_path)) {
> +            SSHM_ERROR(domid, sshm->id,
> +                       "domain tried to map the same ID twice.");
> +            rc = ERROR_FAIL;
> +            goto out;
> +        }
> +
> +        /* look at the master info and see if we could do the mapping */
> +        rc = libxl__xs_read_checked(gc, xt,
> +                                    GCSPRINTF("%s/prot", sshm_path),
> +                                    &xs_value);
> +        if (rc) goto out;
> +        libxl_sshm_prot_from_string(xs_value, &master_sshm.prot);
> +
> +        rc = libxl__xs_read_checked(gc, xt,
> +                                    GCSPRINTF("%s/begin", sshm_path),
> +                                    &xs_value);
> +        if (rc) goto out;
> +        master_sshm.begin = strtoull(xs_value, NULL, 16);
> +
> +        rc = libxl__xs_read_checked(gc, xt,
> +                                    GCSPRINTF("%s/size", sshm_path),
> +                                    &xs_value);
> +        if (rc) goto out;
> +        master_sshm.size = strtoull(xs_value, NULL, 16);
> +
> +        rc = libxl__xs_read_checked(gc, xt,
> +                                    GCSPRINTF("%s/master", sshm_path),
> +                                    &xs_value);
> +        if (rc) goto out;
> +        master_domid = strtoull(xs_value, NULL, 16);
> +
> +        if (sshm->prot == LIBXL_SSHM_PROT_UNKNOWN)
> +            sshm->prot = master_sshm.prot;
> +
> +        /* check if the slave is asking too much permission */
> +        if (master_sshm.prot < sshm->prot) {
> +            SSHM_ERROR(domid, sshm->id, "slave is asking too much permission.");
> +            rc = ERROR_INVAL;
> +            goto out;
> +        }
> +
> +        /* all checks passed, do the job */
> +        if (!isretry) {
> +            rc = libxl__sshm_do_map(gc, master_domid, domid,
> +                                    sshm, &master_sshm,
> +                                    mapped, &nmapped);
> +            if (rc) goto out;
> +        }
> +
> +        /* write the result to xenstore and commit */
> +        rc = libxl__xs_write_checked(gc, xt, dom_role_path, "slave");
> +        if (rc) goto out;
> +        rc = libxl__xs_writev(gc, xt, slave_path, ents);
> +        if (rc) goto out;
> +        rc = libxl__sshm_incref(gc, xt, sshm_path);
> +        if (rc) goto out;
> +
> +        rc = libxl__xs_transaction_commit(gc, &xt);
> +        if (!rc) break;
> +        if (rc < 0) goto out;
> +        isretry = true;

Is it possible to loop here forever waiting for the transaction to be committed?
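For reference, libxl__xs_transaction_commit() follows the usual libxl
convention of returning 0 on success, a positive value when the
transaction raced and needs to be retried, and a negative error code
otherwise, so the loop only keeps spinning while xenstore keeps
reporting conflicts. A purely illustrative sketch of a bounded variant,
with a hypothetical MAX_SSHM_XS_RETRIES cap, might look like:

    /* Illustrative sketch only: give up after a fixed number of
     * xenstore conflicts instead of retrying indefinitely.
     * MAX_SSHM_XS_RETRIES is a hypothetical constant, not part of
     * the series. */
    int retries = 0;
    for (;;) {
        rc = libxl__xs_transaction_start(gc, &xt);
        if (rc) goto out;

        /* ... checks, mapping and xenstore writes as in the patch ... */

        rc = libxl__xs_transaction_commit(gc, &xt);
        if (!rc) break;                        /* committed */
        if (rc < 0) goto out;                  /* hard error */
        if (++retries > MAX_SSHM_XS_RETRIES) { /* hypothetical cap */
            rc = ERROR_FAIL;
            goto out;
        }
        isretry = true;
    }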

> +    }
> +
> +    rc = 0;
> +out:
> +    if (rc) {

I would extend the check:
if (rc && nmapped) {
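
i.e., based on the hunk below, something along the lines of:

    /* Roll back only if some pages were actually mapped before the
     * failure; skip the loop entirely when nothing was mapped. */
    if (rc && nmapped) {
        SSHM_ERROR(domid, sshm->id, "failed to map some pages, cancelling.");
        for (i = 0; i < nmapped; i++)
            xc_domain_remove_from_physmap(CTX->xch, domid, mapped[i]);
    }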

> +        /* roll back successfully mapped pages */
> +        SSHM_ERROR(domid, sshm->id, "failed to map some pages, cancelling.");
> +        for (i = 0; i < nmapped; i++) {
> +            xc_domain_remove_from_physmap(CTX->xch, domid, mapped[i]);
> +        }
> +    }
> +
> +    libxl__xs_transaction_abort(gc, &xt);
> +
> +    return rc;
> +}
> +
> +static int libxl__sshm_add_master(libxl__gc *gc, uint32_t domid,
> +                                  libxl_static_shm *sshm)
> +{
> +    int rc;
> +    const char *sshm_path, *dom_path, *dom_role_path;
> +    char *ents[13];
> +    struct xs_permissions noperm;
> +    xs_transaction_t xt = XBT_NULL;
> +
> +    sshm_path = SSHM_PATH(sshm->id);
> +    dom_path = libxl__xs_get_dompath(gc, domid);
> +    /* the domain should be in xenstore by now */
> +    assert(dom_path);
> +    dom_role_path = GCSPRINTF("%s/static_shm/%s/role", dom_path, sshm->id);
> +
> +    /* prepare the xenstore entries */
> +    ents[0] = "master";
> +    ents[1] = GCSPRINTF("%"PRIu32, domid);
> +    ents[2] = "begin";
> +    ents[3] = GCSPRINTF("0x%"PRIx64, sshm->begin);
> +    ents[4] = "size";
> +    ents[5] = GCSPRINTF("0x%"PRIx64, sshm->size);
> +    ents[6] = "prot";
> +    ents[7] = libxl__strdup(gc, libxl_sshm_prot_to_string(sshm->prot));
> +    ents[8] = "cache_policy";
> +    ents[9] = libxl__strdup(gc,
> +                            libxl_sshm_cachepolicy_to_string(sshm->cache_policy));
> +    ents[10] = "usercnt";
> +    ents[11] = "1";
> +    ents[12] = NULL;
> +
> +    /* could only be accessed by Dom0 */
> +    noperm.id = 0;
> +    noperm.perms = XS_PERM_NONE;
> +
> +    for (;;) {
> +        rc = libxl__xs_transaction_start(gc, &xt);
> +        if (rc) goto out;
> +
> +        if (!libxl__xs_read(gc, xt, sshm_path)) {
> +            /* every ID can appear in each domain at most once */
> +            if (libxl__xs_read(gc, xt, dom_role_path)) {
> +                SSHM_ERROR(domid, sshm->id,
> +                           "domain tried to map the same ID twice.");
> +                rc = ERROR_FAIL;
> +                goto out;
> +            }
> +            rc = libxl__xs_write_checked(gc, xt, dom_role_path, "master");
> +            if (rc) goto out;;
> +
> +            libxl__xs_mknod(gc, xt, sshm_path, &noperm, 1);
> +            libxl__xs_writev(gc, xt, sshm_path, ents);
> +        } else {
> +            SSHM_ERROR(domid, sshm->id, "can only have one master.");
> +            rc = ERROR_FAIL;
> +            goto out;
> +        }
> +
> +        rc = libxl__xs_transaction_commit(gc, &xt);
> +        if (!rc) break;
> +        if (rc < 0) goto out;
> +    }
> +
> +    rc = 0;
> +out:
> +    libxl__xs_transaction_abort(gc, &xt);
> +    return rc;
> +}
> +
> +int libxl__sshm_add(libxl__gc *gc,  uint32_t domid,
> +                    libxl_static_shm *sshms, int len)
> +{
> +    int rc, i;
> +
> +    for (i = 0; i < len; ++i) {
> +        if (sshms[i].role == LIBXL_SSHM_ROLE_SLAVE) {
> +            rc = libxl__sshm_add_slave(gc, domid, sshms+i);
> +        } else {
> +            rc = libxl__sshm_add_master(gc, domid, sshms+i);
> +        }
> +        if (rc)  return rc;
> +    }
> +
> +    return 0;
> +}
> +
> +/*
> + * Local variables:
> + * mode: C
> + * c-basic-offset: 4
> + * indent-tabs-mode: nil
> + * End:
> + */
> diff --git a/tools/libxl/libxl_x86.c b/tools/libxl/libxl_x86.c
> index 8b6759c..3e70a69 100644
> --- a/tools/libxl/libxl_x86.c
> +++ b/tools/libxl/libxl_x86.c
> @@ -619,6 +619,25 @@ void libxl__arch_domain_build_info_setdefault(libxl__gc *gc,
>      libxl_defbool_setdefault(&b_info->acpi, true);
>  }
>
> +bool libxl__arch_domain_support_sshm(const libxl_domain_build_info *b_info)
> +{
> +    /* FIXME: Mark this as unsupported for calling p2m_add_foreign on two
> +     * DomU's is currently not allowd on x86, see the comments in
> +     * x86/mm/p2m.c: p2m_add_foreign.
> +     */
> +     return false;
> +}
> +
> +int libxl__arch_domain_sshm_cachepolicy_setdefault(libxl_static_shm *sshm)
> +{
> +    if (sshm->cache_policy == LIBXL_SSHM_CACHEPOLICY_UNKNOWN)
> +        sshm->cache_policy = LIBXL_SSHM_CACHEPOLICY_X86_NORMAL;
> +    if (sshm->cache_policy < LIBXL_SSHM_CACHEPOLICY_X86_NORMAL)
> +        return ERROR_INVAL;
> +
> +    return 0;
> +}
> +
>  /*
>   * Local variables:
>   * mode: C
> --
> 1.9.1
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xenproject.org
> https://lists.xenproject.org/mailman/listinfo/xen-devel



--
Regards,

Oleksandr Tyshchenko

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH v8 4/8] libxl: support unmapping static shared memory areas during domain destruction
  2018-10-24 11:48   ` Oleksandr Andrushchenko
@ 2018-10-24 13:32     ` Oleksandr Tyshchenko
  0 siblings, 0 replies; 46+ messages in thread
From: Oleksandr Tyshchenko @ 2018-10-24 13:32 UTC (permalink / raw)
  To: Stefano Stabellini
  Cc: stefanos, Wei Liu, blackskygg, Oleksandr Andrushchenko,
	Ian Jackson, Xen Devel, Julien Grall

On Wed, Oct 24, 2018 at 2:49 PM Oleksandr Andrushchenko
<andr2000@gmail.com> wrote:
>
> On 10/09/2018 02:37 AM, Stefano Stabellini wrote:
> > From: Zhongze Liu <blackskygg@gmail.com>
> >
> > Author: Zhongze Liu <blackskygg@gmail.com>
> >
> > Add libxl__sshm_del to unmap static shared memory areas mapped by
> > libxl__sshm_add during domain creation. The unmapping process is:
> >
> > * For a master: decrease the refcount of the sshm region, if the refcount
> >    reaches 0, cleanup the whole sshm path.
> >
> > * For a slave:
> >    1) unmap the shared pages, and cleanup related xs entries. If the
> >       system works normally, all the shared pages will be unmapped, so there
> >       won't be page leaks. In case of errors, the unmapping process will go
> >       on and unmap all the other pages that can be unmapped, so the other
> >       pages won't be leaked, either.
> >    2) Decrease the refcount of the sshm region, if the refcount reaches
> >       0, cleanup the whole sshm path.
> >
> > This is for the proposal "Allow setting up shared memory areas between VMs
> > from xl config file" (see [1]).
> >
> > [1] https://lists.xen.org/archives/html/xen-devel/2017-08/msg03242.html
> >
> > Signed-off-by: Zhongze Liu <blackskygg@gmail.com>
> > Signed-off-by: Stefano Stabellini <stefanos@xilinx.com>
> >
> > Cc: Ian Jackson <ian.jackson@eu.citrix.com>
> > Cc: Wei Liu <wei.liu2@citrix.com>
> > Cc: Stefano Stabellini <sstabellini@kernel.org>
> > Cc: Julien Grall <julien.grall@arm.com>
> > Cc: xen-devel@lists.xen.org
> >
> > ---
> > Changes in v5:
> > - fix typos
> > - add comments
> > - cannot move unmap before xenstore transaction because it needs to
> >    retrieve begin/size values from xenstore
> > ---
> >   tools/libxl/libxl_domain.c   |   5 ++
> >   tools/libxl/libxl_internal.h |   2 +
> >   tools/libxl/libxl_sshm.c     | 107 +++++++++++++++++++++++++++++++++++++++++++
> >   3 files changed, 114 insertions(+)
> >
> > diff --git a/tools/libxl/libxl_domain.c b/tools/libxl/libxl_domain.c
> > index 3377bba..3f7ffa6 100644
> > --- a/tools/libxl/libxl_domain.c
> > +++ b/tools/libxl/libxl_domain.c
> > @@ -1060,6 +1060,11 @@ void libxl__destroy_domid(libxl__egc *egc, libxl__destroy_domid_state *dis)
> >           goto out;
> >       }
> >
> > +    rc = libxl__sshm_del(gc, domid);
> > +    if (rc) {
> > +        LOGD(ERROR, domid, "Deleting static shm failed.");
> > +    }
> Do you need brackets here?
> > +
> >       if (libxl__device_pci_destroy_all(gc, domid) < 0)
> >           LOGD(ERROR, domid, "Pci shutdown failed");
> >       rc = xc_domain_pause(ctx->xch, domid);
> > diff --git a/tools/libxl/libxl_internal.h b/tools/libxl/libxl_internal.h
> > index 6f31a3d..e86d356 100644
> > --- a/tools/libxl/libxl_internal.h
> > +++ b/tools/libxl/libxl_internal.h
> > @@ -4442,6 +4442,8 @@ static inline const char *libxl__qemu_qmp_path(libxl__gc *gc, int domid)
> >   _hidden int libxl__sshm_add(libxl__gc *gc, uint32_t domid,
> >                               libxl_static_shm *sshm, int len);
> >
> > +_hidden int libxl__sshm_del(libxl__gc *gc, uint32_t domid);
> > +
> >   _hidden int libxl__sshm_check_overlap(libxl__gc *gc, uint32_t domid,
> >                                         libxl_static_shm *sshms, int len);
> >   _hidden int libxl__sshm_setdefault(libxl__gc *gc, uint32_t domid,
> > diff --git a/tools/libxl/libxl_sshm.c b/tools/libxl/libxl_sshm.c
> > index f61b80c..9672056 100644
> > --- a/tools/libxl/libxl_sshm.c
> > +++ b/tools/libxl/libxl_sshm.c
> > @@ -94,6 +94,113 @@ int libxl__sshm_check_overlap(libxl__gc *gc, uint32_t domid,
> >       return 0;
> >   }
> >
> > +/*
> > + * Decrease the refcount of an sshm. When refcount reaches 0,
> > + * clean up the whole sshm path.
> > + * Xenstore operations are done within the same transaction.
> > + */
> > +static void libxl__sshm_decref(libxl__gc *gc, xs_transaction_t xt,
> > +                               const char *sshm_path)
> > +{
> > +    int count;
> > +    const char *count_path, *count_string;
> > +
> > +    count_path = GCSPRINTF("%s/usercnt", sshm_path);
> > +    if (libxl__xs_read_checked(gc, xt, count_path, &count_string))
> > +        return;
> > +    count = atoi(count_string);
> > +
> > +    if (--count == 0) {
> > +        libxl__xs_path_cleanup(gc, xt, sshm_path);
> > +        return;
> > +    }
> > +
> > +    count_string = GCSPRINTF("%d", count);
> > +    libxl__xs_write_checked(gc, xt, count_path, count_string);
> > +
> > +    return;
> > +}
> > +
> > +static void libxl__sshm_do_unmap(libxl__gc *gc, uint32_t domid, const char *id,
> > +                                 uint64_t begin, uint64_t size)
> > +{
> > +    uint64_t end;
> > +    begin >>= XC_PAGE_SHIFT;
> > +    size >>= XC_PAGE_SHIFT;
> > +    end = begin + size;
> > +    for (; begin < end; ++begin) {
> > +        if (xc_domain_remove_from_physmap(CTX->xch, domid, begin)) {
What I was thinking is that an xc_domain_remove_from_physmap_batch()
could speed up unmapping here, if such a call were present.
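Purely hypothetical sketch, since no such batched removal call exists
in libxc today: if an interface mirroring xc_domain_add_to_physmap_batch()
were added, the per-page loop in libxl__sshm_do_unmap() could collapse
into a single call, e.g.:

    /* Hypothetical: remove all shared pages in one hypercall, assuming
     * an xc_domain_remove_from_physmap_batch() analogous to the
     * existing xc_domain_add_to_physmap_batch(). */
    xen_pfn_t *gpfns = libxl__calloc(gc, size, sizeof(xen_pfn_t));
    int *errs = libxl__calloc(gc, size, sizeof(int));
    uint64_t i;

    for (i = 0; i < size; i++)
        gpfns[i] = begin + i;
    if (xc_domain_remove_from_physmap_batch(CTX->xch, domid, size,
                                            gpfns, errs))
        SSHM_ERROR(domid, id, "unable to unmap some shared pages.");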

> > +            SSHM_ERROR(domid, id,
> > +                       "unable to unmap shared page at 0x%"PRIx64".",
> > +                       begin);
> > +        }
> > +    }
> > +}
> > +
> > +static void libxl__sshm_del_slave(libxl__gc *gc, xs_transaction_t xt,
> > +                                  uint32_t domid, const char *id, bool isretry)
> isretry is not used, please remove
> > +{
> > +    const char *slave_path, *begin_str, *size_str;
> > +    uint64_t begin, size;
> > +
> > +    slave_path = GCSPRINTF("%s/slaves/%"PRIu32, SSHM_PATH(id), domid);
> > +
> > +    begin_str = libxl__xs_read(gc, xt, GCSPRINTF("%s/begin", slave_path));
> > +    size_str = libxl__xs_read(gc, xt, GCSPRINTF("%s/size", slave_path));
> > +    begin = strtoull(begin_str, NULL, 16);
> > +    size = strtoull(size_str, NULL, 16);
> > +
> > +    libxl__sshm_do_unmap(gc, domid, id, begin, size);
> > +    libxl__xs_path_cleanup(gc, xt, slave_path);
> > +}
> > +
> > +/* Delete static_shm entries in the xensotre. */
> > +int libxl__sshm_del(libxl__gc *gc,  uint32_t domid)
> > +{
> > +    int rc, i;
> > +    bool isretry;
> > +    xs_transaction_t xt = XBT_NULL;
> > +    const char *dom_path, *dom_sshm_path, *role;
> > +    char **sshm_ents;
> > +    unsigned int sshm_num;
> > +
> > +    dom_path = libxl__xs_get_dompath(gc, domid);
> > +    dom_sshm_path = GCSPRINTF("%s/static_shm", dom_path);
> > +
> > +    isretry = false;
> see above, isretry is not used
> > +    for (;;) {
> > +        rc = libxl__xs_transaction_start(gc, &xt);
> > +        if (rc) goto out;
> > +
> > +        if (libxl__xs_read(gc, xt, dom_sshm_path)) {
> > +            sshm_ents = libxl__xs_directory(gc, xt, dom_sshm_path, &sshm_num);
> > +            if (!sshm_ents) continue;
> > +
> > +            for (i = 0; i < sshm_num; ++i) {
> > +                role = libxl__xs_read(gc, xt,
> > +                                      GCSPRINTF("%s/%s/role",
> > +                                                dom_sshm_path,
> > +                                                sshm_ents[i]));
> > +                assert(role);
> > +                if (!strncmp(role, "slave", 5))
> > +                    libxl__sshm_del_slave(gc, xt, domid, sshm_ents[i], isretry);
> > +
> > +                libxl__sshm_decref(gc, xt, SSHM_PATH(sshm_ents[i]));
> > +            }
> > +        }
> > +
> > +        rc = libxl__xs_transaction_commit(gc, &xt);
> > +        if (!rc) break;
> > +        if (rc < 0) goto out;
> > +        isretry = true;
> > +    }
> > +
> > +    rc = 0;
> > +out:
> > +    libxl__xs_transaction_abort(gc, &xt);
> > +    return rc;
> > +}
> > +
> >   /*   libxl__sshm_do_map -- map pages into slave's physmap
> >    *
> >    *   This functions maps
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xenproject.org
> https://lists.xenproject.org/mailman/listinfo/xen-devel



-- 
Regards,

Oleksandr Tyshchenko

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH v8 0/8] Allow setting up shared memory areas between VMs from xl config files
  2018-10-19 21:06 ` [PATCH v8 0/8] Allow setting up shared memory areas between VMs from xl config files Stefano Stabellini
@ 2018-10-24 14:15   ` Oleksandr Tyshchenko
  2018-10-24 14:25     ` Julien Grall
  0 siblings, 1 reply; 46+ messages in thread
From: Oleksandr Tyshchenko @ 2018-10-24 14:15 UTC (permalink / raw)
  To: Stefano Stabellini
  Cc: Ian Jackson, Julien Grall, Wei Liu, blackskygg, Xen Devel

Hi, Stefano

On Sat, Oct 20, 2018 at 12:07 AM Stefano Stabellini
<sstabellini@kernel.org> wrote:
>
> Hi Wei,
>
> Any chances you can review this series soon? I would love for it to be
> merged before the 4.12 code freeze.
>
> FYI I am making progress upstreaming the device tree binding, they asked
> for a very minor change, which I am happy to include in the next version
> of the series:
>
> https://marc.info/?l=devicetree&m=153989826510707&w=2
>
>
> On Mon, 8 Oct 2018, Stefano Stabellini wrote:
> > Hi,
> >
> > This series implements a new xl config entry. Users can use the new
> > config entry to statically setup shared memory areas among VMs that
> > don't have grant table support so that they could communicate with each
> > other through the static shared memory areas.
> >
> > It was originally developed by Zhongze, I am just updating the last few
> > issued that were address during the last round of reviews in January. I
> > tested the feature on ARM and works fine.
> >
> > Cheers,
> >
> > Stefano
> >
> >
> > Main changes in v8:
> > - remove "add xen,dmabuf nodes" patch
> > - add "map reserved-memory regions as normal memory in dom0" patch
> > - send new device tree binding request to devicetree.org
> >
> >
> >
> > The following changes since commit 85b00385827e4e061b2ff38b549c03d0f1e66b6a:
> >
> >   xen/sched: Drop set_current_state() (2018-10-08 18:34:55 +0100)
> >
> > are available in the git repository at:
> >
> >   http://xenbits.xenproject.org/git-http/people/sstabellini/xen-unstable.git share_mem-v8
> >
> > for you to fetch changes up to fc00e34ce9924b5915deacd881ae7cd6888510ba:
> >
> >   xen/arm: map reserved-memory regions as normal memory in dom0 (2018-10-08 16:34:09 -0700)
> >
> > ----------------------------------------------------------------
> > Stefano Stabellini (2):
> >       xen/arm: export shared memory regions as reserved-memory on device tree
> >       xen/arm: map reserved-memory regions as normal memory in dom0
> >
> > Zhongze Liu (6):
> >       xen: xsm: flask: introduce XENMAPSPACE_gmfn_share for memory sharing
> >       libxl: introduce a new structure to represent static shared memory regions
> >       libxl: support mapping static shared memory areas during domain creation
> >       libxl: support unmapping static shared memory areas during domain destruction
> >       libxl:xl: add parsing code to parse "libxl_static_sshm" from xl config files
> >       docs: documentation about static shared memory regions
> >
> >  docs/man/xl-static-shm-configuration.pod.5 | 264 +++++++++++++++
> >  docs/man/xl.cfg.pod.5.in                   |   8 +
> >  docs/misc/xenstore-paths.markdown          |  47 +++
> >  tools/flask/policy/modules/xen.if          |   2 +
> >  tools/libxl/Makefile                       |   5 +-
> >  tools/libxl/libxl.h                        |   8 +
> >  tools/libxl/libxl_arch.h                   |   8 +-
> >  tools/libxl/libxl_arm.c                    |  76 ++++-
> >  tools/libxl/libxl_create.c                 |  27 ++
> >  tools/libxl/libxl_dom.c                    |   2 +-
> >  tools/libxl/libxl_domain.c                 |   5 +
> >  tools/libxl/libxl_internal.h               |  16 +
> >  tools/libxl/libxl_sshm.c                   | 512 +++++++++++++++++++++++++++++
> >  tools/libxl/libxl_types.idl                |  32 +-
> >  tools/libxl/libxl_x86.c                    |  21 +-
> >  tools/libxl/libxlu_sshm.c                  | 206 ++++++++++++
> >  tools/libxl/libxlutil.h                    |   6 +
> >  tools/xl/xl_parse.c                        |  25 +-
> >  xen/arch/arm/domain_build.c                |   7 +
> >  xen/arch/arm/mm.c                          |   7 +-
> >  xen/include/public/memory.h                |   8 +
> >  xen/include/xsm/dummy.h                    |  14 +
> >  xen/include/xsm/xsm.h                      |   6 +
> >  xen/xsm/dummy.c                            |   1 +
> >  xen/xsm/flask/hooks.c                      |   9 +
> >  xen/xsm/flask/policy/access_vectors        |   5 +
> >  26 files changed, 1315 insertions(+), 12 deletions(-)
> >  create mode 100644 docs/man/xl-static-shm-configuration.pod.5
> >  create mode 100644 tools/libxl/libxl_sshm.c
> >  create mode 100644 tools/libxl/libxlu_sshm.c
> >
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xenproject.org
> https://lists.xenproject.org/mailman/listinfo/xen-devel

I have a question regarding the format of the "begin", "size" and "offset"
properties. Wouldn't it be better to operate on page numbers rather than
addresses, as is done for "iomem"?

I see that in many functions (libxl_sshm.c) these addresses are first
transformed into page numbers. Moreover, if the Xen shared memory feature
can only deal with 4K page-aligned addresses, why do we let the user set
arbitrary addresses and then check them against the alignment rules?
The only place where we actually need addresses is the device tree code
where we insert the memory regions (and there we could transform page
numbers into addresses on the fly).
Or did I miss something?

-- 
Regards,

Oleksandr Tyshchenko

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH v8 0/8] Allow setting up shared memory areas between VMs from xl config files
  2018-10-24 14:15   ` Oleksandr Tyshchenko
@ 2018-10-24 14:25     ` Julien Grall
  2018-10-30 11:11       ` Oleksandr Tyshchenko
  0 siblings, 1 reply; 46+ messages in thread
From: Julien Grall @ 2018-10-24 14:25 UTC (permalink / raw)
  To: Oleksandr Tyshchenko, Stefano Stabellini
  Cc: Ian Jackson, Wei Liu, blackskygg, Xen Devel



On 10/24/18 3:15 PM, Oleksandr Tyshchenko wrote:
> Hi, Stefano
> 
> On Sat, Oct 20, 2018 at 12:07 AM Stefano Stabellini
> <sstabellini@kernel.org> wrote:
>>
>> Hi Wei,
>>
>> Any chances you can review this series soon? I would love for it to be
>> merged before the 4.12 code freeze.
>>
>> FYI I am making progress upstreaming the device tree binding, they asked
>> for a very minor change, which I am happy to include in the next version
>> of the series:
>>
>> https://marc.info/?l=devicetree&m=153989826510707&w=2
>>
>>
>> On Mon, 8 Oct 2018, Stefano Stabellini wrote:
>>> Hi,
>>>
>>> This series implements a new xl config entry. Users can use the new
>>> config entry to statically setup shared memory areas among VMs that
>>> don't have grant table support so that they could communicate with each
>>> other through the static shared memory areas.
>>>
>>> It was originally developed by Zhongze, I am just updating the last few
>>> issued that were address during the last round of reviews in January. I
>>> tested the feature on ARM and works fine.
>>>
>>> Cheers,
>>>
>>> Stefano
>>>
>>>
>>> Main changes in v8:
>>> - remove "add xen,dmabuf nodes" patch
>>> - add "map reserved-memory regions as normal memory in dom0" patch
>>> - send new device tree binding request to devicetree.org
>>>
>>>
>>>
>>> The following changes since commit 85b00385827e4e061b2ff38b549c03d0f1e66b6a:
>>>
>>>    xen/sched: Drop set_current_state() (2018-10-08 18:34:55 +0100)
>>>
>>> are available in the git repository at:
>>>
>>>    http://xenbits.xenproject.org/git-http/people/sstabellini/xen-unstable.git share_mem-v8
>>>
>>> for you to fetch changes up to fc00e34ce9924b5915deacd881ae7cd6888510ba:
>>>
>>>    xen/arm: map reserved-memory regions as normal memory in dom0 (2018-10-08 16:34:09 -0700)
>>>
>>> ----------------------------------------------------------------
>>> Stefano Stabellini (2):
>>>        xen/arm: export shared memory regions as reserved-memory on device tree
>>>        xen/arm: map reserved-memory regions as normal memory in dom0
>>>
>>> Zhongze Liu (6):
>>>        xen: xsm: flask: introduce XENMAPSPACE_gmfn_share for memory sharing
>>>        libxl: introduce a new structure to represent static shared memory regions
>>>        libxl: support mapping static shared memory areas during domain creation
>>>        libxl: support unmapping static shared memory areas during domain destruction
>>>        libxl:xl: add parsing code to parse "libxl_static_sshm" from xl config files
>>>        docs: documentation about static shared memory regions
>>>
>>>   docs/man/xl-static-shm-configuration.pod.5 | 264 +++++++++++++++
>>>   docs/man/xl.cfg.pod.5.in                   |   8 +
>>>   docs/misc/xenstore-paths.markdown          |  47 +++
>>>   tools/flask/policy/modules/xen.if          |   2 +
>>>   tools/libxl/Makefile                       |   5 +-
>>>   tools/libxl/libxl.h                        |   8 +
>>>   tools/libxl/libxl_arch.h                   |   8 +-
>>>   tools/libxl/libxl_arm.c                    |  76 ++++-
>>>   tools/libxl/libxl_create.c                 |  27 ++
>>>   tools/libxl/libxl_dom.c                    |   2 +-
>>>   tools/libxl/libxl_domain.c                 |   5 +
>>>   tools/libxl/libxl_internal.h               |  16 +
>>>   tools/libxl/libxl_sshm.c                   | 512 +++++++++++++++++++++++++++++
>>>   tools/libxl/libxl_types.idl                |  32 +-
>>>   tools/libxl/libxl_x86.c                    |  21 +-
>>>   tools/libxl/libxlu_sshm.c                  | 206 ++++++++++++
>>>   tools/libxl/libxlutil.h                    |   6 +
>>>   tools/xl/xl_parse.c                        |  25 +-
>>>   xen/arch/arm/domain_build.c                |   7 +
>>>   xen/arch/arm/mm.c                          |   7 +-
>>>   xen/include/public/memory.h                |   8 +
>>>   xen/include/xsm/dummy.h                    |  14 +
>>>   xen/include/xsm/xsm.h                      |   6 +
>>>   xen/xsm/dummy.c                            |   1 +
>>>   xen/xsm/flask/hooks.c                      |   9 +
>>>   xen/xsm/flask/policy/access_vectors        |   5 +
>>>   26 files changed, 1315 insertions(+), 12 deletions(-)
>>>   create mode 100644 docs/man/xl-static-shm-configuration.pod.5
>>>   create mode 100644 tools/libxl/libxl_sshm.c
>>>   create mode 100644 tools/libxl/libxlu_sshm.c
>>>
>>
>> _______________________________________________
>> Xen-devel mailing list
>> Xen-devel@lists.xenproject.org
>> https://lists.xenproject.org/mailman/listinfo/xen-devel
> 
> I have question regarding format of "begin", "size" and "offset" properties.
> Wouldn't be better to operate with page numbers rather than addresses
> like it is done for "iomem"?
> 
> I see that in many functions (libxl_sshm.c) these addresses are being
> transformed into page numbers the first.
> Moreover if the Xen shared memory feature can deal with 4K page
> aligned addresses only, why we provide
> possibility for user to set any addresses and then check them for
> alignment rules?

Arm supports multiple page granularities (4K, 16K, 64K). Xen always uses 
4K today, but a guest can use any page granularity. This makes it quite 
confusing which page granularity would be used here.

Also, we may want to boot Xen with a different granularity in the future. 
So it is best to keep the addresses agnostic to the page granularity.
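
A tiny standalone example (not part of the series) of why frame numbers
are ambiguous while byte addresses are not:

    /* Illustration only: the same frame number names three different
     * byte addresses depending on the assumed page granularity. */
    #include <inttypes.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t frame = 0x48000;

        printf("4K:  0x%" PRIx64 "\n", frame << 12); /* 0x48000000  */
        printf("16K: 0x%" PRIx64 "\n", frame << 14); /* 0x120000000 */
        printf("64K: 0x%" PRIx64 "\n", frame << 16); /* 0x480000000 */
        return 0;
    }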

Cheers,

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH v8 0/8] Allow setting up shared memory areas between VMs from xl config files
  2018-10-24 14:25     ` Julien Grall
@ 2018-10-30 11:11       ` Oleksandr Tyshchenko
  0 siblings, 0 replies; 46+ messages in thread
From: Oleksandr Tyshchenko @ 2018-10-30 11:11 UTC (permalink / raw)
  To: Julien Grall
  Cc: Ian Jackson, Stefano Stabellini, Wei Liu, Zhongze Liu, Xen Devel

Hi, Julien.

On Wed, Oct 24, 2018 at 5:25 PM Julien Grall <julien.grall@arm.com> wrote:
>
>
>
> On 10/24/18 3:15 PM, Oleksandr Tyshchenko wrote:
> > Hi, Stefano
> >
> > On Sat, Oct 20, 2018 at 12:07 AM Stefano Stabellini
> > <sstabellini@kernel.org> wrote:
> >>
> >> Hi Wei,
> >>
> >> Any chances you can review this series soon? I would love for it to be
> >> merged before the 4.12 code freeze.
> >>
> >> FYI I am making progress upstreaming the device tree binding, they asked
> >> for a very minor change, which I am happy to include in the next version
> >> of the series:
> >>
> >> https://marc.info/?l=devicetree&m=153989826510707&w=2
> >>
> >>
> >> On Mon, 8 Oct 2018, Stefano Stabellini wrote:
> >>> Hi,
> >>>
> >>> This series implements a new xl config entry. Users can use the new
> >>> config entry to statically setup shared memory areas among VMs that
> >>> don't have grant table support so that they could communicate with each
> >>> other through the static shared memory areas.
> >>>
> >>> It was originally developed by Zhongze, I am just updating the last few
> >>> issued that were address during the last round of reviews in January. I
> >>> tested the feature on ARM and works fine.
> >>>
> >>> Cheers,
> >>>
> >>> Stefano
> >>>
> >>>
> >>> Main changes in v8:
> >>> - remove "add xen,dmabuf nodes" patch
> >>> - add "map reserved-memory regions as normal memory in dom0" patch
> >>> - send new device tree binding request to devicetree.org
> >>>
> >>>
> >>>
> >>> The following changes since commit 85b00385827e4e061b2ff38b549c03d0f1e66b6a:
> >>>
> >>>    xen/sched: Drop set_current_state() (2018-10-08 18:34:55 +0100)
> >>>
> >>> are available in the git repository at:
> >>>
> >>>    http://xenbits.xenproject.org/git-http/people/sstabellini/xen-unstable.git share_mem-v8
> >>>
> >>> for you to fetch changes up to fc00e34ce9924b5915deacd881ae7cd6888510ba:
> >>>
> >>>    xen/arm: map reserved-memory regions as normal memory in dom0 (2018-10-08 16:34:09 -0700)
> >>>
> >>> ----------------------------------------------------------------
> >>> Stefano Stabellini (2):
> >>>        xen/arm: export shared memory regions as reserved-memory on device tree
> >>>        xen/arm: map reserved-memory regions as normal memory in dom0
> >>>
> >>> Zhongze Liu (6):
> >>>        xen: xsm: flask: introduce XENMAPSPACE_gmfn_share for memory sharing
> >>>        libxl: introduce a new structure to represent static shared memory regions
> >>>        libxl: support mapping static shared memory areas during domain creation
> >>>        libxl: support unmapping static shared memory areas during domain destruction
> >>>        libxl:xl: add parsing code to parse "libxl_static_sshm" from xl config files
> >>>        docs: documentation about static shared memory regions
> >>>
> >>>   docs/man/xl-static-shm-configuration.pod.5 | 264 +++++++++++++++
> >>>   docs/man/xl.cfg.pod.5.in                   |   8 +
> >>>   docs/misc/xenstore-paths.markdown          |  47 +++
> >>>   tools/flask/policy/modules/xen.if          |   2 +
> >>>   tools/libxl/Makefile                       |   5 +-
> >>>   tools/libxl/libxl.h                        |   8 +
> >>>   tools/libxl/libxl_arch.h                   |   8 +-
> >>>   tools/libxl/libxl_arm.c                    |  76 ++++-
> >>>   tools/libxl/libxl_create.c                 |  27 ++
> >>>   tools/libxl/libxl_dom.c                    |   2 +-
> >>>   tools/libxl/libxl_domain.c                 |   5 +
> >>>   tools/libxl/libxl_internal.h               |  16 +
> >>>   tools/libxl/libxl_sshm.c                   | 512 +++++++++++++++++++++++++++++
> >>>   tools/libxl/libxl_types.idl                |  32 +-
> >>>   tools/libxl/libxl_x86.c                    |  21 +-
> >>>   tools/libxl/libxlu_sshm.c                  | 206 ++++++++++++
> >>>   tools/libxl/libxlutil.h                    |   6 +
> >>>   tools/xl/xl_parse.c                        |  25 +-
> >>>   xen/arch/arm/domain_build.c                |   7 +
> >>>   xen/arch/arm/mm.c                          |   7 +-
> >>>   xen/include/public/memory.h                |   8 +
> >>>   xen/include/xsm/dummy.h                    |  14 +
> >>>   xen/include/xsm/xsm.h                      |   6 +
> >>>   xen/xsm/dummy.c                            |   1 +
> >>>   xen/xsm/flask/hooks.c                      |   9 +
> >>>   xen/xsm/flask/policy/access_vectors        |   5 +
> >>>   26 files changed, 1315 insertions(+), 12 deletions(-)
> >>>   create mode 100644 docs/man/xl-static-shm-configuration.pod.5
> >>>   create mode 100644 tools/libxl/libxl_sshm.c
> >>>   create mode 100644 tools/libxl/libxlu_sshm.c
> >>>
> >>
> >> _______________________________________________
> >> Xen-devel mailing list
> >> Xen-devel@lists.xenproject.org
> >> https://lists.xenproject.org/mailman/listinfo/xen-devel
> >
> > I have question regarding format of "begin", "size" and "offset" properties.
> > Wouldn't be better to operate with page numbers rather than addresses
> > like it is done for "iomem"?
> >
> > I see that in many functions (libxl_sshm.c) these addresses are being
> > transformed into page numbers the first.
> > Moreover if the Xen shared memory feature can deal with 4K page
> > aligned addresses only, why we provide
> > possibility for user to set any addresses and then check them for
> > alignment rules?
>
> Arm supports multiple page granularity (4K, 16K, 64K). Xen is always
> using 4K today, but guest can use any page granularity. This makes quite
> confusing what is the page granularity used here.
>
> Also, we may want to boot Xen with different granularity in the future.
> So the best is to keep the address agnostic to the page granularity.
Thank you. Now I understand the reason.

>
> Cheers,
>
> --
> Julien Grall



--
Regards,

Oleksandr Tyshchenko

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH v8 6/8] docs: documentation about static shared memory regions
  2018-10-08 23:37 ` [PATCH v8 6/8] docs: documentation about static shared memory regions Stefano Stabellini
@ 2018-10-30 15:17   ` Ian Jackson
  2018-10-30 18:58     ` Stefano Stabellini
  0 siblings, 1 reply; 46+ messages in thread
From: Ian Jackson @ 2018-10-30 15:17 UTC (permalink / raw)
  To: Stefano Stabellini
  Cc: Stefano Stabellini, wei.liu2, blackskygg, ian.jackson, xen-devel,
	julien.grall

Stefano Stabellini writes ("[PATCH v8 6/8] docs: documentation about static shared memory regions"):
> Author: Zhongze Liu <blackskygg@gmail.com>
> 
> Add docs to document the motivation, usage, use cases and other
> relevant information about the static shared memory feature.
> 
> This is for the proposal "Allow setting up shared memory areas between VMs
> from xl config file". See:

Thanks.  I'm reviewing this series and I obviously needed to start
with this patch so I could see what was going on.

> diff --git a/docs/man/xl-static-shm-configuration.pod.5 b/docs/man/xl-static-shm-configuration.pod.5
> new file mode 100644
> index 0000000..81ff3f1
> --- /dev/null
> +++ b/docs/man/xl-static-shm-configuration.pod.5
...
> +* Uniquely identified by a string that is no longer than 128 characters, which
> +is called an B<identifier> in this document.
> +
> +* Backed by exactly one domain, which is called a B<master> domain, and all
> +the other domains who are also sharing this region are called B<slave>s.

Would it be possible to rename these concepts ?  Nowadays it is
generally preferable to avoid master/slave terminology.  Perhaps
`owning' and `borrowing' domains ?  I'm not sure exactly what the
precise relationship is.  It would be good to explain that by
explaining what the semantic difference is between the roles of the
owning and borrowing domains.

> +More formally, the string is a series of comma-separated keyword/value
> +pairs. Each parameter may be specified at most once. Default values apply if
> +the parameter is not specified.

Thanks for the formal syntax specification.

> +=item Description
> +
> +The unique identifier of the shared memory region.
> +
> +Every identifier could appear only once in each xl config file.

You need to specify in what scope this identifier is unique.
Obviously it is unique within an owning domain; but is it unique within
the host ?

Looking at the config syntax I think it is unique within the host,
because the borrowing domain config syntax does not specify the owning
domain.  So the statement
  Every identifier could appear only once in each xl config file.
is rather more limited than it could be.

Err, wait.  I have just noticed something.  You are saying a domain
cannot take on the slave role multiple times for the same ID
(presumably, with different offsets).

Why this limitation ?  It seems artificial (and is perhaps due to the
libxl xenstore control data structure).  I guess it's liveable with if
we don't bake it into the guest ABI or the config syntax.

> +=item B<begin>/B<size>
> +=item Description
> +The boundaries of the shared memory area.
> +=item Supported values
> +
> +Same with B<offset>.

Units, please.

Also, which memory namespace (address space) is this in ?  I think
guest [pseudo]physical.  (Does that mean that this feature is not ever
going to be available for PV such as x86 PV?)  Please state the
address space, anyway - ideally with an appropriate reference to some
architectural document which defines the various address spaces,
although I suspect there is no such document :-(.

The effect of this facility on the guest ABI needs to be stated.  I
think that specifying this stuff provides (as far as the guests are
concerned) a static area of guest pseudophysical space which is shared
between the various guests.

And there is no built-in way provided for the guest to discover that
this has happened ?  The guest is just supposed to know ?  If so that
seems unfortunate.  It would be nice to advertise this somehow.

> +=item Supported values
> +
> +Decimals or hexadecimals with a prefix "0x", and should be the multiple of the
> +hypervisor page granularity (currently 4K on both ARM and x86).

Is it not also necessary to avoid certain guest pseudophysical areas
because they are reserved, or used by Xen or the tools for RAM, or
something ?

> +=item Description
> +
> +The backing area would be taken from one domain, which we will mark as
> +the "master domain", and this domain should be created prior to any
> +other slave domains that depend on it. The master's shared memory range
> +is NOT allocated in addition to its regular memory. Hence, it is usually
> +a good idea to choose a subrange of the regular guest memory allocation,
> +which starts at GUEST_RAM0_BASE, see xen/include/public/arch-arm.h.

This is not arch-neutral of course.

It is unfortunate that this setup requires the system administrator
and guest to have intimate knowledge of Xen's approach to memory
layout.  It would be nice to have a way to improve that but I guess
the result would be much more complicated.

(Please forgive me if this has already been extensively discussed.)

> +The "slave domain" maps the memory of the master. The address of said
> +mapping should not be overlapping with the normal memory allocation of
> +the slave domain.

Ah, here is the text which I looked for in begin/end.  Maybe this
discussion about memory selection would benefit from being moved out
into its own part of the document.

In general this document seems to be structured around just the config
options and might benefit from a more explicit and longer `principles
of operation' or `description' where some of this text could go.

(I don't consider such a reorganisation critical to getting an ack for
this series though.)

> diff --git a/docs/misc/xenstore-paths.markdown b/docs/misc/xenstore-paths.markdown
> index 33d2819..6ebc324 100644
> --- a/docs/misc/xenstore-paths.markdown
> +++ b/docs/misc/xenstore-paths.markdown
> @@ -174,6 +174,14 @@ than this amount of RAM.
>  
>  The size of the video RAM this domain is configured with.
>  
> +#### ~/static_shm/[_a-zA-Z0-9]+/role = ("master"|"slave") []
> +
> +(Note: Currently, this will only appear on ARM guests.)
> +
> +The role of this domain in the static shared memory region whose id matches
> +the `[_a-zA-Z0-9]+` part in the path. (Described in the manpage
> +**xl-static-shm-configuration(5)**).

This implies that the scope of the `unique id' is at least any group
of domains which do any of this static memory sharing, but leaves open
the possibility of multiple groups using the same id.  But I think
that this is excluded by the config syntax.

Would it make any kind of semantic sense for the same region to be
borrowed multiple times, or for a domain to borrow a region from
itself ?

These may seem silly but this kind of generality often turns out to be
useful.  See for example the use of the Xen vbd protocol back to dom0
for running bootloaders.

> +* master: the domid of the backing domain.
> +* slaves: information about the slaves that are sharing the region, see
> +  ** /libxl/static_shm/[_a-zA-Z0-9]+/slaves/$DOMID/* ** below.
> +* usercnt: An integer. This is the reference count of the backing memory region,
> +  including the master domain itself. When this value reachies 0, the backing
> +  memory region will be freed.

Why is usercnt not equal to the count of
/libxl/static_shm/[_a-zA-Z0-9]+/slaves ?

If it is there is no need to store it separately, and doing so is
undesirable because it needlessly introduces the possibility of
illegal combinations of state.
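
To illustrate what I mean, a sketch of deriving the count on demand
instead of storing it (it reuses libxl__xs_directory() and GCSPRINTF()
which already appear elsewhere in this series; the exact error handling
is only indicative):

    /* Sketch: count the borrowers by listing the slaves/ directory
     * rather than maintaining a separate usercnt node. */
    unsigned int nb_slaves = 0;
    const char *slaves_path = GCSPRINTF("%s/slaves", sshm_path);

    if (!libxl__xs_directory(gc, xt, slaves_path, &nb_slaves) &&
        errno != ENOENT)
        return ERROR_FAIL;

    /* the region can be torn down when nb_slaves == 0 and the
     * owning domain is gone */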

> +#### /libxl/staitc_shm/[_a-zA-Z0-9]+/slaves/$DOMID/* []

Typo `staitc'.

Thanks,
Ian.


* Re: [PATCH v8 2/8] libxl: introduce a new structure to represent static shared memory regions
  2018-10-08 23:37 ` [PATCH v8 2/8] libxl: introduce a new structure to represent static shared memory regions Stefano Stabellini
@ 2018-10-30 15:20   ` Ian Jackson
  2018-10-30 19:32     ` Julien Grall
  0 siblings, 1 reply; 46+ messages in thread
From: Ian Jackson @ 2018-10-30 15:20 UTC (permalink / raw)
  To: Stefano Stabellini
  Cc: Stefano Stabellini, wei.liu2, blackskygg, ian.jackson, xen-devel,
	julien.grall

Stefano Stabellini writes ("[PATCH v8 2/8] libxl: introduce a new structure to represent static shared memory regions"):
> From: Zhongze Liu <blackskygg@gmail.com>
> 
> Author: Zhongze Liu <blackskygg@gmail.com>
> 
> Add a new structure to the IDL family to represent static shared memory regions
> as proposed in the proposal "Allow setting up shared memory areas between VMs
> from xl config file" (see [1]).
> 
> And deleted some trailing white spaces.

Can you please not add unrelated changes, even if they are only white
space changes ?  You can put them in a pre-patch and if you do that
please put my ack on the pre-patch :-).

> +libxl_sshm_cachepolicy = Enumeration("sshm_cachepolicy", [
> +    (-1, "UNKNOWN"),
> +    (0,  "ARM_NORMAL"),  # ARM policies should be < 32
> +    (32,  "X86_NORMAL"), # X86 policies should be >= 32
> +    ], init_val = "LIBXL_SSHM_CACHE_POLICY_UNKNOWN")

What ?  Why do these need separating like that ?  This is quite odd.

> +libxl_sshm_prot = Enumeration("sshm_prot", [
> +    (-1, "UNKNOWN"),
> +    (3,  "RW"),
> +    ], init_val = "LIBXL_SSHM_PROT_UNKNOWN")

Why 3 ?

> +libxl_sshm_role = Enumeration("sshm_role", [
> +    (-1, "UNKNOWN"),
> +    (0,  "MASTER"),
> +    (1,  "SLAVE"),
> +    ], init_val = "LIBXL_SSHM_ROLE_UNKNOWN")

See my comments on the docs patch about the master/slave terminology.

Wouldn't it be better for the UNKNOWN values to be 0 ?

That would eliminate a class of possible bugs (although we do try to
avoid them anyway) where the variables are not initialised.  It would
also avoid all of these extra init_val settings.

Thanks,
Ian.


* Re: [PATCH v8 3/8] libxl: support mapping static shared memory areas during domain creation
  2018-10-08 23:37 ` [PATCH v8 3/8] libxl: support mapping static shared memory areas during domain creation Stefano Stabellini
  2018-10-24 13:26   ` Oleksandr Tyshchenko
@ 2018-10-30 15:36   ` Ian Jackson
  2018-10-30 19:38     ` Julien Grall
  1 sibling, 1 reply; 46+ messages in thread
From: Ian Jackson @ 2018-10-30 15:36 UTC (permalink / raw)
  To: Stefano Stabellini
  Cc: Stefano Stabellini, julien.grall, wei.liu2, blackskygg, xen-devel

Stefano Stabellini writes ("[PATCH v8 3/8] libxl: support mapping static shared memory areas during domain creation"):
> +_hidden
> +bool libxl__arch_domain_support_sshm(const libxl_domain_build_info *b_info);
> +
> +_hidden
> +int libxl__arch_domain_sshm_cachepolicy_setdefault(libxl_static_shm *sshm);
> +
>  #if defined(__i386__) || defined(__x86_64__)
>  
>  #define LAPIC_BASE_ADDRESS  0xfee00000
> diff --git a/tools/libxl/libxl_arm.c b/tools/libxl/libxl_arm.c
> index 25dc3de..054ad58 100644
> --- a/tools/libxl/libxl_arm.c
> +++ b/tools/libxl/libxl_arm.c
> @@ -1138,6 +1138,21 @@ void libxl__arch_domain_build_info_setdefault(libxl__gc *gc,
>      libxl_domain_build_info_init_type(b_info, LIBXL_DOMAIN_TYPE_PVH);
>  }
>  
> +bool libxl__arch_domain_support_sshm(const libxl_domain_build_info *b_info)
> +{
> +    return true;
> +}
> +
> +int libxl__arch_domain_sshm_cachepolicy_setdefault(libxl_static_shm *sshm)
> +{
> +    if (sshm->cache_policy == LIBXL_SSHM_CACHEPOLICY_UNKNOWN)
> +        sshm->cache_policy = LIBXL_SSHM_CACHEPOLICY_ARM_NORMAL;
> +    if (sshm->cache_policy >= LIBXL_SSHM_CACHEPOLICY_X86_NORMAL)
> +        return ERROR_INVAL;

This cache policy stuff is odd.  I couldn't see it being used by the
hypervisor.  Why is it even there ?

The same question about prot.

>  /*
>   * Local variables:
>   * mode: C
> diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
> index 320dbed..45ae9e4 100644
> --- a/tools/libxl/libxl_create.c
> +++ b/tools/libxl/libxl_create.c
> @@ -513,6 +513,14 @@ int libxl__domain_build(libxl__gc *gc,
>          ret = ERROR_INVAL;
>          goto out;
>      }
> +
> +    /* The p2m has been setup, we could map the static shared memory now. */
> +    ret = libxl__sshm_add(gc, domid, d_config->sshms, d_config->num_sshms);
> +    if (ret != 0) {
> +        LOG(ERROR, "failed to map static shared memory");
> +        goto out;
> +    }
> +
>      ret = libxl__build_post(gc, domid, info, state, vments, localents);
>  out:
>      return ret;
> @@ -949,6 +957,25 @@ static void initiate_domain_create(libxl__egc *egc,
>          goto error_out;
>      }
>  
> +    if (d_config->num_sshms != 0 &&
> +        !libxl__arch_domain_support_sshm(&d_config->b_info)) {
> +        LOGD(ERROR, domid, "static_shm is not supported by this domain type.");
> +        ret = ERROR_INVAL;
> +        goto error_out;
> +    }
> +
> +    for (i = 0; i < d_config->num_sshms; ++i) {
> +        ret = libxl__sshm_setdefault(gc, domid, &d_config->sshms[i]);
> +        if (ret) {
> +            LOGD(ERROR, domid, "Unable to set defaults for static shm");
> +            goto error_out;
> +        }
> +    }
> +
> +    ret = libxl__sshm_check_overlap(gc, domid,
> +                                    d_config->sshms, d_config->num_sshms);
> +    if (ret) goto error_out;
> +
>      ret = libxl__domain_make(gc, d_config, &dcs->build_state, &domid);
>      if (ret) {
>          LOGD(ERROR, domid, "cannot make domain: %d", ret);
> diff --git a/tools/libxl/libxl_internal.h b/tools/libxl/libxl_internal.h
> index 43947b1..6f31a3d 100644
> --- a/tools/libxl/libxl_internal.h
> +++ b/tools/libxl/libxl_internal.h
> @@ -4434,6 +4434,20 @@ static inline const char *libxl__qemu_qmp_path(libxl__gc *gc, int domid)
>  #endif
>  
>  /*
> + * Set up static shared ram pages for HVM domains to communicate
> + *
> + * This function should only be called after the memory map is constructed
> + * and before any further memory access.
> + */
> +_hidden int libxl__sshm_add(libxl__gc *gc, uint32_t domid,
> +                            libxl_static_shm *sshm, int len);
> +
> +_hidden int libxl__sshm_check_overlap(libxl__gc *gc, uint32_t domid,
> +                                      libxl_static_shm *sshms, int len);
> +_hidden int libxl__sshm_setdefault(libxl__gc *gc, uint32_t domid,
> +                                   libxl_static_shm *sshm);
> +
> +/*
>   * Local variables:
>   * mode: C
>   * c-basic-offset: 4
> diff --git a/tools/libxl/libxl_sshm.c b/tools/libxl/libxl_sshm.c
> new file mode 100644
> index 0000000..f61b80c
> --- /dev/null
> +++ b/tools/libxl/libxl_sshm.c
> @@ -0,0 +1,405 @@
> +#include "libxl_osdeps.h"
> +#include "libxl_internal.h"
> +#include "libxl_arch.h"
> +
> +#define SSHM_PATH(id) GCSPRINTF("/libxl/static_shm/%s", id)
> +
> +#define SSHM_ERROR(domid, sshmid, f, ...)                               \
> +    LOGD(ERROR, domid, "static_shm id = %s: " f, sshmid, ##__VA_ARGS__)
> +
> +
> +/* Set default values for libxl_static_shm */
> +int libxl__sshm_setdefault(libxl__gc *gc, uint32_t domid,
> +                           libxl_static_shm *sshm)
> +{
> +    int rc;
> +
> +    if (sshm->role != LIBXL_SSHM_ROLE_SLAVE &&
> +        sshm->role != LIBXL_SSHM_ROLE_MASTER)
> +        return ERROR_INVAL;
> +    if (sshm->begin & ~XC_PAGE_MASK ||
> +        sshm->size & ~XC_PAGE_MASK ||
> +        (sshm->offset != LIBXL_SSHM_RANGE_UNKNOWN &&
> +        sshm->offset & ~XC_PAGE_MASK)) {
> +        SSHM_ERROR(domid, sshm->id,
> +                   "begin/size/offset is not a multiple of 4K");
> +        return ERROR_INVAL;
> +    }
> +
> +    /* role-specific checks */
> +    if (sshm->role == LIBXL_SSHM_ROLE_SLAVE) {
> +        if (sshm->offset == LIBXL_SSHM_RANGE_UNKNOWN)
> +            sshm->offset = 0;
> +        if (sshm->cache_policy != LIBXL_SSHM_CACHEPOLICY_UNKNOWN) {
> +            SSHM_ERROR(domid, sshm->id,
> +                       "cache_policy is only applicable to master domains");
> +            rc = ERROR_INVAL;
> +            goto out;
> +        }
> +    } else {
> +        if (sshm->offset != LIBXL_SSHM_RANGE_UNKNOWN) {
> +            SSHM_ERROR(domid, sshm->id,
> +                       "offset is only applicable to slave domains");
> +            rc = ERROR_INVAL;
> +            goto out;
> +        }
> +
> +        rc = libxl__arch_domain_sshm_cachepolicy_setdefault(sshm);
> +        if (rc) {
> +            SSHM_ERROR(domid, sshm->id,
> +                       "cache policy not supported on this platform");
> +            goto out;
> +        }
> +    }
> +
> +    rc = 0;
> +out:
> +    return rc;
> +}
> +
> +/* Comparator for sorting sshm ranges by sshm->begin */
> +static int sshm_range_cmp(const void *a, const void *b)
> +{
> +    libxl_static_shm *const *sshma = a, *const *sshmb = b;
> +    return (*sshma)->begin > (*sshmb)->begin ? 1 : -1;
> +}
> +
> +/* Check if the sshm slave configs in @sshm overlap */
> +int libxl__sshm_check_overlap(libxl__gc *gc, uint32_t domid,
> +                                     libxl_static_shm *sshms, int len)
> +{
> +
> +    const libxl_static_shm **slave_sshms = NULL;
> +    int num_slaves;
> +    int i;
> +
> +    if (!len) return 0;
> +
> +    slave_sshms = libxl__calloc(gc, len, sizeof(slave_sshms[0]));
> +    num_slaves = 0;
> +    for (i = 0; i < len; ++i) {
> +        if (sshms[i].role == LIBXL_SSHM_ROLE_SLAVE)
> +            slave_sshms[num_slaves++] = sshms + i;
> +    }
> +    qsort(slave_sshms, num_slaves, sizeof(slave_sshms[0]), sshm_range_cmp);
> +
> +    for (i = 0; i < num_slaves - 1; ++i) {
> +        if (slave_sshms[i+1]->begin <
> +            slave_sshms[i]->begin + slave_sshms[i]->size) {
> +            SSHM_ERROR(domid, slave_sshms[i+1]->id, "slave ranges overlap.");
> +            return ERROR_INVAL;
> +        }
> +    }
> +
> +    return 0;
> +}
> +
> +/*   libxl__sshm_do_map -- map pages into slave's physmap
> + *
> + *   This functions maps
> + *     master gfn: [@msshm->begin + @sshm->offset,
> + *                  @msshm->begin + @msshm->size + @sshm->offset)
> + *   into
> + *     slave gfn: [@sshm->begin, @sshm->begin + @sshm->size)
> + *
> + *   The gfns of the pages that are successfully mapped will be stored
> + *   in @mapped, and the number of the gfns will be stored in @nmapped.
> + *
> + *   The caller has to guarantee that all the values are page-aligned.
> + */
> +static int libxl__sshm_do_map(libxl__gc *gc, uint32_t mid, uint32_t sid,
> +                              libxl_static_shm *sshm, libxl_static_shm *msshm,
> +                              xen_pfn_t *mapped, unsigned int *nmapped)
> +{
> +    int rc;
> +    int i;
> +    xen_pfn_t num_mpages, num_spages, num_success, offset;
> +    int *errs;
> +    xen_ulong_t *idxs;
> +    xen_pfn_t *gpfns;
> +
> +    num_mpages = (msshm->size) >> XC_PAGE_SHIFT;
> +    num_spages = (sshm->size) >> XC_PAGE_SHIFT;
> +    offset = sshm->offset >> XC_PAGE_SHIFT;
> +
> +    /* Check range. Test offset < mpages first to avoid overflow */
> +    if ((offset >= num_mpages) || (num_mpages - offset < num_spages)) {
> +        SSHM_ERROR(sid, sshm->id, "exceeds master's address space.");
> +        rc = ERROR_INVAL;
> +        goto out;
> +    }
> +
> +    /* fill out the gfn's and do the mapping */
> +    errs = libxl__calloc(gc, num_spages, sizeof(int));
> +    idxs = libxl__calloc(gc, num_spages, sizeof(xen_ulong_t));
> +    gpfns = libxl__calloc(gc, num_spages, sizeof(xen_pfn_t));
> +    for (i = 0; i < num_spages; i++) {
> +        idxs[i] = (msshm->begin >> XC_PAGE_SHIFT) + offset + i;
> +        gpfns[i]= (sshm->begin >> XC_PAGE_SHIFT) + i;
> +    }
> +    rc = xc_domain_add_to_physmap_batch(CTX->xch,
> +                                        sid, mid,
> +                                        XENMAPSPACE_gmfn_share,
> +                                        num_spages,
> +                                        idxs, gpfns, errs);
> +
> +    num_success = 0;
> +    for (i = 0; i < num_spages; i++) {
> +        if (errs[i]) {
> +            SSHM_ERROR(sid, sshm->id,
> +                       "can't map at address 0x%"PRIx64".",
> +                       gpfns[i] << XC_PAGE_SHIFT);
> +            rc = ERROR_FAIL;
> +        } else {
> +            mapped[num_success++] = gpfns[i];
> +        }
> +    }
> +    *nmapped = num_success;
> +    if (rc) goto out;
> +
> +    rc = 0;
> +out:
> +    return rc;
> +}
> +
> +/* Xenstore ops are protected by a transaction */
> +static int libxl__sshm_incref(libxl__gc *gc, xs_transaction_t xt,
> +                              const char *sshm_path)
> +{
> +    int rc, count;
> +    const char *count_path, *count_string;
> +
> +    count_path = GCSPRINTF("%s/usercnt", sshm_path);
> +    rc = libxl__xs_read_checked(gc, xt, count_path, &count_string);
> +    if (rc) goto out;
> +    count = atoi(count_string);

libxl__xs_read_checked may set count_string to NULL if it was ENOENT.
But see my comments in the docs patch: why is this count here at all ?

> +    count_string = GCSPRINTF("%d", count+1);
> +    rc = libxl__xs_write_checked(gc, xt, count_path, count_string);
> +    if (rc) goto out;
> +
> +    rc = 0;
> +out:
> +    return rc;
> +}
> +
> +static int libxl__sshm_add_slave(libxl__gc *gc, uint32_t domid,
> +                                 libxl_static_shm *sshm)
> +{
> +    int rc, i;
> +    const char *sshm_path, *slave_path;
> +    const char *dom_path, *dom_sshm_path, *dom_role_path;
> +    const char *xs_value;
> +    char *ents[9];
> +    libxl_static_shm master_sshm;
> +    uint32_t master_domid;
> +    xen_pfn_t *mapped;
> +    unsigned int nmapped = 0;
> +    xs_transaction_t xt = XBT_NULL;
> +    bool isretry;
> +
> +    sshm_path = SSHM_PATH(sshm->id);
> +    slave_path = GCSPRINTF("%s/slaves/%"PRIu32, sshm_path, domid);
> +    dom_path = libxl__xs_get_dompath(gc, domid);
> +    /* the domain should be in xenstore by now */
> +    assert(dom_path);
> +    dom_sshm_path = GCSPRINTF("%s/static_shm/%s", dom_path, sshm->id);
> +    dom_role_path = GCSPRINTF("%s/role", dom_sshm_path);
> +
> +    /* prepare the slave xenstore entries */
> +    ents[0] = "begin";
> +    ents[1] = GCSPRINTF("0x%"PRIx64, sshm->begin);
> +    ents[2] = "size";
> +    ents[3] = GCSPRINTF("0x%"PRIx64, sshm->size);
> +    ents[4] = "offset";
> +    ents[5] = GCSPRINTF("0x%"PRIx64, sshm->offset);
> +    ents[6] = "prot";
> +    ents[7] = libxl__strdup(gc, libxl_sshm_prot_to_string(sshm->prot));
> +    ents[8] = NULL;

I know this ents[<fixed integer>] approach is very common elsewhere
but it is IMO poor.  Please use the flexarray mechanism instead, in
particular flexarray_append_pair.  You may want a macro to encapsulate
the formulaic copying of numeric values from sshm to xenstore.
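
Something along these lines, perhaps (a sketch only, assuming the
flexarray helpers libxl already uses for building xenstore key/value
lists):

    /* Sketch: build the borrower's entries with a flexarray instead
     * of ents[<fixed integer>].  The sshm fields and GCSPRINTF are as
     * in the patch. */
    flexarray_t *ents = flexarray_make(gc, 16, 1);

    flexarray_append_pair(ents, "begin",
                          GCSPRINTF("0x%"PRIx64, sshm->begin));
    flexarray_append_pair(ents, "size",
                          GCSPRINTF("0x%"PRIx64, sshm->size));
    flexarray_append_pair(ents, "offset",
                          GCSPRINTF("0x%"PRIx64, sshm->offset));
    flexarray_append_pair(ents, "prot",
                          (char *)libxl_sshm_prot_to_string(sshm->prot));

    /* the array can then be handed to the existing xenstore helper
     * in place of the fixed-size ents[] list */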

> +    mapped = libxl__calloc(gc, sshm->size >> XC_PAGE_SHIFT, sizeof(xen_pfn_t));
> +
> +    isretry = false;
> +    for (;;) {
> +        rc = libxl__xs_transaction_start(gc, &xt);
> +        if (rc) goto out;
> +
> +        if (!libxl__xs_read(gc, xt, sshm_path)) {

Error handling is wrong.  Should be libxl__xs_read_checked.

> +        /* every ID can appear in each domain at most once */
> +        if (libxl__xs_read(gc, xt, dom_sshm_path)) {
> +            SSHM_ERROR(domid, sshm->id,

Again.

> +        /* look at the master info and see if we could do the mapping */
> +        rc = libxl__xs_read_checked(gc, xt,
> +                                    GCSPRINTF("%s/prot", sshm_path),
> +                                    &xs_value);
> +        if (rc) goto out;
> +        libxl_sshm_prot_from_string(xs_value, &master_sshm.prot);
> +
> +        rc = libxl__xs_read_checked(gc, xt,
> +                                    GCSPRINTF("%s/begin", sshm_path),
> +                                    &xs_value);
> +        if (rc) goto out;
> +        master_sshm.begin = strtoull(xs_value, NULL, 16);
> +
> +        rc = libxl__xs_read_checked(gc, xt,
> +                                    GCSPRINTF("%s/size", sshm_path),
> +                                    &xs_value);
> +        if (rc) goto out;
> +        master_sshm.size = strtoull(xs_value, NULL, 16);
> +
> +        rc = libxl__xs_read_checked(gc, xt,
> +                                    GCSPRINTF("%s/master", sshm_path),
> +                                    &xs_value);
> +        if (rc) goto out;
> +        master_domid = strtoull(xs_value, NULL, 16);

I think this formulaic code needs to be macro-ified or something.

> +        /* check if the slave is asking too much permission */
> +        if (master_sshm.prot < sshm->prot) {

If you do decide to keep prot, what if it is a bitmap ?  Protections
are not necessarily a total order, so I don't think `<' can work.

This kind of future ABI/API issue is one reason why it might be better
to delete the handling of prot, particularly since it isn't wired up
to anything.
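
If prot is kept and does become a bitmap one day, the check would have
to be a subset test rather than an ordering comparison, e.g. (sketch,
assuming prot is a plain bitmask of permission bits):

    /* the slave must not request any permission bit the master
     * did not grant */
    if (sshm->prot & ~master_sshm.prot) {
        SSHM_ERROR(domid, sshm->id, "slave is asking too much permission.");
        rc = ERROR_INVAL;
        goto out;
    }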

> +            SSHM_ERROR(domid, sshm->id, "slave is asking too much permission.");
> +            rc = ERROR_INVAL;
> +            goto out;
> +        }
> +
> +        /* all checks passed, do the job */
> +        if (!isretry) {
> +            rc = libxl__sshm_do_map(gc, master_domid, domid,
> +                                    sshm, &master_sshm,
> +                                    mapped, &nmapped);
> +            if (rc) goto out;
> +        }

Please don't do it like this.  I think there is a race here: you are
mapping the memory before you have committed to xenstore; you may
therefore be mapping it after someone else has committed a deletion or
reconfiguration of the sshm region.

You should do the mapping after the xenstore transaction has
successfully committed.  That way every existing mapping is
represented in xenstore.  (Unmapped mappings may be in xenstore too
but I think that is fine.)
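
In other words, something with the following shape (a sketch only,
using the helpers already present in this patch):

    /* Sketch of the suggested ordering: commit the xenstore records
     * first, then map, so every mapping that exists in the guest is
     * already represented in xenstore. */
    for (;;) {
        rc = libxl__xs_transaction_start(gc, &xt);
        if (rc) goto out;

        /* ... read the master info, write the slave entries ... */

        rc = libxl__xs_transaction_commit(gc, &xt);
        if (!rc) break;        /* committed */
        if (rc < 0) goto out;  /* error; positive rc means retry */
    }

    rc = libxl__sshm_do_map(gc, master_domid, domid, sshm, &master_sshm,
                            mapped, &nmapped);
    if (rc) goto out;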

> +static int libxl__sshm_add_master(libxl__gc *gc, uint32_t domid,
> +                                  libxl_static_shm *sshm)
> +{
...
> +    ents[9] = libxl__strdup(gc,
> +                            libxl_sshm_cachepolicy_to_string(sshm->cache_policy));

I don't know why you need the strdup here.

> +        if (!libxl__xs_read(gc, xt, sshm_path)) {

You need to use libxl__xs_read_checked here too.

> diff --git a/tools/libxl/libxl_x86.c b/tools/libxl/libxl_x86.c
> index 8b6759c..3e70a69 100644
> --- a/tools/libxl/libxl_x86.c
> +++ b/tools/libxl/libxl_x86.c
> @@ -619,6 +619,25 @@ void libxl__arch_domain_build_info_setdefault(libxl__gc *gc,
>      libxl_defbool_setdefault(&b_info->acpi, true);
>  }
>  
> +bool libxl__arch_domain_support_sshm(const libxl_domain_build_info *b_info)
> +{
> +    /* FIXME: Mark this as unsupported for calling p2m_add_foreign on two
> +     * DomU's is currently not allowd on x86, see the comments in
> +     * x86/mm/p2m.c: p2m_add_foreign.
> +     */
> +     return false;

I don't think an unsupported feature warrants a `FIXME'.  Also, can
you change the `Mark this as unsupported' to `This is unsupported' ?


I think this and the destruction patch need to be combined.

Thanks,
Ian.


* Re: [PATCH v8 4/8] libxl: support unmapping static shared memory areas during domain destruction
  2018-10-08 23:37 ` [PATCH v8 4/8] libxl: support unmapping static shared memory areas during domain destruction Stefano Stabellini
  2018-10-24 11:35   ` Oleksandr Tyshchenko
  2018-10-24 11:48   ` Oleksandr Andrushchenko
@ 2018-10-30 15:55   ` Ian Jackson
  2 siblings, 0 replies; 46+ messages in thread
From: Ian Jackson @ 2018-10-30 15:55 UTC (permalink / raw)
  To: Stefano Stabellini
  Cc: Stefano Stabellini, julien.grall, wei.liu2, blackskygg, xen-devel

Stefano Stabellini writes ("[PATCH v8 4/8] libxl: support unmapping static shared memory areas during domain destruction"):
> Add libxl__sshm_del to unmap static shared memory areas mapped by
> libxl__sshm_add during domain creation. The unmapping process is:

This whole part should be in a comment in the code, I think.

> * For a master: decrease the refcount of the sshm region, if the refcount
>   reaches 0, cleanup the whole sshm path.

Does this not prevent actual domain destruction, if there are still
slaves ?

> * For a slave:
>   1) unmap the shared pages, and cleanup related xs entries. If the
>      system works normally, all the shared pages will be unmapped, so there
>      won't be page leaks. In case of errors, the unmapping process will go
>      on and unmap all the other pages that can be unmapped, so the other
>      pages won't be leaked, either.

When might pages be leaked ?

>   2) Decrease the refcount of the sshm region, if the refcount reaches
>      0, cleanup the whole sshm path.

> diff --git a/tools/libxl/libxl_domain.c b/tools/libxl/libxl_domain.c
> index 3377bba..3f7ffa6 100644
> --- a/tools/libxl/libxl_domain.c
> +++ b/tools/libxl/libxl_domain.c
> @@ -1060,6 +1060,11 @@ void libxl__destroy_domid(libxl__egc *egc, libxl__destroy_domid_state *dis)
>          goto out;
>      }
>  
> +    rc = libxl__sshm_del(gc, domid);
> +    if (rc) {
> +        LOGD(ERROR, domid, "Deleting static shm failed.");

Please explain (preferably with a code comment) what a failure here
implies and how it is a legal and recoverable state.

> +    count_path = GCSPRINTF("%s/usercnt", sshm_path);
> +    if (libxl__xs_read_checked(gc, xt, count_path, &count_string))
> +        return;

Again, count_string may be 0, but I think you can probably do away
with this.

> +static void libxl__sshm_do_unmap(libxl__gc *gc, uint32_t domid, const char *id,
> +                                 uint64_t begin, uint64_t size)
> +{
> +    uint64_t end;
> +    begin >>= XC_PAGE_SHIFT;
> +    size >>= XC_PAGE_SHIFT;
> +    end = begin + size;
> +    for (; begin < end; ++begin) {
> +        if (xc_domain_remove_from_physmap(CTX->xch, domid, begin)) {
> +            SSHM_ERROR(domid, id,
> +                       "unable to unmap shared page at 0x%"PRIx64".",
> +                       begin);
> +        }

Is this idempotent ?

> +    slave_path = GCSPRINTF("%s/slaves/%"PRIu32, SSHM_PATH(id), domid);
> +
> +    begin_str = libxl__xs_read(gc, xt, GCSPRINTF("%s/begin", slave_path));
> +    size_str = libxl__xs_read(gc, xt, GCSPRINTF("%s/size", slave_path));
> +    begin = strtoull(begin_str, NULL, 16);
> +    size = strtoull(size_str, NULL, 16);

Maybe this formulaic retrieval code could be combined with that for
adding a borrow.

> +        if (libxl__xs_read(gc, xt, dom_sshm_path)) {

Again, needs to be libxl__xs_read_checked.  But:

> +            sshm_ents = libxl__xs_directory(gc, xt, dom_sshm_path, &sshm_num);
> +            if (!sshm_ents) continue;

I don't see why you can't just call libxl__xs_directory without
xs_read.  Also, you need to check errno, after libxl__xs_directory
fails.
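
I.e. something like this (sketch only; the surrounding loop and names
are as in the patch):

    /* Sketch: list the directory directly; libxl__xs_directory
     * reports failure by returning NULL, so distinguish "nothing
     * there" from a real error via errno. */
    sshm_ents = libxl__xs_directory(gc, xt, dom_sshm_path, &sshm_num);
    if (!sshm_ents) {
        if (errno == ENOENT) continue;   /* nothing to clean up */
        rc = ERROR_FAIL;
        goto out;
    }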

> +            for (i = 0; i < sshm_num; ++i) {
> +                role = libxl__xs_read(gc, xt,
> +                                      GCSPRINTF("%s/%s/role",
> +                                                dom_sshm_path,
> +                                                sshm_ents[i]));
> +                assert(role);
> +                if (!strncmp(role, "slave", 5))

This should explicitly handle other values for `role', probably by
failing.  Also the error handling assert is poor.
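
For instance (a sketch, reusing the names from the patch; the exact
error path is only indicative):

    /* Sketch: fail loudly on a missing or unexpected role instead of
     * asserting and prefix-matching. */
    if (!role) {
        rc = ERROR_FAIL;
        goto out;
    }
    if (!strcmp(role, "slave")) {
        libxl__sshm_del_slave(gc, xt, domid, sshm_ents[i], isretry);
    } else if (strcmp(role, "master") != 0) {
        SSHM_ERROR(domid, sshm_ents[i], "unexpected role '%s'", role);
        rc = ERROR_FAIL;
        goto out;
    }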

> +                    libxl__sshm_del_slave(gc, xt, domid, sshm_ents[i], isretry);
> +
> +                libxl__sshm_decref(gc, xt, SSHM_PATH(sshm_ents[i]));
> +            }
> +        }
> +
> +        rc = libxl__xs_transaction_commit(gc, &xt);
> +        if (!rc) break;
> +        if (rc < 0) goto out;
> +        isretry = true;

isretry seems to be a dead variable.

Thanks,
Ian.


* Re: [PATCH v8 7/8] xen/arm: export shared memory regions as reserved-memory on device tree
  2018-10-08 23:37 ` [PATCH v8 7/8] xen/arm: export shared memory regions as reserved-memory on device tree Stefano Stabellini
  2018-10-24 11:57   ` Oleksandr Andrushchenko
@ 2018-10-30 15:58   ` Ian Jackson
  2018-10-30 19:50     ` Julien Grall
  1 sibling, 1 reply; 46+ messages in thread
From: Ian Jackson @ 2018-10-30 15:58 UTC (permalink / raw)
  To: Stefano Stabellini
  Cc: Stefano Stabellini, julien.grall, wei.liu2, blackskygg, xen-devel

Stefano Stabellini writes ("[PATCH v8 7/8] xen/arm: export shared memory regions as reserved-memory on device tree"):
> Shared memory regions need to be advertised to the guest. Fortunately, a
> device tree binding for special memory regions already exist:
> reserved-memory.

Oh!  Here is the guest ABI.

But it's not documented.

> diff --git a/tools/libxl/libxl_arm.c b/tools/libxl/libxl_arm.c
> index 054ad58..c1624c0 100644
> --- a/tools/libxl/libxl_arm.c
> +++ b/tools/libxl/libxl_arm.c
> @@ -436,6 +436,58 @@ static int make_memory_nodes(libxl__gc *gc, void *fdt,
>      return 0;
>  }
>  
> +static int make_reserved_nodes(libxl__gc *gc, void *fdt,
> +                               libxl_domain_config *d_config)
> +{
> +    int res, i;
> +    const char *name;
> +
> +    if (d_config->num_sshms == 0)
> +        return 0;
> +
> +    res = fdt_begin_node(fdt, "reserved-memory");
> +    if (res) return res;
> +
> +    res = fdt_property_cell(fdt, "#address-cells", GUEST_ROOT_ADDRESS_CELLS);
> +    if (res) return res;
> +
> +    res = fdt_property_cell(fdt, "#size-cells", GUEST_ROOT_SIZE_CELLS);
> +    if (res) return res;
> +
> +    res = fdt_property(fdt, "ranges", NULL, 0);
> +    if (res) return res;

The line lengths here are very long.  Also it's quite formulaic.  I
see make_psci_node is quite like that too.  IDK whether a local macro
would help.

> +    for (i = 0; i < d_config->num_sshms; i++) {
> +        uint64_t start = d_config->sshms[i].begin;
> +
> +        if (d_config->sshms[i].role == LIBXL_SSHM_ROLE_SLAVE)
> +            start += d_config->sshms[i].offset;

Why is d_config->sshms[i].offset not 0 for the owner ?
You could do this unconditionally.

> -static int libxl__prepare_dtb(libxl__gc *gc, libxl_domain_build_info *info,
> +static int libxl__prepare_dtb(libxl__gc *gc, libxl_domain_config *d_config,

Can we have this NFC change and its consequences as a pre-patch ?

Thanks,
Ian.


* Re: [PATCH v8 6/8] docs: documentation about static shared memory regions
  2018-10-30 15:17   ` Ian Jackson
@ 2018-10-30 18:58     ` Stefano Stabellini
  2018-10-30 19:25       ` Julien Grall
  0 siblings, 1 reply; 46+ messages in thread
From: Stefano Stabellini @ 2018-10-30 18:58 UTC (permalink / raw)
  To: Ian Jackson
  Cc: Stefano Stabellini, Stefano Stabellini, wei.liu2, blackskygg,
	lars.kurth, ian.jackson, george.dunlap, xen-devel, julien.grall

Hi Ian,

I am replying here, but I am addressing also your other emails. This is
v8 of the series, the first version was sent in Aug 2017, and the design
document, including the master/slave terminology, was accepted even
before that.

At this stage I am only looking for minor suggestions for improvements.
I am not going to address a v1-style review at this point.


On Tue, 30 Oct 2018, Ian Jackson wrote:
> > diff --git a/docs/man/xl-static-shm-configuration.pod.5 b/docs/man/xl-static-shm-configuration.pod.5
> > new file mode 100644
> > index 0000000..81ff3f1
> > --- /dev/null
> > +++ b/docs/man/xl-static-shm-configuration.pod.5
> ...
> > +* Uniquely identified by a string that is no longer than 128 characters, which
> > +is called an B<identifier> in this document.
> > +
> > +* Backed by exactly one domain, which is called a B<master> domain, and all
> > +the other domains who are also sharing this region are called B<slave>s.
> 
> Would it be possible to rename these concepts ?  Nowadays it is
> generally preferable to avoid master/slave terminology.  Perhaps
> `owning' and `borrowing' domains ?  I'm not sure exactly what the
> precise relationship is.  It would be good to explain that by
> explaining what the semantic difference is between the roles of the
> owning and borrowing domains.

I completely agree with you here, and getting rid of the master/slave
terminology would be nice, in retrospect, it was not a good choice. But
this is v8 of the series, and as discussed a few times, we encourage
reviewers to avoid this kind of requests at this stage.


* Re: [PATCH v8 6/8] docs: documentation about static shared memory regions
  2018-10-30 18:58     ` Stefano Stabellini
@ 2018-10-30 19:25       ` Julien Grall
  2018-10-30 22:43         ` Stefano Stabellini
  2018-10-31 12:11         ` Ian Jackson
  0 siblings, 2 replies; 46+ messages in thread
From: Julien Grall @ 2018-10-30 19:25 UTC (permalink / raw)
  To: Stefano Stabellini, Ian Jackson
  Cc: Stefano Stabellini, wei.liu2, blackskygg, lars.kurth,
	ian.jackson, george.dunlap, xen-devel



On 10/30/18 6:58 PM, Stefano Stabellini wrote:
> Hi Ian,
> 
> I am replying here, but I am addressing also your other emails. This is
> v8 of the series, the first version was sent in Aug 2017, and the design
> document, including the master/slave terminology, was accepted even
> before that.
> 
> At this stage I am only looking for minor suggestions for improvements.
> I am not going to address a v1-style review at this point.
> 
> 
> On Tue, 30 Oct 2018, Ian Jackson wrote:
>>> diff --git a/docs/man/xl-static-shm-configuration.pod.5 b/docs/man/xl-static-shm-configuration.pod.5
>>> new file mode 100644
>>> index 0000000..81ff3f1
>>> --- /dev/null
>>> +++ b/docs/man/xl-static-shm-configuration.pod.5
>> ...
>>> +* Uniquely identified by a string that is no longer than 128 characters, which
>>> +is called an B<identifier> in this document.
>>> +
>>> +* Backed by exactly one domain, which is called a B<master> domain, and all
>>> +the other domains who are also sharing this region are called B<slave>s.
>>
>> Would it be possible to rename these concepts ?  Nowadays it is
>> generally preferable to avoid master/slave terminology.  Perhaps
>> `owning' and `borrowing' domains ?  I'm not sure exactly what the
>> precise relationship is.  It would be good to explain that by
>> explaining what the semantic difference is between the roles of the
>> owning and borrowing domains.
> 
> I completely agree with you here, and getting rid of the master/slave
> terminology would be nice, in retrospect, it was not a good choice. But
> this is v8 of the series, and as discussed a few times, we encourage
> reviewers to avoid this kind of requests at this stage.

While I agree that the design document has been accepted in Aug 2017,
the last thing we want is to add more potentially offensive naming in 
Xen. It should not take too much effort to do the renaming (I am happy to help 
here).

Cheers,

-- 
Julien Grall


* Re: [PATCH v8 2/8] libxl: introduce a new structure to represent static shared memory regions
  2018-10-30 15:20   ` Ian Jackson
@ 2018-10-30 19:32     ` Julien Grall
  0 siblings, 0 replies; 46+ messages in thread
From: Julien Grall @ 2018-10-30 19:32 UTC (permalink / raw)
  To: Ian Jackson, Stefano Stabellini
  Cc: ian.jackson, Stefano Stabellini, wei.liu2, blackskygg, xen-devel

Hi Ian,

On 10/30/18 3:20 PM, Ian Jackson wrote:
> Stefano Stabellini writes ("[PATCH v8 2/8] libxl: introduce a new structure to represent static shared memory regions"):
>> From: Zhongze Liu <blackskygg@gmail.com>
>>
>> Author: Zhongze Liu <blackskygg@gmail.com>
>>
>> Add a new structure to the IDL family to represent static shared memory regions
>> as proposed in the proposal "Allow setting up shared memory areas between VMs
>> from xl config file" (see [1]).
>>
>> And deleted some trailing white spaces.
> 
> Can you please not add unrelated changes, even if they are only white
> space changes ?  You can put them in a pre-patch and if you do that
> please put my ack on the pre-patch :-).
> 
>> +libxl_sshm_cachepolicy = Enumeration("sshm_cachepolicy", [
>> +    (-1, "UNKNOWN"),
>> +    (0,  "ARM_NORMAL"),  # ARM policies should be < 32
>> +    (32,  "X86_NORMAL"), # X86 policies should be >= 32
>> +    ], init_val = "LIBXL_SSHM_CACHE_POLICY_UNKNOWN")
> 
> What ?  Why do these need separating like that ?  This is quite odd.

Caching attributes are different between Arm and x86. So we need to 
provide different values for them.

Cheers,

-- 
Julien Grall


* Re: [PATCH v8 3/8] libxl: support mapping static shared memory areas during domain creation
  2018-10-30 15:36   ` Ian Jackson
@ 2018-10-30 19:38     ` Julien Grall
  2018-10-31 11:24       ` Ian Jackson
  0 siblings, 1 reply; 46+ messages in thread
From: Julien Grall @ 2018-10-30 19:38 UTC (permalink / raw)
  To: Ian Jackson, Stefano Stabellini
  Cc: Stefano Stabellini, wei.liu2, blackskygg, xen-devel



On 10/30/18 3:36 PM, Ian Jackson wrote:
> Stefano Stabellini writes ("[PATCH v8 3/8] libxl: support mapping static shared memory areas during domain creation"):
>> +_hidden
>> +bool libxl__arch_domain_support_sshm(const libxl_domain_build_info *b_info);
>> +
>> +_hidden
>> +int libxl__arch_domain_sshm_cachepolicy_setdefault(libxl_static_shm *sshm);
>> +
>>   #if defined(__i386__) || defined(__x86_64__)
>>   
>>   #define LAPIC_BASE_ADDRESS  0xfee00000
>> diff --git a/tools/libxl/libxl_arm.c b/tools/libxl/libxl_arm.c
>> index 25dc3de..054ad58 100644
>> --- a/tools/libxl/libxl_arm.c
>> +++ b/tools/libxl/libxl_arm.c
>> @@ -1138,6 +1138,21 @@ void libxl__arch_domain_build_info_setdefault(libxl__gc *gc,
>>       libxl_domain_build_info_init_type(b_info, LIBXL_DOMAIN_TYPE_PVH);
>>   }
>>   
>> +bool libxl__arch_domain_support_sshm(const libxl_domain_build_info *b_info)
>> +{
>> +    return true;
>> +}
>> +
>> +int libxl__arch_domain_sshm_cachepolicy_setdefault(libxl_static_shm *sshm)
>> +{
>> +    if (sshm->cache_policy == LIBXL_SSHM_CACHEPOLICY_UNKNOWN)
>> +        sshm->cache_policy = LIBXL_SSHM_CACHEPOLICY_ARM_NORMAL;
>> +    if (sshm->cache_policy >= LIBXL_SSHM_CACHEPOLICY_X86_NORMAL)
>> +        return ERROR_INVAL;
> 
> This cache policy stuff is odd.  I couldn't see it being used by the
> hypervisor.  Why is it even there ?

We decided to not implement them in the hypervisor yet but still provide 
the xl interface for it. Would there be any issue to extend the 
interface later on?

Cheers,

-- 
Julien Grall


* Re: [PATCH v8 7/8] xen/arm: export shared memory regions as reserved-memory on device tree
  2018-10-30 15:58   ` Ian Jackson
@ 2018-10-30 19:50     ` Julien Grall
  2018-10-30 22:42       ` Stefano Stabellini
  2018-10-31 11:29       ` Ian Jackson
  0 siblings, 2 replies; 46+ messages in thread
From: Julien Grall @ 2018-10-30 19:50 UTC (permalink / raw)
  To: Ian Jackson, Stefano Stabellini
  Cc: Stefano Stabellini, wei.liu2, blackskygg, xen-devel

Hi,

On 10/30/18 3:58 PM, Ian Jackson wrote:
> Stefano Stabellini writes ("[PATCH v8 7/8] xen/arm: export shared memory regions as reserved-memory on device tree"):
>> Shared memory regions need to be advertised to the guest. Fortunately, a
>> device tree binding for special memory regions already exist:
>> reserved-memory.
> 
> Oh!  Here is the guest ABI.
> 
> But it's not documented.

Device-Tree bindings are documented in 
linux/Documentation/devicetree/bindings. Stefano sent it via the DT 
mailing list (see [1]).

Stefano, would you mind to add a pointer in the commit message?

> 
>> diff --git a/tools/libxl/libxl_arm.c b/tools/libxl/libxl_arm.c
>> index 054ad58..c1624c0 100644
>> --- a/tools/libxl/libxl_arm.c
>> +++ b/tools/libxl/libxl_arm.c
>> @@ -436,6 +436,58 @@ static int make_memory_nodes(libxl__gc *gc, void *fdt,
>>       return 0;
>>   }
>>   
>> +static int make_reserved_nodes(libxl__gc *gc, void *fdt,
>> +                               libxl_domain_config *d_config)
>> +{
>> +    int res, i;
>> +    const char *name;
>> +
>> +    if (d_config->num_sshms == 0)
>> +        return 0;
>> +
>> +    res = fdt_begin_node(fdt, "reserved-memory");
>> +    if (res) return res;
>> +
>> +    res = fdt_property_cell(fdt, "#address-cells", GUEST_ROOT_ADDRESS_CELLS);
>> +    if (res) return res;
>> +
>> +    res = fdt_property_cell(fdt, "#size-cells", GUEST_ROOT_SIZE_CELLS);
>> +    if (res) return res;
>> +
>> +    res = fdt_property(fdt, "ranges", NULL, 0);
>> +    if (res) return res;
> 
> The line lengths here are very long. 

They should still be under 75-80 characters a line, as mandated by the 
coding style. Is that an issue?

> Also it's quite formulaic.  I
> see make_psci_node is quite like that too.  IDK whether a local macro
> would help.

Possibly, but that is not really related to this patch itself. Can we 
treat that as a clean-up?

> 
>> +    for (i = 0; i < d_config->num_sshms; i++) {
>> +        uint64_t start = d_config->sshms[i].begin;
>> +
>> +        if (d_config->sshms[i].role == LIBXL_SSHM_ROLE_SLAVE)
>> +            start += d_config->sshms[i].offset;
> 
> Why is d_config->sshms[i].offset not 0 for the owner ?
> You could do this unconditionally.

From the documentation:

"Can only appear when B<role> = slave. If set, the address mapping will 
not start from the beginning the backing memory region, but from the middle
(B<offset> bytes away from the beginning) of it. See the graph below:

With B<offset> = 0, the mapping will look like:

   backing memory region:  #########################################
                           |                       |
                           |                       |
                           |                       |
                           V                       V
   slave's shared region:  #########################

With B<offset> > 0:

   backing memory region:   #########################################
                            |<-- offset -->||                       |
                                            |                       |
                                            |                       |
                                            V                       V
   slave's memory region:                   #########################
"

This allows the configuration in the owning domain to be simpler, as 
you can share a big region and split it between multiple domains.
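
As a concrete, made-up example of that (the list name and keyword
spellings are taken from the manpage under review, so treat the exact
syntax as indicative only), one 2MB owner region split between two
borrowers might look like:

    # owning domain: one 2MB backing region
    static_shm = [ "id=ID1, role=master, begin=0x48000000, size=0x200000" ]

    # first borrower: maps the first half
    static_shm = [ "id=ID1, role=slave, begin=0x70000000, size=0x100000, offset=0x0" ]

    # second borrower: maps the second half
    static_shm = [ "id=ID1, role=slave, begin=0x70000000, size=0x100000, offset=0x100000" ]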

> 
>> -static int libxl__prepare_dtb(libxl__gc *gc, libxl_domain_build_info *info,
>> +static int libxl__prepare_dtb(libxl__gc *gc, libxl_domain_config *d_config,
> 
> Can we have this NFC change and its consequences as a pre-patch ?

Cheers,


[1] https://lists.xen.org/archives/html/xen-devel/2018-10/msg01812.html


-- 
Julien Grall


* Re: [PATCH v8 7/8] xen/arm: export shared memory regions as reserved-memory on device tree
  2018-10-30 19:50     ` Julien Grall
@ 2018-10-30 22:42       ` Stefano Stabellini
  2018-10-31 11:29       ` Ian Jackson
  1 sibling, 0 replies; 46+ messages in thread
From: Stefano Stabellini @ 2018-10-30 22:42 UTC (permalink / raw)
  To: Julien Grall
  Cc: Stefano Stabellini, Stefano Stabellini, wei.liu2, blackskygg,
	xen-devel, Ian Jackson

On Tue, 30 Oct 2018, Julien Grall wrote:
> Hi,
> 
> On 10/30/18 3:58 PM, Ian Jackson wrote:
> > Stefano Stabellini writes ("[PATCH v8 7/8] xen/arm: export shared memory
> > regions as reserved-memory on device tree"):
> > > Shared memory regions need to be advertised to the guest. Fortunately, a
> > > device tree binding for special memory regions already exist:
> > > reserved-memory.
> > 
> > Oh!  Here is the guest ABI.
> > 
> > But it's not documented.
> 
> Device-Tree bindings are documented in linux/Documentation/devicetree/bindings.
> Stefano sent it via the DT mailing list (see [1]).
> 
> Stefano, would you mind to add a pointer in the commit message?

Yes, that was my plan. I am waiting for it to be accepted (should happen
shortly) and I'll refresh this patch.


* Re: [PATCH v8 6/8] docs: documentation about static shared memory regions
  2018-10-30 19:25       ` Julien Grall
@ 2018-10-30 22:43         ` Stefano Stabellini
  2018-10-31 12:11         ` Ian Jackson
  1 sibling, 0 replies; 46+ messages in thread
From: Stefano Stabellini @ 2018-10-30 22:43 UTC (permalink / raw)
  To: Julien Grall
  Cc: Stefano Stabellini, Stefano Stabellini, wei.liu2, blackskygg,
	lars.kurth, ian.jackson, george.dunlap, xen-devel, Ian Jackson

On Tue, 30 Oct 2018, Julien Grall wrote:
> On 10/30/18 6:58 PM, Stefano Stabellini wrote:
> > Hi Ian,
> > 
> > I am replying here, but I am addressing also your other emails. This is
> > v8 of the series, the first version was sent in Aug 2017, and the design
> > document, including the master/slave terminology, was accepted even
> > before that.
> > 
> > At this stage I am only looking for minor suggestions for improvements.
> > I am not going to address a v1-style review at this point.
> > 
> > 
> > On Tue, 30 Oct 2018, Ian Jackson wrote:
> > > > diff --git a/docs/man/xl-static-shm-configuration.pod.5
> > > > b/docs/man/xl-static-shm-configuration.pod.5
> > > > new file mode 100644
> > > > index 0000000..81ff3f1
> > > > --- /dev/null
> > > > +++ b/docs/man/xl-static-shm-configuration.pod.5
> > > ...
> > > > +* Uniquely identified by a string that is no longer than 128
> > > > characters, which
> > > > +is called an B<identifier> in this document.
> > > > +
> > > > +* Backed by exactly one domain, which is called a B<master> domain, and
> > > > all
> > > > +the other domains who are also sharing this region are called
> > > > B<slave>s.
> > > 
> > > Would it be possible to rename these concepts ?  Nowadays it is
> > > generally preferable to avoid master/slave terminology.  Perhaps
> > > `owning' and `borrowing' domains ?  I'm not sure exactly what the
> > > precise relationship is.  It would be good to explain that by
> > > explaining what the semantic difference is between the roles of the
> > > owning and borrowing domains.
> > 
> > I completely agree with you here, and getting rid of the master/slave
> > terminology would be nice, in retrospect, it was not a good choice. But
> > this is v8 of the series, and as discussed a few times, we encourage
> > reviewers to avoid this kind of requests at this stage.
> 
> While I agree that the design document has been accepted in Aug 2017,
> the last thing we want is to add more potentially offensive naming in Xen. It
> should not take too much effort to do the renaming (I am happy to help here).

I can do it. It is worth making this change to avoid even more
master/slave stuff in our tree, all the more so because it doesn't make
things any clearer in this case.


* Re: [PATCH v8 3/8] libxl: support mapping static shared memory areas during domain creation
  2018-10-30 19:38     ` Julien Grall
@ 2018-10-31 11:24       ` Ian Jackson
  0 siblings, 0 replies; 46+ messages in thread
From: Ian Jackson @ 2018-10-31 11:24 UTC (permalink / raw)
  To: Julien Grall
  Cc: Stefano Stabellini, Stefano Stabellini, wei.liu2, blackskygg, xen-devel

Julien Grall writes ("Re: [PATCH v8 3/8] libxl: support mapping static shared memory areas during domain creation"):
> On 10/30/18 3:36 PM, Ian Jackson wrote:
> > This cache policy stuff is odd.  I couldn't see it being used by the
> > hypervisor.  Why is it even there ?
> 
> We decided to not implement them in the hypervisor yet but still provide 
> the xl interface for it. Would there be any issue to extend the 
> interface later on?

If you haven't implemented the hypervisor part, how do you know what
it will be like ?  Doing things the way you are means committing to
API design decisions very early.

Ian.


* Re: [PATCH v8 7/8] xen/arm: export shared memory regions as reserved-memory on device tree
  2018-10-30 19:50     ` Julien Grall
  2018-10-30 22:42       ` Stefano Stabellini
@ 2018-10-31 11:29       ` Ian Jackson
  1 sibling, 0 replies; 46+ messages in thread
From: Ian Jackson @ 2018-10-31 11:29 UTC (permalink / raw)
  To: Julien Grall
  Cc: Stefano Stabellini, Stefano Stabellini, wei.liu2, blackskygg, xen-devel

Julien Grall writes ("Re: [PATCH v8 7/8] xen/arm: export shared memory regions as reserved-memory on device tree"):
> On 10/30/18 3:58 PM, Ian Jackson wrote:
> > But it's not documented.
> 
> Device-Tree bindings are documented in 
> linux/Documentation/devicetree/bindings. Stefano sent it via the DT mailing 
> list (see [1]).

Oh great.

> Stefano, would you mind to add a pointer in the commit message?

Can we have a pointer in the xen.git documentation to the relevant
linux.git documentation ?  That would be better than just in the
commit message.

> > The line lengths here are very long. 
> 
> They should still be under 75-80 characters a line, as mandated by the 
> coding style. Is that an issue?

Hrm.  We recently tightened this up to specify precisely 75.  I'm not
going to argue the point here, so fine, leave this as it is.

> > Also it's quite formulaic.  I
> > see make_psci_node is quite like that too.  IDK whether a local macro
> > would help.
> 
> Possibly, but that is not really related to this patch itself. Can we 
> treat that as a clean-up?

This patch is adding more of this formulaic code.  I was asking whether
you thought it worth adding a macro for this specific new hunk.

> >> +    for (i = 0; i < d_config->num_sshms; i++) {
> >> +        uint64_t start = d_config->sshms[i].begin;
> >> +
> >> +        if (d_config->sshms[i].role == LIBXL_SSHM_ROLE_SLAVE)
> >> +            start += d_config->sshms[i].offset;
> > 
> > Why is d_config->sshms[i].offset not 0 for the owner ?
> > You could do this unconditionally.
> 
> From the documentation:
> 
> "Can only appear when B<role> = slave. If set, the address mapping will 
> not start from the beginning the backing memory region, but from the middle
> (B<offset> bytes away from the beginning) of it. See the graph below:

Yes.

But I think that sshms[i].offset ought always to be 0 if .role !=
LIBXL_SSHM_ROLE_SLAVE.  So the test for LIBXL_SSHM_ROLE_SLAVE is
redundant.  It would be simpler and more obvious to always add
.offset.
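
I.e. (sketch):

    /* with .offset guaranteed to be 0 for the owner, the role check
     * goes away */
    uint64_t start = d_config->sshms[i].begin + d_config->sshms[i].offset;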

Thanks,
Ian.


* Re: [PATCH v8 6/8] docs: documentation about static shared memory regions
  2018-10-30 19:25       ` Julien Grall
  2018-10-30 22:43         ` Stefano Stabellini
@ 2018-10-31 12:11         ` Ian Jackson
  2018-10-31 15:35           ` Stefano Stabellini
  1 sibling, 1 reply; 46+ messages in thread
From: Ian Jackson @ 2018-10-31 12:11 UTC (permalink / raw)
  To: Julien Grall
  Cc: Stefano Stabellini, Stefano Stabellini, wei.liu2, blackskygg,
	lars.kurth, ian.jackson, george.dunlap, xen-devel

Julien Grall writes ("Re: [PATCH v8 6/8] docs: documentation about static shared memory regions"):
> On 10/30/18 6:58 PM, Stefano Stabellini wrote:
> > I completely agree with you here, and getting rid of the master/slave
> > terminology would be nice, in retrospect, it was not a good choice. But
> > this is v8 of the series, and as discussed a few times, we encourage
> > reviewers to avoid this kind of requests at this stage.

Sorry I'm late to the party.

> While I agree that the design document has been accepted in Aug 2017,
> the last thing we want is to add more potentially offensive naming in 
> Xen. It should not take too much effort to do the renaming (I am happy to help 
> here).

Thanks for your support.  I am also happy to help.  I don't mind
whether this is done by the equivalent of filter-branch on the patch
series, or with a followup patch to rename everything.  I can
construct the followup patch if that would be welcome.

But we need to know what the new terminology should be.  Is `owner'
and `borrower' a good pairing ?  `Borrow' is perhaps not quite right
because it implies that the original owner no longer has it while it's
borrowed.  OTOH Rust has read-only borrows which work similarly so
borrowing in a way that doesn't exclude the original has at least some
precedent...

I see that the Linux DT document doesn't need to mention the role, so
we just need to fix the Xen tree.

Ian.


* Re: [PATCH v8 6/8] docs: documentation about static shared memory regions
  2018-10-31 12:11         ` Ian Jackson
@ 2018-10-31 15:35           ` Stefano Stabellini
  2018-11-02  9:21             ` George Dunlap
  0 siblings, 1 reply; 46+ messages in thread
From: Stefano Stabellini @ 2018-10-31 15:35 UTC (permalink / raw)
  To: Ian Jackson
  Cc: Stefano Stabellini, Stefano Stabellini, wei.liu2, blackskygg,
	lars.kurth, ian.jackson, george.dunlap, xen-devel, Julien Grall

On Wed, 31 Oct 2018, Ian Jackson wrote:
> Julien Grall writes ("Re: [PATCH v8 6/8] docs: documentation about static shared memory regions"):
> > On 10/30/18 6:58 PM, Stefano Stabellini wrote:
> > > I completely agree with you here, and getting rid of the master/slave
> > > terminology would be nice, in retrospect, it was not a good choice. But
> > > this is v8 of the series, and as discussed a few times, we encourage
> > > reviewers to avoid this kind of requests at this stage.
> 
> Sorry I'm late to the party.
> 
> > While I agree that the design document has been accepted in Aug 2017,
>> the last thing we want is to add more potentially offensive naming in 
>> Xen. It should not take too much effort to do the renaming (I am happy to help 
> > here).
> 
> Thanks for your support.  I am also happy to help.  I don't mind
> whether this is done by the equivalent of filter-branch on the patch
> series, or with a followup patch to rename everything.  I can
> construct the followup patch if that would be welcome.
> 
> But we need to know what the new terminology should be.  Is `owner'
> and `borrower' a good pairing ?  `Borrow' is perhaps not quite right
> because it implies that the original owner no longer has it while it's
> borrowed.  OTOH Rust has read-only borrows which work similarly so
> borrowing in a way that doesn't exclude the original has at least some
> precedent...
> 
> I see that the Linux DT document doesn't need to mention the role, so
> we just need to fix the Xen tree.

Other options I could think of:

proprietor / renter
benefactor / dependent

But I think owner/borrower is the best choice -- also considering that
"own" and "borrow" are taught in very basic English classes, anybody
should understand them well.


* Re: [PATCH v8 6/8] docs: documentation about static shared memory regions
  2018-10-31 15:35           ` Stefano Stabellini
@ 2018-11-02  9:21             ` George Dunlap
  2018-11-02 15:36               ` Ian Jackson
  0 siblings, 1 reply; 46+ messages in thread
From: George Dunlap @ 2018-11-02  9:21 UTC (permalink / raw)
  To: Stefano Stabellini
  Cc: Stefano Stabellini, Wei Liu, blackskygg, Lars Kurth, xen-devel,
	Julien Grall, Ian Jackson



> On Oct 31, 2018, at 3:35 PM, Stefano Stabellini <sstabellini@kernel.org> wrote:
> 
> On Wed, 31 Oct 2018, Ian Jackson wrote:
>> Julien Grall writes ("Re: [PATCH v8 6/8] docs: documentation about static shared memory regions"):
>>> On 10/30/18 6:58 PM, Stefano Stabellini wrote:
>>>> I completely agree with you here, and getting rid of the master/slave
>>>> terminology would be nice; in retrospect, it was not a good choice. But
>>>> this is v8 of the series, and as discussed a few times, we encourage
>>>> reviewers to avoid this kind of request at this stage.
>> 
>> Sorry I'm late to the party.
>> 
>>> While I agree that the design document has been accepted in Aug 2017,
>>> the last thing we want is to add more potentially offensive naming in
>>> Xen. It should not take too much to do the renaming (I am happy to help 
>>> here).
>> 
>> Thanks for your support.  I am also happy to help.  I don't mind
>> whether this is done by the equivalent of filter-branch on the patch
>> series, or with a followup patch to rename everything.  I can
>> construct the followup patch if that would be welcome.
>> 
>> But we need to know what the new terminology should be.  Is `owner'
>> and `borrower' a good pairing ?  `Borrow' is perhaps not quite right
>> because it implies that the original owner no longer has it while it's
>> borrowed.  OTOH Rust has read-only borrows which work similarly so
>> borrowing in a way that doesn't exclude the original has at least some
>> precedent...
>> 
>> I see that the Linux DT document doesn't need to mention the role, so
>> we just need to fix the Xen tree.
> 
> Other options I could think of:
> 
> proprietor / renter
> benefactor / dependent
> 
> But I think owner/borrower is the best choice -- also considering that
> "own" and "borrow" are taught in very basic English classes, anybody
> should understand them well.

master / slave was never the right image, because the non-owning domains aren’t being controlled by the owning domain.  But neither really is “borrower” — borrowing implies that the original owner doesn’t have it for some period of time.

Given that you have one owner and a number of other domains that use it at the same time, it’s a lot more like a pub or a cafe than an apartment or loan.  So “server / client” is probably the most accurate picture; if that’s considered too confusing, maybe “provider / consumer”, or something like that.

 -George


* Re: [PATCH v8 6/8] docs: documentation about static shared memory regions
  2018-11-02  9:21             ` George Dunlap
@ 2018-11-02 15:36               ` Ian Jackson
  0 siblings, 0 replies; 46+ messages in thread
From: Ian Jackson @ 2018-11-02 15:36 UTC (permalink / raw)
  To: George Dunlap
  Cc: Stefano Stabellini, Stefano Stabellini, Wei Liu, blackskygg,
	Lars Kurth, xen-devel, Julien Grall

George Dunlap writes ("Re: [PATCH v8 6/8] docs: documentation about static shared memory regions"):
> > On Oct 31, 2018, at 3:35 PM, Stefano Stabellini <sstabellini@kernel.org> wrote:
> > On Wed, 31 Oct 2018, Ian Jackson wrote:
> >> But we need to know what the new terminology should be.  Is `owner'
> >> and `borrower' a good pairing ?  `Borrow' is perhaps not quite right
> >> because it implies that the original owner no longer has it while it's
> >> borrowed.  OTOH Rust has read-only borrows which work similarly so
> >> borrowing in a way that doesn't exclude the original has at least some
> >> precedent...
... 
> > Other options I could think of:
> > 
> > proprietor / renter
> > benefactor / dependent

IMO these are definitely worse.

> > But I think owner/borrower is the best choice -- also considering that
> > "own" and "borrow" are taught in very basic English classes, anybody
> > should understand them well.

Right.

> master / slave was never the right image, because the non-owning domains aren’t being controlled by the owning domain.  But neither really is “borrower” — borrowing implies that the original owner doesn’t have it for some period of time.

See my comments about Rust.  I think this is OK.

> Given that you have one owner and a number of other domains that use it at the same time, it’s a lot more like a pub or a cafe than an apartment or loan.  So “server / client” is probably the most accurate picture; if that’s considered too confusing, maybe “provider / consumer”, or something like that.

Server/client implies that the owner does something active to handle
the client's wishes, whereas actually the owner just sits there and
the borrowers help themselves (well, libxl helps them).

Provider/consumer is OK but `consumer' can suggest the thing is used
up.  I think `borrower' is better than `consumer'.  `Owner' definitely
seems right for the owner side.

I think the final decision is up to Stefano.

Ian.


* Re: [PATCH v8 1/8] xen: xsm: flask: introduce XENMAPSPACE_gmfn_share for memory sharing
  2018-10-09  7:12   ` Jan Beulich
@ 2018-11-06 22:42     ` Stefano Stabellini
  2018-11-07  7:41       ` Jan Beulich
  0 siblings, 1 reply; 46+ messages in thread
From: Stefano Stabellini @ 2018-11-06 22:42 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Tim Deegan, Stefano Stabellini, Wei Liu, Sky Liu,
	Konrad Rzeszutek Wilk, George Dunlap, Andrew Cooper,
	Stefano Stabellini, Ian Jackson, xen-devel, Julien Grall,
	Daniel de Graaf

On Tue, 9 Oct 2018, Jan Beulich wrote:
> >>> On 09.10.18 at 01:37, <sstabellini@kernel.org> wrote:
> > --- a/xen/include/xsm/dummy.h
> > +++ b/xen/include/xsm/dummy.h
> > @@ -535,6 +535,20 @@ static XSM_INLINE int xsm_map_gmfn_foreign(XSM_DEFAULT_ARG struct domain *d, str
> >      return xsm_default_action(action, d, t);
> >  }
> >  
> > +/*
> > + * Be aware that this is not an exact default equivalence of its flask
> > + * variant which also checks if @d and @t "are allowed to share memory
> > + * pages", for now, we don't have a proper default equivalence of such a
> > + * check.
> > + */
> > +static XSM_INLINE int xsm_map_gmfn_share(XSM_DEFAULT_ARG struct domain *d,
> > +                                         struct domain *t)
> > +{
> > +    XSM_ASSERT_ACTION(XSM_TARGET);
> > +    return xsm_default_action(XSM_TARGET, current->domain, d) ?:
> > +           xsm_default_action(action, current->domain, t);
> > +}
> 
> Does this (specifically xsm/dummy.c) build with XSM enabled?
> Afaict "action" is going to be an undefined symbol in that case.

I tried it and it does build OK.


> > --- a/xen/xsm/flask/hooks.c
> > +++ b/xen/xsm/flask/hooks.c
> > @@ -1192,6 +1192,14 @@ static int flask_map_gmfn_foreign(struct domain *d, struct domain *t)
> >      return domain_has_perm(d, t, SECCLASS_MMU, MMU__MAP_READ | MMU__MAP_WRITE);
> >  }
> >  
> > +static int flask_map_gmfn_share(struct domain *d, struct domain *t)
> > +{
> > +    if ( current_has_perm(d, SECCLASS_MMU, MMU__MAP_READ | MMU__MAP_WRITE) )
> > +        return rc;
> > +    return current_has_perm(t, SECCLASS_MMU, MMU__MAP_READ | MMU__MAP_WRITE) ?:
> > +           domain_has_perm(d, t, SECCLASS_MMU, MMU__SHARE_MEM);
> > +}
> > +
> 
> Same here, for "rc". It looks to me as if the first two lines of the function
> body were wrongly left in place anyway. But please could you at least
> build-test with XSM enabled when you make changes to XSM code?

This is a mistake; you are right that the two lines need to be removed. I
did build-test it with XSM enabled.
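
For reference, with those two lines dropped the hook would end up looking
roughly like this (a sketch based on the hunk quoted above, not necessarily
the exact code that gets committed):

    static int flask_map_gmfn_share(struct domain *d, struct domain *t)
    {
        /* The two remaining checks: can the caller map t's memory, and
         * are d and t allowed to share memory pages with each other? */
        return current_has_perm(t, SECCLASS_MMU, MMU__MAP_READ | MMU__MAP_WRITE) ?:
               domain_has_perm(d, t, SECCLASS_MMU, MMU__SHARE_MEM);
    }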


* Re: [PATCH v8 8/8] xen/arm: map reserved-memory regions as normal memory in dom0
  2018-10-22  9:57   ` Julien Grall
@ 2018-11-07  0:32     ` Stefano Stabellini
  2018-11-07 12:18       ` Julien Grall
  0 siblings, 1 reply; 46+ messages in thread
From: Stefano Stabellini @ 2018-11-07  0:32 UTC (permalink / raw)
  To: Julien Grall
  Cc: Stefano Stabellini, Stefano Stabellini, wei.liu2, blackskygg,
	ian.jackson, xen-devel

On Mon, 22 Oct 2018, Julien Grall wrote:
> Hi,
> 
> On 09/10/2018 00:37, Stefano Stabellini wrote:
> > reserved-memory regions should be mapped as normal memory.
> 
> This is already the case with p2m_mmio_direct_c. The hardware domain should
> have full control on the resulting attributes via its stage-1 mappings. So
> what's wrong with that p2m type?

Shared mappings are prevented for any type other than p2m_ram_rw; see
the p2m_is_ram checks in the implementation of XENMAPSPACE_gmfn_share.

The alternative is to remove or extend the p2m_is_ram check at
xen/arch/arm/mm.c:1283.
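
To illustrate the constraint (a hand-written illustration only; the helper
below is made up and is not the code at the line cited above):

    /*
     * Hypothetical helper: sharing is only allowed for pages whose p2m
     * type is normal RAM (in practice p2m_ram_rw), which is why a region
     * mapped into dom0 as p2m_mmio_direct_c cannot be used as the owner
     * side of a static shared memory area.
     */
    static bool share_mapping_allowed(p2m_type_t p2mt)
    {
        return p2m_is_ram(p2mt);
    }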


> >  At the
> > moment, they get remapped as device memory because Xen doesn't know
> > better. Add an explicit check for it.
> > 
> > Signed-off-by: Stefano Stabellini <stefanos@xilinx.com>
> > ---
> >   xen/arch/arm/domain_build.c | 7 +++++++
> >   1 file changed, 7 insertions(+)
> > 
> > diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> > index f552154..c7df4cf 100644
> > --- a/xen/arch/arm/domain_build.c
> > +++ b/xen/arch/arm/domain_build.c
> > @@ -1307,6 +1307,13 @@ static int __init handle_node(struct domain *d,
> > struct kernel_info *kinfo,
> >                  "WARNING: Path %s is reserved, skip the node as we may
> > re-use the path.\n",
> >                  path);
> >   +    /*
> > +     * reserved-memory ranges should be mapped as normal memory in the
> > +     * p2m.
> > +     */
> > +    if ( !strcmp(dt_node_name(node), "reserved-memory") )
> > +        p2mt = p2m_ram_rw;
> > +
> >       res = handle_device(d, node, p2mt);
> >       if ( res)
> >           return res;
> > 
> 
> -- 
> Julien Grall
> 


* Re: [PATCH v8 1/8] xen: xsm: flask: introduce XENMAPSPACE_gmfn_share for memory sharing
  2018-11-06 22:42     ` Stefano Stabellini
@ 2018-11-07  7:41       ` Jan Beulich
  2018-12-03 22:02         ` Stefano Stabellini
  0 siblings, 1 reply; 46+ messages in thread
From: Jan Beulich @ 2018-11-07  7:41 UTC (permalink / raw)
  To: Stefano Stabellini
  Cc: Tim Deegan, Wei Liu, Sky Liu, Konrad Rzeszutek Wilk,
	George Dunlap, Andrew Cooper, Stefano Stabellini, Ian Jackson,
	xen-devel, Julien Grall, Daniel de Graaf

>>> On 06.11.18 at 23:42, <sstabellini@kernel.org> wrote:
> On Tue, 9 Oct 2018, Jan Beulich wrote:
>> >>> On 09.10.18 at 01:37, <sstabellini@kernel.org> wrote:
>> > --- a/xen/include/xsm/dummy.h
>> > +++ b/xen/include/xsm/dummy.h
>> > @@ -535,6 +535,20 @@ static XSM_INLINE int xsm_map_gmfn_foreign(XSM_DEFAULT_ARG struct domain *d, str
>> >      return xsm_default_action(action, d, t);
>> >  }
>> >  
>> > +/*
>> > + * Be aware that this is not an exact default equivalence of its flask
>> > + * variant which also checks if @d and @t "are allowed to share memory
>> > + * pages", for now, we don't have a proper default equivalence of such a
>> > + * check.
>> > + */
>> > +static XSM_INLINE int xsm_map_gmfn_share(XSM_DEFAULT_ARG struct domain *d,
>> > +                                         struct domain *t)
>> > +{
>> > +    XSM_ASSERT_ACTION(XSM_TARGET);
>> > +    return xsm_default_action(XSM_TARGET, current->domain, d) ?:
>> > +           xsm_default_action(action, current->domain, t);
>> > +}
>> 
>> Does this (specifically xsm/dummy.c) build with XSM enabled?
>> Afaict "action" is going to be an undefined symbol in that case.
> 
> I tried it and it does build OK

Oh, I see - that's because of the way XSM_ASSERT_ACTION() is
defined in that case. It's in fact the other way around: for
consistency with other code, you shouldn't use literal
XSM_TARGET in the first function invocation.

Jan




* Re: [PATCH v8 8/8] xen/arm: map reserved-memory regions as normal memory in dom0
  2018-11-07  0:32     ` Stefano Stabellini
@ 2018-11-07 12:18       ` Julien Grall
  2018-11-07 12:27         ` Julien Grall
  0 siblings, 1 reply; 46+ messages in thread
From: Julien Grall @ 2018-11-07 12:18 UTC (permalink / raw)
  To: Stefano Stabellini
  Cc: ian.jackson, Stefano Stabellini, wei.liu2, blackskygg, xen-devel

Hi Stefano,

On 07/11/2018 00:32, Stefano Stabellini wrote:
> On Mon, 22 Oct 2018, Julien Grall wrote:
>> Hi,
>>
>> On 09/10/2018 00:37, Stefano Stabellini wrote:
>>> reserved-memory regions should be mapped as normal memory.
>>
>> This is already the case with p2m_mmio_direct_c. The hardware domain should
>> have full control on the resulting attributes via its stage-1 mappings. So
>> what's wrong with that p2m type?
> 
> Shared mappings are prevented for any types other than p2m_ram_rw, see
> the p2m_is_ram checks in the implementation of XENMAPSPACE_gmfn_share.

This does not make any sense. This series is about mapping between domains
other than dom0. So if you end up mapping the reserved-memory region in dom0,
why are you using XENMAPSPACE_gmfn_share?

> 
> The alternative is to remove or extend the p2m_is_ram check at
> xen/arch/arm/mm.c:1283

p2m_ram_* means the page is managed by Xen and accounting will be done.
Similarly, XENMAPSPACE_gmfn_share will do accounting on the pages mapped through it.

In the case of reserved-memory regions, we have never handled them properly in Xen (see [1]).

There are two types of reserved-memory region: static and dynamic. The dynamic
ones are not a concern, as no addresses are specified in them.

The static ones are backed by a page in Xen because we never updated the code
to carve them out of the xenheap. This means that you are mapping those pages
in Dom0, yet they may not be assigned to Dom0 and may get allocated for Xen
internal use or to another domain.

As such, this patch is just a workaround for already broken code. So the first
step is fixing the handling of reserved-memory regions. Then we can discuss
whether this patch is relevant for any of your use cases.

Cheers,

[1] https://lists.xenproject.org/archives/html/xen-devel/2017-07/msg02674.html

> 
> 
>>>   At the
>>> moment, they get remapped as device memory because Xen doesn't know
>>> better. Add an explicit check for it.
>>>
>>> Signed-off-by: Stefano Stabellini <stefanos@xilinx.com>
>>> ---
>>>    xen/arch/arm/domain_build.c | 7 +++++++
>>>    1 file changed, 7 insertions(+)
>>>
>>> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
>>> index f552154..c7df4cf 100644
>>> --- a/xen/arch/arm/domain_build.c
>>> +++ b/xen/arch/arm/domain_build.c
>>> @@ -1307,6 +1307,13 @@ static int __init handle_node(struct domain *d,
>>> struct kernel_info *kinfo,
>>>                   "WARNING: Path %s is reserved, skip the node as we may
>>> re-use the path.\n",
>>>                   path);
>>>    +    /*
>>> +     * reserved-memory ranges should be mapped as normal memory in the
>>> +     * p2m.
>>> +     */
>>> +    if ( !strcmp(dt_node_name(node), "reserved-memory") )
>>> +        p2mt = p2m_ram_rw;
>>> +
>>>        res = handle_device(d, node, p2mt);
>>>        if ( res)
>>>            return res;
>>>
>>
>> -- 
>> Julien Grall
>>

-- 
Julien Grall


* Re: [PATCH v8 8/8] xen/arm: map reserved-memory regions as normal memory in dom0
  2018-11-07 12:18       ` Julien Grall
@ 2018-11-07 12:27         ` Julien Grall
  2018-11-07 19:01           ` Stefano Stabellini
  0 siblings, 1 reply; 46+ messages in thread
From: Julien Grall @ 2018-11-07 12:27 UTC (permalink / raw)
  To: Stefano Stabellini
  Cc: ian.jackson, Stefano Stabellini, wei.liu2, blackskygg, xen-devel



On 07/11/2018 12:18, Julien Grall wrote:
> Hi Stefano,
> 
> On 07/11/2018 00:32, Stefano Stabellini wrote:
>> On Mon, 22 Oct 2018, Julien Grall wrote:
>>> Hi,
>>>
>>> On 09/10/2018 00:37, Stefano Stabellini wrote:
>>>> reserved-memory regions should be mapped as normal memory.
>>>
>>> This is already the case with p2m_mmio_direct_c. The hardware domain should
>>> have full control on the resulting attributes via its stage-1 mappings. So
>>> what's wrong with that p2m type?
>>
>> Shared mappings are prevented for any types other than p2m_ram_rw, see
>> the p2m_is_ram checks in the implementation of XENMAPSPACE_gmfn_share.
> 
> This does not make any sense. This series is about mapping between domains
> other than dom0. So if you end up mapping the reserved-memory region in dom0,
> why are you using XENMAPSPACE_gmfn_share?

To clarify my question, what are you trying to achieve? Are you trying to share
memory between Dom0 and a guest, or are you trying to share memory between an
external entity (i.e. an R* core/device) and the guest?

> 
>>
>> The alternative is to remove or extend the p2m_is_ram check at
>> xen/arch/arm/mm.c:1283
> 
> p2m_ram_* means the page is managed by Xen and accounting will be done. 
> Similarly XENMAPSPACE_gmfn_share will do accounting on the page mapped through 
> that.
> 
> In the case of reserved-memory, we never handled them properly in Xen (see [1]).
> 
> There are two types of reserved-memory region: static and dynamic. The dynamic
> ones are not a concern, as no addresses are specified in them.
> 
> In the case of static one, they are backed by a page in Xen because we didn't 
> updated the code to carve them out from xenheap. This means that you are mapping 
> those pages in Dom0, yet they may not be assigned to Dom0 and may get allocated 
> for Xen internal use or to another domain.
> 
> As such, this patch is just a workaround to an already broken code. So the first 
> step is fixing the brokenness of reserved-memory region. Then we can discuss 
> whether this patch is relevant for any of your use case.
> 
> Cheers,
> 
> [1] https://lists.xenproject.org/archives/html/xen-devel/2017-07/msg02674.html
> 
>>
>>
>>>>   At the
>>>> moment, they get remapped as device memory because Xen doesn't know
>>>> better. Add an explicit check for it.
>>>>
>>>> Signed-off-by: Stefano Stabellini <stefanos@xilinx.com>
>>>> ---
>>>>    xen/arch/arm/domain_build.c | 7 +++++++
>>>>    1 file changed, 7 insertions(+)
>>>>
>>>> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
>>>> index f552154..c7df4cf 100644
>>>> --- a/xen/arch/arm/domain_build.c
>>>> +++ b/xen/arch/arm/domain_build.c
>>>> @@ -1307,6 +1307,13 @@ static int __init handle_node(struct domain *d,
>>>> struct kernel_info *kinfo,
>>>>                   "WARNING: Path %s is reserved, skip the node as we may
>>>> re-use the path.\n",
>>>>                   path);
>>>>    +    /*
>>>> +     * reserved-memory ranges should be mapped as normal memory in the
>>>> +     * p2m.
>>>> +     */
>>>> +    if ( !strcmp(dt_node_name(node), "reserved-memory") )
>>>> +        p2mt = p2m_ram_rw;
>>>> +
>>>>        res = handle_device(d, node, p2mt);
>>>>        if ( res)
>>>>            return res;
>>>>
>>>
>>> -- 
>>> Julien Grall
>>>
> 

-- 
Julien Grall


* Re: [PATCH v8 8/8] xen/arm: map reserved-memory regions as normal memory in dom0
  2018-11-07 12:27         ` Julien Grall
@ 2018-11-07 19:01           ` Stefano Stabellini
  2018-11-08 19:34             ` Julien Grall
  0 siblings, 1 reply; 46+ messages in thread
From: Stefano Stabellini @ 2018-11-07 19:01 UTC (permalink / raw)
  To: Julien Grall
  Cc: Stefano Stabellini, Stefano Stabellini, wei.liu2, blackskygg,
	ian.jackson, xen-devel

On Wed, 7 Nov 2018, Julien Grall wrote:
> On 07/11/2018 12:18, Julien Grall wrote:
> > Hi Stefano,
> > 
> > On 07/11/2018 00:32, Stefano Stabellini wrote:
> > > On Mon, 22 Oct 2018, Julien Grall wrote:
> > > > Hi,
> > > > 
> > > > On 09/10/2018 00:37, Stefano Stabellini wrote:
> > > > > reserved-memory regions should be mapped as normal memory.
> > > > 
> > > > This is already the case with p2m_mmio_direct_c. The hardware domain
> > > > should
> > > > have full control on the resulting attributes via its stage-1 mappings.
> > > > So
> > > > what's wrong with that p2m type?
> > > 
> > > Shared mappings are prevented for any types other than p2m_ram_rw, see
> > > the p2m_is_ram checks in the implementation of XENMAPSPACE_gmfn_share.
> > 
> > This does not make any sense. This series is about mapping between
> > domains other than dom0. So if you end up mapping the reserved-memory
> > region in dom0, why are you using XENMAPSPACE_gmfn_share?
> 
> To clarify my question, what are you trying to achieve? Are you trying to
> share memory between Dom0 and a guest, or are you trying to share memory
> between an external entity (i.e. an R* core/device) and the guest?

Both of the goals you mentioned are on my TODO list. However,
with this patch I am trying to enable shared cacheable memory between
Dom0 and a guest. Specifically, I am setting up Dom0 as "owner" (with
the new terminology, formerly "master"), and a DomU as "borrower".

A lot of the steps automated by libxl have to be done manually, such as
advertising the memory region as "reserved-memory" on the Dom0 device
tree and adding the "owner" entries to xenstore, but once that is done,
it works just fine.


> > > The alternative is to remove or extend the p2m_is_ram check at
> > > xen/arch/arm/mm.c:1283
> > 
> > p2m_ram_* means the page is managed by Xen and accounting will be done.
> > Similarly XENMAPSPACE_gmfn_share will do accounting on the page mapped
> > through that.
> > 
> > In the case of reserved-memory, we never handled them properly in Xen (see
> > [1]).
> > 
> > There are two types of reserved-memory region: static and dynamic. The dynamic
> > ones are not a concern, as no addresses are specified in them.
> > 
> > In the case of static one, they are backed by a page in Xen because we
> > didn't updated the code to carve them out from xenheap. This means that you
> > are mapping those pages in Dom0, yet they may not be assigned to Dom0 and
> > may get allocated for Xen internal use or to another domain.
> > 
> > As such, this patch is just a workaround to an already broken code. So the
> > first step is fixing the brokenness of reserved-memory region. Then we can
> > discuss whether this patch is relevant for any of your use case.

By fixing the brokenness of reserved-memory regions, do you mean removing
them from the xenheap? Is there anything else you can think of that doesn't
work right?


> > Cheers,
> > 
> > [1]
> > https://lists.xenproject.org/archives/html/xen-devel/2017-07/msg02674.html
> > 
> > > 
> > > 
> > > > >   At the
> > > > > moment, they get remapped as device memory because Xen doesn't know
> > > > > better. Add an explicit check for it.
> > > > > 
> > > > > Signed-off-by: Stefano Stabellini <stefanos@xilinx.com>
> > > > > ---
> > > > >    xen/arch/arm/domain_build.c | 7 +++++++
> > > > >    1 file changed, 7 insertions(+)
> > > > > 
> > > > > diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> > > > > index f552154..c7df4cf 100644
> > > > > --- a/xen/arch/arm/domain_build.c
> > > > > +++ b/xen/arch/arm/domain_build.c
> > > > > @@ -1307,6 +1307,13 @@ static int __init handle_node(struct domain *d,
> > > > > struct kernel_info *kinfo,
> > > > >                   "WARNING: Path %s is reserved, skip the node as we
> > > > > may
> > > > > re-use the path.\n",
> > > > >                   path);
> > > > >    +    /*
> > > > > +     * reserved-memory ranges should be mapped as normal memory in
> > > > > the
> > > > > +     * p2m.
> > > > > +     */
> > > > > +    if ( !strcmp(dt_node_name(node), "reserved-memory") )
> > > > > +        p2mt = p2m_ram_rw;
> > > > > +
> > > > >        res = handle_device(d, node, p2mt);
> > > > >        if ( res)
> > > > >            return res;
> > > > > 
> > > > 
> > > > -- 
> > > > Julien Grall


* Re: [PATCH v8 8/8] xen/arm: map reserved-memory regions as normal memory in dom0
  2018-11-07 19:01           ` Stefano Stabellini
@ 2018-11-08 19:34             ` Julien Grall
  0 siblings, 0 replies; 46+ messages in thread
From: Julien Grall @ 2018-11-08 19:34 UTC (permalink / raw)
  To: Stefano Stabellini
  Cc: ian.jackson, Stefano Stabellini, wei.liu2, blackskygg, xen-devel

Hi,

On 11/7/18 7:01 PM, Stefano Stabellini wrote:
> On Wed, 7 Nov 2018, Julien Grall wrote:
>> On 07/11/2018 12:18, Julien Grall wrote:
>>> Hi Stefano,
>>>
>>> On 07/11/2018 00:32, Stefano Stabellini wrote:
>>>> On Mon, 22 Oct 2018, Julien Grall wrote:
>>>>> Hi,
>>>>>
>>>>> On 09/10/2018 00:37, Stefano Stabellini wrote:
>>>>>> reserved-memory regions should be mapped as normal memory.
>>>>>
>>>>> This is already the case with p2m_mmio_direct_c. The hardware domain
>>>>> should
>>>>> have full control on the resulting attributes via its stage-1 mappings.
>>>>> So
>>>>> what's wrong with that p2m type?
>>>>
>>>> Shared mappings are prevented for any types other than p2m_ram_rw, see
>>>> the p2m_is_ram checks in the implementation of XENMAPSPACE_gmfn_share.
>>>
>>> This does not make any sense. This series is about mapping between
>>> domains other than dom0. So if you end up mapping the reserved-memory
>>> region in dom0, why are you using XENMAPSPACE_gmfn_share?
>>
>> To clarify my question, what are you trying to achieve? Are you trying to
>> share memory between Dom0 and a guest, or are you trying to share memory
>> between an external entity (i.e. an R* core/device) and the guest?
> 
> Both of the goals you mentioned are on my TODO list. However,
> with this patch I am trying to enable shared cacheable memory between
> Dom0 and a guest. Specifically, I am setting up Dom0 as "owner" (with
> the new terminology, formerly "master"), and a DomU as "borrower".
> 
> A lot of the steps automated by libxl have to be done manually, such as
> advertising the memory region as "reserved-memory" on the Dom0 device
> tree and adding the "owner" entries to xenstore, but once that is done,
> it works just fine.

Thank you for explaining what you are trying to achieve.

> 
> 
>>>> The alternative is to remove or extend the p2m_is_ram check at
>>>> xen/arch/arm/mm.c:1283
>>>
>>> p2m_ram_* means the page is managed by Xen and accounting will be done.
>>> Similarly XENMAPSPACE_gmfn_share will do accounting on the page mapped
>>> through that.
>>>
>>> In the case of reserved-memory, we never handled them properly in Xen (see
>>> [1]).
>>>
>>> There are two types of reserved-memory region: static and dynamic. The dynamic
>>> ones are not a concern, as no addresses are specified in them.
>>>
>>> In the case of static one, they are backed by a page in Xen because we
>>> didn't updated the code to carve them out from xenheap. This means that you
>>> are mapping those pages in Dom0, yet they may not be assigned to Dom0 and
>>> may get allocated for Xen internal use or to another domain.
>>>
>>> As such, this patch is just a workaround to an already broken code. So the
>>> first step is fixing the brokenness of reserved-memory region. Then we can
>>> discuss whether this patch is relevant for any of your use case.
> 
> By fixing the brokenness of reserved-memory regions, do you mean removing
> them from the xenheap? Is there anything else you can think of that doesn't
> work right?

I will try to summarize the discussion we had today.

From my understanding of the device-tree binding for reserved-memory, any
regions described under that node will be a subset of the regions described
in the /memory node.

A reserved-memory region can either be dynamic or static. Dynamic means the
region will be allocated by the OS at boot. In the static case, the region
is fixed by the HW vendor.

The main concern is static regions, because Xen must not allocate those
regions for another purpose (e.g. internal memory or guest memory).

I can see two ways to handle reserved-memory regions in Xen:
	1) The regions are not treated as RAM from Xen's PoV. They will need
to be excluded from the xenheap in early boot. Those regions will not be
backed by a struct page_info and therefore could not be mapped using the
foreign mapping interface. For guests, they would need to be mapped using
XENDOMCTL_memory_mapping (i.e. iomem= from xl). The interface would need
to be extended with memory attributes (e.g. caching, shareability), as we
map MMIO regions with strict attributes today.

	2) The regions are treated as RAM from Xen's PoV. They will need to be
registered in the xenheap, while also ensuring at an early stage that they
cannot be allocated from the xenheap. As they will be backed by a struct
page_info, we would need to do proper reference counting and make sure they
can never be re-allocated (e.g. if the guest ever decides to balloon those
pages). The pages could be mapped in another guest using the foreign mapping
interface.

In both cases, we also need to ensure that for each reserved-memory node we
have a corresponding range in /memory.

Option 1) is probably the easiest. It involves fewer changes to the core
code. It also has the advantage that a reserved-memory region can be hidden
from Dom0 (i.e. with xen,passthrough) and assigned directly to the guest
(via iomem). We may need to investigate the implications on the kernel side
(some of the reserved-memory regions could be marked as reusable).

Finally, regarding sharing memory between dom0 and the guest: I would look
at using a dynamic reserved-memory region. This would allow dom0 to allocate
the region at boot. However, I don't know whether it is easy to retrieve the
allocated region from userspace.

Cheers,

-- 
Julien Grall


* Re: [PATCH v8 1/8] xen: xsm: flask: introduce XENMAPSPACE_gmfn_share for memory sharing
  2018-11-07  7:41       ` Jan Beulich
@ 2018-12-03 22:02         ` Stefano Stabellini
  0 siblings, 0 replies; 46+ messages in thread
From: Stefano Stabellini @ 2018-12-03 22:02 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Tim Deegan, Stefano Stabellini, Wei Liu, Sky Liu,
	Konrad Rzeszutek Wilk, George Dunlap, Andrew Cooper,
	Stefano Stabellini, Ian Jackson, xen-devel, Julien Grall,
	Daniel de Graaf

On Wed, 7 Nov 2018, Jan Beulich wrote:
> >>> On 06.11.18 at 23:42, <sstabellini@kernel.org> wrote:
> > On Tue, 9 Oct 2018, Jan Beulich wrote:
> >> >>> On 09.10.18 at 01:37, <sstabellini@kernel.org> wrote:
> >> > --- a/xen/include/xsm/dummy.h
> >> > +++ b/xen/include/xsm/dummy.h
> >> > @@ -535,6 +535,20 @@ static XSM_INLINE int xsm_map_gmfn_foreign(XSM_DEFAULT_ARG struct domain *d, str
> >> >      return xsm_default_action(action, d, t);
> >> >  }
> >> >  
> >> > +/*
> >> > + * Be aware that this is not an exact default equivalence of its flask
> >> > + * variant which also checks if @d and @t "are allowed to share memory
> >> > + * pages", for now, we don't have a proper default equivalence of such a
> >> > + * check.
> >> > + */
> >> > +static XSM_INLINE int xsm_map_gmfn_share(XSM_DEFAULT_ARG struct domain *d,
> >> > +                                         struct domain *t)
> >> > +{
> >> > +    XSM_ASSERT_ACTION(XSM_TARGET);
> >> > +    return xsm_default_action(XSM_TARGET, current->domain, d) ?:
> >> > +           xsm_default_action(action, current->domain, t);
> >> > +}
> >> 
> >> Does this (specifically xsm/dummy.c) build with XSM enabled?
> >> Afaict "action" is going to be an undefined symbol in that case.
> > 
> > I tried it and it does build OK
> 
> Oh, I see - that's because of the way XSM_ASSERT_ACTION() is
> defined in that case. It's in fact the other way around: for
> consistency with other code, you shouldn't use literal
> XSM_TARGET in the first function invocation.

OK, I'll use "action" in both invocations.
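
So the dummy hook would end up looking something like this (a sketch of the
agreed change only, not necessarily the exact code that gets committed):

    static XSM_INLINE int xsm_map_gmfn_share(XSM_DEFAULT_ARG struct domain *d,
                                             struct domain *t)
    {
        XSM_ASSERT_ACTION(XSM_TARGET);
        /* Use "action" in both checks, as requested above; it is supplied
         * either as the XSM_DEFAULT_ARG parameter or by the
         * XSM_ASSERT_ACTION() expansion, depending on the configuration. */
        return xsm_default_action(action, current->domain, d) ?:
               xsm_default_action(action, current->domain, t);
    }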


end of thread (newest message: 2018-12-03 22:02 UTC)

Thread overview: 46+ messages:
2018-10-08 23:37 [PATCH v8 0/8] Allow setting up shared memory areas between VMs from xl config files Stefano Stabellini
2018-10-08 23:37 ` [PATCH v8 1/8] xen: xsm: flask: introduce XENMAPSPACE_gmfn_share for memory sharing Stefano Stabellini
2018-10-09  7:12   ` Jan Beulich
2018-11-06 22:42     ` Stefano Stabellini
2018-11-07  7:41       ` Jan Beulich
2018-12-03 22:02         ` Stefano Stabellini
2018-10-08 23:37 ` [PATCH v8 2/8] libxl: introduce a new structure to represent static shared memory regions Stefano Stabellini
2018-10-30 15:20   ` Ian Jackson
2018-10-30 19:32     ` Julien Grall
2018-10-08 23:37 ` [PATCH v8 3/8] libxl: support mapping static shared memory areas during domain creation Stefano Stabellini
2018-10-24 13:26   ` Oleksandr Tyshchenko
2018-10-30 15:36   ` Ian Jackson
2018-10-30 19:38     ` Julien Grall
2018-10-31 11:24       ` Ian Jackson
2018-10-08 23:37 ` [PATCH v8 4/8] libxl: support unmapping static shared memory areas during domain destruction Stefano Stabellini
2018-10-24 11:35   ` Oleksandr Tyshchenko
2018-10-24 11:48   ` Oleksandr Andrushchenko
2018-10-24 13:32     ` Oleksandr Tyshchenko
2018-10-30 15:55   ` Ian Jackson
2018-10-08 23:37 ` [PATCH v8 5/8] libxl:xl: add parsing code to parse "libxl_static_sshm" from xl config files Stefano Stabellini
2018-10-08 23:37 ` [PATCH v8 6/8] docs: documentation about static shared memory regions Stefano Stabellini
2018-10-30 15:17   ` Ian Jackson
2018-10-30 18:58     ` Stefano Stabellini
2018-10-30 19:25       ` Julien Grall
2018-10-30 22:43         ` Stefano Stabellini
2018-10-31 12:11         ` Ian Jackson
2018-10-31 15:35           ` Stefano Stabellini
2018-11-02  9:21             ` George Dunlap
2018-11-02 15:36               ` Ian Jackson
2018-10-08 23:37 ` [PATCH v8 7/8] xen/arm: export shared memory regions as reserved-memory on device tree Stefano Stabellini
2018-10-24 11:57   ` Oleksandr Andrushchenko
2018-10-30 15:58   ` Ian Jackson
2018-10-30 19:50     ` Julien Grall
2018-10-30 22:42       ` Stefano Stabellini
2018-10-31 11:29       ` Ian Jackson
2018-10-08 23:37 ` [PATCH v8 8/8] xen/arm: map reserved-memory regions as normal memory in dom0 Stefano Stabellini
2018-10-22  9:57   ` Julien Grall
2018-11-07  0:32     ` Stefano Stabellini
2018-11-07 12:18       ` Julien Grall
2018-11-07 12:27         ` Julien Grall
2018-11-07 19:01           ` Stefano Stabellini
2018-11-08 19:34             ` Julien Grall
2018-10-19 21:06 ` [PATCH v8 0/8] Allow setting up shared memory areas between VMs from xl config files Stefano Stabellini
2018-10-24 14:15   ` Oleksandr Tyshchenko
2018-10-24 14:25     ` Julien Grall
2018-10-30 11:11       ` Oleksandr Tyshchenko
