* [PATCH v6 0/7] Allow setting up shared memory areas between VMs from xl config files
@ 2018-07-31 18:22 Stefano Stabellini
  2018-07-31 18:23 ` [PATCH v6 1/7] xen: xsm: flask: introduce XENMAPSPACE_gmfn_share for memory sharing Stefano Stabellini
                   ` (6 more replies)
  0 siblings, 7 replies; 17+ messages in thread
From: Stefano Stabellini @ 2018-07-31 18:22 UTC (permalink / raw)
  To: xen-devel; +Cc: ian.jackson, julien.grall, sstabellini, wei.liu2, blackskygg

Hi,

This series implements a new xl config entry. Users can use the new
config entry to statically set up shared memory areas among VMs that
don't have grant table support, so that they can communicate with each
other through the static shared memory areas.
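
For illustration, with the series applied a shared region could be described
in the two guests' xl config files roughly as follows (the option names come
from the parser added in patch 5; the addresses and sizes are placeholders,
not recommendations):

    # master domain's config file
    static_shm = [ "id=shm0, begin=0x40000000, size=0x1000, role=master, cache_policy=ARM_normal" ]

    # slave domain's config file
    static_shm = [ "id=shm0, begin=0x48000000, size=0x1000, offset=0, role=slave, prot=rw" ]

The full syntax is documented in patch 6 (xl-static-shm-configuration.pod.5).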

The series was originally developed by Zhongze; I am just addressing the
last few issues that were raised during the previous round of reviews in
January. I tested the feature on ARM and it works fine.

Cheers,

Stefano

Changes in v6:
- clarify memory allocation in the man page
- add a patch to expose shared memory as reserved-memory on device tree


The following changes since commit 6aaa9fb308171ec58ddf4cf058ad5341f81a65cf:

  hvm/altp2m: Clarify the proper way to extend the altp2m interface (2018-07-31 16:14:01 +0100)

are available in the git repository at:

  http://xenbits.xenproject.org/git-http/people/sstabellini/xen-unstable.git share_mem-v6

for you to fetch changes up to 9a7bc6a17eb48905eee7b15895dd76acac49a27b:

  xen/arm: export shared memory regions as reserved-memory on device tree (2018-07-31 11:11:28 -0700)

----------------------------------------------------------------
Stefano Stabellini (1):
      xen/arm: export shared memory regions as reserved-memory on device tree

Zhongze Liu (6):
      xen: xsm: flask: introduce XENMAPSPACE_gmfn_share for memory sharing
      libxl: introduce a new structure to represent static shared memory regions
      libxl: support mapping static shared memory areas during domain creation
      libxl: support unmapping static shared memory areas during domain destruction
      libxl:xl: add parsing code to parse "libxl_static_sshm" from xl config files
      docs: documentation about static shared memory regions

 docs/man/xl-static-shm-configuration.pod.5 | 264 +++++++++++++++
 docs/man/xl.cfg.pod.5.in                   |   8 +
 docs/misc/xenstore-paths.markdown          |  47 +++
 tools/flask/policy/modules/xen.if          |   2 +
 tools/libxl/Makefile                       |   5 +-
 tools/libxl/libxl.h                        |   6 +
 tools/libxl/libxl_arch.h                   |   8 +-
 tools/libxl/libxl_arm.c                    |  67 +++-
 tools/libxl/libxl_create.c                 |  27 ++
 tools/libxl/libxl_dom.c                    |   2 +-
 tools/libxl/libxl_domain.c                 |   5 +
 tools/libxl/libxl_internal.h               |  16 +
 tools/libxl/libxl_sshm.c                   | 512 +++++++++++++++++++++++++++++
 tools/libxl/libxl_types.idl                |  32 +-
 tools/libxl/libxl_x86.c                    |  21 +-
 tools/libxl/libxlu_sshm.c                  | 207 ++++++++++++
 tools/libxl/libxlutil.h                    |   6 +
 tools/xl/xl_parse.c                        |  25 +-
 xen/arch/arm/mm.c                          |   7 +-
 xen/include/public/memory.h                |   8 +
 xen/include/xsm/dummy.h                    |  15 +
 xen/include/xsm/xsm.h                      |   6 +
 xen/xsm/dummy.c                            |   1 +
 xen/xsm/flask/hooks.c                      |  12 +
 xen/xsm/flask/policy/access_vectors        |   5 +
 25 files changed, 1302 insertions(+), 12 deletions(-)
 create mode 100644 docs/man/xl-static-shm-configuration.pod.5
 create mode 100644 tools/libxl/libxl_sshm.c
 create mode 100644 tools/libxl/libxlu_sshm.c


* [PATCH v6 1/7] xen: xsm: flask: introduce XENMAPSPACE_gmfn_share for memory sharing
  2018-07-31 18:22 [PATCH v6 0/7] Allow setting up shared memory areas between VMs from xl config files Stefano Stabellini
@ 2018-07-31 18:23 ` Stefano Stabellini
  2018-08-01  8:39   ` Jan Beulich
  2018-07-31 18:23 ` [PATCH v6 2/7] libxl: introduce a new structure to represent static shared memory regions Stefano Stabellini
                   ` (5 subsequent siblings)
  6 siblings, 1 reply; 17+ messages in thread
From: Stefano Stabellini @ 2018-07-31 18:23 UTC (permalink / raw)
  To: xen-devel
  Cc: Stefano Stabellini, sstabellini, wei.liu2, blackskygg,
	ian.jackson, julien.grall, Daniel De Graaf

From: Zhongze Liu <blackskygg@gmail.com>

Author: Zhongze Liu <blackskygg@gmail.com>

The existing XENMAPSPACE_gmfn_foreign subop of XENMEM_add_to_physmap forbids
Dom0 from mapping memory pages from one DomU to another, which rules out some
useful yet not dangerous use cases -- such as sharing pages among DomUs so
that they can communicate through shared memory.

This patch introduces XENMAPSPACE_gmfn_share to address this limitation. It
behaves mostly like XENMAPSPACE_gmfn_foreign but has its own XSM check.

Specifically, the patch:

* Introduces a new access-vector permission MMU__SHARE_MEM to denote whether
  two domains can share memory by using the new subop;
* Introduces xsm_map_gmfn_share() to check that @current has proper permission
  over @t AND that MMU__SHARE_MEM is allowed between @d and @t;
* Modifies the default policy (domain_comms in xen.if) to allow MMU__SHARE_MEM
  for normal domains that allow grant mapping/event channels.

The new subop is marked unsupported for x86 because calling p2m_add_foreign
on two DomUs is currently not supported on x86.

This is for the proposal "Allow setting up shared memory areas between VMs
from xl config file" (see [1]).

[1] https://lists.xen.org/archives/html/xen-devel/2017-08/msg03242.html
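
As a rough illustration (not part of this patch), a dom0 toolstack would
exercise the new subop through the existing libxc wrapper, the same way the
libxl code added in patch 3 does; the domids, gfns and error handling below
are simplified placeholders:

    #include <xenctrl.h>

    /* Map one page of master domain `mid` at gfn `midx` into slave domain
     * `sid` at gfn `sgfn`, using the new XENMAPSPACE_gmfn_share space. */
    static int map_one_shared_page(xc_interface *xch, uint32_t sid,
                                   uint32_t mid, xen_ulong_t midx,
                                   xen_pfn_t sgfn)
    {
        int err = 0;
        int ret = xc_domain_add_to_physmap_batch(xch, sid, mid,
                                                 XENMAPSPACE_gmfn_share,
                                                 1, &midx, &sgfn, &err);
        if (ret)
            return ret;   /* the hypercall itself failed */
        return err;       /* per-page error, 0 on success */
    }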

Signed-off-by: Zhongze Liu <blackskygg@gmail.com>
Signed-off-by: Stefano Stabellini <stefanos@xilinx.com>

Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Wei Liu <wei.liu2@citrix.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Julien Grall <julien.grall@arm.com>
Cc: xen-devel@lists.xen.org
---
Changes in v5:
- fix coding style
- remove useless x86 hypervisor message for the unimplemented op
- code style
- improve/add comments
---
 tools/flask/policy/modules/xen.if   |  2 ++
 xen/arch/arm/mm.c                   |  7 ++++++-
 xen/include/public/memory.h         |  8 ++++++++
 xen/include/xsm/dummy.h             | 15 +++++++++++++++
 xen/include/xsm/xsm.h               |  6 ++++++
 xen/xsm/dummy.c                     |  1 +
 xen/xsm/flask/hooks.c               | 12 ++++++++++++
 xen/xsm/flask/policy/access_vectors |  5 +++++
 8 files changed, 55 insertions(+), 1 deletion(-)

diff --git a/tools/flask/policy/modules/xen.if b/tools/flask/policy/modules/xen.if
index 7aefd00..f841125 100644
--- a/tools/flask/policy/modules/xen.if
+++ b/tools/flask/policy/modules/xen.if
@@ -128,6 +128,8 @@ define(`domain_comms', `
 	domain_event_comms($1, $2)
 	allow $1 $2:grant { map_read map_write copy unmap };
 	allow $2 $1:grant { map_read map_write copy unmap };
+	allow $1 $2:mmu share_mem;
+	allow $2 $1:mmu share_mem;
 ')
 
 # domain_self_comms(domain)
diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index d234c46..aa2e067 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -1245,6 +1245,7 @@ int xenmem_add_to_physmap_one(
 
         break;
     case XENMAPSPACE_gmfn_foreign:
+    case XENMAPSPACE_gmfn_share:
     {
         struct domain *od;
         p2m_type_t p2mt;
@@ -1259,7 +1260,11 @@ int xenmem_add_to_physmap_one(
             return -EINVAL;
         }
 
-        rc = xsm_map_gmfn_foreign(XSM_TARGET, d, od);
+        if ( space == XENMAPSPACE_gmfn_foreign )
+            rc = xsm_map_gmfn_foreign(XSM_TARGET, d, od);
+        else
+            rc = xsm_map_gmfn_share(XSM_TARGET, d, od);
+
         if ( rc )
         {
             rcu_unlock_domain(od);
diff --git a/xen/include/public/memory.h b/xen/include/public/memory.h
index bf2f81f..a706e3c 100644
--- a/xen/include/public/memory.h
+++ b/xen/include/public/memory.h
@@ -227,6 +227,14 @@ DEFINE_XEN_GUEST_HANDLE(xen_machphys_mapping_t);
                                       Stage-2 using the Normal Memory
                                       Inner/Outer Write-Back Cacheable
                                       memory attribute. */
+#define XENMAPSPACE_gmfn_share   6 /* GMFN from another dom,
+                                      XENMEM_add_to_physmap_batch (and
+                                      currently ARM) only. Unlike
+                                      XENMAPSPACE_gmfn_foreign, it
+                                      requires current to have mapping
+                                      privileges instead of the
+                                      destination domain. */
+
 /* ` } */
 
 /*
diff --git a/xen/include/xsm/dummy.h b/xen/include/xsm/dummy.h
index ff6b2db..5064fce 100644
--- a/xen/include/xsm/dummy.h
+++ b/xen/include/xsm/dummy.h
@@ -535,6 +535,21 @@ static XSM_INLINE int xsm_map_gmfn_foreign(XSM_DEFAULT_ARG struct domain *d, str
     return xsm_default_action(action, d, t);
 }
 
+/*
+ * This action also requires that @current targets @d, but it has already been
+ * checked somewhere higher in the call stack.
+ *
+ * Be aware that this is not an exact default equivalence of its flask variant
+ * which also checks if @d and @t "are allowed to share memory pages", for we
+ * don't have a proper default equivalence of such a check.
+ */
+static XSM_INLINE int xsm_map_gmfn_share(XSM_DEFAULT_ARG struct domain *d,
+                                         struct domain *t)
+{
+    XSM_ASSERT_ACTION(XSM_TARGET);
+    return xsm_default_action(action, current->domain, t);
+}
+
 static XSM_INLINE int xsm_hvm_param(XSM_DEFAULT_ARG struct domain *d, unsigned long op)
 {
     XSM_ASSERT_ACTION(XSM_TARGET);
diff --git a/xen/include/xsm/xsm.h b/xen/include/xsm/xsm.h
index f0c6fc7..8873253 100644
--- a/xen/include/xsm/xsm.h
+++ b/xen/include/xsm/xsm.h
@@ -86,6 +86,7 @@ struct xsm_operations {
     int (*add_to_physmap) (struct domain *d1, struct domain *d2);
     int (*remove_from_physmap) (struct domain *d1, struct domain *d2);
     int (*map_gmfn_foreign) (struct domain *d, struct domain *t);
+    int (*map_gmfn_share) (struct domain *d, struct domain *t);
     int (*claim_pages) (struct domain *d);
 
     int (*console_io) (struct domain *d, int cmd);
@@ -376,6 +377,11 @@ static inline int xsm_map_gmfn_foreign (xsm_default_t def, struct domain *d, str
     return xsm_ops->map_gmfn_foreign(d, t);
 }
 
+static inline int xsm_map_gmfn_share (xsm_default_t def, struct domain *d, struct domain *t)
+{
+    return xsm_ops->map_gmfn_share(d, t);
+}
+
 static inline int xsm_claim_pages(xsm_default_t def, struct domain *d)
 {
     return xsm_ops->claim_pages(d);
diff --git a/xen/xsm/dummy.c b/xen/xsm/dummy.c
index 6e75119..04e91d3 100644
--- a/xen/xsm/dummy.c
+++ b/xen/xsm/dummy.c
@@ -124,6 +124,7 @@ void __init xsm_fixup_ops (struct xsm_operations *ops)
     set_to_dummy_if_null(ops, add_to_physmap);
     set_to_dummy_if_null(ops, remove_from_physmap);
     set_to_dummy_if_null(ops, map_gmfn_foreign);
+    set_to_dummy_if_null(ops, map_gmfn_share);
 
     set_to_dummy_if_null(ops, vm_event_control);
 
diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
index 78bc326..250d476 100644
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -1198,6 +1198,17 @@ static int flask_map_gmfn_foreign(struct domain *d, struct domain *t)
     return domain_has_perm(d, t, SECCLASS_MMU, MMU__MAP_READ | MMU__MAP_WRITE);
 }
 
+/*
+ * This action also requires that @current has MMU__MAP_READ/WRITE over @d,
+ * but that has already been checked somewhere higher in the call stack (for
+ * example, by flask_add_to_physmap()).
+ */
+static int flask_map_gmfn_share(struct domain *d, struct domain *t)
+{
+    return current_has_perm(t, SECCLASS_MMU, MMU__MAP_READ | MMU__MAP_WRITE) ?:
+        domain_has_perm(d, t, SECCLASS_MMU, MMU__SHARE_MEM);
+}
+
 static int flask_hvm_param(struct domain *d, unsigned long op)
 {
     u32 perm;
@@ -1822,6 +1833,7 @@ static struct xsm_operations flask_ops = {
     .add_to_physmap = flask_add_to_physmap,
     .remove_from_physmap = flask_remove_from_physmap,
     .map_gmfn_foreign = flask_map_gmfn_foreign,
+    .map_gmfn_share = flask_map_gmfn_share,
 
 #if defined(CONFIG_HAS_PASSTHROUGH) && defined(CONFIG_HAS_PCI)
     .get_device_group = flask_get_device_group,
diff --git a/xen/xsm/flask/policy/access_vectors b/xen/xsm/flask/policy/access_vectors
index c5d8548..4a92252 100644
--- a/xen/xsm/flask/policy/access_vectors
+++ b/xen/xsm/flask/policy/access_vectors
@@ -385,6 +385,11 @@ class mmu
 # Allow a privileged domain to install a map of a page it does not own.  Used
 # for stub domain device models with the PV framebuffer.
     target_hack
+# Checked when using XENMEM_add_to_physmap with XENMAPSPACE_gmfn_share
+# to share memory between two domains:
+#  source = domain whose memory is being shared
+#  target = client domain
+    share_mem
 }
 
 # control of the paging_domctl split by subop
-- 
1.9.1



* [PATCH v6 2/7] libxl: introduce a new structure to represent static shared memory regions
  2018-07-31 18:22 [PATCH v6 0/7] Allow setting up shared memory areas between VMs from xl config files Stefano Stabellini
  2018-07-31 18:23 ` [PATCH v6 1/7] xen: xsm: flask: introduce XENMAPSPACE_gmfn_share for memory sharing Stefano Stabellini
@ 2018-07-31 18:23 ` Stefano Stabellini
  2018-07-31 18:23 ` [PATCH v6 3/7] libxl: support mapping static shared memory areas during domain creation Stefano Stabellini
                   ` (4 subsequent siblings)
  6 siblings, 0 replies; 17+ messages in thread
From: Stefano Stabellini @ 2018-07-31 18:23 UTC (permalink / raw)
  To: xen-devel
  Cc: Stefano Stabellini, sstabellini, wei.liu2, blackskygg,
	ian.jackson, julien.grall

From: Zhongze Liu <blackskygg@gmail.com>

Author: Zhongze Liu <blackskygg@gmail.com>

Add a new structure to the IDL family to represent static shared memory regions
as proposed in the proposal "Allow setting up shared memory areas between VMs
from xl config file" (see [1]).

Also delete some trailing whitespace.

[1] https://lists.xen.org/archives/html/xen-devel/2017-08/msg03242.html
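
As a sketch of how the new type is meant to be consumed (illustrative only,
not part of the patch; field values are placeholders, and the init/dispose
helpers are the ones the IDL generates for every libxl type; assumes libxl.h
and <string.h> are included):

    /* Describe one master region using the new libxl_static_shm type. */
    static void fill_example_sshm(libxl_static_shm *sshm)
    {
        libxl_static_shm_init(sshm);      /* generated, like for any libxl type */
        sshm->id    = strdup("shm0");     /* at most LIBXL_SSHM_ID_MAXLEN chars */
        sshm->role  = LIBXL_SSHM_ROLE_MASTER;
        sshm->begin = 0x40000000;         /* guest physical address, page aligned */
        sshm->size  = 0x1000;             /* one 4K page */
        sshm->prot  = LIBXL_SSHM_PROT_RW;
        sshm->cache_policy = LIBXL_SSHM_CACHEPOLICY_ARM_NORMAL;
        /* offset keeps its LIBXL_SSHM_RANGE_UNKNOWN default: it is only
         * meaningful for slave entries.  Free with libxl_static_shm_dispose(). */
    }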

Signed-off-by: Zhongze Liu <blackskygg@gmail.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
Acked-by: Wei Liu <wei.liu2@citrix.com>
Signed-off-by: Stefano Stabellini <stefanos@xilinx.com>

Cc: Wei Liu <wei.liu2@citrix.com>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Julien Grall <julien.grall@arm.com>
Cc: xen-devel@lists.xen.org
---
Changes in v5:
- fix typos
- add LIBXL_HAVE_SSHM
- replace end with size
---
 tools/libxl/libxl.h         |  6 ++++++
 tools/libxl/libxl_types.idl | 32 ++++++++++++++++++++++++++++++--
 2 files changed, 36 insertions(+), 2 deletions(-)

diff --git a/tools/libxl/libxl.h b/tools/libxl/libxl.h
index ae2d63d..a9a523e 100644
--- a/tools/libxl/libxl.h
+++ b/tools/libxl/libxl.h
@@ -2455,6 +2455,12 @@ int libxl_fd_set_nonblock(libxl_ctx *ctx, int fd, int nonblock);
 int libxl_qemu_monitor_command(libxl_ctx *ctx, uint32_t domid,
                                const char *command_line, char **output);
 
+#define LIBXL_HAVE_SSHM 1
+
+/* Constants for libxl_static_shm */
+#define LIBXL_SSHM_RANGE_UNKNOWN UINT64_MAX
+#define LIBXL_SSHM_ID_MAXLEN    128
+
 #include <libxl_event.h>
 
 #endif /* LIBXL_H */
diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
index 4a38580..e1fb975 100644
--- a/tools/libxl/libxl_types.idl
+++ b/tools/libxl/libxl_types.idl
@@ -565,10 +565,10 @@ libxl_domain_build_info = Struct("domain_build_info",[
                                        ("keymap",           string),
                                        ("sdl",              libxl_sdl_info),
                                        ("spice",            libxl_spice_info),
-                                       
+
                                        ("gfx_passthru",     libxl_defbool),
                                        ("gfx_passthru_kind", libxl_gfx_passthru_kind),
-                                       
+
                                        ("serial",           string),
                                        ("boot",             string),
                                        ("usb",              libxl_defbool),
@@ -903,6 +903,33 @@ libxl_device_vsnd = Struct("device_vsnd", [
     ("pcms", Array(libxl_vsnd_pcm, "num_vsnd_pcms"))
     ])
 
+libxl_sshm_cachepolicy = Enumeration("sshm_cachepolicy", [
+    (-1, "UNKNOWN"),
+    (0,  "ARM_NORMAL"),  # ARM policies should be < 32
+    (32,  "X86_NORMAL"), # X86 policies should be >= 32
+    ], init_val = "LIBXL_SSHM_CHCHE_POLICY_UNKNOWN")
+
+libxl_sshm_prot = Enumeration("sshm_prot", [
+    (-1, "UNKNOWN"),
+    (3,  "RW"),
+    ], init_val = "LIBXL_SSHM_PROT_UNKNOWN")
+
+libxl_sshm_role = Enumeration("sshm_role", [
+    (-1, "UNKNOWN"),
+    (0,  "MASTER"),
+    (1,  "SLAVE"),
+    ], init_val = "LIBXL_SSHM_ROLE_UNKNOWN")
+
+libxl_static_shm = Struct("static_shm", [
+    ("id", string),
+    ("offset", uint64, {'init_val': 'LIBXL_SSHM_RANGE_UNKNOWN'}),
+    ("begin", uint64, {'init_val': 'LIBXL_SSHM_RANGE_UNKNOWN'}),
+    ("size", uint64, {'init_val': 'LIBXL_SSHM_RANGE_UNKNOWN'}),
+    ("prot", libxl_sshm_prot, {'init_val': 'LIBXL_SSHM_PROT_UNKNOWN'}),
+    ("cache_policy", libxl_sshm_cachepolicy, {'init_val': 'LIBXL_SSHM_CACHEPOLICY_UNKNOWN'}),
+    ("role", libxl_sshm_role, {'init_val': 'LIBXL_SSHM_ROLE_UNKNOWN'}),
+])
+
 libxl_domain_config = Struct("domain_config", [
     ("c_info", libxl_domain_create_info),
     ("b_info", libxl_domain_build_info),
@@ -924,6 +951,7 @@ libxl_domain_config = Struct("domain_config", [
     ("channels", Array(libxl_device_channel, "num_channels")),
     ("usbctrls", Array(libxl_device_usbctrl, "num_usbctrls")),
     ("usbdevs", Array(libxl_device_usbdev, "num_usbdevs")),
+    ("sshms", Array(libxl_static_shm, "num_sshms")),
 
     ("on_poweroff", libxl_action_on_shutdown),
     ("on_reboot", libxl_action_on_shutdown),
-- 
1.9.1



* [PATCH v6 3/7] libxl: support mapping static shared memory areas during domain creation
  2018-07-31 18:22 [PATCH v6 0/7] Allow setting up shared memory areas between VMs from xl config files Stefano Stabellini
  2018-07-31 18:23 ` [PATCH v6 1/7] xen: xsm: flask: introduce XENMAPSPACE_gmfn_share for memory sharing Stefano Stabellini
  2018-07-31 18:23 ` [PATCH v6 2/7] libxl: introduce a new structure to represent static shared memory regions Stefano Stabellini
@ 2018-07-31 18:23 ` Stefano Stabellini
  2018-07-31 18:23 ` [PATCH v6 4/7] libxl: support unmapping static shared memory areas during domain destruction Stefano Stabellini
                   ` (3 subsequent siblings)
  6 siblings, 0 replies; 17+ messages in thread
From: Stefano Stabellini @ 2018-07-31 18:23 UTC (permalink / raw)
  To: xen-devel
  Cc: Stefano Stabellini, sstabellini, wei.liu2, blackskygg,
	ian.jackson, julien.grall

From: Zhongze Liu <blackskygg@gmail.com>

Author: Zhongze Liu <blackskygg@gmail.com>

Add libxl__sshm_add to map shared pages from one DomU to another. The mapping
process involves the following steps:

  * Set defaults and check for further errors in the static_shm configs:
    overlapping areas, invalid ranges, a duplicated master domain,
    unaligned addresses, a missing master domain, etc.
  * Use xc_domain_add_to_physmap_batch to map the shared pages to slaves
  * When some of the pages can't be successfully mapped, roll back any
    successfully mapped pages so that the system stays in a consistent state.
  * Write information about static shared memory areas into the appropriate
    xenstore paths and set the refcount of the shared region accordingly.

Temporarily mark this as unsupported on x86 because calling p2m_add_foreign on
two DomUs is currently not allowed on x86 (see the comments in
x86/mm/p2m.c:p2m_add_foreign for more details).

This is for the proposal "Allow setting up shared memory areas between VMs
from xl config file" (see [1]).

[1] https://lists.xen.org/archives/html/xen-devel/2017-08/msg03242.html
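
For reference, the xenstore layout that libxl__sshm_add builds up (as
reconstructed from the code below; the exact paths are documented in the
xenstore-paths.markdown additions of patch 6, with <id> the user-chosen
region id) looks roughly like this:

    /libxl/static_shm/<id>/
        master       = domid of the master domain
        begin, size  = master-side range, in hex
        prot         = protection flags (string form, e.g. rw)
        cache_policy = cache policy (string form)
        usercnt      = reference count of domains using the region
        slaves/<slave domid>/
            begin, size, offset, prot   (slave-side values, in hex)
    /local/domain/<domid>/static_shm/<id>/role = "master" or "slave"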

Signed-off-by: Zhongze Liu <blackskygg@gmail.com>
Signed-off-by: Stefano Stabellini <stefanos@xilinx.com>

Cc: Wei Liu <wei.liu2@citrix.com>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Julien Grall <julien.grall@arm.com>
Cc: xen-devel@lists.xen.org
---
Changes in v5:
- fix typos
- add comments
- add value checks (including alignment checks) in sshm_setdefaults
---
 tools/libxl/Makefile         |   3 +-
 tools/libxl/libxl_arch.h     |   6 +
 tools/libxl/libxl_arm.c      |  15 ++
 tools/libxl/libxl_create.c   |  27 +++
 tools/libxl/libxl_internal.h |  14 ++
 tools/libxl/libxl_sshm.c     | 405 +++++++++++++++++++++++++++++++++++++++++++
 tools/libxl/libxl_x86.c      |  19 ++
 7 files changed, 488 insertions(+), 1 deletion(-)
 create mode 100644 tools/libxl/libxl_sshm.c

diff --git a/tools/libxl/Makefile b/tools/libxl/Makefile
index 6da342e..53af186 100644
--- a/tools/libxl/Makefile
+++ b/tools/libxl/Makefile
@@ -140,7 +140,8 @@ LIBXL_OBJS = flexarray.o libxl.o libxl_create.o libxl_dm.o libxl_pci.o \
 			libxl_vtpm.o libxl_nic.o libxl_disk.o libxl_console.o \
 			libxl_cpupool.o libxl_mem.o libxl_sched.o libxl_tmem.o \
 			libxl_9pfs.o libxl_domain.o libxl_vdispl.o \
-			libxl_pvcalls.o libxl_vsnd.o libxl_vkb.o $(LIBXL_OBJS-y)
+			libxl_pvcalls.o libxl_vsnd.o libxl_vkb.o libxl_sshm.o \
+			$(LIBXL_OBJS-y)
 LIBXL_OBJS += libxl_genid.o
 LIBXL_OBJS += _libxl_types.o libxl_flask.o _libxl_types_internal.o
 
diff --git a/tools/libxl/libxl_arch.h b/tools/libxl/libxl_arch.h
index 74a5af3..6a07ccf 100644
--- a/tools/libxl/libxl_arch.h
+++ b/tools/libxl/libxl_arch.h
@@ -73,6 +73,12 @@ int libxl__arch_extra_memory(libxl__gc *gc,
                              const libxl_domain_build_info *info,
                              uint64_t *out);
 
+_hidden
+bool libxl__arch_domain_support_sshm(const libxl_domain_build_info *b_info);
+
+_hidden
+int libxl__arch_domain_sshm_cachepolicy_setdefault(libxl_static_shm *sshm);
+
 #if defined(__i386__) || defined(__x86_64__)
 
 #define LAPIC_BASE_ADDRESS  0xfee00000
diff --git a/tools/libxl/libxl_arm.c b/tools/libxl/libxl_arm.c
index 8af9f6f..5f62e78 100644
--- a/tools/libxl/libxl_arm.c
+++ b/tools/libxl/libxl_arm.c
@@ -1141,6 +1141,21 @@ void libxl__arch_domain_build_info_acpi_setdefault(
     libxl_defbool_setdefault(&b_info->acpi, false);
 }
 
+bool libxl__arch_domain_support_sshm(const libxl_domain_build_info *b_info)
+{
+    return true;
+}
+
+int libxl__arch_domain_sshm_cachepolicy_setdefault(libxl_static_shm *sshm)
+{
+    if (sshm->cache_policy == LIBXL_SSHM_CACHEPOLICY_UNKNOWN)
+        sshm->cache_policy = LIBXL_SSHM_CACHEPOLICY_ARM_NORMAL;
+    if (sshm->cache_policy >= LIBXL_SSHM_CACHEPOLICY_X86_NORMAL)
+        return ERROR_INVAL;
+
+    return 0;
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
index 1ccb3e3..2acc86a 100644
--- a/tools/libxl/libxl_create.c
+++ b/tools/libxl/libxl_create.c
@@ -532,6 +532,14 @@ int libxl__domain_build(libxl__gc *gc,
         ret = ERROR_INVAL;
         goto out;
     }
+
+    /* The p2m has been setup, we could map the static shared memory now. */
+    ret = libxl__sshm_add(gc, domid, d_config->sshms, d_config->num_sshms);
+    if (ret != 0) {
+        LOG(ERROR, "failed to map static shared memory");
+        goto out;
+    }
+
     ret = libxl__build_post(gc, domid, info, state, vments, localents);
 out:
     return ret;
@@ -959,6 +967,25 @@ static void initiate_domain_create(libxl__egc *egc,
         goto error_out;
     }
 
+    if (d_config->num_sshms != 0 &&
+        !libxl__arch_domain_support_sshm(&d_config->b_info)) {
+        LOGD(ERROR, domid, "static_shm is not supported by this domain type.");
+        ret = ERROR_INVAL;
+        goto error_out;
+    }
+
+    for (i = 0; i < d_config->num_sshms; ++i) {
+        ret = libxl__sshm_setdefault(gc, domid, &d_config->sshms[i]);
+        if (ret) {
+            LOGD(ERROR, domid, "Unable to set defaults for static shm");
+            goto error_out;
+        }
+    }
+
+    ret = libxl__sshm_check_overlap(gc, domid,
+                                    d_config->sshms, d_config->num_sshms);
+    if (ret) goto error_out;
+
     ret = libxl__domain_make(gc, d_config, &dcs->build_state, &domid);
     if (ret) {
         LOGD(ERROR, domid, "cannot make domain: %d", ret);
diff --git a/tools/libxl/libxl_internal.h b/tools/libxl/libxl_internal.h
index 843c625..bb6b840 100644
--- a/tools/libxl/libxl_internal.h
+++ b/tools/libxl/libxl_internal.h
@@ -4423,6 +4423,20 @@ static inline bool libxl__string_is_default(char **s)
 #endif
 
 /*
+ * Set up static shared ram pages for HVM domains to communicate
+ *
+ * This function should only be called after the memory map is constructed
+ * and before any further memory access.
+ */
+_hidden int libxl__sshm_add(libxl__gc *gc, uint32_t domid,
+                            libxl_static_shm *sshm, int len);
+
+_hidden int libxl__sshm_check_overlap(libxl__gc *gc, uint32_t domid,
+                                      libxl_static_shm *sshms, int len);
+_hidden int libxl__sshm_setdefault(libxl__gc *gc, uint32_t domid,
+                                   libxl_static_shm *sshm);
+
+/*
  * Local variables:
  * mode: C
  * c-basic-offset: 4
diff --git a/tools/libxl/libxl_sshm.c b/tools/libxl/libxl_sshm.c
new file mode 100644
index 0000000..f61b80c
--- /dev/null
+++ b/tools/libxl/libxl_sshm.c
@@ -0,0 +1,405 @@
+#include "libxl_osdeps.h"
+#include "libxl_internal.h"
+#include "libxl_arch.h"
+
+#define SSHM_PATH(id) GCSPRINTF("/libxl/static_shm/%s", id)
+
+#define SSHM_ERROR(domid, sshmid, f, ...)                               \
+    LOGD(ERROR, domid, "static_shm id = %s: " f, sshmid, ##__VA_ARGS__)
+
+
+/* Set default values for libxl_static_shm */
+int libxl__sshm_setdefault(libxl__gc *gc, uint32_t domid,
+                           libxl_static_shm *sshm)
+{
+    int rc;
+
+    if (sshm->role != LIBXL_SSHM_ROLE_SLAVE &&
+        sshm->role != LIBXL_SSHM_ROLE_MASTER)
+        return ERROR_INVAL;
+    if (sshm->begin & ~XC_PAGE_MASK ||
+        sshm->size & ~XC_PAGE_MASK ||
+        (sshm->offset != LIBXL_SSHM_RANGE_UNKNOWN &&
+        sshm->offset & ~XC_PAGE_MASK)) {
+        SSHM_ERROR(domid, sshm->id,
+                   "begin/size/offset is not a multiple of 4K");
+        return ERROR_INVAL;
+    }
+
+    /* role-specific checks */
+    if (sshm->role == LIBXL_SSHM_ROLE_SLAVE) {
+        if (sshm->offset == LIBXL_SSHM_RANGE_UNKNOWN)
+            sshm->offset = 0;
+        if (sshm->cache_policy != LIBXL_SSHM_CACHEPOLICY_UNKNOWN) {
+            SSHM_ERROR(domid, sshm->id,
+                       "cache_policy is only applicable to master domains");
+            rc = ERROR_INVAL;
+            goto out;
+        }
+    } else {
+        if (sshm->offset != LIBXL_SSHM_RANGE_UNKNOWN) {
+            SSHM_ERROR(domid, sshm->id,
+                       "offset is only applicable to slave domains");
+            rc = ERROR_INVAL;
+            goto out;
+        }
+
+        rc = libxl__arch_domain_sshm_cachepolicy_setdefault(sshm);
+        if (rc) {
+            SSHM_ERROR(domid, sshm->id,
+                       "cache policy not supported on this platform");
+            goto out;
+        }
+    }
+
+    rc = 0;
+out:
+    return rc;
+}
+
+/* Comparator for sorting sshm ranges by sshm->begin */
+static int sshm_range_cmp(const void *a, const void *b)
+{
+    libxl_static_shm *const *sshma = a, *const *sshmb = b;
+    return (*sshma)->begin > (*sshmb)->begin ? 1 : -1;
+}
+
+/* Check if the sshm slave configs in @sshm overlap */
+int libxl__sshm_check_overlap(libxl__gc *gc, uint32_t domid,
+                                     libxl_static_shm *sshms, int len)
+{
+
+    const libxl_static_shm **slave_sshms = NULL;
+    int num_slaves;
+    int i;
+
+    if (!len) return 0;
+
+    slave_sshms = libxl__calloc(gc, len, sizeof(slave_sshms[0]));
+    num_slaves = 0;
+    for (i = 0; i < len; ++i) {
+        if (sshms[i].role == LIBXL_SSHM_ROLE_SLAVE)
+            slave_sshms[num_slaves++] = sshms + i;
+    }
+    qsort(slave_sshms, num_slaves, sizeof(slave_sshms[0]), sshm_range_cmp);
+
+    for (i = 0; i < num_slaves - 1; ++i) {
+        if (slave_sshms[i+1]->begin <
+            slave_sshms[i]->begin + slave_sshms[i]->size) {
+            SSHM_ERROR(domid, slave_sshms[i+1]->id, "slave ranges overlap.");
+            return ERROR_INVAL;
+        }
+    }
+
+    return 0;
+}
+
+/*   libxl__sshm_do_map -- map pages into slave's physmap
+ *
+ *   This functions maps
+ *     master gfn: [@msshm->begin + @sshm->offset,
+ *                  @msshm->begin + @msshm->size + @sshm->offset)
+ *   into
+ *     slave gfn: [@sshm->begin, @sshm->begin + @sshm->size)
+ *
+ *   The gfns of the pages that are successfully mapped will be stored
+ *   in @mapped, and the number of the gfns will be stored in @nmapped.
+ *
+ *   The caller has to guarantee that all the values are page-aligned.
+ */
+static int libxl__sshm_do_map(libxl__gc *gc, uint32_t mid, uint32_t sid,
+                              libxl_static_shm *sshm, libxl_static_shm *msshm,
+                              xen_pfn_t *mapped, unsigned int *nmapped)
+{
+    int rc;
+    int i;
+    xen_pfn_t num_mpages, num_spages, num_success, offset;
+    int *errs;
+    xen_ulong_t *idxs;
+    xen_pfn_t *gpfns;
+
+    num_mpages = (msshm->size) >> XC_PAGE_SHIFT;
+    num_spages = (sshm->size) >> XC_PAGE_SHIFT;
+    offset = sshm->offset >> XC_PAGE_SHIFT;
+
+    /* Check range. Test offset < mpages first to avoid overflow */
+    if ((offset >= num_mpages) || (num_mpages - offset < num_spages)) {
+        SSHM_ERROR(sid, sshm->id, "exceeds master's address space.");
+        rc = ERROR_INVAL;
+        goto out;
+    }
+
+    /* fill out the gfn's and do the mapping */
+    errs = libxl__calloc(gc, num_spages, sizeof(int));
+    idxs = libxl__calloc(gc, num_spages, sizeof(xen_ulong_t));
+    gpfns = libxl__calloc(gc, num_spages, sizeof(xen_pfn_t));
+    for (i = 0; i < num_spages; i++) {
+        idxs[i] = (msshm->begin >> XC_PAGE_SHIFT) + offset + i;
+        gpfns[i]= (sshm->begin >> XC_PAGE_SHIFT) + i;
+    }
+    rc = xc_domain_add_to_physmap_batch(CTX->xch,
+                                        sid, mid,
+                                        XENMAPSPACE_gmfn_share,
+                                        num_spages,
+                                        idxs, gpfns, errs);
+
+    num_success = 0;
+    for (i = 0; i < num_spages; i++) {
+        if (errs[i]) {
+            SSHM_ERROR(sid, sshm->id,
+                       "can't map at address 0x%"PRIx64".",
+                       gpfns[i] << XC_PAGE_SHIFT);
+            rc = ERROR_FAIL;
+        } else {
+            mapped[num_success++] = gpfns[i];
+        }
+    }
+    *nmapped = num_success;
+    if (rc) goto out;
+
+    rc = 0;
+out:
+    return rc;
+}
+
+/* Xenstore ops are protected by a transaction */
+static int libxl__sshm_incref(libxl__gc *gc, xs_transaction_t xt,
+                              const char *sshm_path)
+{
+    int rc, count;
+    const char *count_path, *count_string;
+
+    count_path = GCSPRINTF("%s/usercnt", sshm_path);
+    rc = libxl__xs_read_checked(gc, xt, count_path, &count_string);
+    if (rc) goto out;
+    count = atoi(count_string);
+
+    count_string = GCSPRINTF("%d", count+1);
+    rc = libxl__xs_write_checked(gc, xt, count_path, count_string);
+    if (rc) goto out;
+
+    rc = 0;
+out:
+    return rc;
+}
+
+static int libxl__sshm_add_slave(libxl__gc *gc, uint32_t domid,
+                                 libxl_static_shm *sshm)
+{
+    int rc, i;
+    const char *sshm_path, *slave_path;
+    const char *dom_path, *dom_sshm_path, *dom_role_path;
+    const char *xs_value;
+    char *ents[9];
+    libxl_static_shm master_sshm;
+    uint32_t master_domid;
+    xen_pfn_t *mapped;
+    unsigned int nmapped = 0;
+    xs_transaction_t xt = XBT_NULL;
+    bool isretry;
+
+    sshm_path = SSHM_PATH(sshm->id);
+    slave_path = GCSPRINTF("%s/slaves/%"PRIu32, sshm_path, domid);
+    dom_path = libxl__xs_get_dompath(gc, domid);
+    /* the domain should be in xenstore by now */
+    assert(dom_path);
+    dom_sshm_path = GCSPRINTF("%s/static_shm/%s", dom_path, sshm->id);
+    dom_role_path = GCSPRINTF("%s/role", dom_sshm_path);
+
+    /* prepare the slave xenstore entries */
+    ents[0] = "begin";
+    ents[1] = GCSPRINTF("0x%"PRIx64, sshm->begin);
+    ents[2] = "size";
+    ents[3] = GCSPRINTF("0x%"PRIx64, sshm->size);
+    ents[4] = "offset";
+    ents[5] = GCSPRINTF("0x%"PRIx64, sshm->offset);
+    ents[6] = "prot";
+    ents[7] = libxl__strdup(gc, libxl_sshm_prot_to_string(sshm->prot));
+    ents[8] = NULL;
+
+    mapped = libxl__calloc(gc, sshm->size >> XC_PAGE_SHIFT, sizeof(xen_pfn_t));
+
+    isretry = false;
+    for (;;) {
+        rc = libxl__xs_transaction_start(gc, &xt);
+        if (rc) goto out;
+
+        if (!libxl__xs_read(gc, xt, sshm_path)) {
+            SSHM_ERROR(domid, sshm->id, "no master found.");
+            rc = ERROR_FAIL;
+            goto out;
+        }
+
+        /* every ID can appear in each domain at most once */
+        if (libxl__xs_read(gc, xt, dom_sshm_path)) {
+            SSHM_ERROR(domid, sshm->id,
+                       "domain tried to map the same ID twice.");
+            rc = ERROR_FAIL;
+            goto out;
+        }
+
+        /* look at the master info and see if we could do the mapping */
+        rc = libxl__xs_read_checked(gc, xt,
+                                    GCSPRINTF("%s/prot", sshm_path),
+                                    &xs_value);
+        if (rc) goto out;
+        libxl_sshm_prot_from_string(xs_value, &master_sshm.prot);
+
+        rc = libxl__xs_read_checked(gc, xt,
+                                    GCSPRINTF("%s/begin", sshm_path),
+                                    &xs_value);
+        if (rc) goto out;
+        master_sshm.begin = strtoull(xs_value, NULL, 16);
+
+        rc = libxl__xs_read_checked(gc, xt,
+                                    GCSPRINTF("%s/size", sshm_path),
+                                    &xs_value);
+        if (rc) goto out;
+        master_sshm.size = strtoull(xs_value, NULL, 16);
+
+        rc = libxl__xs_read_checked(gc, xt,
+                                    GCSPRINTF("%s/master", sshm_path),
+                                    &xs_value);
+        if (rc) goto out;
+        master_domid = strtoull(xs_value, NULL, 16);
+
+        if (sshm->prot == LIBXL_SSHM_PROT_UNKNOWN)
+            sshm->prot = master_sshm.prot;
+
+        /* check if the slave is asking too much permission */
+        if (master_sshm.prot < sshm->prot) {
+            SSHM_ERROR(domid, sshm->id, "slave is asking too much permission.");
+            rc = ERROR_INVAL;
+            goto out;
+        }
+
+        /* all checks passed, do the job */
+        if (!isretry) {
+            rc = libxl__sshm_do_map(gc, master_domid, domid,
+                                    sshm, &master_sshm,
+                                    mapped, &nmapped);
+            if (rc) goto out;
+        }
+
+        /* write the result to xenstore and commit */
+        rc = libxl__xs_write_checked(gc, xt, dom_role_path, "slave");
+        if (rc) goto out;
+        rc = libxl__xs_writev(gc, xt, slave_path, ents);
+        if (rc) goto out;
+        rc = libxl__sshm_incref(gc, xt, sshm_path);
+        if (rc) goto out;
+
+        rc = libxl__xs_transaction_commit(gc, &xt);
+        if (!rc) break;
+        if (rc < 0) goto out;
+        isretry = true;
+    }
+
+    rc = 0;
+out:
+    if (rc) {
+        /* roll back successfully mapped pages */
+        SSHM_ERROR(domid, sshm->id, "failed to map some pages, cancelling.");
+        for (i = 0; i < nmapped; i++) {
+            xc_domain_remove_from_physmap(CTX->xch, domid, mapped[i]);
+        }
+    }
+
+    libxl__xs_transaction_abort(gc, &xt);
+
+    return rc;
+}
+
+static int libxl__sshm_add_master(libxl__gc *gc, uint32_t domid,
+                                  libxl_static_shm *sshm)
+{
+    int rc;
+    const char *sshm_path, *dom_path, *dom_role_path;
+    char *ents[13];
+    struct xs_permissions noperm;
+    xs_transaction_t xt = XBT_NULL;
+
+    sshm_path = SSHM_PATH(sshm->id);
+    dom_path = libxl__xs_get_dompath(gc, domid);
+    /* the domain should be in xenstore by now */
+    assert(dom_path);
+    dom_role_path = GCSPRINTF("%s/static_shm/%s/role", dom_path, sshm->id);
+
+    /* prepare the xenstore entries */
+    ents[0] = "master";
+    ents[1] = GCSPRINTF("%"PRIu32, domid);
+    ents[2] = "begin";
+    ents[3] = GCSPRINTF("0x%"PRIx64, sshm->begin);
+    ents[4] = "size";
+    ents[5] = GCSPRINTF("0x%"PRIx64, sshm->size);
+    ents[6] = "prot";
+    ents[7] = libxl__strdup(gc, libxl_sshm_prot_to_string(sshm->prot));
+    ents[8] = "cache_policy";
+    ents[9] = libxl__strdup(gc,
+                            libxl_sshm_cachepolicy_to_string(sshm->cache_policy));
+    ents[10] = "usercnt";
+    ents[11] = "1";
+    ents[12] = NULL;
+
+    /* could only be accessed by Dom0 */
+    noperm.id = 0;
+    noperm.perms = XS_PERM_NONE;
+
+    for (;;) {
+        rc = libxl__xs_transaction_start(gc, &xt);
+        if (rc) goto out;
+
+        if (!libxl__xs_read(gc, xt, sshm_path)) {
+            /* every ID can appear in each domain at most once */
+            if (libxl__xs_read(gc, xt, dom_role_path)) {
+                SSHM_ERROR(domid, sshm->id,
+                           "domain tried to map the same ID twice.");
+                rc = ERROR_FAIL;
+                goto out;
+            }
+            rc = libxl__xs_write_checked(gc, xt, dom_role_path, "master");
+            if (rc) goto out;
+
+            libxl__xs_mknod(gc, xt, sshm_path, &noperm, 1);
+            libxl__xs_writev(gc, xt, sshm_path, ents);
+        } else {
+            SSHM_ERROR(domid, sshm->id, "can only have one master.");
+            rc = ERROR_FAIL;
+            goto out;
+        }
+
+        rc = libxl__xs_transaction_commit(gc, &xt);
+        if (!rc) break;
+        if (rc < 0) goto out;
+    }
+
+    rc = 0;
+out:
+    libxl__xs_transaction_abort(gc, &xt);
+    return rc;
+}
+
+int libxl__sshm_add(libxl__gc *gc,  uint32_t domid,
+                    libxl_static_shm *sshms, int len)
+{
+    int rc, i;
+
+    for (i = 0; i < len; ++i) {
+        if (sshms[i].role == LIBXL_SSHM_ROLE_SLAVE) {
+            rc = libxl__sshm_add_slave(gc, domid, sshms+i);
+        } else {
+            rc = libxl__sshm_add_master(gc, domid, sshms+i);
+        }
+        if (rc)  return rc;
+    }
+
+    return 0;
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/tools/libxl/libxl_x86.c b/tools/libxl/libxl_x86.c
index ab88562..dd3de96 100644
--- a/tools/libxl/libxl_x86.c
+++ b/tools/libxl/libxl_x86.c
@@ -619,6 +619,25 @@ void libxl__arch_domain_build_info_acpi_setdefault(
     libxl_defbool_setdefault(&b_info->acpi, true);
 }
 
+bool libxl__arch_domain_support_sshm(const libxl_domain_build_info *b_info)
+{
+    /* FIXME: Mark this as unsupported for calling p2m_add_foreign on two
+     * DomU's is currently not allowd on x86, see the comments in
+     * x86/mm/p2m.c: p2m_add_foreign.
+     */
+     return false;
+}
+
+int libxl__arch_domain_sshm_cachepolicy_setdefault(libxl_static_shm *sshm)
+{
+    if (sshm->cache_policy == LIBXL_SSHM_CACHEPOLICY_UNKNOWN)
+        sshm->cache_policy = LIBXL_SSHM_CACHEPOLICY_X86_NORMAL;
+    if (sshm->cache_policy < LIBXL_SSHM_CACHEPOLICY_X86_NORMAL)
+        return ERROR_INVAL;
+
+    return 0;
+}
+
 /*
  * Local variables:
  * mode: C
-- 
1.9.1



* [PATCH v6 4/7] libxl: support unmapping static shared memory areas during domain destruction
  2018-07-31 18:22 [PATCH v6 0/7] Allow setting up shared memory areas between VMs from xl config files Stefano Stabellini
                   ` (2 preceding siblings ...)
  2018-07-31 18:23 ` [PATCH v6 3/7] libxl: support mapping static shared memory areas during domain creation Stefano Stabellini
@ 2018-07-31 18:23 ` Stefano Stabellini
  2018-07-31 18:23 ` [PATCH v6 5/7] libxl:xl: add parsing code to parse "libxl_static_sshm" from xl config files Stefano Stabellini
                   ` (2 subsequent siblings)
  6 siblings, 0 replies; 17+ messages in thread
From: Stefano Stabellini @ 2018-07-31 18:23 UTC (permalink / raw)
  To: xen-devel
  Cc: Stefano Stabellini, sstabellini, wei.liu2, blackskygg,
	ian.jackson, julien.grall

From: Zhongze Liu <blackskygg@gmail.com>

Author: Zhongze Liu <blackskygg@gmail.com>

Add libxl__sshm_del to unmap static shared memory areas mapped by
libxl__sshm_add during domain creation. The unmapping process is:

* For a master: decrease the refcount of the sshm region; if the refcount
  reaches 0, clean up the whole sshm path.

* For a slave:
  1) Unmap the shared pages and clean up the related xenstore entries. If the
     system works normally, all the shared pages will be unmapped, so there
     won't be page leaks. In case of errors, the unmapping process will go
     on and unmap all the other pages that can be unmapped, so those
     pages won't be leaked either.
  2) Decrease the refcount of the sshm region; if the refcount reaches
     0, clean up the whole sshm path.

This is for the proposal "Allow setting up shared memory areas between VMs
from xl config file" (see [1]).

[1] https://lists.xen.org/archives/html/xen-devel/2017-08/msg03242.html

Signed-off-by: Zhongze Liu <blackskygg@gmail.com>
Signed-off-by: Stefano Stabellini <stefanos@xilinx.com>

Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Wei Liu <wei.liu2@citrix.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Julien Grall <julien.grall@arm.com>
Cc: xen-devel@lists.xen.org

---
Changes in v5:
- fix typos
- add comments
- cannot move unmap before xenstore transaction because it needs to
  retrieve begin/size values from xenstore
---
 tools/libxl/libxl_domain.c   |   5 ++
 tools/libxl/libxl_internal.h |   2 +
 tools/libxl/libxl_sshm.c     | 107 +++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 114 insertions(+)

diff --git a/tools/libxl/libxl_domain.c b/tools/libxl/libxl_domain.c
index 533bcdf..053bbe2 100644
--- a/tools/libxl/libxl_domain.c
+++ b/tools/libxl/libxl_domain.c
@@ -1060,6 +1060,11 @@ void libxl__destroy_domid(libxl__egc *egc, libxl__destroy_domid_state *dis)
         goto out;
     }
 
+    rc = libxl__sshm_del(gc, domid);
+    if (rc) {
+        LOGD(ERROR, domid, "Deleting static shm failed.");
+    }
+
     if (libxl__device_pci_destroy_all(gc, domid) < 0)
         LOGD(ERROR, domid, "Pci shutdown failed");
     rc = xc_domain_pause(ctx->xch, domid);
diff --git a/tools/libxl/libxl_internal.h b/tools/libxl/libxl_internal.h
index bb6b840..937b743 100644
--- a/tools/libxl/libxl_internal.h
+++ b/tools/libxl/libxl_internal.h
@@ -4431,6 +4431,8 @@ static inline bool libxl__string_is_default(char **s)
 _hidden int libxl__sshm_add(libxl__gc *gc, uint32_t domid,
                             libxl_static_shm *sshm, int len);
 
+_hidden int libxl__sshm_del(libxl__gc *gc, uint32_t domid);
+
 _hidden int libxl__sshm_check_overlap(libxl__gc *gc, uint32_t domid,
                                       libxl_static_shm *sshms, int len);
 _hidden int libxl__sshm_setdefault(libxl__gc *gc, uint32_t domid,
diff --git a/tools/libxl/libxl_sshm.c b/tools/libxl/libxl_sshm.c
index f61b80c..9672056 100644
--- a/tools/libxl/libxl_sshm.c
+++ b/tools/libxl/libxl_sshm.c
@@ -94,6 +94,113 @@ int libxl__sshm_check_overlap(libxl__gc *gc, uint32_t domid,
     return 0;
 }
 
+/*
+ * Decrease the refcount of an sshm. When refcount reaches 0,
+ * clean up the whole sshm path.
+ * Xenstore operations are done within the same transaction.
+ */
+static void libxl__sshm_decref(libxl__gc *gc, xs_transaction_t xt,
+                               const char *sshm_path)
+{
+    int count;
+    const char *count_path, *count_string;
+
+    count_path = GCSPRINTF("%s/usercnt", sshm_path);
+    if (libxl__xs_read_checked(gc, xt, count_path, &count_string))
+        return;
+    count = atoi(count_string);
+
+    if (--count == 0) {
+        libxl__xs_path_cleanup(gc, xt, sshm_path);
+        return;
+    }
+
+    count_string = GCSPRINTF("%d", count);
+    libxl__xs_write_checked(gc, xt, count_path, count_string);
+
+    return;
+}
+
+static void libxl__sshm_do_unmap(libxl__gc *gc, uint32_t domid, const char *id,
+                                 uint64_t begin, uint64_t size)
+{
+    uint64_t end;
+    begin >>= XC_PAGE_SHIFT;
+    size >>= XC_PAGE_SHIFT;
+    end = begin + size;
+    for (; begin < end; ++begin) {
+        if (xc_domain_remove_from_physmap(CTX->xch, domid, begin)) {
+            SSHM_ERROR(domid, id,
+                       "unable to unmap shared page at 0x%"PRIx64".",
+                       begin);
+        }
+    }
+}
+
+static void libxl__sshm_del_slave(libxl__gc *gc, xs_transaction_t xt,
+                                  uint32_t domid, const char *id, bool isretry)
+{
+    const char *slave_path, *begin_str, *size_str;
+    uint64_t begin, size;
+
+    slave_path = GCSPRINTF("%s/slaves/%"PRIu32, SSHM_PATH(id), domid);
+
+    begin_str = libxl__xs_read(gc, xt, GCSPRINTF("%s/begin", slave_path));
+    size_str = libxl__xs_read(gc, xt, GCSPRINTF("%s/size", slave_path));
+    begin = strtoull(begin_str, NULL, 16);
+    size = strtoull(size_str, NULL, 16);
+
+    libxl__sshm_do_unmap(gc, domid, id, begin, size);
+    libxl__xs_path_cleanup(gc, xt, slave_path);
+}
+
+/* Delete static_shm entries in the xenstore. */
+int libxl__sshm_del(libxl__gc *gc,  uint32_t domid)
+{
+    int rc, i;
+    bool isretry;
+    xs_transaction_t xt = XBT_NULL;
+    const char *dom_path, *dom_sshm_path, *role;
+    char **sshm_ents;
+    unsigned int sshm_num;
+
+    dom_path = libxl__xs_get_dompath(gc, domid);
+    dom_sshm_path = GCSPRINTF("%s/static_shm", dom_path);
+
+    isretry = false;
+    for (;;) {
+        rc = libxl__xs_transaction_start(gc, &xt);
+        if (rc) goto out;
+
+        if (libxl__xs_read(gc, xt, dom_sshm_path)) {
+            sshm_ents = libxl__xs_directory(gc, xt, dom_sshm_path, &sshm_num);
+            if (!sshm_ents) continue;
+
+            for (i = 0; i < sshm_num; ++i) {
+                role = libxl__xs_read(gc, xt,
+                                      GCSPRINTF("%s/%s/role",
+                                                dom_sshm_path,
+                                                sshm_ents[i]));
+                assert(role);
+                if (!strncmp(role, "slave", 5))
+                    libxl__sshm_del_slave(gc, xt, domid, sshm_ents[i], isretry);
+
+                libxl__sshm_decref(gc, xt, SSHM_PATH(sshm_ents[i]));
+            }
+        }
+
+        rc = libxl__xs_transaction_commit(gc, &xt);
+        if (!rc) break;
+        if (rc < 0) goto out;
+        isretry = true;
+    }
+
+    rc = 0;
+out:
+    libxl__xs_transaction_abort(gc, &xt);
+    return rc;
+}
+
 /*   libxl__sshm_do_map -- map pages into slave's physmap
  *
  *   This functions maps
-- 
1.9.1



* [PATCH v6 5/7] libxl:xl: add parsing code to parse "libxl_static_sshm" from xl config files
  2018-07-31 18:22 [PATCH v6 0/7] Allow setting up shared memory areas between VMs from xl config files Stefano Stabellini
                   ` (3 preceding siblings ...)
  2018-07-31 18:23 ` [PATCH v6 4/7] libxl: support unmapping static shared memory areas during domain destruction Stefano Stabellini
@ 2018-07-31 18:23 ` Stefano Stabellini
  2018-07-31 18:23 ` [PATCH v6 6/7] docs: documentation about static shared memory regions Stefano Stabellini
  2018-07-31 18:23 ` [PATCH v6 7/7] xen/arm: export shared memory regions as reserved-memory on device tree Stefano Stabellini
  6 siblings, 0 replies; 17+ messages in thread
From: Stefano Stabellini @ 2018-07-31 18:23 UTC (permalink / raw)
  To: xen-devel
  Cc: Stefano Stabellini, sstabellini, wei.liu2, blackskygg,
	ian.jackson, julien.grall

From: Zhongze Liu <blackskygg@gmail.com>

Author: Zhongze Liu <blackskygg@gmail.com>

Add the parsing utilities for the newly introduced libxl_static_shm struct
to the libxl/libxlu_* family, and add related parsing code in xl to
parse the struct from xl config files. This is for the proposal "Allow
setting up shared memory areas between VMs from xl config file" (see [1]).

[1] https://lists.xen.org/archives/html/xen-devel/2017-08/msg03242.html
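
A minimal usage sketch of the new libxlu entry point (illustrative only;
cfg is assumed to be an already-opened XLU_Config, and the spec string is
a placeholder):

    libxl_static_shm sshm;

    libxl_static_shm_init(&sshm);
    if (xlu_sshm_parse(cfg, "id=shm0, begin=0x40000000, size=0x1000, role=master",
                       &sshm)) {
        /* the parser has already reported the error through cfg->report */
        exit(EXIT_FAILURE);
    }

xl itself drives this from the "static_shm" list in the config file, as the
xl_parse.c hunk below shows.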

Signed-off-by: Zhongze Liu <blackskygg@gmail.com>
Signed-off-by: Stefano Stabellini <stefanos@xilinx.com>

Cc: Wei Liu <wei.liu2@citrix.com>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Julien Grall <julien.grall@arm.com>
Cc: xen-devel@lists.xen.org
---
Changes in v5:
- remove alignment checks, they were moved to libxl
---
 tools/libxl/Makefile      |   2 +-
 tools/libxl/libxlu_sshm.c | 207 ++++++++++++++++++++++++++++++++++++++++++++++
 tools/libxl/libxlutil.h   |   6 ++
 tools/xl/xl_parse.c       |  25 +++++-
 4 files changed, 238 insertions(+), 2 deletions(-)
 create mode 100644 tools/libxl/libxlu_sshm.c

diff --git a/tools/libxl/Makefile b/tools/libxl/Makefile
index 53af186..f3de189 100644
--- a/tools/libxl/Makefile
+++ b/tools/libxl/Makefile
@@ -177,7 +177,7 @@ AUTOINCS= libxlu_cfg_y.h libxlu_cfg_l.h _libxl_list.h _paths.h \
 AUTOSRCS= libxlu_cfg_y.c libxlu_cfg_l.c
 AUTOSRCS += _libxl_save_msgs_callout.c _libxl_save_msgs_helper.c
 LIBXLU_OBJS = libxlu_cfg_y.o libxlu_cfg_l.o libxlu_cfg.o \
-	libxlu_disk_l.o libxlu_disk.o libxlu_vif.o libxlu_pci.o
+	libxlu_disk_l.o libxlu_disk.o libxlu_vif.o libxlu_pci.o libxlu_sshm.o
 $(LIBXLU_OBJS): CFLAGS += $(CFLAGS_libxenctrl) # For xentoollog.h
 
 $(TEST_PROG_OBJS) _libxl.api-for-check: CFLAGS += $(CFLAGS_libxentoollog) $(CFLAGS_libxentoolcore)
diff --git a/tools/libxl/libxlu_sshm.c b/tools/libxl/libxlu_sshm.c
new file mode 100644
index 0000000..cc709a7
--- /dev/null
+++ b/tools/libxl/libxlu_sshm.c
@@ -0,0 +1,207 @@
+#include "libxl_osdeps.h" /* must come before any other headers */
+#include "libxlu_internal.h"
+#include "xenctrl.h"
+
+#include <ctype.h>
+
+#define PARAM_RE(EXPR) "^\\s*" EXPR "\\s*(,|$)"
+#define WORD_RE         "([_a-zA-Z0-9]+)"
+#define EQU_RE         PARAM_RE(WORD_RE "\\s*=\\s*" WORD_RE)
+
+#define RET_INVAL(msg, curr_str)  do {              \
+        xlu__sshm_err(cfg, msg, curr_str);          \
+        rc = EINVAL;                                \
+        goto out;                                   \
+    } while(0)
+
+/* set a member in libxl_static_shm and report an error if it's respecified,
+ * @curr_str indicates the head of the remaining string. */
+#define SET_VAL(var, name, type, value, curr_str)  do {                 \
+        if ((var) != LIBXL_SSHM_##type##_UNKNOWN && (var) != value) {   \
+            RET_INVAL("\"" name "\" respecified", curr_str);            \
+        }                                                               \
+        (var) = value;                                                  \
+    } while(0)
+
+
+static void xlu__sshm_err(XLU_Config *cfg, const char *msg,
+                          const char *curr_str) {
+    fprintf(cfg->report,
+            "%s: config parsing error in shared_memory: %s at '%s'\n",
+            cfg->config_source, msg, curr_str);
+}
+
+static int parse_prot(XLU_Config *cfg, char *str, libxl_sshm_prot *prot)
+{
+    int rc;
+    libxl_sshm_prot new_prot;
+
+    if (!strcmp(str, "rw")) {
+        new_prot = LIBXL_SSHM_PROT_RW;
+    } else {
+        RET_INVAL("invalid permission flags", str);
+    }
+
+    SET_VAL(*prot, "permission flags", PROT, new_prot, str);
+
+    rc = 0;
+
+ out:
+    return rc;
+}
+
+static int parse_cachepolicy(XLU_Config *cfg, char *str,
+                             libxl_sshm_cachepolicy *policy)
+{
+    int rc;
+    libxl_sshm_cachepolicy new_policy;
+
+    if (!strcmp(str, "ARM_normal")) {
+        new_policy = LIBXL_SSHM_CACHEPOLICY_ARM_NORMAL;
+    } else if (!strcmp(str, "x86_normal")) {
+        new_policy = LIBXL_SSHM_CACHEPOLICY_X86_NORMAL;
+    } else {
+        RET_INVAL("invalid cache policy", str);
+    }
+
+    SET_VAL(*policy, "cache policy", CACHEPOLICY, new_policy, str);
+    rc = 0;
+
+ out:
+    return rc;
+}
+
+/* handle key = value pairs */
+static int handle_equ(XLU_Config *cfg, char *key, char *val,
+                      libxl_static_shm *sshm)
+{
+    int rc;
+
+    if (!strcmp(key, "id")) {
+        if (strlen(val) > LIBXL_SSHM_ID_MAXLEN) { RET_INVAL("id too long", val); }
+        if (sshm->id && !strcmp(sshm->id, val)) {
+            RET_INVAL("id respecified", val);
+        }
+
+        sshm->id = strdup(val);
+        if (!sshm->id) {
+            fprintf(stderr, "sshm parser out of memory\n");
+            rc = ENOMEM;
+            goto out;
+        }
+    } else if (!strcmp(key, "role")) {
+        libxl_sshm_role new_role;
+
+        if (!strcmp("master", val)) {
+            new_role = LIBXL_SSHM_ROLE_MASTER;
+        } else if (!strcmp("slave", val)) {
+            new_role = LIBXL_SSHM_ROLE_SLAVE;
+        } else {
+            RET_INVAL("invalid role", val);
+        }
+
+        SET_VAL(sshm->role, "role", ROLE, new_role, val);
+    } else if (!strcmp(key, "begin") ||
+               !strcmp(key, "size") ||
+               !strcmp(key, "offset")) {
+        char *endptr;
+        int base = 10;
+        uint64_t new_addr;
+
+        /* Could be in hex form. Note that we don't need to check the length here,
+         * for val[] is NULL-terminated */
+        if (val[0] == '0' && val[1] == 'x') { base = 16; }
+        new_addr = strtoull(val, &endptr, base);
+        if (errno == ERANGE || *endptr)
+            RET_INVAL("invalid begin/size/offset", val);
+
+        if (key[0] == 'b') {
+            SET_VAL(sshm->begin, "beginning address", RANGE, new_addr, val);
+        } else if(key[0] == 's'){
+            SET_VAL(sshm->size, "size", RANGE, new_addr, val);
+        } else {
+            SET_VAL(sshm->offset, "offset", RANGE, new_addr, val);
+        }
+    } else if (!strcmp(key, "prot")) {
+        rc = parse_prot(cfg, val, &sshm->prot);
+        if (rc) { goto out; }
+    } else if (!strcmp(key, "cache_policy")) {
+        rc = parse_cachepolicy(cfg, val, &sshm->cache_policy);
+        if (rc) { goto out; }
+    } else {
+        RET_INVAL("invalid option", key);
+    }
+
+    rc = 0;
+
+ out:
+    return rc;
+}
+
+int xlu_sshm_parse(XLU_Config *cfg, const char *spec,
+                   libxl_static_shm *sshm)
+{
+    int rc;
+    regex_t equ_rec;
+    char *buf2 = NULL, *ptr = NULL;
+    regmatch_t pmatch[3];
+
+    rc = regcomp(&equ_rec, EQU_RE, REG_EXTENDED);
+    if (rc) {
+        fprintf(stderr, "sshm parser failed to initialize\n");
+        goto out;
+    }
+
+    buf2 = ptr = strdup(spec);
+    if (!buf2) {
+        fprintf(stderr, "sshm parser out of memory\n");
+        rc = ENOMEM;
+        goto out;
+    }
+
+    /* main parsing loop */
+    while (true) {
+        if (!*ptr) { break; }
+        if (regexec(&equ_rec, ptr, 3, pmatch, 0))
+            RET_INVAL("unrecognized token", ptr);
+
+        ptr[pmatch[1].rm_eo] = '\0';
+        ptr[pmatch[2].rm_eo] = '\0';
+        rc = handle_equ(cfg, ptr + pmatch[1].rm_so,
+                        ptr + pmatch[2].rm_so, sshm);
+        if (rc) { goto out; }
+
+        ptr += pmatch[0].rm_eo;
+    }
+
+    if (*ptr) { RET_INVAL("invalid syntax", ptr); }
+
+    /* do some early checks */
+    if (!sshm->id) {
+        RET_INVAL("id not specified", spec);
+    }
+    if (sshm->begin == LIBXL_SSHM_RANGE_UNKNOWN) {
+        RET_INVAL("begin address not specified", spec);
+    }
+    if (sshm->size == LIBXL_SSHM_RANGE_UNKNOWN) {
+        RET_INVAL("size not specified", spec);
+    }
+    if (sshm->prot == LIBXL_SSHM_PROT_UNKNOWN) {
+        sshm->prot = LIBXL_SSHM_PROT_RW;
+    }
+
+    rc = 0;
+
+ out:
+    if (buf2) { free(buf2); }
+    regfree(&equ_rec);
+    return rc;
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/tools/libxl/libxlutil.h b/tools/libxl/libxlutil.h
index e81b644..ee39cb5 100644
--- a/tools/libxl/libxlutil.h
+++ b/tools/libxl/libxlutil.h
@@ -118,6 +118,12 @@ int xlu_rdm_parse(XLU_Config *cfg, libxl_rdm_reserve *rdm, const char *str);
 int xlu_vif_parse_rate(XLU_Config *cfg, const char *rate,
                        libxl_device_nic *nic);
 
+/*
+ * static shared memory specification parsing
+ */
+int xlu_sshm_parse(XLU_Config *cfg, const char *spec,
+                   libxl_static_shm *sshm);
+
 #endif /* LIBXLUTIL_H */
 
 /*
diff --git a/tools/xl/xl_parse.c b/tools/xl/xl_parse.c
index 971ec1b..e85f2cc 100644
--- a/tools/xl/xl_parse.c
+++ b/tools/xl/xl_parse.c
@@ -1209,7 +1209,8 @@ void parse_config_data(const char *config_source,
     long l, vcpus = 0;
     XLU_Config *config;
     XLU_ConfigList *cpus, *vbds, *nics, *pcis, *cvfbs, *cpuids, *vtpms,
-                   *usbctrls, *usbdevs, *p9devs, *vdispls, *pvcallsifs_devs;
+                   *usbctrls, *usbdevs, *p9devs, *vdispls, *pvcallsifs_devs,
+                   *sshms;
     XLU_ConfigList *channels, *ioports, *irqs, *iomem, *viridian, *dtdevs,
                    *mca_caps;
     int num_ioports, num_irqs, num_iomem, num_cpus, num_viridian, num_mca_caps;
@@ -1915,6 +1916,28 @@ void parse_config_data(const char *config_source,
         }
     }
 
+    if (!xlu_cfg_get_list (config, "static_shm", &sshms, 0, 0)) {
+        d_config->num_sshms = 0;
+        d_config->sshms = NULL;
+        while ((buf = xlu_cfg_get_listitem (sshms, d_config->num_sshms)) != NULL) {
+            libxl_static_shm *sshm;
+            char *buf2 = strdup(buf);
+            int ret;
+
+            sshm = ARRAY_EXTEND_INIT_NODEVID(d_config->sshms,
+                                             d_config->num_sshms,
+                                             libxl_static_shm_init);
+            ret = xlu_sshm_parse(config, buf2, sshm);
+            if (ret) {
+                fprintf(stderr,
+                        "xl: Invalid argument for static_shm: %s", buf2);
+                exit(EXIT_FAILURE);
+            }
+
+            free(buf2);
+        }
+    }
+
     if (!xlu_cfg_get_list(config, "p9", &p9devs, 0, 0)) {
         libxl_device_p9 *p9;
         char *security_model = NULL;
-- 
1.9.1
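
(As a rough illustration of how the new parser is meant to be driven, a
minimal standalone caller might look like the sketch below. It assumes the
libxl_static_shm IDL type and its generated libxl_static_shm_init/_dispose
helpers from patch 2 of this series; it is not part of the submitted
patches.)

    /* Sketch: parse one SSHM_SPEC string and print the result. */
    #include <stdio.h>
    #include <inttypes.h>
    #include <libxl.h>
    #include <libxlutil.h>

    int main(void)
    {
        /* libxlutil reports parse errors to the given FILE. */
        XLU_Config *cfg = xlu_cfg_init(stderr, "cmdline");
        libxl_static_shm sshm;
        const char *spec = "id=ID1, begin=0x100000, size=0x100000, role=master";

        if (!cfg) return 1;

        libxl_static_shm_init(&sshm);
        if (xlu_sshm_parse(cfg, spec, &sshm)) {
            fprintf(stderr, "invalid static_shm spec: %s\n", spec);
            return 1;
        }

        /* prot defaults to rw when left unspecified (see the early checks). */
        printf("id=%s begin=0x%"PRIx64" size=0x%"PRIx64" role=%d\n",
               sshm.id, sshm.begin, sshm.size, (int)sshm.role);

        libxl_static_shm_dispose(&sshm);
        xlu_cfg_destroy(cfg);
        return 0;
    }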



* [PATCH v6 6/7] docs: documentation about static shared memory regions
  2018-07-31 18:22 [PATCH v6 0/7] Allow setting up shared memory areas between VMs from xl config files Stefano Stabellini
                   ` (4 preceding siblings ...)
  2018-07-31 18:23 ` [PATCH v6 5/7] libxl:xl: add parsing code to parse "libxl_static_sshm" from xl config files Stefano Stabellini
@ 2018-07-31 18:23 ` Stefano Stabellini
  2018-07-31 18:23 ` [PATCH v6 7/7] xen/arm: export shared memory regions as reserved-memory on device tree Stefano Stabellini
  6 siblings, 0 replies; 17+ messages in thread
From: Stefano Stabellini @ 2018-07-31 18:23 UTC (permalink / raw)
  To: xen-devel
  Cc: Stefano Stabellini, sstabellini, wei.liu2, blackskygg,
	ian.jackson, julien.grall

From: Zhongze Liu <blackskygg@gmail.com>

Author: Zhongze Liu <blackskygg@gmail.com>

Add docs to document the motivation, usage, use cases and other
relevant information about the static shared memory feature.

This is for the proposal "Allow setting up shared memory areas between VMs
from xl config file". See:

  https://lists.xen.org/archives/html/xen-devel/2017-08/msg03242.html

Signed-off-by: Zhongze Liu <blackskygg@gmail.com>
Signed-off-by: Stefano Stabellini <stefanos@xilinx.com>

Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Wei Liu <wei.liu2@citrix.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Julien Grall <julien.grall@arm.com>
Cc: xen-devel@lists.xen.org

---
Changes in v6:
- add clarifications on memory allocation

Changes in v5:
- fix typos
---
 docs/man/xl-static-shm-configuration.pod.5 | 264 +++++++++++++++++++++++++++++
 docs/man/xl.cfg.pod.5.in                   |   8 +
 docs/misc/xenstore-paths.markdown          |  47 +++++
 3 files changed, 319 insertions(+)
 create mode 100644 docs/man/xl-static-shm-configuration.pod.5

diff --git a/docs/man/xl-static-shm-configuration.pod.5 b/docs/man/xl-static-shm-configuration.pod.5
new file mode 100644
index 0000000..81ff3f1
--- /dev/null
+++ b/docs/man/xl-static-shm-configuration.pod.5
@@ -0,0 +1,264 @@
+=head1 NAME
+
+xl-static-shm-configuration - XL Static Shared Memory Configuration Syntax
+
+
+(B<NOTE>: This is currently only available to ARM guests.)
+
+=head1 DESCRIPTION
+
+The static_shm option allows users to statically set up shared memory regions
+among a group of VMs, enabling guests without grant table support to
+communicate with each other through shared memory.
+
+Every shared region is:
+
+=over 4
+
+* Uniquely identified by a string that is no longer than 128 characters, which
+is called an B<identifier> in this document.
+
+* Backed by exactly one domain, which is called the B<master> domain; all
+the other domains that share this region are called B<slave>s.
+
+=back
+
+=head1 SYNTAX
+
+This document specifies the syntax of the static shared memory configuration in
+the xl config file. It has the following form:
+
+    static_shm = [ "SSHM_SPEC", "SSHM_SPEC", ... ]
+
+where each C<SSHM_SPEC> is in this form:
+
+    [<key>=<value>,]*
+
+Valid examples of C<SSHM_SPEC> are:
+
+    id=ID1, begin=0x100000, size=0x100000, role=master, cache_policy=ARM_normal
+    id=ID1, offset = 0, begin=0x500000, size=0x100000, role=slave, prot=rw
+    id=ID2, begin=0x300000, size=0x100000, role=master
+    id=ID2, offset = 0x10000, begin=0x690000, size=0x110000, role=slave
+    id=ID2, offset = 0x10000, begin=0x690000, size=0x110000, role=slave
+
+These might be specified in the domain config file like this:
+
+    static_shm = ["id=ID2, offset = 0x10000, begin=0x690000, size=0x110000,
+role=slave"]
+
+
+More formally, the string is a series of comma-separated keyword/value
+pairs. Each parameter may be specified at most once. Default values apply if
+the parameter is not specified.
+
+=head1 PARAMETERS
+
+=over 4
+
+=item B<id>
+
+=over 4
+
+=item Description
+
+The unique identifier of the shared memory region.
+
+Every identifier may appear only once in each xl config file.
+
+=item Supported values
+
+A string that contains alphanumerics and "_"s, and is no longer than 128
+characters.
+
+=item Default value
+
+None, this parameter is mandatory.
+
+=back
+
+=item B<begin>/B<size>
+
+=over 4
+
+=item Description
+
+The boundaries of the shared memory area.
+
+=item Supported values
+
+Same as B<offset>.
+
+=item Default Value
+
+None, this parameter is mandatory.
+
+=back
+
+=item B<offset>
+
+=over 4
+
+=item Description
+
+Can only appear when B<role> = slave. If set, the address mapping will not
+start from the beginning of the backing memory region, but from B<offset>
+bytes past its beginning. See the graph below:
+
+With B<offset> = 0, the mapping will look like:
+
+  backing memory region:  #########################################
+                          |                       |
+                          |                       |
+                          |                       |
+                          V                       V
+  slave's shared region:  #########################
+
+With B<offset> > 0:
+
+  backing memory region:   #########################################
+                           |<-- offset -->||                       |
+                                           |                       |
+                                           |                       |
+                                           V                       V
+  slave's memory region:                   #########################
+
+=item Supported values
+
+A decimal or hexadecimal (with a "0x" prefix) value, which must be a multiple
+of the hypervisor page granularity (currently 4K on both ARM and x86).
+
+=item Default value
+
+0x0
+
+=back
+
+=item B<role>
+
+=over 4
+
+=item Description
+
+The backing area is taken from one domain, which we call the
+"master domain", and this domain should be created prior to any
+slave domains that depend on it. The master's shared memory range
+is NOT allocated in addition to its regular memory. Hence, it is usually
+a good idea to choose a subrange of the regular guest memory allocation,
+which starts at GUEST_RAM0_BASE, see xen/include/public/arch-arm.h.
+
+The "slave domain" maps the memory of the master. The address of said
+mapping should not overlap with the normal memory allocation of
+the slave domain.
+
+This argument specifies the role of this domain.
+
+=item Supported values
+
+master, slave
+
+=item Default value
+
+slave
+
+=back
+
+=item B<prot>
+
+=over 4
+
+=item Description
+
+When B<role> = master, this means the largest set of stage-2 permission flags
+that can be granted to the slave domains. When B<role> = slave, this means the
+stage-2 permission flags of the shared memory area.
+
+=item Supported values
+
+Currently only 'rw' is supported.
+
+=item Default value
+
+rw
+
+=back
+
+=item B<cache_policy>
+
+=over 4
+
+=item Description
+
+The stage-2 cacheability/shareability attributes of the shared memory area.
+This can only appear when B<role> = master.
+
+=item Supported values
+
+Currently, only the following policy is supported:
+
+=over 4
+
+=item B<ARM_normal>
+
+Only applicable to ARM guests. This would mean Inner and Outer Write-Back
+Cacheable, and Inner Shareable.
+
+=back
+
+=item Default value
+
+ARM_normal
+
+=back
+
+=back
+
+=head1 TYPICAL USAGE
+
+A typical procedure for setting up a shared memory region among several VMs is:
+
+=over 4
+
+1. Add a static_shm option to the master domain's xl config file, assign an
+B<ID> to it and mark its B<role> as master, and set up the boundaries, prot
+flag, and B<cache_policy> appropriately.
+
+2. Add a static_shm option to every slave domain's xl config file, set
+their B<ID> to the same value as the master's, and set up the B<offset>,
+boundaries and prot flag appropriately.
+
+3. Create the master domain.
+
+4. Create the slaves.
+
+=back
+
+Remember that the master domain must be created before any of its slave
+domains, because the slaves depend on the memory pages provided by their master.
+
+=head1 EXAMPLE
+
+Suppose that we have 3 domains: vm1, vm2 and vm3, and we want to set up two
+shared regions, ID1 and ID2, among the three domains, with the following
+address mapping:
+
+   ID1: (vm1 : 0x100000~0x200000) <=====> (vm2 : 0x500000~0x600000)
+   ID2: (vm1 : 0x310000~0x400000) <=====> (vm3 : 0x690000~0x800000)
+
+According to the syntax defined above, the xl config files of the three domains
+should contain the following content:
+
+In xl config file of vm1:
+  static_shm = [ "id=ID1, begin=0x100000, size=0x100000, role=master,
+cache_policy=ARM_normal, prot=rw",
+"id=ID2, begin=0x300000, size=0x100000, role=master" ]
+
+In xl config file of vm2:
+  static_shm = [ "id=ID1, offset=0, begin=0x500000, size=0x100000,
+role=slave, prot=rw" ]
+
+In xl config file of vm3:
+  static_shm = [ "id=ID2, offset=0x10000, begin=0x690000,
+size=0x110000, role=slave" ]
+
+After that, just create vm1 first, and then create vm2 and vm3 in any order.
diff --git a/docs/man/xl.cfg.pod.5.in b/docs/man/xl.cfg.pod.5.in
index b727181..b16403d 100644
--- a/docs/man/xl.cfg.pod.5.in
+++ b/docs/man/xl.cfg.pod.5.in
@@ -277,6 +277,14 @@ memory=8096 will report significantly less memory available for use
 than a system with maxmem=8096 memory=8096 due to the memory overhead
 of having to track the unused pages.
 
+=item B<static_shm=[ "SSHM_SPEC", "SSHM_SPEC", ... ]>
+
+Specifies the static shared memory regions of this guest. Static shared
+memory regions enable guests to communicate with each other through
+one or more shared memory regions, even without grant table support.
+Currently, this only works on ARM guests.
+See L<xl-static-shm-configuration(5)> for more details.
+
 =back
 
 =head3 Guest Virtual NUMA Configuration
diff --git a/docs/misc/xenstore-paths.markdown b/docs/misc/xenstore-paths.markdown
index 60c8b3f..59b4be5 100644
--- a/docs/misc/xenstore-paths.markdown
+++ b/docs/misc/xenstore-paths.markdown
@@ -174,6 +174,14 @@ than this amount of RAM.
 
 The size of the video RAM this domain is configured with.
 
+#### ~/static_shm/[_a-zA-Z0-9]+/role = ("master"|"slave") []
+
+(Note: Currently, this will only appear on ARM guests.)
+
+The role of this domain in the static shared memory region whose id matches
+the `[_a-zA-Z0-9]+` part in the path. (Described in the manpage
+**xl-static-shm-configuration(5)**).
+
 #### ~/device/suspend/event-channel = ""|EVTCHN [w]
 
 The domain's suspend event channel. The toolstack will create this
@@ -548,6 +556,45 @@ type. The name of each backend directory is the same as the backend type
 
 Contains the PIDs of the device models running on the domain.
 
+#### /libxl/static_shm/[_a-zA-Z0-9]+/* []
+
+(Note: Currently, this will only appear on ARM guests.)
+
+The following paths contain backing memory parameters of a static shared memory
+region whose id matches the `[_a-zA-Z0-9]+` part in the path. Their formats and
+meanings are the same as those in an xl config file, described in the manpage
+**xl-static-shm-configuration(5)**.
+
+* begin/size: the boundary of the backing memory region.
+* prot: the largest set of stage-2 permission flags that can be granted to
+  the slave domains.
+* cache_policy: the stage-2 cacheability/shareability attributes of the backing
+  memory region.
+
+The following paths contain run-time information about the static shared memory
+region.
+
+* master: the domid of the backing domain.
+* slaves: information about the slaves that are sharing the region, see
+  ** /libxl/static_shm/[_a-zA-Z0-9]+/slaves/$DOMID/* ** below.
+* usercnt: An integer. This is the reference count of the backing memory region,
+  including the master domain itself. When this value reaches 0, the backing
+  memory region will be freed.
+
+#### /libxl/static_shm/[_a-zA-Z0-9]+/slaves/$DOMID/* []
+
+(Note: Currently, this will only appear on ARM guests.)
+
+The following paths contain static shared memory region parameters of a slave
+domain. Their formats and meanings are the same as those in xl config files,
+described in the manpage **xl-static-shm-configuration(5)**.
+
+* begin/size: the boundary of the shared memory region.
+* prot: the stage-2 permission flags of the shared memory area.
+* offset: when mapping the backing memory region to the slave's memory space,
+  the mapping will start from offset bytes after the beginning of the backing
+  memory region.
+
 ## Virtual Machine Paths
 
 The /vm/$UUID namespace is used by toolstacks to store various
-- 
1.9.1
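
(To make the xenstore layout described in the hunk above more concrete: for
the ID1 region from the manpage example, with vm1 as master (domid 1) and
vm2 as slave (domid 2), the tree might end up looking roughly as follows.
The exact values are illustrative only.)

    /local/domain/1/static_shm/ID1/role = "master"
    /local/domain/2/static_shm/ID1/role = "slave"

    /libxl/static_shm/ID1/begin = "0x100000"
    /libxl/static_shm/ID1/size = "0x100000"
    /libxl/static_shm/ID1/prot = "rw"
    /libxl/static_shm/ID1/cache_policy = "ARM_normal"
    /libxl/static_shm/ID1/master = "1"
    /libxl/static_shm/ID1/usercnt = "2"
    /libxl/static_shm/ID1/slaves/2/begin = "0x500000"
    /libxl/static_shm/ID1/slaves/2/size = "0x100000"
    /libxl/static_shm/ID1/slaves/2/prot = "rw"
    /libxl/static_shm/ID1/slaves/2/offset = "0x0"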



* [PATCH v6 7/7] xen/arm: export shared memory regions as reserved-memory on device tree
  2018-07-31 18:22 [PATCH v6 0/7] Allow setting up shared memory areas between VMs from xl config files Stefano Stabellini
                   ` (5 preceding siblings ...)
  2018-07-31 18:23 ` [PATCH v6 6/7] docs: documentation about static shared memory regions Stefano Stabellini
@ 2018-07-31 18:23 ` Stefano Stabellini
  2018-08-01  9:26   ` Julien Grall
  6 siblings, 1 reply; 17+ messages in thread
From: Stefano Stabellini @ 2018-07-31 18:23 UTC (permalink / raw)
  To: xen-devel
  Cc: Stefano Stabellini, sstabellini, wei.liu2, blackskygg,
	ian.jackson, julien.grall

Shared memory regions need to be advertised to the guest. Fortunately, a
device tree binding for special memory regions already exists:
reserved-memory.

Add a reserved-memory node for each shared memory region, for both
masters and slaves.

Signed-off-by: Stefano Stabellini <stefanos@xilinx.com>
---
 tools/libxl/libxl_arch.h |  2 +-
 tools/libxl/libxl_arm.c  | 52 +++++++++++++++++++++++++++++++++++++++++++++---
 tools/libxl/libxl_dom.c  |  2 +-
 tools/libxl/libxl_x86.c  |  2 +-
 4 files changed, 52 insertions(+), 6 deletions(-)

diff --git a/tools/libxl/libxl_arch.h b/tools/libxl/libxl_arch.h
index 6a07ccf..3626e4a 100644
--- a/tools/libxl/libxl_arch.h
+++ b/tools/libxl/libxl_arch.h
@@ -36,7 +36,7 @@ int libxl__arch_domain_create(libxl__gc *gc, libxl_domain_config *d_config,
 /* setup arch specific hardware description, i.e. DTB on ARM */
 _hidden
 int libxl__arch_domain_init_hw_description(libxl__gc *gc,
-                                           libxl_domain_build_info *info,
+                                           libxl_domain_config *d_config,
                                            libxl__domain_build_state *state,
                                            struct xc_dom_image *dom);
 /* finalize arch specific hardware description. */
diff --git a/tools/libxl/libxl_arm.c b/tools/libxl/libxl_arm.c
index 5f62e78..4020453 100644
--- a/tools/libxl/libxl_arm.c
+++ b/tools/libxl/libxl_arm.c
@@ -461,6 +461,49 @@ static int make_memory_nodes(libxl__gc *gc, void *fdt,
     return 0;
 }
 
+static int make_reserved_nodes(libxl__gc *gc, void *fdt,
+                               libxl_domain_config *d_config)
+{
+    int res, i;
+    const char *name;
+
+    if (d_config->num_sshms == 0)
+        return 0;
+
+    res = fdt_begin_node(fdt, "reserved-memory");
+    if (res) return res;
+
+    res = fdt_property_cell(fdt, "#address-cells", ROOT_ADDRESS_CELLS);
+    if (res) return res;
+
+    res = fdt_property_cell(fdt, "#size-cells", ROOT_SIZE_CELLS);
+    if (res) return res;
+
+    res = fdt_property(fdt, "ranges", NULL, 0);
+    if (res) return res;
+
+    for (i = 0; i < d_config->num_sshms; i++) {
+        uint64_t start = d_config->sshms[i].begin;
+        if (d_config->sshms[i].role == LIBXL_SSHM_ROLE_SLAVE)
+            start += d_config->sshms[i].offset;
+        name = GCSPRINTF("memory@%"PRIx64, start);
+
+        res = fdt_begin_node(fdt, name);
+        if (res) return res;
+
+        res = fdt_property_regs(gc, fdt, ROOT_ADDRESS_CELLS, ROOT_SIZE_CELLS,
+                                1, start, d_config->sshms[i].size);
+
+        res = fdt_end_node(fdt);
+        if (res) return res;
+    }
+
+    res = fdt_end_node(fdt);
+    if (res) return res;
+
+    return 0;
+}
+
 static int make_gicv2_node(libxl__gc *gc, void *fdt,
                            uint64_t gicd_base, uint64_t gicd_size,
                            uint64_t gicc_base, uint64_t gicc_size)
@@ -836,10 +879,11 @@ static int copy_partial_fdt(libxl__gc *gc, void *fdt, void *pfdt)
 
 #define FDT_MAX_SIZE (1<<20)
 
-static int libxl__prepare_dtb(libxl__gc *gc, libxl_domain_build_info *info,
+static int libxl__prepare_dtb(libxl__gc *gc, libxl_domain_config *d_config,
                               libxl__domain_build_state *state,
                               struct xc_dom_image *dom)
 {
+    libxl_domain_build_info *info = &d_config->b_info;
     void *fdt = NULL;
     void *pfdt = NULL;
     int rc, res;
@@ -922,6 +966,7 @@ next_resize:
         FDT( make_psci_node(gc, fdt) );
 
         FDT( make_memory_nodes(gc, fdt, dom) );
+        FDT( make_reserved_nodes(gc, fdt, d_config) );
 
         switch (info->arch_arm.gic_version) {
         case LIBXL_GIC_VERSION_V2:
@@ -971,12 +1016,13 @@ out:
 }
 
 int libxl__arch_domain_init_hw_description(libxl__gc *gc,
-                                           libxl_domain_build_info *info,
+                                           libxl_domain_config *d_config,
                                            libxl__domain_build_state *state,
                                            struct xc_dom_image *dom)
 {
     int rc;
     uint64_t val;
+    libxl_domain_build_info *info = &d_config->b_info;
 
     assert(info->type == LIBXL_DOMAIN_TYPE_PV);
 
@@ -992,7 +1038,7 @@ int libxl__arch_domain_init_hw_description(libxl__gc *gc,
     if (rc)
         return rc;
 
-    rc = libxl__prepare_dtb(gc, info, state, dom);
+    rc = libxl__prepare_dtb(gc, d_config, state, dom);
     if (rc) goto out;
 
     if (!libxl_defbool_val(info->acpi)) {
diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
index 3cfe0d4..4ab52df 100644
--- a/tools/libxl/libxl_dom.c
+++ b/tools/libxl/libxl_dom.c
@@ -719,7 +719,7 @@ static int libxl__build_dom(libxl__gc *gc, uint32_t domid,
         LOG(ERROR, "xc_dom_parse_image failed");
         goto out;
     }
-    if ( (ret = libxl__arch_domain_init_hw_description(gc, info, state, dom)) != 0 ) {
+    if ( (ret = libxl__arch_domain_init_hw_description(gc, d_config, state, dom)) != 0 ) {
         LOGE(ERROR, "libxl__arch_domain_init_hw_description failed");
         goto out;
     }
diff --git a/tools/libxl/libxl_x86.c b/tools/libxl/libxl_x86.c
index dd3de96..a3e2915 100644
--- a/tools/libxl/libxl_x86.c
+++ b/tools/libxl/libxl_x86.c
@@ -367,7 +367,7 @@ int libxl__arch_extra_memory(libxl__gc *gc,
 }
 
 int libxl__arch_domain_init_hw_description(libxl__gc *gc,
-                                           libxl_domain_build_info *info,
+                                           libxl_domain_config *d_config,
                                            libxl__domain_build_state *state,
                                            struct xc_dom_image *dom)
 {
-- 
1.9.1



* Re: [PATCH v6 1/7] xen: xsm: flask: introduce XENMAPSPACE_gmfn_share for memory sharing
  2018-07-31 18:23 ` [PATCH v6 1/7] xen: xsm: flask: introduce XENMAPSPACE_gmfn_share for memory sharing Stefano Stabellini
@ 2018-08-01  8:39   ` Jan Beulich
  2018-08-01 22:21     ` Stefano Stabellini
  0 siblings, 1 reply; 17+ messages in thread
From: Jan Beulich @ 2018-08-01  8:39 UTC (permalink / raw)
  To: Stefano Stabellini
  Cc: Stefano Stabellini, Wei Liu, Sky Liu, Ian Jackson, xen-devel,
	Julien Grall, Daniel de Graaf

First of all I think your Cc list is too short here - all of REST should be
included imo.

>>> On 31.07.18 at 20:23, <sstabellini@kernel.org> wrote:
> --- a/xen/include/xsm/dummy.h
> +++ b/xen/include/xsm/dummy.h
> @@ -535,6 +535,21 @@ static XSM_INLINE int xsm_map_gmfn_foreign(XSM_DEFAULT_ARG struct domain *d, str
>      return xsm_default_action(action, d, t);
>  }
>  
> +/*
> + * This action also requires that @current targets @d, but it has already been
> + * checked somewhere higher in the call stack.

I'm not convinced it is a good idea to have such a dependency, even
more so with this cloudy a reference. If there's another XSM check
that has necessarily been done before, you should at least name it
here so it's easy to later verify that the assumption still holds. But
even better would imo be to re-do the check here, just in case.

> --- a/xen/xsm/flask/hooks.c
> +++ b/xen/xsm/flask/hooks.c
> @@ -1198,6 +1198,17 @@ static int flask_map_gmfn_foreign(struct domain *d, struct domain *t)
>      return domain_has_perm(d, t, SECCLASS_MMU, MMU__MAP_READ | MMU__MAP_WRITE);
>  }
>  
> +/*
> + * This action also requires that @current has MMU__MAP_READ/WRITE over @d,
> + * but that has already been checked somewhere higher in the call stack (for
> + * example, by flask_add_to_physmap()).

This one's better in that it at least names the other hook. Still I'm
not fully convinced that omitting the other half of the check here
is prudent. We'll see what others think ...

Jan




* Re: [PATCH v6 7/7] xen/arm: export shared memory regions as reserved-memory on device tree
  2018-07-31 18:23 ` [PATCH v6 7/7] xen/arm: export shared memory regions as reserved-memory on device tree Stefano Stabellini
@ 2018-08-01  9:26   ` Julien Grall
  2018-08-01 22:25     ` Stefano Stabellini
  0 siblings, 1 reply; 17+ messages in thread
From: Julien Grall @ 2018-08-01  9:26 UTC (permalink / raw)
  To: Stefano Stabellini, xen-devel
  Cc: ian.jackson, wei.liu2, blackskygg, Stefano Stabellini

Hi,

On 31/07/18 19:23, Stefano Stabellini wrote:
> Shared memory regions need to be advertised to the guest. Fortunately, a
> device tree binding for special memory regions already exists:
> reserved-memory.
> 
> Add a reserved-memory node for each shared memory region, for both
> masters and slaves.
> 
> Signed-off-by: Stefano Stabellini <stefanos@xilinx.com>
> ---
>   tools/libxl/libxl_arch.h |  2 +-
>   tools/libxl/libxl_arm.c  | 52 +++++++++++++++++++++++++++++++++++++++++++++---
>   tools/libxl/libxl_dom.c  |  2 +-
>   tools/libxl/libxl_x86.c  |  2 +-
>   4 files changed, 52 insertions(+), 6 deletions(-)
> 
> diff --git a/tools/libxl/libxl_arch.h b/tools/libxl/libxl_arch.h
> index 6a07ccf..3626e4a 100644
> --- a/tools/libxl/libxl_arch.h
> +++ b/tools/libxl/libxl_arch.h
> @@ -36,7 +36,7 @@ int libxl__arch_domain_create(libxl__gc *gc, libxl_domain_config *d_config,
>   /* setup arch specific hardware description, i.e. DTB on ARM */
>   _hidden
>   int libxl__arch_domain_init_hw_description(libxl__gc *gc,
> -                                           libxl_domain_build_info *info,
> +                                           libxl_domain_config *d_config,
>                                              libxl__domain_build_state *state,
>                                              struct xc_dom_image *dom);
>   /* finalize arch specific hardware description. */
> diff --git a/tools/libxl/libxl_arm.c b/tools/libxl/libxl_arm.c
> index 5f62e78..4020453 100644
> --- a/tools/libxl/libxl_arm.c
> +++ b/tools/libxl/libxl_arm.c
> @@ -461,6 +461,49 @@ static int make_memory_nodes(libxl__gc *gc, void *fdt,
>       return 0;
>   }
>   
> +static int make_reserved_nodes(libxl__gc *gc, void *fdt,
> +                               libxl_domain_config *d_config)
> +{
> +    int res, i;
> +    const char *name;
> +
> +    if (d_config->num_sshms == 0)
> +        return 0;
> +
> +    res = fdt_begin_node(fdt, "reserved-memory");
> +    if (res) return res;
> +
> +    res = fdt_property_cell(fdt, "#address-cells", ROOT_ADDRESS_CELLS);
> +    if (res) return res;
> +
> +    res = fdt_property_cell(fdt, "#size-cells", ROOT_SIZE_CELLS);
> +    if (res) return res;
> +
> +    res = fdt_property(fdt, "ranges", NULL, 0);
> +    if (res) return res;
> +
> +    for (i = 0; i < d_config->num_sshms; i++) {
> +        uint64_t start = d_config->sshms[i].begin;
> +        if (d_config->sshms[i].role == LIBXL_SSHM_ROLE_SLAVE)
> +            start += d_config->sshms[i].offset;
> +        name = GCSPRINTF("memory@%"PRIx64, start);

I understand the node will be useful to avoid the guest using the region
as normal memory. We also need to make sure the guest can detect what it
is used for (imagine a driver for it). So you probably want a new
compatible string and more information in it.

Also, this would be clearer if the name is called xen-shmem@ rather than 
memory@.

> +
> +        res = fdt_begin_node(fdt, name);
> +        if (res) return res;
> +
> +        res = fdt_property_regs(gc, fdt, ROOT_ADDRESS_CELLS, ROOT_SIZE_CELLS,
> +                                1, start, d_config->sshms[i].size);
> +
> +        res = fdt_end_node(fdt);
> +        if (res) return res;
> +    }
> +
> +    res = fdt_end_node(fdt);
> +    if (res) return res;
> +
> +    return 0;
> +}
> +

Cheers,

-- 
Julien Grall


* Re: [PATCH v6 1/7] xen: xsm: flask: introduce XENMAPSPACE_gmfn_share for memory sharing
  2018-08-01  8:39   ` Jan Beulich
@ 2018-08-01 22:21     ` Stefano Stabellini
  2018-08-02  6:47       ` Jan Beulich
  0 siblings, 1 reply; 17+ messages in thread
From: Stefano Stabellini @ 2018-08-01 22:21 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Stefano Stabellini, Stefano Stabellini, Wei Liu, Sky Liu,
	George.Dunlap, andrew.cooper3, Ian Jackson, xen-devel,
	Julien Grall, tim, jbeulich, Daniel de Graaf

On Wed, 1 Aug 2018, Jan Beulich wrote:
> First of all I think your Cc list is too short here - all of REST should be
> included imo.

CC'ing them now. I'll also add them automatically from next time.


> >>> On 31.07.18 at 20:23, <sstabellini@kernel.org> wrote:
> > --- a/xen/include/xsm/dummy.h
> > +++ b/xen/include/xsm/dummy.h
> > @@ -535,6 +535,21 @@ static XSM_INLINE int xsm_map_gmfn_foreign(XSM_DEFAULT_ARG struct domain *d, str
> >      return xsm_default_action(action, d, t);
> >  }
> >  
> > +/*
> > + * This action also requires that @current targets @d, but it has already been
> > + * checked somewhere higher in the call stack.
> 
> I'm not convinced it is a good idea to have such a dependency, even
> more so with this cloudy a reference. If there's another XSM check
> that has necessarily been done before, you should at least name it
> here so it's easy to later verify that the assumption still holds. But
> even better would imo be to re-do the check here, just in case.

I am fine with that. It should be just a matter of doing the following,
right?

static XSM_INLINE int xsm_map_gmfn_share(XSM_DEFAULT_ARG struct domain *d,
                                         struct domain *t)
{
    XSM_ASSERT_ACTION(XSM_TARGET);
    return xsm_default_action(XSM_TARGET, current->domain, d) &&
           xsm_default_action(action, current->domain, t);


> > --- a/xen/xsm/flask/hooks.c
> > +++ b/xen/xsm/flask/hooks.c
> > @@ -1198,6 +1198,17 @@ static int flask_map_gmfn_foreign(struct domain *d, struct domain *t)
> >      return domain_has_perm(d, t, SECCLASS_MMU, MMU__MAP_READ | MMU__MAP_WRITE);
> >  }
> >  
> > +/*
> > + * This action also requires that @current has MMU__MAP_READ/WRITE over @d,
> > + * but that has already been checked somewhere higher in the call stack (for
> > + * example, by flask_add_to_physmap()).
> 
> This one's better in that it at least names the other hook. Still I'm
> not fully convinced that omitting the other half of the check here
> is prudent. We'll see what others think ...

Sure, it should be simple to change (I hope I am not missing something).

static int flask_map_gmfn_share(struct domain *d, struct domain *t)
{
  return current_has_perm(d, SECCLASS_MMU, MMU__MAP_READ | MMU__MAP_WRITE) &&
         (current_has_perm(t, SECCLASS_MMU, MMU__MAP_READ | MMU__MAP_WRITE) ?:
          domain_has_perm(d, t, SECCLASS_MMU, MMU__SHARE_MEM));


* Re: [PATCH v6 7/7] xen/arm: export shared memory regions as reserved-memory on device tree
  2018-08-01  9:26   ` Julien Grall
@ 2018-08-01 22:25     ` Stefano Stabellini
  2018-08-03  9:12       ` Julien Grall
  0 siblings, 1 reply; 17+ messages in thread
From: Stefano Stabellini @ 2018-08-01 22:25 UTC (permalink / raw)
  To: Julien Grall
  Cc: Stefano Stabellini, Stefano Stabellini, wei.liu2, blackskygg,
	ian.jackson, xen-devel

On Wed, 1 Aug 2018, Julien Grall wrote:
> Hi,
> 
> On 31/07/18 19:23, Stefano Stabellini wrote:
> > Shared memory regions need to be advertised to the guest. Fortunately, a
> > device tree binding for special memory regions already exists:
> > reserved-memory.
> > 
> > Add a reserved-memory node for each shared memory region, for both
> > masters and slaves.
> > 
> > Signed-off-by: Stefano Stabellini <stefanos@xilinx.com>
> > ---
> >   tools/libxl/libxl_arch.h |  2 +-
> >   tools/libxl/libxl_arm.c  | 52
> > +++++++++++++++++++++++++++++++++++++++++++++---
> >   tools/libxl/libxl_dom.c  |  2 +-
> >   tools/libxl/libxl_x86.c  |  2 +-
> >   4 files changed, 52 insertions(+), 6 deletions(-)
> > 
> > diff --git a/tools/libxl/libxl_arch.h b/tools/libxl/libxl_arch.h
> > index 6a07ccf..3626e4a 100644
> > --- a/tools/libxl/libxl_arch.h
> > +++ b/tools/libxl/libxl_arch.h
> > @@ -36,7 +36,7 @@ int libxl__arch_domain_create(libxl__gc *gc,
> > libxl_domain_config *d_config,
> >   /* setup arch specific hardware description, i.e. DTB on ARM */
> >   _hidden
> >   int libxl__arch_domain_init_hw_description(libxl__gc *gc,
> > -                                           libxl_domain_build_info *info,
> > +                                           libxl_domain_config *d_config,
> >                                              libxl__domain_build_state
> > *state,
> >                                              struct xc_dom_image *dom);
> >   /* finalize arch specific hardware description. */
> > diff --git a/tools/libxl/libxl_arm.c b/tools/libxl/libxl_arm.c
> > index 5f62e78..4020453 100644
> > --- a/tools/libxl/libxl_arm.c
> > +++ b/tools/libxl/libxl_arm.c
> > @@ -461,6 +461,49 @@ static int make_memory_nodes(libxl__gc *gc, void *fdt,
> >       return 0;
> >   }
> >   +static int make_reserved_nodes(libxl__gc *gc, void *fdt,
> > +                               libxl_domain_config *d_config)
> > +{
> > +    int res, i;
> > +    const char *name;
> > +
> > +    if (d_config->num_sshms == 0)
> > +        return 0;
> > +
> > +    res = fdt_begin_node(fdt, "reserved-memory");
> > +    if (res) return res;
> > +
> > +    res = fdt_property_cell(fdt, "#address-cells", ROOT_ADDRESS_CELLS);
> > +    if (res) return res;
> > +
> > +    res = fdt_property_cell(fdt, "#size-cells", ROOT_SIZE_CELLS);
> > +    if (res) return res;
> > +
> > +    res = fdt_property(fdt, "ranges", NULL, 0);
> > +    if (res) return res;
> > +
> > +    for (i = 0; i < d_config->num_sshms; i++) {
> > +        uint64_t start = d_config->sshms[i].begin;
> > +        if (d_config->sshms[i].role == LIBXL_SSHM_ROLE_SLAVE)
> > +            start += d_config->sshms[i].offset;
> > +        name = GCSPRINTF("memory@%"PRIx64, start);
> 
> I understand the node will be useful to avoid the guest using the region
> as normal memory.

Yes, that's right.


> We also need to make sure the guest can detect what it is used for
> (imagine a driver for it). So you probably want a new compatible string
> and more information in it.

Something like "xen,shared-memory" ?


> Also, this would be clearer if the name is called xen-shmem@ rather than
> memory@.

Sure, we are free to choose the node name.


> > +
> > +        res = fdt_begin_node(fdt, name);
> > +        if (res) return res;
> > +
> > +        res = fdt_property_regs(gc, fdt, ROOT_ADDRESS_CELLS,
> > ROOT_SIZE_CELLS,
> > +                                1, start, d_config->sshms[i].size);
> > +
> > +        res = fdt_end_node(fdt);
> > +        if (res) return res;
> > +    }
> > +
> > +    res = fdt_end_node(fdt);
> > +    if (res) return res;
> > +
> > +    return 0;
> > +}


* Re: [PATCH v6 1/7] xen: xsm: flask: introduce XENMAPSPACE_gmfn_share for memory sharing
  2018-08-01 22:21     ` Stefano Stabellini
@ 2018-08-02  6:47       ` Jan Beulich
  2018-08-02 20:57         ` Stefano Stabellini
  0 siblings, 1 reply; 17+ messages in thread
From: Jan Beulich @ 2018-08-02  6:47 UTC (permalink / raw)
  To: Stefano Stabellini
  Cc: Tim Deegan, Wei Liu, Sky Liu, George Dunlap, Andrew Cooper,
	Stefano Stabellini, Ian Jackson, xen-devel, Julien Grall,
	Daniel de Graaf

>>> On 02.08.18 at 00:21, <sstabellini@kernel.org> wrote:
> On Wed, 1 Aug 2018, Jan Beulich wrote:
>> First of all I think your Cc list is too short here - all of REST should be
>> included imo.
> 
> CC'ing them now. I'll also add them automatically from next time.
> 
> 
>> >>> On 31.07.18 at 20:23, <sstabellini@kernel.org> wrote:
>> > --- a/xen/include/xsm/dummy.h
>> > +++ b/xen/include/xsm/dummy.h
>> > @@ -535,6 +535,21 @@ static XSM_INLINE int xsm_map_gmfn_foreign(XSM_DEFAULT_ARG struct domain *d, str
>> >      return xsm_default_action(action, d, t);
>> >  }
>> >  
>> > +/*
>> > + * This action also requires that @current targets @d, but it has already been
>> > + * checked somewhere higher in the call stack.
>> 
>> I'm not convinced it is a good idea to have such a dependency, even
>> more so with this cloudy a reference. If there's another XSM check
>> that has necessarily been done before, you should at least name it
>> here so it's easy to later verify that the assumption still holds. But
>> even better would imo be to re-do the check here, just in case.
> 
> I am fine with that. It should be just a matter of doing the following,
> right?
> 
> static XSM_INLINE int xsm_map_gmfn_share(XSM_DEFAULT_ARG struct domain *d,
>                                          struct domain *t)
> {
>     XSM_ASSERT_ACTION(XSM_TARGET);
>     return xsm_default_action(XSM_TARGET, current->domain, d) &&
>            xsm_default_action(action, current->domain, t);

Not exactly - xsm_default_action() doesn't return boolean, nor should
the function here.

>> > --- a/xen/xsm/flask/hooks.c
>> > +++ b/xen/xsm/flask/hooks.c
>> > @@ -1198,6 +1198,17 @@ static int flask_map_gmfn_foreign(struct domain *d, struct domain *t)
>> >      return domain_has_perm(d, t, SECCLASS_MMU, MMU__MAP_READ | MMU__MAP_WRITE);
>> >  }
>> >  
>> > +/*
>> > + * This action also requires that @current has MMU__MAP_READ/WRITE over @d,
>> > + * but that has already been checked somewhere higher in the call stack (for
>> > + * example, by flask_add_to_physmap()).
>> 
>> This one's better in that it at least names the other hook. Still I'm
>> not fully convinced that omitting the other half of the check here
>> is prudent. We'll see what others think ...
> 
> Sure, it should be simple to change (I hope I am not missing something).
> 
> static int flask_map_gmfn_share(struct domain *d, struct domain *t)
> {
>   return current_has_perm(d, SECCLASS_MMU, MMU__MAP_READ | MMU__MAP_WRITE) &&
>          (current_has_perm(t, SECCLASS_MMU, MMU__MAP_READ | MMU__MAP_WRITE) 
> ?:
>           domain_has_perm(d, t, SECCLASS_MMU, MMU__SHARE_MEM));

Same here afaict.

Jan




* Re: [PATCH v6 1/7] xen: xsm: flask: introduce XENMAPSPACE_gmfn_share for memory sharing
  2018-08-02  6:47       ` Jan Beulich
@ 2018-08-02 20:57         ` Stefano Stabellini
  2018-08-03  6:37           ` Jan Beulich
  0 siblings, 1 reply; 17+ messages in thread
From: Stefano Stabellini @ 2018-08-02 20:57 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Tim Deegan, Stefano Stabellini, Wei Liu, Sky Liu, George Dunlap,
	Andrew Cooper, Stefano Stabellini, Ian Jackson, xen-devel,
	Julien Grall, Daniel de Graaf

On Thu, 2 Aug 2018, Jan Beulich wrote:
> >> > +/*
> >> > + * This action also requires that @current targets @d, but it has already been
> >> > + * checked somewhere higher in the call stack.
> >> 
> >> I'm not convinced it is a good idea to have such a dependency, even
> >> more so with this cloudy a reference. If there's another XSM check
> >> that has necessarily been done before, you should at least name it
> >> here so it's easy to later verify that the assumption still holds. But
> >> even better would imo be to re-do the check here, just in case.
> > 
> > I am fine with that. It should be just a matter of doing the following,
> > right?
> > 
> > static XSM_INLINE int xsm_map_gmfn_share(XSM_DEFAULT_ARG struct domain *d,
> >                                          struct domain *t)
> > {
> >     XSM_ASSERT_ACTION(XSM_TARGET);
> >     return xsm_default_action(XSM_TARGET, current->domain, d) &&
> >            xsm_default_action(action, current->domain, t);
> 
> Not exactly - xsm_default_action() doesn't return boolean, nor should
> the function here.

You are right, zero is success. So something like the following:

    int rc = xsm_default_action(XSM_TARGET, current->domain, d);
    if ( rc )
        return rc;
    return xsm_default_action(action, current->domain, t);


> >> > --- a/xen/xsm/flask/hooks.c
> >> > +++ b/xen/xsm/flask/hooks.c
> >> > @@ -1198,6 +1198,17 @@ static int flask_map_gmfn_foreign(struct domain *d, struct domain *t)
> >> >      return domain_has_perm(d, t, SECCLASS_MMU, MMU__MAP_READ | MMU__MAP_WRITE);
> >> >  }
> >> >  
> >> > +/*
> >> > + * This action also requires that @current has MMU__MAP_READ/WRITE over @d,
> >> > + * but that has already been checked somewhere higher in the call stack (for
> >> > + * example, by flask_add_to_physmap()).
> >> 
> >> This one's better in that it at least names the other hook. Still I'm
> >> not fully convinced that omitting the other half of the check here
> >> is prudent. We'll see what others think ...
> > 
> > Sure, it should be simple to change (I hope I am not missing something).
> > 
> > static int flask_map_gmfn_share(struct domain *d, struct domain *t)
> > {
> >   return current_has_perm(d, SECCLASS_MMU, MMU__MAP_READ | MMU__MAP_WRITE) &&
> >          (current_has_perm(t, SECCLASS_MMU, MMU__MAP_READ | MMU__MAP_WRITE) 
> > ?:
> >           domain_has_perm(d, t, SECCLASS_MMU, MMU__SHARE_MEM));
> 
> Same here afaict.

Yep:

    int rc = current_has_perm(d, SECCLASS_MMU, MMU__MAP_READ | MMU__MAP_WRITE);
    if ( rc )
        return rc;
    rc = current_has_perm(t, SECCLASS_MMU, MMU__MAP_READ | MMU__MAP_WRITE);
    if ( rc )
        return rc;
    return domain_has_perm(d, t, SECCLASS_MMU, MMU__SHARE_MEM);


* Re: [PATCH v6 1/7] xen: xsm: flask: introduce XENMAPSPACE_gmfn_share for memory sharing
  2018-08-02 20:57         ` Stefano Stabellini
@ 2018-08-03  6:37           ` Jan Beulich
  2018-08-03 20:17             ` Stefano Stabellini
  0 siblings, 1 reply; 17+ messages in thread
From: Jan Beulich @ 2018-08-03  6:37 UTC (permalink / raw)
  To: Stefano Stabellini
  Cc: Tim Deegan, Wei Liu, Sky Liu, George Dunlap, Andrew Cooper,
	Stefano Stabellini, Ian Jackson, xen-devel, Julien Grall,
	Daniel de Graaf

>>> On 02.08.18 at 22:57, <sstabellini@kernel.org> wrote:
> On Thu, 2 Aug 2018, Jan Beulich wrote:
>> >> > +/*
>> >> > + * This action also requires that @current targets @d, but it has already been
>> >> > + * checked somewhere higher in the call stack.
>> >> 
>> >> I'm not convinced it is a good idea to have such a dependency, even
>> >> more so with this cloudy a reference. If there's another XSM check
>> >> that has necessarily been done before, you should at least name it
>> >> here so it's easy to later verify that the assumption still holds. But
>> >> even better would imo be to re-do the check here, just in case.
>> > 
>> > I am fine with that. It should be just a matter of doing the following,
>> > right?
>> > 
>> > static XSM_INLINE int xsm_map_gmfn_share(XSM_DEFAULT_ARG struct domain *d,
>> >                                          struct domain *t)
>> > {
>> >     XSM_ASSERT_ACTION(XSM_TARGET);
>> >     return xsm_default_action(XSM_TARGET, current->domain, d) &&
>> >            xsm_default_action(action, current->domain, t);
>> 
>> Not exactly - xsm_default_action() doesn't return boolean, nor should
>> the function here.
> 
> You are right, zero is success. So something like the following:
> 
>     int rc = xsm_default_action(XSM_TARGET, current->domain, d);
>     if ( rc )
>         return rc;
>     return xsm_default_action(action, current->domain, t);

Didn't you inherit the patch from someone? I think we had already
arrived at

    return xsm_default_action(XSM_TARGET, current->domain, d)
            ?: xsm_default_action(action, current->domain, t);

Jan




* Re: [PATCH v6 7/7] xen/arm: export shared memory regions as reserved-memory on device tree
  2018-08-01 22:25     ` Stefano Stabellini
@ 2018-08-03  9:12       ` Julien Grall
  0 siblings, 0 replies; 17+ messages in thread
From: Julien Grall @ 2018-08-03  9:12 UTC (permalink / raw)
  To: Stefano Stabellini
  Cc: ian.jackson, Stefano Stabellini, wei.liu2, blackskygg, xen-devel

Hi Stefano,

On 08/01/2018 11:25 PM, Stefano Stabellini wrote:
> On Wed, 1 Aug 2018, Julien Grall wrote:
>> Hi,
>>
>> On 31/07/18 19:23, Stefano Stabellini wrote:
>>> Shared memory regions need to be advertised to the guest. Fortunately, a
>>> device tree binding for special memory regions already exists:
>>> reserved-memory.
>>>
>>> Add a reserved-memory node for each shared memory region, for both
>>> masters and slaves.
>>>
>>> Signed-off-by: Stefano Stabellini <stefanos@xilinx.com>
>>> ---
>>>    tools/libxl/libxl_arch.h |  2 +-
>>>    tools/libxl/libxl_arm.c  | 52
>>> +++++++++++++++++++++++++++++++++++++++++++++---
>>>    tools/libxl/libxl_dom.c  |  2 +-
>>>    tools/libxl/libxl_x86.c  |  2 +-
>>>    4 files changed, 52 insertions(+), 6 deletions(-)
>>>
>>> diff --git a/tools/libxl/libxl_arch.h b/tools/libxl/libxl_arch.h
>>> index 6a07ccf..3626e4a 100644
>>> --- a/tools/libxl/libxl_arch.h
>>> +++ b/tools/libxl/libxl_arch.h
>>> @@ -36,7 +36,7 @@ int libxl__arch_domain_create(libxl__gc *gc,
>>> libxl_domain_config *d_config,
>>>    /* setup arch specific hardware description, i.e. DTB on ARM */
>>>    _hidden
>>>    int libxl__arch_domain_init_hw_description(libxl__gc *gc,
>>> -                                           libxl_domain_build_info *info,
>>> +                                           libxl_domain_config *d_config,
>>>                                               libxl__domain_build_state
>>> *state,
>>>                                               struct xc_dom_image *dom);
>>>    /* finalize arch specific hardware description. */
>>> diff --git a/tools/libxl/libxl_arm.c b/tools/libxl/libxl_arm.c
>>> index 5f62e78..4020453 100644
>>> --- a/tools/libxl/libxl_arm.c
>>> +++ b/tools/libxl/libxl_arm.c
>>> @@ -461,6 +461,49 @@ static int make_memory_nodes(libxl__gc *gc, void *fdt,
>>>        return 0;
>>>    }
>>>    +static int make_reserved_nodes(libxl__gc *gc, void *fdt,
>>> +                               libxl_domain_config *d_config)
>>> +{
>>> +    int res, i;
>>> +    const char *name;
>>> +
>>> +    if (d_config->num_sshms == 0)
>>> +        return 0;
>>> +
>>> +    res = fdt_begin_node(fdt, "reserved-memory");
>>> +    if (res) return res;
>>> +
>>> +    res = fdt_property_cell(fdt, "#address-cells", ROOT_ADDRESS_CELLS);
>>> +    if (res) return res;
>>> +
>>> +    res = fdt_property_cell(fdt, "#size-cells", ROOT_SIZE_CELLS);
>>> +    if (res) return res;
>>> +
>>> +    res = fdt_property(fdt, "ranges", NULL, 0);
>>> +    if (res) return res;
>>> +
>>> +    for (i = 0; i < d_config->num_sshms; i++) {
>>> +        uint64_t start = d_config->sshms[i].begin;
>>> +        if (d_config->sshms[i].role == LIBXL_SSHM_ROLE_SLAVE)
>>> +            start += d_config->sshms[i].offset;
>>> +        name = GCSPRINTF("memory@%"PRIx64, start);
>>
>> I understand the node will be useful to avoid the guest using the region
>> as normal memory.
> 
> Yes, that's right.
> 
> 
>> We also need to make sure the guest can detect what it is used for
>> (imagine a driver for it). So you probably want a new compatible string
>> and more information in it.
> 
> Something like "xen,shared-memory" ?

The name sounds good to me. We might also want to expose the ID used for 
the share. I think this would help the guest to know what to do with it.

Cheers,

-- 
Julien Grall
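
(Taking the suggestions in this sub-thread together, the per-region loop in
make_reserved_nodes() would presumably end up along the lines of the sketch
below. The "xen-shmem@" node name and "xen,shared-memory" compatible are the
ones floated above; the "xen,id" property name is only a guess at how the
region id could be exposed, not a settled binding.)

    for (i = 0; i < d_config->num_sshms; i++) {
        uint64_t start = d_config->sshms[i].begin;

        if (d_config->sshms[i].role == LIBXL_SSHM_ROLE_SLAVE)
            start += d_config->sshms[i].offset;

        /* Name the node xen-shmem@<address> rather than memory@<address>. */
        name = GCSPRINTF("xen-shmem@%"PRIx64, start);

        res = fdt_begin_node(fdt, name);
        if (res) return res;

        res = fdt_property_regs(gc, fdt, ROOT_ADDRESS_CELLS, ROOT_SIZE_CELLS,
                                1, start, d_config->sshms[i].size);
        if (res) return res;

        res = fdt_property_compat(gc, fdt, 1, "xen,shared-memory");
        if (res) return res;

        /* Expose the region id so a guest driver can tell regions apart. */
        res = fdt_property_string(fdt, "xen,id", d_config->sshms[i].id);
        if (res) return res;

        res = fdt_end_node(fdt);
        if (res) return res;
    }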


* Re: [PATCH v6 1/7] xen: xsm: flask: introduce XENMAPSPACE_gmfn_share for memory sharing
  2018-08-03  6:37           ` Jan Beulich
@ 2018-08-03 20:17             ` Stefano Stabellini
  0 siblings, 0 replies; 17+ messages in thread
From: Stefano Stabellini @ 2018-08-03 20:17 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Tim Deegan, Stefano Stabellini, Wei Liu, Sky Liu, George Dunlap,
	Andrew Cooper, Stefano Stabellini, Ian Jackson, xen-devel,
	Julien Grall, Daniel de Graaf

On Fri, 3 Aug 2018, Jan Beulich wrote:
> >>> On 02.08.18 at 22:57, <sstabellini@kernel.org> wrote:
> > On Thu, 2 Aug 2018, Jan Beulich wrote:
> >> >> > +/*
> >> >> > + * This action also requires that @current targets @d, but it has already been
> >> >> > + * checked somewhere higher in the call stack.
> >> >> 
> >> >> I'm not convinced it is a good idea to have such a dependency, even
> >> >> more so with this cloudy a reference. If there's another XSM check
> >> >> that has necessarily been done before, you should at least name it
> >> >> here so it's easy to later verify that the assumption still holds. But
> >> >> even better would imo be to re-do the check here, just in case.
> >> > 
> >> > I am fine with that. It should be just a matter of doing the following,
> >> > right?
> >> > 
> >> > static XSM_INLINE int xsm_map_gmfn_share(XSM_DEFAULT_ARG struct domain *d,
> >> >                                          struct domain *t)
> >> > {
> >> >     XSM_ASSERT_ACTION(XSM_TARGET);
> >> >     return xsm_default_action(XSM_TARGET, current->domain, d) &&
> >> >            xsm_default_action(action, current->domain, t);
> >> 
> >> Not exactly - xsm_default_action() doesn't return boolean, nor should
> >> the function here.
> > 
> > You are right, zero is success. So something like the following:
> > 
> >     int rc = xsm_default_action(XSM_TARGET, current->domain, d);
> >     if ( rc )
> >         return rc;
> >     return xsm_default_action(action, current->domain, t);
> 
> Didn't you inherit the patch from someone? I think we had already
> arrived at
> 
>     return xsm_default_action(XSM_TARGET, current->domain, d)
>             ?: xsm_default_action(action, current->domain, t);

If you are referring to the code style, I don't have an opinion, I am
happy to do the above. I changed it in these code snippets only because
in the other function (flask_map_gmfn_share) the ?: model would become
awkward as there are three consecutive checks. But here it would be
fine.
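
(Putting the pieces of this exchange together, the two hooks would
presumably end up along the lines of the sketch below; this reflects where
the discussion landed, not necessarily the code as eventually committed.)

    static XSM_INLINE int xsm_map_gmfn_share(XSM_DEFAULT_ARG struct domain *d,
                                             struct domain *t)
    {
        XSM_ASSERT_ACTION(XSM_TARGET);
        /* current must target both the mapping domain and the backing domain. */
        return xsm_default_action(XSM_TARGET, current->domain, d) ?:
               xsm_default_action(action, current->domain, t);
    }

    static int flask_map_gmfn_share(struct domain *d, struct domain *t)
    {
        int rc;

        /* Re-do the map permission checks rather than relying on the caller. */
        rc = current_has_perm(d, SECCLASS_MMU, MMU__MAP_READ | MMU__MAP_WRITE);
        if ( rc )
            return rc;
        rc = current_has_perm(t, SECCLASS_MMU, MMU__MAP_READ | MMU__MAP_WRITE);
        if ( rc )
            return rc;
        return domain_has_perm(d, t, SECCLASS_MMU, MMU__SHARE_MEM);
    }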
