* [PATCH v3 00/11] TEE mediator (and OP-TEE) support in XEN
@ 2018-12-18 21:11 Volodymyr Babchuk
  2018-12-18 21:11 ` [PATCH v3 01/11] arm: add generic TEE mediator framework Volodymyr Babchuk
                   ` (11 more replies)
  0 siblings, 12 replies; 47+ messages in thread
From: Volodymyr Babchuk @ 2018-12-18 21:11 UTC (permalink / raw)
  To: xen-devel, Julien Grall
  Cc: Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk,
	Volodymyr Babchuk, Andrew Cooper, Ian Jackson, Tim Deegan,
	tee-dev, George Dunlap, Jan Beulich, Volodymyr Babchuk

From: Volodymyr Babchuk <vlad.babchuk@gmail.com>

Hello all,

Sorry for the late submission. I was busy with other projects.

Global changes from v2:

 - Use domain flags instead of the domctl interface to enable OP-TEE for
   guests (see the illustrative config fragment below)
 - Removed the patch "libxc: add xc_dom_tee_enable(...) function" because
   of the previous change
 - The mediator now stores its own context in the arch part of struct domain,
   so I removed the patch "optee: add domain contexts"

Per-patch changes are described in the corresponding emails.
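
For reference, with these changes enabling OP-TEE for a guest only needs the
new "tee" option in the guest config. An illustrative xl.cfg fragment (the
option itself is added by the last two patches of this series; all other
values are placeholders):

    name   = "domu-with-tee"
    kernel = "/path/to/guest/Image"
    memory = 256
    vcpus  = 1
    tee    = 1        # let this guest work with OP-TEE via the mediator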

====
v2:

This is v2 of the patch series for OP-TEE mediator support in XEN. Changes from v1:

 - Added a domctl interface, so now xl decides which domains should work with TEE
 - Removed XSM support due to the change described above
 - The patch with the OP-TEE mediator was split into 7 separate patches
 - Removed the patch with the call_smccc() function. Now this series depends on
   Julien Grall's series "xen/arm: SMCCC fixup and improvement" [3]

=====
v1:

This is a follow-up to patch series [1]. There was a lot of discussion
about that series, and I tried to address all of it in this new patchset.

I now have a working solution for OP-TEE virtualization, and it is being
upstreamed right now ([2]). So I think it is a good time to introduce support
in XEN as well.

This series includes a generic TEE mediator framework and a full-scale OP-TEE
mediator that works with the mentioned changes in OP-TEE, so multiple domains
can work with OP-TEE simultaneously.

I added XSM support, so now it is possible to control which domains can work
with TEEs. I also changed the way TEE discovery is done: it is now very
generic and should support any platform.

[1] https://lists.xenproject.org/archives/html/xen-devel/2017-10/msg01451.html
[2] https://github.com/OP-TEE/optee_os/pull/2370
[3] https://lists.xenproject.org/archives/html/xen-devel/2018-08/msg02138.html



Volodymyr Babchuk (11):
  arm: add generic TEE mediator framework
  arm: add tee_enabled flag to xen_arch_domainconfig
  arm: tee: add OP-TEE header files
  optee: add OP-TEE mediator skeleton
  optee: add fast calls handling
  optee: add std call handling
  optee: add support for RPC SHM buffers
  optee: add support for arbitrary shared memory
  optee: add support for RPC commands
  xl: add "tee" option for xl.cfg
  libxl: arm: create optee firmware node in DT if tee=1

 MAINTAINERS                         |    6 +
 docs/man/xl.cfg.pod.5.in            |   10 +
 tools/libxl/libxl_arm.c             |   31 +
 tools/libxl/libxl_create.c          |    1 +
 tools/libxl/libxl_types.idl         |    1 +
 tools/xl/xl_parse.c                 |    1 +
 xen/arch/arm/Kconfig                |    9 +
 xen/arch/arm/Makefile               |    1 +
 xen/arch/arm/domain.c               |    8 +
 xen/arch/arm/domain_build.c         |    4 +
 xen/arch/arm/domctl.c               |    1 +
 xen/arch/arm/setup.c                |    1 +
 xen/arch/arm/shutdown.c             |    1 +
 xen/arch/arm/tee/Kconfig            |    4 +
 xen/arch/arm/tee/Makefile           |    2 +
 xen/arch/arm/tee/optee.c            | 1054 +++++++++++++++++++++++++++
 xen/arch/arm/tee/tee.c              |   69 ++
 xen/arch/arm/vsmc.c                 |    5 +
 xen/arch/arm/xen.lds.S              |    7 +
 xen/include/asm-arm/domain.h        |    3 +
 xen/include/asm-arm/tee/optee_msg.h |  444 +++++++++++
 xen/include/asm-arm/tee/optee_smc.h |  507 +++++++++++++
 xen/include/asm-arm/tee/tee.h       |   91 +++
 xen/include/public/arch-arm.h       |    3 +
 24 files changed, 2264 insertions(+)
 create mode 100644 xen/arch/arm/tee/Kconfig
 create mode 100644 xen/arch/arm/tee/Makefile
 create mode 100644 xen/arch/arm/tee/optee.c
 create mode 100644 xen/arch/arm/tee/tee.c
 create mode 100644 xen/include/asm-arm/tee/optee_msg.h
 create mode 100644 xen/include/asm-arm/tee/optee_smc.h
 create mode 100644 xen/include/asm-arm/tee/tee.h

-- 
2.20.0


* [PATCH v3 02/11] arm: add tee_enabled flag to xen_arch_domainconfig
  2018-12-18 21:11 [PATCH v3 00/11] TEE mediator (and OP-TEE) support in XEN Volodymyr Babchuk
  2018-12-18 21:11 ` [PATCH v3 01/11] arm: add generic TEE mediator framework Volodymyr Babchuk
  2018-12-18 21:11 ` [PATCH v3 03/11] arm: tee: add OP-TEE header files Volodymyr Babchuk
@ 2018-12-18 21:11 ` Volodymyr Babchuk
  2019-01-15 17:35   ` Julien Grall
  2018-12-18 21:11 ` [PATCH v3 04/11] optee: add OP-TEE mediator skeleton Volodymyr Babchuk
                   ` (8 subsequent siblings)
  11 siblings, 1 reply; 47+ messages in thread
From: Volodymyr Babchuk @ 2018-12-18 21:11 UTC (permalink / raw)
  To: xen-devel, Julien Grall; +Cc: Volodymyr Babchuk, tee-dev, Stefano Stabellini

From: Volodymyr Babchuk <vlad.babchuk@gmail.com>

This flag enables TEE support for a domain.
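
A minimal sketch of how the toolstack side could request TEE support through
this flag (hypothetical code, not part of this patch; the real xl/libxl
plumbing comes later in this series):

    struct xen_arch_domainconfig arch_cfg = {
        /* gic_version, nr_spis, ... filled in as usual */
        .tee_enabled = 1,   /* ask Xen to attach the TEE mediator */
    };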

Signed-off-by: Volodymyr Babchuk <vlad.babchuk@gmail.com>
---
 xen/arch/arm/domain.c         | 4 ++++
 xen/arch/arm/domctl.c         | 1 +
 xen/include/public/arch-arm.h | 3 +++
 3 files changed, 8 insertions(+)

diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 11b618515b..f04041931d 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -702,6 +702,10 @@ int arch_domain_create(struct domain *d,
     if ( (rc = domain_vtimer_init(d, &config->arch)) != 0 )
         goto fail;
 
+    if ( config->arch.tee_enabled )
+        if ( (rc = tee_enable(d)) != 0 )
+            goto fail;
+
     update_domain_wallclock_time(d);
 
     /*
diff --git a/xen/arch/arm/domctl.c b/xen/arch/arm/domctl.c
index 20691528a6..f019e035e8 100644
--- a/xen/arch/arm/domctl.c
+++ b/xen/arch/arm/domctl.c
@@ -13,6 +13,7 @@
 #include <xen/sched.h>
 #include <xen/types.h>
 #include <xsm/xsm.h>
+#include <asm-arm/tee/tee.h>
 #include <public/domctl.h>
 
 void arch_get_domain_info(const struct domain *d,
diff --git a/xen/include/public/arch-arm.h b/xen/include/public/arch-arm.h
index eb424e8286..b7a010e99e 100644
--- a/xen/include/public/arch-arm.h
+++ b/xen/include/public/arch-arm.h
@@ -323,6 +323,9 @@ struct xen_arch_domainconfig {
      *
      */
     uint32_t clock_frequency;
+
+    /* IN */
+    uint8_t tee_enabled;
 };
 #endif /* __XEN__ || __XEN_TOOLS__ */
 
-- 
2.20.0


* [PATCH v3 01/11] arm: add generic TEE mediator framework
  2018-12-18 21:11 [PATCH v3 00/11] TEE mediator (and OP-TEE) support in XEN Volodymyr Babchuk
@ 2018-12-18 21:11 ` Volodymyr Babchuk
  2019-01-15 17:50   ` Julien Grall
  2018-12-18 21:11 ` [PATCH v3 03/11] arm: tee: add OP-TEE header files Volodymyr Babchuk
                   ` (10 subsequent siblings)
  11 siblings, 1 reply; 47+ messages in thread
From: Volodymyr Babchuk @ 2018-12-18 21:11 UTC (permalink / raw)
  To: xen-devel, Julien Grall
  Cc: Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk,
	George Dunlap, Andrew Cooper, Ian Jackson, Tim Deegan, tee-dev,
	Jan Beulich, Volodymyr Babchuk

This patch adds a basic framework for TEE mediators. Guests can't talk
to a TEE directly; we need some entity that will intercept requests
and decide what to do with them. A "TEE mediator" is such an entity.

This is how it works: a user can build XEN with multiple TEE mediators
(see the next patches, where the OP-TEE mediator is introduced).
A TEE mediator registers itself with the REGISTER_TEE_MEDIATOR() macro,
in the same way as device drivers use the DT_DEVICE_START()/DT_DEVICE_END()
macros (see the sketch below).
At run time, during initialization, the framework calls the probe() function
of each available mediator driver to find out which TEE is installed
on the platform. The generic vSMC handler then calls the selected mediator
when it intercepts an SMC/HVC that belongs to the TEE OS or a TEE application.

Currently the TEE mediator is enabled only for Dom0.
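
For illustration, a mediator driver would plug into this framework roughly
like this (a minimal sketch with a hypothetical "footee" mediator, written
against the ops structure and macro introduced by this patch; the real
OP-TEE mediator comes later in the series):

    /* In a mediator source file, after #include <asm/tee/tee.h>: */

    static bool footee_probe(void)
    {
        /* Return true only if the TEE was found and the mediator is ready. */
        return false;
    }

    static int footee_enable(struct domain *d)
    {
        /* Tell the TEE about the new guest, set up per-domain state. */
        return 0;
    }

    static void footee_domain_destroy(struct domain *d)
    {
        /* Free everything that was allocated for this domain. */
    }

    static bool footee_handle_call(struct cpu_user_regs *regs)
    {
        /* Handle a trapped SMC/HVC on behalf of the current domain. */
        return false;
    }

    static const struct tee_mediator_ops footee_ops =
    {
        .probe          = footee_probe,
        .enable         = footee_enable,
        .domain_destroy = footee_domain_destroy,
        .handle_call    = footee_handle_call,
    };

    REGISTER_TEE_MEDIATOR(footee, "footee", &footee_ops);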

Signed-off-by: Volodymyr Babchuk <volodymyr_babchuk@epam.com>
---
 Changes from v2:
  - Removed empty tee/Kconfig file

 Changes from v1:
  - Removed tee_remove() function
  - CONFIG_TEE depends on EXPERT
  - tee_domain_created() converted to tee_enable()
  - tee_init() is called using initcall() mechanism
  - tee_handle_smc() renamed to tee_handle_call()

 Changes from "RFC" version:
  - renamed CONFIG_ARM_TEE to CONFIG_TEE
  - changed discovery mechanism: instead of UUID matching, TEE-specific
     probing is used

 MAINTAINERS                   |  6 +++
 xen/arch/arm/Kconfig          |  7 +++
 xen/arch/arm/Makefile         |  1 +
 xen/arch/arm/domain.c         |  4 ++
 xen/arch/arm/domain_build.c   |  4 ++
 xen/arch/arm/setup.c          |  1 +
 xen/arch/arm/shutdown.c       |  1 +
 xen/arch/arm/tee/Makefile     |  1 +
 xen/arch/arm/tee/tee.c        | 69 ++++++++++++++++++++++++++
 xen/arch/arm/vsmc.c           |  5 ++
 xen/arch/arm/xen.lds.S        |  7 +++
 xen/include/asm-arm/tee/tee.h | 91 +++++++++++++++++++++++++++++++++++
 12 files changed, 197 insertions(+)
 create mode 100644 xen/arch/arm/tee/Makefile
 create mode 100644 xen/arch/arm/tee/tee.c
 create mode 100644 xen/include/asm-arm/tee/tee.h

diff --git a/MAINTAINERS b/MAINTAINERS
index 96a0518f49..eac2b40fdf 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -376,6 +376,12 @@ F:	config/Stubdom.mk.in
 F:	m4/stubdom.m4
 F:	stubdom/
 
+TEE MEDIATORS
+M:	Volodymyr Babchuk <volodymyr_babchuk@epam.com>
+S:	Supported
+F:	xen/arch/arm/tee/
+F:	xen/include/asm-arm/tee
+
 TOOLSTACK
 M:	Ian Jackson <ian.jackson@eu.citrix.com>
 M:	Wei Liu <wei.liu2@citrix.com>
diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig
index 581de67b6b..e527b2f885 100644
--- a/xen/arch/arm/Kconfig
+++ b/xen/arch/arm/Kconfig
@@ -105,6 +105,13 @@ config HARDEN_BRANCH_PREDICTOR
 
 	  If unsure, say Y.
 
+config TEE
+	bool "Enable TEE mediators support" if EXPERT = "y"
+	default n
+	help
+	  This option enables generic TEE mediator support. It allows guests
+	  to access a real TEE via one of the TEE mediators implemented in XEN.
+
 endmenu
 
 menu "ARM errata workaround via the alternative framework"
diff --git a/xen/arch/arm/Makefile b/xen/arch/arm/Makefile
index cb902cb6fe..5c2aa34557 100644
--- a/xen/arch/arm/Makefile
+++ b/xen/arch/arm/Makefile
@@ -5,6 +5,7 @@ subdir-$(CONFIG_ACPI) += acpi
 ifneq ($(CONFIG_NO_PLAT),y)
 subdir-y += platforms
 endif
+subdir-$(CONFIG_TEE) += tee
 
 obj-$(CONFIG_HAS_ALTERNATIVE) += alternative.o
 obj-y += bootfdt.init.o
diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 1d926dcb29..11b618515b 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -32,6 +32,7 @@
 #include <asm/platform.h>
 #include <asm/procinfo.h>
 #include <asm/regs.h>
+#include <asm/tee/tee.h>
 #include <asm/vfp.h>
 #include <asm/vgic.h>
 #include <asm/vtimer.h>
@@ -931,6 +932,9 @@ int domain_relinquish_resources(struct domain *d)
          */
         domain_vpl011_deinit(d);
 
+        /* Free TEE mediator resources */
+        tee_domain_destroy(d);
+
         d->arch.relmem = RELMEM_xen;
         /* Fallthrough */
 
diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index b0ec3f0b72..39a887b505 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -24,6 +24,7 @@
 #include <asm/setup.h>
 #include <asm/cpufeature.h>
 #include <asm/domain_build.h>
+#include <asm/tee/tee.h>
 
 #include <xen/irq.h>
 #include <xen/grant_table.h>
@@ -1962,6 +1963,9 @@ static int __init construct_domain(struct domain *d, struct kernel_info *kinfo)
     set_current(saved_current);
     p2m_restore_state(saved_current);
 
+    /* Enable TEE */
+    tee_enable(d);
+
     memset(regs, 0, sizeof(*regs));
 
     regs->pc = (register_t)kinfo->entry;
diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index e83221ab79..cad568d432 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -48,6 +48,7 @@
 #include <asm/platform.h>
 #include <asm/procinfo.h>
 #include <asm/setup.h>
+#include <asm/tee/tee.h>
 #include <xsm/xsm.h>
 #include <asm/acpi.h>
 
diff --git a/xen/arch/arm/shutdown.c b/xen/arch/arm/shutdown.c
index b32f07ec0e..30c69506ff 100644
--- a/xen/arch/arm/shutdown.c
+++ b/xen/arch/arm/shutdown.c
@@ -5,6 +5,7 @@
 #include <xen/smp.h>
 #include <asm/platform.h>
 #include <asm/psci.h>
+#include <asm/tee/tee.h>
 
 static void noreturn halt_this_cpu(void *arg)
 {
diff --git a/xen/arch/arm/tee/Makefile b/xen/arch/arm/tee/Makefile
new file mode 100644
index 0000000000..c54d4796ff
--- /dev/null
+++ b/xen/arch/arm/tee/Makefile
@@ -0,0 +1 @@
+obj-y += tee.o
diff --git a/xen/arch/arm/tee/tee.c b/xen/arch/arm/tee/tee.c
new file mode 100644
index 0000000000..7fd0148b8a
--- /dev/null
+++ b/xen/arch/arm/tee/tee.c
@@ -0,0 +1,69 @@
+/*
+ * xen/arch/arm/tee/tee.c
+ *
+ * Generic part of TEE mediator subsystem
+ *
+ * Volodymyr Babchuk <volodymyr_babchuk@epam.com>
+ * Copyright (c) 2018 EPAM Systems.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <xen/init.h>
+#include <xen/errno.h>
+#include <xen/types.h>
+#include <asm/tee/tee.h>
+
+extern const struct tee_mediator_desc _steemediator[], _eteemediator[];
+static const struct tee_mediator_ops *mediator_ops;
+
+bool tee_handle_call(struct cpu_user_regs *regs)
+{
+    if ( !mediator_ops )
+        return false;
+
+    return mediator_ops->handle_call(regs);
+}
+
+int tee_enable(struct domain *d)
+{
+    if ( !mediator_ops )
+        return -ENODEV;
+
+    return mediator_ops->enable(d);
+}
+
+void tee_domain_destroy(struct domain *d)
+{
+    if ( !mediator_ops )
+        return;
+
+    return mediator_ops->domain_destroy(d);
+}
+
+static int __init tee_init(void)
+{
+    const struct tee_mediator_desc *desc;
+
+    for ( desc = _steemediator; desc != _eteemediator; desc++ )
+        if ( desc->ops->probe() )
+        {
+            printk(XENLOG_INFO "Using TEE mediator for %s\n", desc->name);
+            mediator_ops = desc->ops;
+            return 0;
+        }
+    return 0;
+}
+
+__initcall(tee_init);
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/arch/arm/vsmc.c b/xen/arch/arm/vsmc.c
index c4ccae6030..d0199c7874 100644
--- a/xen/arch/arm/vsmc.c
+++ b/xen/arch/arm/vsmc.c
@@ -23,6 +23,7 @@
 #include <asm/monitor.h>
 #include <asm/regs.h>
 #include <asm/smccc.h>
+#include <asm/tee/tee.h>
 #include <asm/traps.h>
 #include <asm/vpsci.h>
 
@@ -272,6 +273,10 @@ static bool vsmccc_handle_call(struct cpu_user_regs *regs)
         case ARM_SMCCC_OWNER_STANDARD:
             handled = handle_sssc(regs);
             break;
+        case ARM_SMCCC_OWNER_TRUSTED_APP ... ARM_SMCCC_OWNER_TRUSTED_APP_END:
+        case ARM_SMCCC_OWNER_TRUSTED_OS ... ARM_SMCCC_OWNER_TRUSTED_OS_END:
+            handled = tee_handle_call(regs);
+            break;
         }
     }
 
diff --git a/xen/arch/arm/xen.lds.S b/xen/arch/arm/xen.lds.S
index 1e72906477..e664c4441a 100644
--- a/xen/arch/arm/xen.lds.S
+++ b/xen/arch/arm/xen.lds.S
@@ -137,6 +137,13 @@ SECTIONS
       _aedevice = .;
   } :text
 
+  . = ALIGN(8);
+  .teemediator.info : {
+      _steemediator = .;
+      *(.teemediator.info)
+      _eteemediator = .;
+  } :text
+
   . = ALIGN(PAGE_SIZE);             /* Init code and data */
   __init_begin = .;
   .init.text : {
diff --git a/xen/include/asm-arm/tee/tee.h b/xen/include/asm-arm/tee/tee.h
new file mode 100644
index 0000000000..0e8b576372
--- /dev/null
+++ b/xen/include/asm-arm/tee/tee.h
@@ -0,0 +1,91 @@
+/*
+ * xen/include/asm-arm/tee/tee.h
+ *
+ * Generic part of TEE mediator subsystem
+ *
+ * Volodymyr Babchuk <volodymyr_babchuk@epam.com>
+ * Copyright (c) 2018 EPAM Systems.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#ifndef __ARCH_ARM_TEE_TEE_H__
+#define __ARCH_ARM_TEE_TEE_H__
+
+#include <xen/lib.h>
+#include <xen/types.h>
+#include <asm/regs.h>
+
+#ifdef CONFIG_TEE
+
+struct tee_mediator_ops {
+    /*
+     * Probe for TEE. Should return true if TEE found and
+     * mediator is initialized.
+     */
+    bool (*probe)(void);
+
+    /*
+     * Called during domain construction if toolstack requests to enable
+     * TEE support so mediator can inform TEE about new
+     * guest and create own structures for the new domain.
+     */
+    int (*enable)(struct domain *d);
+
+    /*
+     * Called during domain destruction to inform TEE that guest is now dead
+     * and to destroy all resources allocated for the domain being destroyed.
+     */
+    void (*domain_destroy)(struct domain *d);
+
+    /* Handle SMCCC call for current domain. */
+    bool (*handle_call)(struct cpu_user_regs *regs);
+};
+
+struct tee_mediator_desc {
+    /* Name of the TEE. Just for debugging purposes. */
+    const char *name;
+
+    /* Mediator callbacks as described above. */
+    const struct tee_mediator_ops *ops;
+};
+
+bool tee_handle_call(struct cpu_user_regs *regs);
+int tee_enable(struct domain *d);
+void tee_domain_destroy(struct domain *d);
+
+#define REGISTER_TEE_MEDIATOR(_name, _namestr, _ops)          \
+static const struct tee_mediator_desc __tee_desc_##_name __used     \
+__section(".teemediator.info") = {                                  \
+    .name = _namestr,                                               \
+    .ops = _ops                                                     \
+}
+
+#else
+
+static inline bool tee_handle_call(struct cpu_user_regs *regs)
+{
+    return false;
+}
+
+static inline int tee_enable(struct domain *d)
+{
+    return -ENODEV;
+}
+
+static inline void tee_domain_destroy(struct domain *d) {}
+
+#endif  /* CONFIG_TEE */
+
+#endif /* __ARCH_ARM_TEE_TEE_H__ */
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
-- 
2.20.0


* [PATCH v3 03/11] arm: tee: add OP-TEE header files
  2018-12-18 21:11 [PATCH v3 00/11] TEE mediator (and OP-TEE) support in XEN Volodymyr Babchuk
  2018-12-18 21:11 ` [PATCH v3 01/11] arm: add generic TEE mediator framework Volodymyr Babchuk
@ 2018-12-18 21:11 ` Volodymyr Babchuk
  2018-12-18 21:11 ` [PATCH v3 02/11] arm: add tee_enabled flag to xen_arch_domainconfig Volodymyr Babchuk
                   ` (9 subsequent siblings)
  11 siblings, 0 replies; 47+ messages in thread
From: Volodymyr Babchuk @ 2018-12-18 21:11 UTC (permalink / raw)
  To: xen-devel, Julien Grall; +Cc: tee-dev, Stefano Stabellini, Volodymyr Babchuk

These header files describe the protocol between OP-TEE and the OP-TEE
client driver in Linux. They are needed for the upcoming OP-TEE mediator,
which is added in the next patch.
The reason for adding these headers in a separate patch is to ease review.
The files were taken from the Linux tree (drivers/tee/optee/) and mangled
a bit to compile with XEN.
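
As a taste of what these definitions describe, here is a sketch of how a
normal-world client lays out an OPTEE_MSG_CMD_OPEN_SESSION request using
them (illustrative only, not part of the patch; 'arg' is assumed to point
to a buffer that is already shared with OP-TEE):

    static void prepare_open_session(struct optee_msg_arg *arg)
    {
        memset(arg, 0, OPTEE_MSG_GET_ARG_SIZE(2));
        arg->cmd = OPTEE_MSG_CMD_OPEN_SESSION;
        arg->num_params = 2;

        /* The first two parameters are meta: TA UUID and client UUID/login. */
        arg->params[0].attr = OPTEE_MSG_ATTR_TYPE_VALUE_INPUT |
                              OPTEE_MSG_ATTR_META;
        arg->params[1].attr = OPTEE_MSG_ATTR_TYPE_VALUE_INPUT |
                              OPTEE_MSG_ATTR_META;
        /* params[0].u.value.a/b carry the TA UUID,
         * params[1].u.value.a/b the client UUID. */
        arg->params[1].u.value.c = OPTEE_MSG_LOGIN_PUBLIC;
    }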

Signed-off-by: Volodymyr Babchuk <volodymyr_babchuk@epam.com>
---
 xen/include/asm-arm/tee/optee_msg.h | 444 +++++++++++++++++++++++++++
 xen/include/asm-arm/tee/optee_smc.h | 457 ++++++++++++++++++++++++++++
 2 files changed, 901 insertions(+)
 create mode 100644 xen/include/asm-arm/tee/optee_msg.h
 create mode 100644 xen/include/asm-arm/tee/optee_smc.h

diff --git a/xen/include/asm-arm/tee/optee_msg.h b/xen/include/asm-arm/tee/optee_msg.h
new file mode 100644
index 0000000000..10747b2aa8
--- /dev/null
+++ b/xen/include/asm-arm/tee/optee_msg.h
@@ -0,0 +1,444 @@
+/*
+ * Copyright (c) 2015-2016, Linaro Limited
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * 1. Redistributions of source code must retain the above copyright notice,
+ * this list of conditions and the following disclaimer.
+ *
+ * 2. Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+#ifndef _OPTEE_MSG_H
+#define _OPTEE_MSG_H
+
+#include <xen/bitops.h>
+#include <xen/types.h>
+
+/*
+ * This file defines the OP-TEE message protocol used to communicate
+ * with an instance of OP-TEE running in secure world.
+ *
+ * This file is divided into three sections.
+ * 1. Formatting of messages.
+ * 2. Requests from normal world
+ * 3. Requests from secure world, Remote Procedure Call (RPC), handled by
+ *    tee-supplicant.
+ */
+
+/*****************************************************************************
+ * Part 1 - formatting of messages
+ *****************************************************************************/
+
+#define OPTEE_MSG_ATTR_TYPE_NONE		0x0
+#define OPTEE_MSG_ATTR_TYPE_VALUE_INPUT		0x1
+#define OPTEE_MSG_ATTR_TYPE_VALUE_OUTPUT	0x2
+#define OPTEE_MSG_ATTR_TYPE_VALUE_INOUT		0x3
+#define OPTEE_MSG_ATTR_TYPE_RMEM_INPUT		0x5
+#define OPTEE_MSG_ATTR_TYPE_RMEM_OUTPUT		0x6
+#define OPTEE_MSG_ATTR_TYPE_RMEM_INOUT		0x7
+#define OPTEE_MSG_ATTR_TYPE_TMEM_INPUT		0x9
+#define OPTEE_MSG_ATTR_TYPE_TMEM_OUTPUT		0xa
+#define OPTEE_MSG_ATTR_TYPE_TMEM_INOUT		0xb
+
+#define OPTEE_MSG_ATTR_TYPE_MASK		GENMASK(7, 0)
+
+/*
+ * Meta parameter to be absorbed by the Secure OS and not passed
+ * to the Trusted Application.
+ *
+ * Currently only used with OPTEE_MSG_CMD_OPEN_SESSION.
+ */
+#define OPTEE_MSG_ATTR_META			BIT(8)
+
+/*
+ * Pointer to a list of pages used to register user-defined SHM buffer.
+ * Used with OPTEE_MSG_ATTR_TYPE_TMEM_*.
+ * buf_ptr should point to the beginning of the buffer. Buffer will contain
+ * list of page addresses. OP-TEE core can reconstruct contiguous buffer from
+ * that page addresses list. Page addresses are stored as 64 bit values.
+ * Last entry on a page should point to the next page of buffer.
+ * Every entry in buffer should point to a 4k page beginning (12 least
+ * significant bits must be equal to zero).
+ *
+ * 12 least significant bits of optee_msg_param.u.tmem.buf_ptr should hold page
+ * offset of the user buffer.
+ *
+ * So, entries should be placed like members of this structure:
+ *
+ * struct page_data {
+ *   uint64_t pages_array[OPTEE_MSG_NONCONTIG_PAGE_SIZE/sizeof(uint64_t) - 1];
+ *   uint64_t next_page_data;
+ * };
+ *
+ * Structure is designed to exactly fit into the page size
+ * OPTEE_MSG_NONCONTIG_PAGE_SIZE which is a standard 4KB page.
+ *
+ * The size of 4KB is chosen because this is the smallest page size for ARM
+ * architectures. If REE uses larger pages, it should divide them to 4KB ones.
+ */
+#define OPTEE_MSG_ATTR_NONCONTIG		BIT(9)
+
+/*
+ * Memory attributes for caching passed with temp memrefs. The actual value
+ * used is defined outside the message protocol with the exception of
+ * OPTEE_MSG_ATTR_CACHE_PREDEFINED which means the attributes already
+ * defined for the memory range should be used. If optee_smc.h is used as
+ * bearer of this protocol OPTEE_SMC_SHM_* is used for values.
+ */
+#define OPTEE_MSG_ATTR_CACHE_SHIFT		16
+#define OPTEE_MSG_ATTR_CACHE_MASK		GENMASK(2, 0)
+#define OPTEE_MSG_ATTR_CACHE_PREDEFINED		0
+
+/*
+ * Same values as TEE_LOGIN_* from TEE Internal API
+ */
+#define OPTEE_MSG_LOGIN_PUBLIC			0x00000000
+#define OPTEE_MSG_LOGIN_USER			0x00000001
+#define OPTEE_MSG_LOGIN_GROUP			0x00000002
+#define OPTEE_MSG_LOGIN_APPLICATION		0x00000004
+#define OPTEE_MSG_LOGIN_APPLICATION_USER	0x00000005
+#define OPTEE_MSG_LOGIN_APPLICATION_GROUP	0x00000006
+
+/*
+ * Page size used in non-contiguous buffer entries
+ */
+#define OPTEE_MSG_NONCONTIG_PAGE_SIZE		4096
+
+/**
+ * struct optee_msg_param_tmem - temporary memory reference parameter
+ * @buf_ptr:	Address of the buffer
+ * @size:	Size of the buffer
+ * @shm_ref:	Temporary shared memory reference, pointer to a struct tee_shm
+ *
+ * Secure and normal world communicates pointers as physical address
+ * instead of the virtual address. This is because secure and normal world
+ * have completely independent memory mapping. Normal world can even have a
+ * hypervisor which need to translate the guest physical address (AKA IPA
+ * in ARM documentation) to a real physical address before passing the
+ * structure to secure world.
+ */
+struct optee_msg_param_tmem {
+	u64 buf_ptr;
+	u64 size;
+	u64 shm_ref;
+};
+
+/**
+ * struct optee_msg_param_rmem - registered memory reference parameter
+ * @offs:	Offset into shared memory reference
+ * @size:	Size of the buffer
+ * @shm_ref:	Shared memory reference, pointer to a struct tee_shm
+ */
+struct optee_msg_param_rmem {
+	u64 offs;
+	u64 size;
+	u64 shm_ref;
+};
+
+/**
+ * struct optee_msg_param_value - opaque value parameter
+ *
+ * Value parameters are passed unchecked between normal and secure world.
+ */
+struct optee_msg_param_value {
+	u64 a;
+	u64 b;
+	u64 c;
+};
+
+/**
+ * struct optee_msg_param - parameter used together with struct optee_msg_arg
+ * @attr:	attributes
+ * @tmem:	parameter by temporary memory reference
+ * @rmem:	parameter by registered memory reference
+ * @value:	parameter by opaque value
+ *
+ * @attr & OPTEE_MSG_ATTR_TYPE_MASK indicates if tmem, rmem or value is used in
+ * the union. OPTEE_MSG_ATTR_TYPE_VALUE_* indicates value,
+ * OPTEE_MSG_ATTR_TYPE_TMEM_* indicates @tmem and
+ * OPTEE_MSG_ATTR_TYPE_RMEM_* indicates @rmem,
+ * OPTEE_MSG_ATTR_TYPE_NONE indicates that none of the members are used.
+ */
+struct optee_msg_param {
+	u64 attr;
+	union {
+		struct optee_msg_param_tmem tmem;
+		struct optee_msg_param_rmem rmem;
+		struct optee_msg_param_value value;
+	} u;
+};
+
+/**
+ * struct optee_msg_arg - call argument
+ * @cmd: Command, one of OPTEE_MSG_CMD_* or OPTEE_MSG_RPC_CMD_*
+ * @func: Trusted Application function, specific to the Trusted Application,
+ *	     used if cmd == OPTEE_MSG_CMD_INVOKE_COMMAND
+ * @session: In parameter for all OPTEE_MSG_CMD_* except
+ *	     OPTEE_MSG_CMD_OPEN_SESSION where it's an output parameter instead
+ * @cancel_id: Cancellation id, a unique value to identify this request
+ * @ret: return value
+ * @ret_origin: origin of the return value
+ * @num_params: number of parameters supplied to the OS Command
+ * @params: the parameters supplied to the OS Command
+ *
+ * All normal calls to Trusted OS uses this struct. If cmd requires further
+ * information than what these field holds it can be passed as a parameter
+ * tagged as meta (setting the OPTEE_MSG_ATTR_META bit in corresponding
+ * attrs field). All parameters tagged as meta has to come first.
+ *
+ * Temp memref parameters can be fragmented if supported by the Trusted OS
+ * (when optee_smc.h is bearer of this protocol this is indicated with
+ * OPTEE_SMC_SEC_CAP_UNREGISTERED_SHM). If a logical memref parameter is
+ * fragmented then has all but the last fragment the
+ * OPTEE_MSG_ATTR_FRAGMENT bit set in attrs. Even if a memref is fragmented
+ * it will still be presented as a single logical memref to the Trusted
+ * Application.
+ */
+struct optee_msg_arg {
+	u32 cmd;
+	u32 func;
+	u32 session;
+	u32 cancel_id;
+	u32 pad;
+	u32 ret;
+	u32 ret_origin;
+	u32 num_params;
+
+	/* num_params tells the actual number of element in params */
+	struct optee_msg_param params[0];
+};
+
+/**
+ * OPTEE_MSG_GET_ARG_SIZE - return size of struct optee_msg_arg
+ *
+ * @num_params: Number of parameters embedded in the struct optee_msg_arg
+ *
+ * Returns the size of the struct optee_msg_arg together with the number
+ * of embedded parameters.
+ */
+#define OPTEE_MSG_GET_ARG_SIZE(num_params) \
+	(sizeof(struct optee_msg_arg) + \
+	 sizeof(struct optee_msg_param) * (num_params))
+
+/*****************************************************************************
+ * Part 2 - requests from normal world
+ *****************************************************************************/
+
+/*
+ * Return the following UID if using API specified in this file without
+ * further extensions:
+ * 384fb3e0-e7f8-11e3-af63-0002a5d5c51b.
+ * Represented in 4 32-bit words in OPTEE_MSG_UID_0, OPTEE_MSG_UID_1,
+ * OPTEE_MSG_UID_2, OPTEE_MSG_UID_3.
+ */
+#define OPTEE_MSG_UID_0			0x384fb3e0
+#define OPTEE_MSG_UID_1			0xe7f811e3
+#define OPTEE_MSG_UID_2			0xaf630002
+#define OPTEE_MSG_UID_3			0xa5d5c51b
+#define OPTEE_MSG_FUNCID_CALLS_UID	0xFF01
+
+/*
+ * Returns 2.0 if using API specified in this file without further
+ * extensions. Represented in 2 32-bit words in OPTEE_MSG_REVISION_MAJOR
+ * and OPTEE_MSG_REVISION_MINOR
+ */
+#define OPTEE_MSG_REVISION_MAJOR	2
+#define OPTEE_MSG_REVISION_MINOR	0
+#define OPTEE_MSG_FUNCID_CALLS_REVISION	0xFF03
+
+/*
+ * Get UUID of Trusted OS.
+ *
+ * Used by non-secure world to figure out which Trusted OS is installed.
+ * Note that returned UUID is the UUID of the Trusted OS, not of the API.
+ *
+ * Returns UUID in 4 32-bit words in the same way as
+ * OPTEE_MSG_FUNCID_CALLS_UID described above.
+ */
+#define OPTEE_MSG_OS_OPTEE_UUID_0	0x486178e0
+#define OPTEE_MSG_OS_OPTEE_UUID_1	0xe7f811e3
+#define OPTEE_MSG_OS_OPTEE_UUID_2	0xbc5e0002
+#define OPTEE_MSG_OS_OPTEE_UUID_3	0xa5d5c51b
+#define OPTEE_MSG_FUNCID_GET_OS_UUID	0x0000
+
+/*
+ * Get revision of Trusted OS.
+ *
+ * Used by non-secure world to figure out which version of the Trusted OS
+ * is installed. Note that the returned revision is the revision of the
+ * Trusted OS, not of the API.
+ *
+ * Returns revision in 2 32-bit words in the same way as
+ * OPTEE_MSG_CALLS_REVISION described above.
+ */
+#define OPTEE_MSG_FUNCID_GET_OS_REVISION	0x0001
+
+/*
+ * Do a secure call with struct optee_msg_arg as argument
+ * The OPTEE_MSG_CMD_* below defines what goes in struct optee_msg_arg::cmd
+ *
+ * OPTEE_MSG_CMD_OPEN_SESSION opens a session to a Trusted Application.
+ * The first two parameters are tagged as meta, holding two value
+ * parameters to pass the following information:
+ * param[0].u.value.a-b uuid of Trusted Application
+ * param[1].u.value.a-b uuid of Client
+ * param[1].u.value.c Login class of client OPTEE_MSG_LOGIN_*
+ *
+ * OPTEE_MSG_CMD_INVOKE_COMMAND invokes a command a previously opened
+ * session to a Trusted Application.  struct optee_msg_arg::func is Trusted
+ * Application function, specific to the Trusted Application.
+ *
+ * OPTEE_MSG_CMD_CLOSE_SESSION closes a previously opened session to
+ * Trusted Application.
+ *
+ * OPTEE_MSG_CMD_CANCEL cancels a currently invoked command.
+ *
+ * OPTEE_MSG_CMD_REGISTER_SHM registers a shared memory reference. The
+ * information is passed as:
+ * [in] param[0].attr			OPTEE_MSG_ATTR_TYPE_TMEM_INPUT
+ *					[| OPTEE_MSG_ATTR_FRAGMENT]
+ * [in] param[0].u.tmem.buf_ptr		physical address (of first fragment)
+ * [in] param[0].u.tmem.size		size (of first fragment)
+ * [in] param[0].u.tmem.shm_ref		holds shared memory reference
+ * ...
+ * The shared memory can optionally be fragmented, temp memrefs can follow
+ * each other with all but the last with the OPTEE_MSG_ATTR_FRAGMENT bit set.
+ *
+ * OPTEE_MSG_CMD_UNREGISTER_SHM unregisters a previously registered shared
+ * memory reference. The information is passed as:
+ * [in] param[0].attr			OPTEE_MSG_ATTR_TYPE_RMEM_INPUT
+ * [in] param[0].u.rmem.shm_ref		holds shared memory reference
+ * [in] param[0].u.rmem.offs		0
+ * [in] param[0].u.rmem.size		0
+ */
+#define OPTEE_MSG_CMD_OPEN_SESSION	0
+#define OPTEE_MSG_CMD_INVOKE_COMMAND	1
+#define OPTEE_MSG_CMD_CLOSE_SESSION	2
+#define OPTEE_MSG_CMD_CANCEL		3
+#define OPTEE_MSG_CMD_REGISTER_SHM	4
+#define OPTEE_MSG_CMD_UNREGISTER_SHM	5
+#define OPTEE_MSG_FUNCID_CALL_WITH_ARG	0x0004
+
+/*****************************************************************************
+ * Part 3 - Requests from secure world, RPC
+ *****************************************************************************/
+
+/*
+ * All RPC is done with a struct optee_msg_arg as bearer of information,
+ * struct optee_msg_arg::arg holds values defined by OPTEE_MSG_RPC_CMD_* below
+ *
+ * RPC communication with tee-supplicant is reversed compared to normal
+ * client communication described above. The supplicant receives requests
+ * and sends responses.
+ */
+
+/*
+ * Load a TA into memory, defined in tee-supplicant
+ */
+#define OPTEE_MSG_RPC_CMD_LOAD_TA	0
+
+/*
+ * Reserved
+ */
+#define OPTEE_MSG_RPC_CMD_RPMB		1
+
+/*
+ * File system access, defined in tee-supplicant
+ */
+#define OPTEE_MSG_RPC_CMD_FS		2
+
+/*
+ * Get time
+ *
+ * Returns number of seconds and nano seconds since the Epoch,
+ * 1970-01-01 00:00:00 +0000 (UTC).
+ *
+ * [out] param[0].u.value.a	Number of seconds
+ * [out] param[0].u.value.b	Number of nano seconds.
+ */
+#define OPTEE_MSG_RPC_CMD_GET_TIME	3
+
+/*
+ * Wait queue primitive, helper for secure world to implement a wait queue.
+ *
+ * If secure world needs to wait for a secure world mutex it issues a sleep
+ * request instead of spinning in secure world. Conversely, a wakeup
+ * request is issued when a secure world mutex with a waiting thread is
+ * unlocked.
+ *
+ * Waiting on a key
+ * [in] param[0].u.value.a OPTEE_MSG_RPC_WAIT_QUEUE_SLEEP
+ * [in] param[0].u.value.b wait key
+ *
+ * Waking up a key
+ * [in] param[0].u.value.a OPTEE_MSG_RPC_WAIT_QUEUE_WAKEUP
+ * [in] param[0].u.value.b wakeup key
+ */
+#define OPTEE_MSG_RPC_CMD_WAIT_QUEUE	4
+#define OPTEE_MSG_RPC_WAIT_QUEUE_SLEEP	0
+#define OPTEE_MSG_RPC_WAIT_QUEUE_WAKEUP	1
+
+/*
+ * Suspend execution
+ *
+ * [in] param[0].value	.a number of milliseconds to suspend
+ */
+#define OPTEE_MSG_RPC_CMD_SUSPEND	5
+
+/*
+ * Allocate a piece of shared memory
+ *
+ * Shared memory can optionally be fragmented, to support that additional
+ * spare param entries are allocated to make room for eventual fragments.
+ * The spare param entries has .attr = OPTEE_MSG_ATTR_TYPE_NONE when
+ * unused. All returned temp memrefs except the last should have the
+ * OPTEE_MSG_ATTR_FRAGMENT bit set in the attr field.
+ *
+ * [in]  param[0].u.value.a		type of memory one of
+ *					OPTEE_MSG_RPC_SHM_TYPE_* below
+ * [in]  param[0].u.value.b		requested size
+ * [in]  param[0].u.value.c		required alignment
+ *
+ * [out] param[0].u.tmem.buf_ptr	physical address (of first fragment)
+ * [out] param[0].u.tmem.size		size (of first fragment)
+ * [out] param[0].u.tmem.shm_ref	shared memory reference
+ * ...
+ * [out] param[n].u.tmem.buf_ptr	physical address
+ * [out] param[n].u.tmem.size		size
+ * [out] param[n].u.tmem.shm_ref	shared memory reference (same value
+ *					as in param[n-1].u.tmem.shm_ref)
+ */
+#define OPTEE_MSG_RPC_CMD_SHM_ALLOC	6
+/* Memory that can be shared with a non-secure user space application */
+#define OPTEE_MSG_RPC_SHM_TYPE_APPL	0
+/* Memory only shared with non-secure kernel */
+#define OPTEE_MSG_RPC_SHM_TYPE_KERNEL	1
+
+/*
+ * Free shared memory previously allocated with OPTEE_MSG_RPC_CMD_SHM_ALLOC
+ *
+ * [in]  param[0].u.value.a		type of memory one of
+ *					OPTEE_MSG_RPC_SHM_TYPE_* above
+ * [in]  param[0].u.value.b		value of shared memory reference
+ *					returned in param[0].u.tmem.shm_ref
+ *					above
+ */
+#define OPTEE_MSG_RPC_CMD_SHM_FREE	7
+
+#endif /* _OPTEE_MSG_H */
diff --git a/xen/include/asm-arm/tee/optee_smc.h b/xen/include/asm-arm/tee/optee_smc.h
new file mode 100644
index 0000000000..26d100e215
--- /dev/null
+++ b/xen/include/asm-arm/tee/optee_smc.h
@@ -0,0 +1,457 @@
+/*
+ * Copyright (c) 2015-2016, Linaro Limited
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * 1. Redistributions of source code must retain the above copyright notice,
+ * this list of conditions and the following disclaimer.
+ *
+ * 2. Redistributions in binary form must reproduce the above copyright notice,
+ * this list of conditions and the following disclaimer in the documentation
+ * and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+#ifndef OPTEE_SMC_H
+#define OPTEE_SMC_H
+
+#include <asm/smccc.h>
+#include <xen/bitops.h>
+
+#define OPTEE_SMC_STD_CALL_VAL(func_num) \
+	ARM_SMCCC_CALL_VAL(ARM_SMCCC_STD_CALL, ARM_SMCCC_CONV_32, \
+			   ARM_SMCCC_OWNER_TRUSTED_OS, (func_num))
+#define OPTEE_SMC_FAST_CALL_VAL(func_num) \
+	ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL, ARM_SMCCC_CONV_32, \
+			   ARM_SMCCC_OWNER_TRUSTED_OS, (func_num))
+
+/*
+ * Function specified by SMC Calling convention.
+ */
+#define OPTEE_SMC_FUNCID_CALLS_COUNT	0xFF00
+#define OPTEE_SMC_CALLS_COUNT \
+	ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL, ARM_SMCCC_CONV_32, \
+			   ARM_SMCCC_OWNER_TRUSTED_OS_END, \
+			   OPTEE_SMC_FUNCID_CALLS_COUNT)
+
+/*
+ * Normal cached memory (write-back), shareable for SMP systems and not
+ * shareable for UP systems.
+ */
+#define OPTEE_SMC_SHM_CACHED		1
+
+/*
+ * a0..a7 is used as register names in the descriptions below, on arm32
+ * that translates to r0..r7 and on arm64 to w0..w7. In both cases it's
+ * 32-bit registers.
+ */
+
+/*
+ * Function specified by SMC Calling convention
+ *
+ * Return one of the following UIDs if using API specified in this file
+ * without further extensions:
+ * 65cb6b93-af0c-4617-8ed6-644a8d1140f8
+ * see also OPTEE_SMC_UID_* in optee_msg.h
+ */
+#define OPTEE_SMC_FUNCID_CALLS_UID OPTEE_MSG_FUNCID_CALLS_UID
+#define OPTEE_SMC_CALLS_UID \
+	ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL, ARM_SMCCC_CONV_32, \
+			   ARM_SMCCC_OWNER_TRUSTED_OS_END, \
+			   OPTEE_SMC_FUNCID_CALLS_UID)
+
+/*
+ * Function specified by SMC Calling convention
+ *
+ * Returns 2.0 if using API specified in this file without further extensions.
+ * see also OPTEE_MSG_REVISION_* in optee_msg.h
+ */
+#define OPTEE_SMC_FUNCID_CALLS_REVISION OPTEE_MSG_FUNCID_CALLS_REVISION
+#define OPTEE_SMC_CALLS_REVISION \
+	ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL, ARM_SMCCC_CONV_32, \
+			   ARM_SMCCC_OWNER_TRUSTED_OS_END, \
+			   OPTEE_SMC_FUNCID_CALLS_REVISION)
+
+struct optee_smc_calls_revision_result {
+	unsigned long major;
+	unsigned long minor;
+	unsigned long reserved0;
+	unsigned long reserved1;
+};
+
+/*
+ * Get UUID of Trusted OS.
+ *
+ * Used by non-secure world to figure out which Trusted OS is installed.
+ * Note that returned UUID is the UUID of the Trusted OS, not of the API.
+ *
+ * Returns UUID in a0-4 in the same way as OPTEE_SMC_CALLS_UID
+ * described above.
+ */
+#define OPTEE_SMC_FUNCID_GET_OS_UUID OPTEE_MSG_FUNCID_GET_OS_UUID
+#define OPTEE_SMC_CALL_GET_OS_UUID \
+	OPTEE_SMC_FAST_CALL_VAL(OPTEE_SMC_FUNCID_GET_OS_UUID)
+
+/*
+ * Get revision of Trusted OS.
+ *
+ * Used by non-secure world to figure out which version of the Trusted OS
+ * is installed. Note that the returned revision is the revision of the
+ * Trusted OS, not of the API.
+ *
+ * Returns revision in a0-1 in the same way as OPTEE_SMC_CALLS_REVISION
+ * described above.
+ */
+#define OPTEE_SMC_FUNCID_GET_OS_REVISION OPTEE_MSG_FUNCID_GET_OS_REVISION
+#define OPTEE_SMC_CALL_GET_OS_REVISION \
+	OPTEE_SMC_FAST_CALL_VAL(OPTEE_SMC_FUNCID_GET_OS_REVISION)
+
+/*
+ * Call with struct optee_msg_arg as argument
+ *
+ * Call register usage:
+ * a0	SMC Function ID, OPTEE_SMC*CALL_WITH_ARG
+ * a1	Upper 32bit of a 64bit physical pointer to a struct optee_msg_arg
+ * a2	Lower 32bit of a 64bit physical pointer to a struct optee_msg_arg
+ * a3	Cache settings, not used if physical pointer is in a predefined shared
+ *	memory area else per OPTEE_SMC_SHM_*
+ * a4-6	Not used
+ * a7	Hypervisor Client ID register
+ *
+ * Normal return register usage:
+ * a0	Return value, OPTEE_SMC_RETURN_*
+ * a1-3	Not used
+ * a4-7	Preserved
+ *
+ * OPTEE_SMC_RETURN_ETHREAD_LIMIT return register usage:
+ * a0	Return value, OPTEE_SMC_RETURN_ETHREAD_LIMIT
+ * a1-3	Preserved
+ * a4-7	Preserved
+ *
+ * RPC return register usage:
+ * a0	Return value, OPTEE_SMC_RETURN_IS_RPC(val)
+ * a1-2	RPC parameters
+ * a3-7	Resume information, must be preserved
+ *
+ * Possible return values:
+ * OPTEE_SMC_RETURN_UNKNOWN_FUNCTION	Trusted OS does not recognize this
+ *					function.
+ * OPTEE_SMC_RETURN_OK			Call completed, result updated in
+ *					the previously supplied struct
+ *					optee_msg_arg.
+ * OPTEE_SMC_RETURN_ETHREAD_LIMIT	Number of Trusted OS threads exceeded,
+ *					try again later.
+ * OPTEE_SMC_RETURN_EBADADDR		Bad physical pointer to struct
+ *					optee_msg_arg.
+ * OPTEE_SMC_RETURN_EBADCMD		Bad/unknown cmd in struct optee_msg_arg
+ * OPTEE_SMC_RETURN_IS_RPC()		Call suspended by RPC call to normal
+ *					world.
+ */
+#define OPTEE_SMC_FUNCID_CALL_WITH_ARG OPTEE_MSG_FUNCID_CALL_WITH_ARG
+#define OPTEE_SMC_CALL_WITH_ARG \
+	OPTEE_SMC_STD_CALL_VAL(OPTEE_SMC_FUNCID_CALL_WITH_ARG)
+
+/*
+ * Get Shared Memory Config
+ *
+ * Returns the Secure/Non-secure shared memory config.
+ *
+ * Call register usage:
+ * a0	SMC Function ID, OPTEE_SMC_GET_SHM_CONFIG
+ * a1-6	Not used
+ * a7	Hypervisor Client ID register
+ *
+ * Have config return register usage:
+ * a0	OPTEE_SMC_RETURN_OK
+ * a1	Physical address of start of SHM
+ * a2	Size of of SHM
+ * a3	Cache settings of memory, as defined by the
+ *	OPTEE_SMC_SHM_* values above
+ * a4-7	Preserved
+ *
+ * Not available register usage:
+ * a0	OPTEE_SMC_RETURN_ENOTAVAIL
+ * a1-3 Not used
+ * a4-7	Preserved
+ */
+#define OPTEE_SMC_FUNCID_GET_SHM_CONFIG	7
+#define OPTEE_SMC_GET_SHM_CONFIG \
+	OPTEE_SMC_FAST_CALL_VAL(OPTEE_SMC_FUNCID_GET_SHM_CONFIG)
+
+struct optee_smc_get_shm_config_result {
+	unsigned long status;
+	unsigned long start;
+	unsigned long size;
+	unsigned long settings;
+};
+
+/*
+ * Exchanges capabilities between normal world and secure world
+ *
+ * Call register usage:
+ * a0	SMC Function ID, OPTEE_SMC_EXCHANGE_CAPABILITIES
+ * a1	bitfield of normal world capabilities OPTEE_SMC_NSEC_CAP_*
+ * a2-6	Not used
+ * a7	Hypervisor Client ID register
+ *
+ * Normal return register usage:
+ * a0	OPTEE_SMC_RETURN_OK
+ * a1	bitfield of secure world capabilities OPTEE_SMC_SEC_CAP_*
+ * a2-7	Preserved
+ *
+ * Error return register usage:
+ * a0	OPTEE_SMC_RETURN_ENOTAVAIL, can't use the capabilities from normal world
+ * a1	bitfield of secure world capabilities OPTEE_SMC_SEC_CAP_*
+ * a2-7 Preserved
+ */
+/* Normal world works as a uniprocessor system */
+#define OPTEE_SMC_NSEC_CAP_UNIPROCESSOR		BIT(0)
+/* Secure world has reserved shared memory for normal world to use */
+#define OPTEE_SMC_SEC_CAP_HAVE_RESERVED_SHM	BIT(0)
+/* Secure world can communicate via previously unregistered shared memory */
+#define OPTEE_SMC_SEC_CAP_UNREGISTERED_SHM	BIT(1)
+
+/*
+ * Secure world supports commands "register/unregister shared memory",
+ * secure world accepts command buffers located in any parts of non-secure RAM
+ */
+#define OPTEE_SMC_SEC_CAP_DYNAMIC_SHM		BIT(2)
+
+#define OPTEE_SMC_FUNCID_EXCHANGE_CAPABILITIES	9
+#define OPTEE_SMC_EXCHANGE_CAPABILITIES \
+	OPTEE_SMC_FAST_CALL_VAL(OPTEE_SMC_FUNCID_EXCHANGE_CAPABILITIES)
+
+struct optee_smc_exchange_capabilities_result {
+	unsigned long status;
+	unsigned long capabilities;
+	unsigned long reserved0;
+	unsigned long reserved1;
+};
+
+/*
+ * Disable and empties cache of shared memory objects
+ *
+ * Secure world can cache frequently used shared memory objects, for
+ * example objects used as RPC arguments. When secure world is idle this
+ * function returns one shared memory reference to free. To disable the
+ * cache and free all cached objects this function has to be called until
+ * it returns OPTEE_SMC_RETURN_ENOTAVAIL.
+ *
+ * Call register usage:
+ * a0	SMC Function ID, OPTEE_SMC_DISABLE_SHM_CACHE
+ * a1-6	Not used
+ * a7	Hypervisor Client ID register
+ *
+ * Normal return register usage:
+ * a0	OPTEE_SMC_RETURN_OK
+ * a1	Upper 32bit of a 64bit Shared memory cookie
+ * a2	Lower 32bit of a 64bit Shared memory cookie
+ * a3-7	Preserved
+ *
+ * Cache empty return register usage:
+ * a0	OPTEE_SMC_RETURN_ENOTAVAIL
+ * a1-7	Preserved
+ *
+ * Not idle return register usage:
+ * a0	OPTEE_SMC_RETURN_EBUSY
+ * a1-7	Preserved
+ */
+#define OPTEE_SMC_FUNCID_DISABLE_SHM_CACHE	10
+#define OPTEE_SMC_DISABLE_SHM_CACHE \
+	OPTEE_SMC_FAST_CALL_VAL(OPTEE_SMC_FUNCID_DISABLE_SHM_CACHE)
+
+struct optee_smc_disable_shm_cache_result {
+	unsigned long status;
+	unsigned long shm_upper32;
+	unsigned long shm_lower32;
+	unsigned long reserved0;
+};
+
+/*
+ * Enable cache of shared memory objects
+ *
+ * Secure world can cache frequently used shared memory objects, for
+ * example objects used as RPC arguments. When secure world is idle this
+ * function returns OPTEE_SMC_RETURN_OK and the cache is enabled. If
+ * secure world isn't idle OPTEE_SMC_RETURN_EBUSY is returned.
+ *
+ * Call register usage:
+ * a0	SMC Function ID, OPTEE_SMC_ENABLE_SHM_CACHE
+ * a1-6	Not used
+ * a7	Hypervisor Client ID register
+ *
+ * Normal return register usage:
+ * a0	OPTEE_SMC_RETURN_OK
+ * a1-7	Preserved
+ *
+ * Not idle return register usage:
+ * a0	OPTEE_SMC_RETURN_EBUSY
+ * a1-7	Preserved
+ */
+#define OPTEE_SMC_FUNCID_ENABLE_SHM_CACHE	11
+#define OPTEE_SMC_ENABLE_SHM_CACHE \
+	OPTEE_SMC_FAST_CALL_VAL(OPTEE_SMC_FUNCID_ENABLE_SHM_CACHE)
+
+/*
+ * Resume from RPC (for example after processing a foreign interrupt)
+ *
+ * Call register usage:
+ * a0	SMC Function ID, OPTEE_SMC_CALL_RETURN_FROM_RPC
+ * a1-3	Value of a1-3 when OPTEE_SMC_CALL_WITH_ARG returned
+ *	OPTEE_SMC_RETURN_RPC in a0
+ *
+ * Return register usage is the same as for OPTEE_SMC_*CALL_WITH_ARG above.
+ *
+ * Possible return values
+ * OPTEE_SMC_RETURN_UNKNOWN_FUNCTION	Trusted OS does not recognize this
+ *					function.
+ * OPTEE_SMC_RETURN_OK			Original call completed, result
+ *					updated in the previously supplied.
+ *					struct optee_msg_arg
+ * OPTEE_SMC_RETURN_RPC			Call suspended by RPC call to normal
+ *					world.
+ * OPTEE_SMC_RETURN_ERESUME		Resume failed, the opaque resume
+ *					information was corrupt.
+ */
+#define OPTEE_SMC_FUNCID_RETURN_FROM_RPC	3
+#define OPTEE_SMC_CALL_RETURN_FROM_RPC \
+	OPTEE_SMC_STD_CALL_VAL(OPTEE_SMC_FUNCID_RETURN_FROM_RPC)
+
+#define OPTEE_SMC_RETURN_RPC_PREFIX_MASK	0xFFFF0000
+#define OPTEE_SMC_RETURN_RPC_PREFIX		0xFFFF0000
+#define OPTEE_SMC_RETURN_RPC_FUNC_MASK		0x0000FFFF
+
+#define OPTEE_SMC_RETURN_GET_RPC_FUNC(ret) \
+	((ret) & OPTEE_SMC_RETURN_RPC_FUNC_MASK)
+
+#define OPTEE_SMC_RPC_VAL(func)		((func) | OPTEE_SMC_RETURN_RPC_PREFIX)
+
+/*
+ * Allocate memory for RPC parameter passing. The memory is used to hold a
+ * struct optee_msg_arg.
+ *
+ * "Call" register usage:
+ * a0	This value, OPTEE_SMC_RETURN_RPC_ALLOC
+ * a1	Size in bytes of required argument memory
+ * a2	Not used
+ * a3	Resume information, must be preserved
+ * a4-5	Not used
+ * a6-7	Resume information, must be preserved
+ *
+ * "Return" register usage:
+ * a0	SMC Function ID, OPTEE_SMC_CALL_RETURN_FROM_RPC.
+ * a1	Upper 32bits of 64bit physical pointer to allocated
+ *	memory, (a1 == 0 && a2 == 0) if size was 0 or if memory can't
+ *	be allocated.
+ * a2	Lower 32bits of 64bit physical pointer to allocated
+ *	memory, (a1 == 0 && a2 == 0) if size was 0 or if memory can't
+ *	be allocated
+ * a3	Preserved
+ * a4	Upper 32bits of 64bit Shared memory cookie used when freeing
+ *	the memory or doing an RPC
+ * a5	Lower 32bits of 64bit Shared memory cookie used when freeing
+ *	the memory or doing an RPC
+ * a6-7	Preserved
+ */
+#define OPTEE_SMC_RPC_FUNC_ALLOC	0
+#define OPTEE_SMC_RETURN_RPC_ALLOC \
+	OPTEE_SMC_RPC_VAL(OPTEE_SMC_RPC_FUNC_ALLOC)
+
+/*
+ * Free memory previously allocated by OPTEE_SMC_RETURN_RPC_ALLOC
+ *
+ * "Call" register usage:
+ * a0	This value, OPTEE_SMC_RETURN_RPC_FREE
+ * a1	Upper 32bits of 64bit shared memory cookie belonging to this
+ *	argument memory
+ * a2	Lower 32bits of 64bit shared memory cookie belonging to this
+ *	argument memory
+ * a3-7	Resume information, must be preserved
+ *
+ * "Return" register usage:
+ * a0	SMC Function ID, OPTEE_SMC_CALL_RETURN_FROM_RPC.
+ * a1-2	Not used
+ * a3-7	Preserved
+ */
+#define OPTEE_SMC_RPC_FUNC_FREE		2
+#define OPTEE_SMC_RETURN_RPC_FREE \
+	OPTEE_SMC_RPC_VAL(OPTEE_SMC_RPC_FUNC_FREE)
+
+/*
+ * Deliver foreign interrupt to normal world.
+ *
+ * "Call" register usage:
+ * a0	OPTEE_SMC_RETURN_RPC_FOREIGN_INTR
+ * a1-7	Resume information, must be preserved
+ *
+ * "Return" register usage:
+ * a0	SMC Function ID, OPTEE_SMC_CALL_RETURN_FROM_RPC.
+ * a1-7	Preserved
+ */
+#define OPTEE_SMC_RPC_FUNC_FOREIGN_INTR		4
+#define OPTEE_SMC_RETURN_RPC_FOREIGN_INTR \
+	OPTEE_SMC_RPC_VAL(OPTEE_SMC_RPC_FUNC_FOREIGN_INTR)
+
+/*
+ * Do an RPC request. The supplied struct optee_msg_arg tells which
+ * request to do and the parameters for the request. The following fields
+ * are used (the rest are unused):
+ * - cmd		the Request ID
+ * - ret		return value of the request, filled in by normal world
+ * - num_params		number of parameters for the request
+ * - params		the parameters
+ * - param_attrs	attributes of the parameters
+ *
+ * "Call" register usage:
+ * a0	OPTEE_SMC_RETURN_RPC_CMD
+ * a1	Upper 32bit of a 64bit Shared memory cookie holding a
+ *	struct optee_msg_arg, must be preserved, only the data should
+ *	be updated
+ * a2	Lower 32bit of a 64bit Shared memory cookie holding a
+ *	struct optee_msg_arg, must be preserved, only the data should
+ *	be updated
+ * a3-7	Resume information, must be preserved
+ *
+ * "Return" register usage:
+ * a0	SMC Function ID, OPTEE_SMC_CALL_RETURN_FROM_RPC.
+ * a1-2	Not used
+ * a3-7	Preserved
+ */
+#define OPTEE_SMC_RPC_FUNC_CMD		5
+#define OPTEE_SMC_RETURN_RPC_CMD \
+	OPTEE_SMC_RPC_VAL(OPTEE_SMC_RPC_FUNC_CMD)
+
+/* Returned in a0 */
+#define OPTEE_SMC_RETURN_UNKNOWN_FUNCTION 0xFFFFFFFF
+
+/* Returned in a0 only from Trusted OS functions */
+#define OPTEE_SMC_RETURN_OK		0x0
+#define OPTEE_SMC_RETURN_ETHREAD_LIMIT	0x1
+#define OPTEE_SMC_RETURN_EBUSY		0x2
+#define OPTEE_SMC_RETURN_ERESUME	0x3
+#define OPTEE_SMC_RETURN_EBADADDR	0x4
+#define OPTEE_SMC_RETURN_EBADCMD	0x5
+#define OPTEE_SMC_RETURN_ENOMEM		0x6
+#define OPTEE_SMC_RETURN_ENOTAVAIL	0x7
+#define OPTEE_SMC_RETURN_IS_RPC(ret)	__optee_smc_return_is_rpc((ret))
+
+static inline bool __optee_smc_return_is_rpc(u32 ret)
+{
+	return ret != OPTEE_SMC_RETURN_UNKNOWN_FUNCTION &&
+	       (ret & OPTEE_SMC_RETURN_RPC_PREFIX_MASK) ==
+			OPTEE_SMC_RETURN_RPC_PREFIX;
+}
+
+#endif /* OPTEE_SMC_H */
-- 
2.20.0


* [PATCH v3 05/11] optee: add fast calls handling
  2018-12-18 21:11 [PATCH v3 00/11] TEE mediator (and OP-TEE) support in XEN Volodymyr Babchuk
                   ` (3 preceding siblings ...)
  2018-12-18 21:11 ` [PATCH v3 04/11] optee: add OP-TEE mediator skeleton Volodymyr Babchuk
@ 2018-12-18 21:11 ` Volodymyr Babchuk
  2019-01-17 18:31   ` Julien Grall
  2018-12-18 21:11 ` [PATCH v3 06/11] optee: add std call handling Volodymyr Babchuk
                   ` (6 subsequent siblings)
  11 siblings, 1 reply; 47+ messages in thread
From: Volodymyr Babchuk @ 2018-12-18 21:11 UTC (permalink / raw)
  To: xen-devel, Julien Grall
  Cc: Volodymyr Babchuk, tee-dev, Stefano Stabellini, Volodymyr Babchuk

From: Volodymyr Babchuk <vlad.babchuk@gmail.com>

Some fast SMCCC calls to OP-TEE should be handled in a special way.
The capabilities exchange should be filtered, so that only caps known to
the mediator are used. The mediator also disables the static SHM memory
capability, because it can't share OP-TEE memory with a domain. Only a
domain can share memory with OP-TEE, so the mediator ensures that OP-TEE
supports dynamic SHM.

Basically, static SHM is a reserved memory region which is always
mapped into the OP-TEE address space. It belongs to OP-TEE. Normally,
the normal world (NW) is allowed to access it, so it can communicate
with OP-TEE.

On the other hand, dynamic SHM is the NW's own memory, which it can
share with OP-TEE. OP-TEE maps this memory dynamically when it wants
to access it.

Because the mediator can't share one static SHM region with all guests,
it simply disables it for all of them.
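
Conceptually, the filtering on the secure-world capabilities looks like this
(a sketch that mirrors the handle_exchange_capabilities() code in the diff
below):

    caps &= OPTEE_KNOWN_SEC_CAPS;                  /* drop unknown bits */
    caps &= ~OPTEE_SMC_SEC_CAP_HAVE_RESERVED_SHM;  /* hide static SHM   */
    if ( !(caps & OPTEE_SMC_SEC_CAP_DYNAMIC_SHM) )
        /* Dynamic SHM is a hard requirement: report ENOTAVAIL to the guest. */
        set_return(regs, OPTEE_SMC_RETURN_ENOTAVAIL);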

Signed-off-by: Volodymyr Babchuk <vlad.babchuk@gmail.com>
---

 Changes from v2:
  - Defined known capabilities explicitly
  - Fixed code style

 xen/arch/arm/tee/optee.c | 58 ++++++++++++++++++++++++++++++++++++++--
 1 file changed, 56 insertions(+), 2 deletions(-)

diff --git a/xen/arch/arm/tee/optee.c b/xen/arch/arm/tee/optee.c
index 73ad25ee0b..584241b03a 100644
--- a/xen/arch/arm/tee/optee.c
+++ b/xen/arch/arm/tee/optee.c
@@ -22,6 +22,11 @@
 /* Client ID 0 is reserved for hypervisor itself */
 #define OPTEE_CLIENT_ID(domain) (domain->domain_id + 1)
 
+#define OPTEE_KNOWN_NSEC_CAPS OPTEE_SMC_NSEC_CAP_UNIPROCESSOR
+#define OPTEE_KNOWN_SEC_CAPS (OPTEE_SMC_SEC_CAP_HAVE_RESERVED_SHM | \
+                              OPTEE_SMC_SEC_CAP_UNREGISTERED_SHM |  \
+                              OPTEE_SMC_SEC_CAP_DYNAMIC_SHM)
+
 static bool optee_probe(void)
 {
     struct dt_device_node *node;
@@ -94,6 +99,18 @@ static void forward_call(struct cpu_user_regs *regs)
     set_user_reg(regs, 7, 0);
 }
 
+static void set_return(struct cpu_user_regs *regs, uint32_t ret)
+{
+    set_user_reg(regs, 0, ret);
+    set_user_reg(regs, 1, 0);
+    set_user_reg(regs, 2, 0);
+    set_user_reg(regs, 3, 0);
+    set_user_reg(regs, 4, 0);
+    set_user_reg(regs, 5, 0);
+    set_user_reg(regs, 6, 0);
+    set_user_reg(regs, 7, 0);
+}
+
 static void optee_domain_destroy(struct domain *d)
 {
     struct arm_smccc_res resp;
@@ -109,6 +126,39 @@ static void optee_domain_destroy(struct domain *d)
                   &resp);
 }
 
+static bool handle_exchange_capabilities(struct cpu_user_regs *regs)
+{
+    uint32_t caps;
+
+    /* Filter out unknown guest caps */
+    caps = get_user_reg(regs, 1);
+    caps &= OPTEE_KNOWN_NSEC_CAPS;
+    set_user_reg(regs, 1, caps);
+
+    forward_call(regs);
+    if ( get_user_reg(regs, 0) != OPTEE_SMC_RETURN_OK )
+        return true;
+
+    caps = get_user_reg(regs, 1);
+
+    /* Filter out unknown OP-TEE caps */
+    caps &= OPTEE_KNOWN_SEC_CAPS;
+
+    /* Drop static SHM_RPC cap */
+    caps &= ~OPTEE_SMC_SEC_CAP_HAVE_RESERVED_SHM;
+
+    /* Don't allow guests to work without dynamic SHM */
+    if ( !(caps & OPTEE_SMC_SEC_CAP_DYNAMIC_SHM) )
+    {
+        set_return(regs, OPTEE_SMC_RETURN_ENOTAVAIL);
+        return true;
+    }
+
+    set_user_reg(regs, 1, caps);
+
+    return true;
+}
+
 static bool optee_handle_call(struct cpu_user_regs *regs)
 {
     switch ( get_user_reg(regs, 0) )
@@ -120,12 +170,16 @@ static bool optee_handle_call(struct cpu_user_regs *regs)
     case OPTEE_SMC_FUNCID_GET_OS_REVISION:
     case OPTEE_SMC_ENABLE_SHM_CACHE:
     case OPTEE_SMC_DISABLE_SHM_CACHE:
-    case OPTEE_SMC_GET_SHM_CONFIG:
-    case OPTEE_SMC_EXCHANGE_CAPABILITIES:
     case OPTEE_SMC_CALL_WITH_ARG:
     case OPTEE_SMC_CALL_RETURN_FROM_RPC:
         forward_call(regs);
         return true;
+    case OPTEE_SMC_GET_SHM_CONFIG:
+        /* No static SHM available for guests */
+        set_return(regs, OPTEE_SMC_RETURN_ENOTAVAIL);
+        return true;
+    case OPTEE_SMC_EXCHANGE_CAPABILITIES:
+        return handle_exchange_capabilities(regs);
     default:
         return false;
     }
-- 
2.20.0


* [PATCH v3 04/11] optee: add OP-TEE mediator skeleton
  2018-12-18 21:11 [PATCH v3 00/11] TEE mediator (and OP-TEE) support in XEN Volodymyr Babchuk
                   ` (2 preceding siblings ...)
  2018-12-18 21:11 ` [PATCH v3 02/11] arm: add tee_enabled flag to xen_arch_domainconfig Volodymyr Babchuk
@ 2018-12-18 21:11 ` Volodymyr Babchuk
  2019-01-15 18:15   ` Julien Grall
  2019-01-16 17:51   ` Julien Grall
  2018-12-18 21:11 ` [PATCH v3 05/11] optee: add fast calls handling Volodymyr Babchuk
                   ` (7 subsequent siblings)
  11 siblings, 2 replies; 47+ messages in thread
From: Volodymyr Babchuk @ 2018-12-18 21:11 UTC (permalink / raw)
  To: xen-devel, Julien Grall
  Cc: Volodymyr Babchuk, tee-dev, Stefano Stabellini, Volodymyr Babchuk

From: Volodymyr Babchuk <vlad.babchuk@gmail.com>

Add a very basic OP-TEE mediator. It can probe for OP-TEE presence,
tell OP-TEE about domain creation/destruction and forward all known
calls.

This is all that is needed for Dom0 to work with OP-TEE, as long
as Dom0 shares 1:1 mapped pages with OP-TEE. Any attempt to call
OP-TEE from a DomU will fail and can lead to spectacular results.
Problems can also arise if Dom0 uses pages mapped from other domains.
So, this patch should not be merged without the next patches in the
series.

This code issues two non-preemptible calls to OP-TEE: one to create
and one to destroy the client context. They can't block inside OP-TEE,
but OP-TEE can wait on spinlocks, so there is no guaranteed maximum
execution time.
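
For background, "fast" here is the SMCCC notion: whether a call is fast
(atomic, runs to completion) or standard (yielding) is encoded in bit 31
of the function ID. A tiny illustrative helper (not part of this patch):

    /* Sketch: SMCCC function IDs with bit 31 set are fast calls */
    static inline bool smccc_is_fast_call(register_t funcid)
    {
        return funcid & (1U << 31);
    }

OPTEE_SMC_VM_CREATED and OPTEE_SMC_VM_DESTROYED are built with
OPTEE_SMC_FAST_CALL_VAL(), so OP-TEE serves them without allocating a
thread and with interrupts disabled.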

Signed-off-by: Volodymyr Babchuk <vlad.babchuk@gmail.com>
---

 Changes from v2:
  - Fixed coding style
  - Introduced tee/Kconfig
  - Fixed error messages

 xen/arch/arm/Kconfig                |   2 +
 xen/arch/arm/tee/Kconfig            |   4 +
 xen/arch/arm/tee/Makefile           |   1 +
 xen/arch/arm/tee/optee.c            | 151 ++++++++++++++++++++++++++++
 xen/include/asm-arm/tee/optee_smc.h |  50 +++++++++
 5 files changed, 208 insertions(+)
 create mode 100644 xen/arch/arm/tee/Kconfig
 create mode 100644 xen/arch/arm/tee/optee.c

diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig
index e527b2f885..99e6f0ebb2 100644
--- a/xen/arch/arm/Kconfig
+++ b/xen/arch/arm/Kconfig
@@ -237,3 +237,5 @@ source "arch/arm/platforms/Kconfig"
 source "common/Kconfig"
 
 source "drivers/Kconfig"
+
+source "arch/arm/tee/Kconfig"
diff --git a/xen/arch/arm/tee/Kconfig b/xen/arch/arm/tee/Kconfig
new file mode 100644
index 0000000000..5b829db2e9
--- /dev/null
+++ b/xen/arch/arm/tee/Kconfig
@@ -0,0 +1,4 @@
+config OPTEE
+	bool "Enable OP-TEE mediator"
+	default n
+	depends on TEE
diff --git a/xen/arch/arm/tee/Makefile b/xen/arch/arm/tee/Makefile
index c54d4796ff..982c879684 100644
--- a/xen/arch/arm/tee/Makefile
+++ b/xen/arch/arm/tee/Makefile
@@ -1 +1,2 @@
 obj-y += tee.o
+obj-$(CONFIG_OPTEE) += optee.o
diff --git a/xen/arch/arm/tee/optee.c b/xen/arch/arm/tee/optee.c
new file mode 100644
index 0000000000..73ad25ee0b
--- /dev/null
+++ b/xen/arch/arm/tee/optee.c
@@ -0,0 +1,151 @@
+/*
+ * xen/arch/arm/tee/optee.c
+ *
+ * OP-TEE mediator
+ *
+ * Volodymyr Babchuk <volodymyr_babchuk@epam.com>
+ * Copyright (c) 2018 EPAM Systems.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <xen/device_tree.h>
+#include <xen/sched.h>
+#include <asm/smccc.h>
+#include <asm/tee/tee.h>
+
+#include <asm/tee/optee_msg.h>
+#include <asm/tee/optee_smc.h>
+
+/* Client ID 0 is reserved for hypervisor itself */
+#define OPTEE_CLIENT_ID(domain) (domain->domain_id + 1)
+
+static bool optee_probe(void)
+{
+    struct dt_device_node *node;
+    struct arm_smccc_res resp;
+
+    /* Check for entry in dtb  */
+    node = dt_find_compatible_node(NULL, NULL, "linaro,optee-tz");
+    if ( !node )
+        return false;
+
+    /* Check UID */
+    arm_smccc_smc(ARM_SMCCC_CALL_UID_FID(TRUSTED_OS_END), &resp);
+
+    if ( (uint32_t)resp.a0 != OPTEE_MSG_UID_0 ||
+         (uint32_t)resp.a1 != OPTEE_MSG_UID_1 ||
+         (uint32_t)resp.a2 != OPTEE_MSG_UID_2 ||
+         (uint32_t)resp.a3 != OPTEE_MSG_UID_3 )
+        return false;
+
+    return true;
+}
+
+static int optee_enable(struct domain *d)
+{
+    struct arm_smccc_res resp;
+
+    /*
+     * Inform OP-TEE about a new guest.
+     * This is a "Fast" call in terms of OP-TEE. This basically
+     * means that it can't be preempted, because there is no
+     * thread allocated for it in OP-TEE. It is close to atomic
+     * context in linux kernel: E.g. no blocking calls can be issued.
+     * Also, interrupts are disabled.
+     * Right now OP-TEE just frees allocated memory, so it should be
+     * really fast.
+     */
+    arm_smccc_smc(OPTEE_SMC_VM_CREATED, OPTEE_CLIENT_ID(d), 0, 0, 0, 0, 0, 0,
+                  &resp);
+    if ( resp.a0 != OPTEE_SMC_RETURN_OK )
+    {
+        gprintk(XENLOG_WARNING, "Unable to create OPTEE client: rc = 0x%X\n",
+                (uint32_t)resp.a0);
+        return -ENODEV;
+    }
+
+    return 0;
+}
+
+static void forward_call(struct cpu_user_regs *regs)
+{
+    struct arm_smccc_res resp;
+
+    arm_smccc_smc(get_user_reg(regs, 0),
+                  get_user_reg(regs, 1),
+                  get_user_reg(regs, 2),
+                  get_user_reg(regs, 3),
+                  get_user_reg(regs, 4),
+                  get_user_reg(regs, 5),
+                  get_user_reg(regs, 6),
+                  OPTEE_CLIENT_ID(current->domain),
+                  &resp);
+
+    set_user_reg(regs, 0, resp.a0);
+    set_user_reg(regs, 1, resp.a1);
+    set_user_reg(regs, 2, resp.a2);
+    set_user_reg(regs, 3, resp.a3);
+    set_user_reg(regs, 4, 0);
+    set_user_reg(regs, 5, 0);
+    set_user_reg(regs, 6, 0);
+    set_user_reg(regs, 7, 0);
+}
+
+static void optee_domain_destroy(struct domain *d)
+{
+    struct arm_smccc_res resp;
+
+    /* At this time all domain VCPUs should be stopped */
+
+    /*
+     * Inform OP-TEE that domain is shutting down. This is
+     * also a fast SMC call, like OPTEE_SMC_VM_CREATED, so
+     * it is also non-preemptible.
+     */
+    arm_smccc_smc(OPTEE_SMC_VM_DESTROYED, OPTEE_CLIENT_ID(d), 0, 0, 0, 0, 0, 0,
+                  &resp);
+}
+
+static bool optee_handle_call(struct cpu_user_regs *regs)
+{
+    switch ( get_user_reg(regs, 0) )
+    {
+    case OPTEE_SMC_CALLS_COUNT:
+    case OPTEE_SMC_CALLS_UID:
+    case OPTEE_SMC_CALLS_REVISION:
+    case OPTEE_SMC_CALL_GET_OS_UUID:
+    case OPTEE_SMC_FUNCID_GET_OS_REVISION:
+    case OPTEE_SMC_ENABLE_SHM_CACHE:
+    case OPTEE_SMC_DISABLE_SHM_CACHE:
+    case OPTEE_SMC_GET_SHM_CONFIG:
+    case OPTEE_SMC_EXCHANGE_CAPABILITIES:
+    case OPTEE_SMC_CALL_WITH_ARG:
+    case OPTEE_SMC_CALL_RETURN_FROM_RPC:
+        forward_call(regs);
+        return true;
+    default:
+        return false;
+    }
+}
+
+static const struct tee_mediator_ops optee_ops =
+{
+    .probe = optee_probe,
+    .enable = optee_enable,
+    .domain_destroy = optee_domain_destroy,
+    .handle_call = optee_handle_call,
+};
+
+REGISTER_TEE_MEDIATOR(optee, "OP-TEE", &optee_ops);
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/asm-arm/tee/optee_smc.h b/xen/include/asm-arm/tee/optee_smc.h
index 26d100e215..1c5a2479e9 100644
--- a/xen/include/asm-arm/tee/optee_smc.h
+++ b/xen/include/asm-arm/tee/optee_smc.h
@@ -304,6 +304,56 @@ struct optee_smc_disable_shm_cache_result {
 #define OPTEE_SMC_ENABLE_SHM_CACHE \
 	OPTEE_SMC_FAST_CALL_VAL(OPTEE_SMC_FUNCID_ENABLE_SHM_CACHE)
 
+/*
+ * Inform OP-TEE about a new virtual machine
+ *
+ * Hypervisor issues this call during virtual machine (guest) creation.
+ * OP-TEE records VM_ID of new virtual machine and makes self ready
+ * to receive requests from it.
+ *
+ * Call requests usage:
+ * a0	SMC Function ID, OPTEE_SMC_VM_CREATED
+ * a1	VM_ID of newly created virtual machine
+ * a2-6 Not used
+ * a7	Hypervisor Client ID register. Must be 0, because only hypervisor
+ *      can issue this call
+ *
+ * Normal return register usage:
+ * a0	OPTEE_SMC_RETURN_OK
+ * a1-7	Preserved
+ *
+ * Error return:
+ * a0	OPTEE_SMC_RETURN_ENOTAVAIL	OP-TEE has no resources for
+ *					another VM
+ * a1-7	Preserved
+ *
+ */
+#define OPTEE_SMC_FUNCID_VM_CREATED	13
+#define OPTEE_SMC_VM_CREATED \
+	OPTEE_SMC_FAST_CALL_VAL(OPTEE_SMC_FUNCID_VM_CREATED)
+
+/*
+ * Inform OP-TEE about shutdown of a virtual machine
+ *
+ * Hypervisor issues this call during virtual machine (guest) destruction.
+ * OP-TEE will clean up all resources associated with this VM.
+ *
+ * Call requests usage:
+ * a0	SMC Function ID, OPTEE_SMC_VM_DESTROYED
+ * a1	VM_ID of virtual machine being shutted down
+ * a2-6 Not used
+ * a7	Hypervisor Client ID register. Must be 0, because only hypervisor
+ *      can issue this call
+ *
+ * Normal return register usage:
+ * a0	OPTEE_SMC_RETURN_OK
+ * a1-7	Preserved
+ *
+ */
+#define OPTEE_SMC_FUNCID_VM_DESTROYED	14
+#define OPTEE_SMC_VM_DESTROYED \
+	OPTEE_SMC_FAST_CALL_VAL(OPTEE_SMC_FUNCID_VM_DESTROYED)
+
 /*
  * Resume from RPC (for example after processing a foreign interrupt)
  *
-- 
2.20.0


* [PATCH v3 06/11] optee: add std call handling
  2018-12-18 21:11 [PATCH v3 00/11] TEE mediator (and OP-TEE) support in XEN Volodymyr Babchuk
                   ` (4 preceding siblings ...)
  2018-12-18 21:11 ` [PATCH v3 05/11] optee: add fast calls handling Volodymyr Babchuk
@ 2018-12-18 21:11 ` Volodymyr Babchuk
  2019-01-16 18:45   ` Julien Grall
  2018-12-18 21:11 ` [PATCH v3 07/11] optee: add support for RPC SHM buffers Volodymyr Babchuk
                   ` (5 subsequent siblings)
  11 siblings, 1 reply; 47+ messages in thread
From: Volodymyr Babchuk @ 2018-12-18 21:11 UTC (permalink / raw)
  To: xen-devel, Julien Grall
  Cc: Volodymyr Babchuk, tee-dev, Stefano Stabellini, Volodymyr Babchuk

From: Volodymyr Babchuk <vlad.babchuk@gmail.com>

The main way to communicate with OP-TEE is to issue a standard SMCCC
call. "Standard" is an SMCCC term: it means that the call can be
interrupted and OP-TEE can return control to NW before completing
the call.

In contrast to fast calls, where arguments and return values are
passed in registers, standard calls use shared memory. The register
pair a1,a2 holds the 64-bit PA of the command buffer, where all
arguments are stored and which is also used to return data. OP-TEE
internally copies the contents of this buffer into its own secure
memory before accessing and validating any data in it. This is done
to make sure that NW can't change the contents of the already
validated parameters.

The mediator needs to do the same, for a number of reasons:

1. To make sure that the guest can't change data after validation.
2. To translate IPAs to PAs in the command buffer (this is not done
   in this patch).
3. To hide the translated addresses from the guest, so it can't
   misuse the mediator to perform IPA->PA translation.

During a standard call OP-TEE can issue multiple "RPC returns",
asking NW to do some work on OP-TEE's behalf. NW then issues the
special call OPTEE_SMC_CALL_RETURN_FROM_RPC to resume handling of the
original call. Thus, the mediator needs to maintain the context of the
original standard call across multiple SMCCC calls.

A standard call is considered complete when the returned value is
not an RPC request.
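
For context, a rough sketch of how the normal world drives such a call
(this loop lives in the guest's OP-TEE driver, not in this patch;
handle_rpc_request() is a hypothetical helper):

    struct arm_smccc_res res;

    arm_smccc_smc(OPTEE_SMC_CALL_WITH_ARG,
                  arg_pa >> 32, arg_pa & 0xFFFFFFFF, /* a1/a2: PA of command buffer */
                  0, 0, 0, 0, 0, &res);

    while ( OPTEE_SMC_RETURN_IS_RPC(res.a0) )
    {
        handle_rpc_request(&res);   /* do the requested work, fill a1/a2 with
                                       the RPC-specific answer */
        arm_smccc_smc(OPTEE_SMC_CALL_RETURN_FROM_RPC,
                      res.a1, res.a2,
                      res.a3,       /* OP-TEE thread id of the suspended call */
                      0, 0, 0, 0, &res);
    }
    /* res.a0 now holds a final OPTEE_SMC_RETURN_* value: the call is complete */

The mediator sits in the middle of exactly this loop, which is why it
has to keep per-call state between the SMCs.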

Signed-off-by: Volodymyr Babchuk <vlad.babchuk@gmail.com>
---

 Changes from v2:
  - renamed struct domain_ctx to struct optee_domain
  - fixed coding style
  - Now I use access_guest_memory_by_ipa() instead of mappings
    to read command buffer
  - Added tracking of in-flight calls, so a guest can't resume
    the same call from two CPUs simultaneously

 xen/arch/arm/tee/optee.c     | 319 ++++++++++++++++++++++++++++++++++-
 xen/include/asm-arm/domain.h |   3 +
 2 files changed, 320 insertions(+), 2 deletions(-)

diff --git a/xen/arch/arm/tee/optee.c b/xen/arch/arm/tee/optee.c
index 584241b03a..dc90e2ed8e 100644
--- a/xen/arch/arm/tee/optee.c
+++ b/xen/arch/arm/tee/optee.c
@@ -12,6 +12,8 @@
  */
 
 #include <xen/device_tree.h>
+#include <xen/domain_page.h>
+#include <xen/guest_access.h>
 #include <xen/sched.h>
 #include <asm/smccc.h>
 #include <asm/tee/tee.h>
@@ -22,11 +24,38 @@
 /* Client ID 0 is reserved for hypervisor itself */
 #define OPTEE_CLIENT_ID(domain) (domain->domain_id + 1)
 
+/*
+ * Maximal number of concurrent standard calls from one guest. This
+ * corresponds to OPTEE configuration option CFG_NUM_THREADS, because
+ * OP-TEE spawns a thread for every standard call.
+ */
+#define MAX_STD_CALLS   16
+
 #define OPTEE_KNOWN_NSEC_CAPS OPTEE_SMC_NSEC_CAP_UNIPROCESSOR
 #define OPTEE_KNOWN_SEC_CAPS (OPTEE_SMC_SEC_CAP_HAVE_RESERVED_SHM | \
                               OPTEE_SMC_SEC_CAP_UNREGISTERED_SHM |  \
                               OPTEE_SMC_SEC_CAP_DYNAMIC_SHM)
 
+/*
+ * Call context. OP-TEE can issue multiple RPC returns during one call.
+ * We need to preserve context during them.
+ */
+struct optee_std_call {
+    struct list_head list;
+    struct optee_msg_arg *xen_arg;
+    paddr_t guest_arg_ipa;
+    int optee_thread_id;
+    int rpc_op;
+    bool in_flight;
+};
+
+/* Domain context */
+struct optee_domain {
+    struct list_head call_list;
+    atomic_t call_count;
+    spinlock_t lock;
+};
+
 static bool optee_probe(void)
 {
     struct dt_device_node *node;
@@ -52,6 +81,11 @@ static bool optee_probe(void)
 static int optee_enable(struct domain *d)
 {
     struct arm_smccc_res resp;
+    struct optee_domain *ctx;
+
+    ctx = xzalloc(struct optee_domain);
+    if ( !ctx )
+        return -ENOMEM;
 
     /*
      * Inform OP-TEE about a new guest.
@@ -69,9 +103,16 @@ static int optee_enable(struct domain *d)
     {
         gprintk(XENLOG_WARNING, "Unable to create OPTEE client: rc = 0x%X\n",
                 (uint32_t)resp.a0);
+        xfree(ctx);
         return -ENODEV;
     }
 
+    INIT_LIST_HEAD(&ctx->call_list);
+    atomic_set(&ctx->call_count, 0);
+    spin_lock_init(&ctx->lock);
+
+    d->arch.tee = ctx;
+
     return 0;
 }
 
@@ -111,9 +152,86 @@ static void set_return(struct cpu_user_regs *regs, uint32_t ret)
     set_user_reg(regs, 7, 0);
 }
 
+static struct optee_std_call *allocate_std_call(struct optee_domain *ctx)
+{
+    struct optee_std_call *call;
+    int count;
+
+    /* Make sure that guest does not execute more than MAX_STD_CALLS */
+    count = atomic_add_unless(&ctx->call_count, 1, MAX_STD_CALLS);
+    if ( count == MAX_STD_CALLS )
+        return NULL;
+
+    call = xzalloc(struct optee_std_call);
+    if ( !call )
+    {
+        atomic_dec(&ctx->call_count);
+        return NULL;
+    }
+
+    call->optee_thread_id = -1;
+    call->in_flight = true;
+
+    spin_lock(&ctx->lock);
+    list_add_tail(&call->list, &ctx->call_list);
+    spin_unlock(&ctx->lock);
+
+    return call;
+}
+
+static void free_std_call(struct optee_domain *ctx,
+                          struct optee_std_call *call)
+{
+    atomic_dec(&ctx->call_count);
+
+    spin_lock(&ctx->lock);
+    list_del(&call->list);
+    spin_unlock(&ctx->lock);
+
+    ASSERT(!call->in_flight);
+    xfree(call->xen_arg);
+    xfree(call);
+}
+
+static struct optee_std_call *get_std_call(struct optee_domain *ctx,
+                                           int thread_id)
+{
+    struct optee_std_call *call;
+
+    spin_lock(&ctx->lock);
+    list_for_each_entry( call, &ctx->call_list, list )
+    {
+        if ( call->optee_thread_id == thread_id )
+        {
+            if ( call->in_flight )
+            {
+                gprintk(XENLOG_WARNING, "Guest tries to execute call which is already in flight");
+                goto out;
+            }
+            call->in_flight = true;
+            spin_unlock(&ctx->lock);
+            return call;
+        }
+    }
+out:
+    spin_unlock(&ctx->lock);
+
+    return NULL;
+}
+
+static void put_std_call(struct optee_domain *ctx, struct optee_std_call *call)
+{
+    spin_lock(&ctx->lock);
+    ASSERT(call->in_flight);
+    call->in_flight = false;
+    spin_unlock(&ctx->lock);
+}
+
 static void optee_domain_destroy(struct domain *d)
 {
     struct arm_smccc_res resp;
+    struct optee_std_call *call, *call_tmp;
+    struct optee_domain *ctx = d->arch.tee;
 
     /* At this time all domain VCPUs should be stopped */
 
@@ -124,6 +242,199 @@ static void optee_domain_destroy(struct domain *d)
      */
     arm_smccc_smc(OPTEE_SMC_VM_DESTROYED, OPTEE_CLIENT_ID(d), 0, 0, 0, 0, 0, 0,
                   &resp);
+    ASSERT(!spin_is_locked(&ctx->lock));
+
+    list_for_each_entry_safe( call, call_tmp, &ctx->call_list, list )
+        free_std_call(ctx, call);
+
+    ASSERT(!atomic_read(&ctx->call_count));
+
+    xfree(d->arch.tee);
+}
+
+/*
+ * Copy command buffer into xen memory to:
+ * 1) Hide translated addresses from guest
+ * 2) Make sure that guest wouldn't change data in command buffer during call
+ */
+static bool copy_std_request(struct cpu_user_regs *regs,
+                             struct optee_std_call *call)
+{
+    paddr_t xen_addr;
+
+    call->guest_arg_ipa = (paddr_t)get_user_reg(regs, 1) << 32 |
+                            get_user_reg(regs, 2);
+
+    /*
+     * Command buffer should start at page boundary.
+     * This is OP-TEE ABI requirement.
+     */
+    if ( call->guest_arg_ipa & (OPTEE_MSG_NONCONTIG_PAGE_SIZE - 1) )
+        return false;
+
+    call->xen_arg = _xmalloc(OPTEE_MSG_NONCONTIG_PAGE_SIZE,
+                             OPTEE_MSG_NONCONTIG_PAGE_SIZE);
+    if ( !call->xen_arg )
+        return false;
+
+    BUILD_BUG_ON(OPTEE_MSG_NONCONTIG_PAGE_SIZE > PAGE_SIZE);
+
+    access_guest_memory_by_ipa(current->domain, call->guest_arg_ipa,
+                               call->xen_arg, OPTEE_MSG_NONCONTIG_PAGE_SIZE,
+                               false);
+
+    xen_addr = virt_to_maddr(call->xen_arg);
+
+    set_user_reg(regs, 1, xen_addr >> 32);
+    set_user_reg(regs, 2, xen_addr & 0xFFFFFFFF);
+
+    return true;
+}
+
+static void copy_std_request_back(struct optee_domain *ctx,
+                                  struct cpu_user_regs *regs,
+                                  struct optee_std_call *call)
+{
+    struct optee_msg_arg *guest_arg;
+    struct page_info *page;
+    unsigned int i;
+    uint32_t attr;
+
+    /* copy_std_request() validated IPA for us */
+    page = get_page_from_gfn(current->domain, paddr_to_pfn(call->guest_arg_ipa),
+                             NULL, P2M_ALLOC);
+    if ( !page )
+        return;
+
+    guest_arg = map_domain_page(page_to_mfn(page));
+
+    guest_arg->ret = call->xen_arg->ret;
+    guest_arg->ret_origin = call->xen_arg->ret_origin;
+    guest_arg->session = call->xen_arg->session;
+    for ( i = 0; i < call->xen_arg->num_params; i++ )
+    {
+        attr = call->xen_arg->params[i].attr;
+
+        switch ( attr & OPTEE_MSG_ATTR_TYPE_MASK )
+        {
+        case OPTEE_MSG_ATTR_TYPE_TMEM_OUTPUT:
+        case OPTEE_MSG_ATTR_TYPE_TMEM_INOUT:
+            guest_arg->params[i].u.tmem.size =
+                call->xen_arg->params[i].u.tmem.size;
+            continue;
+        case OPTEE_MSG_ATTR_TYPE_VALUE_OUTPUT:
+        case OPTEE_MSG_ATTR_TYPE_VALUE_INOUT:
+            guest_arg->params[i].u.value.a =
+                call->xen_arg->params[i].u.value.a;
+            guest_arg->params[i].u.value.b =
+                call->xen_arg->params[i].u.value.b;
+            continue;
+        case OPTEE_MSG_ATTR_TYPE_RMEM_OUTPUT:
+        case OPTEE_MSG_ATTR_TYPE_RMEM_INOUT:
+            guest_arg->params[i].u.rmem.size =
+                call->xen_arg->params[i].u.rmem.size;
+            continue;
+        case OPTEE_MSG_ATTR_TYPE_NONE:
+        case OPTEE_MSG_ATTR_TYPE_RMEM_INPUT:
+        case OPTEE_MSG_ATTR_TYPE_TMEM_INPUT:
+            continue;
+        }
+    }
+
+    unmap_domain_page(guest_arg);
+    put_page(page);
+}
+
+static void execute_std_call(struct optee_domain *ctx,
+                             struct cpu_user_regs *regs,
+                             struct optee_std_call *call)
+{
+    register_t optee_ret;
+
+    forward_call(regs);
+
+    optee_ret = get_user_reg(regs, 0);
+    if ( OPTEE_SMC_RETURN_IS_RPC(optee_ret) )
+    {
+        call->optee_thread_id = get_user_reg(regs, 3);
+        call->rpc_op = OPTEE_SMC_RETURN_GET_RPC_FUNC(optee_ret);
+        put_std_call(ctx, call);
+        return;
+    }
+
+    copy_std_request_back(ctx, regs, call);
+
+    put_std_call(ctx, call);
+    free_std_call(ctx, call);
+}
+
+static bool handle_std_call(struct optee_domain *ctx,
+                            struct cpu_user_regs *regs)
+{
+    struct optee_std_call *call = allocate_std_call(ctx);
+
+    if ( !call )
+        return false;
+
+    if ( !copy_std_request(regs, call) )
+        goto err;
+
+    /* Now we can safely examine contents of command buffer */
+    if ( OPTEE_MSG_GET_ARG_SIZE(call->xen_arg->num_params) >
+         OPTEE_MSG_NONCONTIG_PAGE_SIZE )
+        goto err;
+
+    switch ( call->xen_arg->cmd )
+    {
+    case OPTEE_MSG_CMD_OPEN_SESSION:
+    case OPTEE_MSG_CMD_CLOSE_SESSION:
+    case OPTEE_MSG_CMD_INVOKE_COMMAND:
+    case OPTEE_MSG_CMD_CANCEL:
+    case OPTEE_MSG_CMD_REGISTER_SHM:
+    case OPTEE_MSG_CMD_UNREGISTER_SHM:
+        break;
+    default:
+        goto err;
+    }
+
+    execute_std_call(ctx, regs, call);
+
+    return true;
+
+err:
+    put_std_call(ctx, call);
+    free_std_call(ctx, call);
+
+    return false;
+}
+
+static bool handle_rpc(struct optee_domain *ctx, struct cpu_user_regs *regs)
+{
+    struct optee_std_call *call;
+    int optee_thread_id = get_user_reg(regs, 3);
+
+    call = get_std_call(ctx, optee_thread_id);
+
+    if ( !call )
+        return false;
+
+    switch ( call->rpc_op )
+    {
+    case OPTEE_SMC_RPC_FUNC_ALLOC:
+        /* TODO: Add handling */
+        break;
+    case OPTEE_SMC_RPC_FUNC_FREE:
+        /* TODO: Add handling */
+        break;
+    case OPTEE_SMC_RPC_FUNC_FOREIGN_INTR:
+        break;
+    case OPTEE_SMC_RPC_FUNC_CMD:
+        /* TODO: Add handling */
+        break;
+    }
+
+    execute_std_call(ctx, regs, call);
+    return true;
 }
 
 static bool handle_exchange_capabilities(struct cpu_user_regs *regs)
@@ -161,6 +472,8 @@ static bool handle_exchange_capabilities(struct cpu_user_regs *regs)
 
 static bool optee_handle_call(struct cpu_user_regs *regs)
 {
+    struct optee_domain *ctx = current->domain->arch.tee;
+
     switch ( get_user_reg(regs, 0) )
     {
     case OPTEE_SMC_CALLS_COUNT:
@@ -170,8 +483,6 @@ static bool optee_handle_call(struct cpu_user_regs *regs)
     case OPTEE_SMC_FUNCID_GET_OS_REVISION:
     case OPTEE_SMC_ENABLE_SHM_CACHE:
     case OPTEE_SMC_DISABLE_SHM_CACHE:
-    case OPTEE_SMC_CALL_WITH_ARG:
-    case OPTEE_SMC_CALL_RETURN_FROM_RPC:
         forward_call(regs);
         return true;
     case OPTEE_SMC_GET_SHM_CONFIG:
@@ -180,6 +491,10 @@ static bool optee_handle_call(struct cpu_user_regs *regs)
         return true;
     case OPTEE_SMC_EXCHANGE_CAPABILITIES:
         return handle_exchange_capabilities(regs);
+    case OPTEE_SMC_CALL_WITH_ARG:
+        return handle_std_call(ctx, regs);
+    case OPTEE_SMC_CALL_RETURN_FROM_RPC:
+        return handle_rpc(ctx, regs);
     default:
         return false;
     }
diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
index 175de44927..88b48697bd 100644
--- a/xen/include/asm-arm/domain.h
+++ b/xen/include/asm-arm/domain.h
@@ -97,6 +97,9 @@ struct arch_domain
     struct vpl011 vpl011;
 #endif
 
+#ifdef CONFIG_TEE
+    void *tee;
+#endif
 }  __cacheline_aligned;
 
 struct arch_vcpu
-- 
2.20.0


* [PATCH v3 08/11] optee: add support for arbitrary shared memory
  2018-12-18 21:11 [PATCH v3 00/11] TEE mediator (and OP-TEE) support in XEN Volodymyr Babchuk
                   ` (6 preceding siblings ...)
  2018-12-18 21:11 ` [PATCH v3 07/11] optee: add support for RPC SHM buffers Volodymyr Babchuk
@ 2018-12-18 21:11 ` Volodymyr Babchuk
  2019-01-21 11:42   ` Julien Grall
  2018-12-18 21:11 ` [PATCH v3 09/11] optee: add support for RPC commands Volodymyr Babchuk
                   ` (3 subsequent siblings)
  11 siblings, 1 reply; 47+ messages in thread
From: Volodymyr Babchuk @ 2018-12-18 21:11 UTC (permalink / raw)
  To: xen-devel, Julien Grall
  Cc: Volodymyr Babchuk, tee-dev, Stefano Stabellini, Volodymyr Babchuk

From: Volodymyr Babchuk <vlad.babchuk@gmail.com>

Shared memory is widely used by NW to communicate with
TAs in OP-TEE. NW can share part of its own memory with a
TA or the OP-TEE core, either by registering it with OP-TEE or by
providing a temporary reference. Either way, information about such
memory buffers is sent to OP-TEE as a list of pages. This mechanism
is described in optee_msg.h.

The mediator should step in when NW tries to share memory with
OP-TEE, for two reasons:

1. To translate addresses from IPA to PA.
2. To pin domain pages while they are mapped into the OP-TEE or TA
   address space, so the domain can't transfer these pages to
   another domain or balloon them out.

Address translation is done by the translate_noncontig() function.
It allocates a new buffer from the xenheap, walks over the
guest-provided list of pages, translates the addresses and stores the
PAs in the newly allocated buffer. This buffer is provided to OP-TEE
instead of the original buffer from the guest and is freed at the end
of the standard call.

At the same time this function pins the pages and stores them in a
struct optee_shm_buf object. This object lives for as long as the
given SHM buffer is known to OP-TEE. It is freed after the guest
unregisters the shared buffer, at which point the pages are unpinned.

We don't need any special reference counting because OP-TEE tracks
the buffer on its side. So the mediator unpins the pages only when
OP-TEE returns successfully from the OPTEE_MSG_CMD_UNREGISTER_SHM
call.
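
For reference, the page list that translate_noncontig() walks has the
layout below (a sketch of what optee_msg.h describes for
OPTEE_MSG_ATTR_NONCONTIG; the struct name is only for illustration):

    /*
     * Each 4 KiB page of the list holds 511 page addresses plus a link
     * to the next list page. The low bits of tmem.buf_ptr carry the
     * offset of the buffer within its first page.
     */
    struct pagelist_page {
        uint64_t pages_list[(OPTEE_MSG_NONCONTIG_PAGE_SIZE / sizeof(uint64_t)) - 1];
        uint64_t next_page_data;    /* address of the next pagelist_page */
    };

The mediator builds a parallel list of this shape in xenheap memory,
with PAs instead of IPAs, and hands that one to OP-TEE.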

Signed-off-by: Volodymyr Babchuk <vlad.babchuk@gmail.com>
---

 Changes from v2:
 - Made sure that a guest does not try to register a shared buffer with
   the same cookie twice
 - Fixed coding style
 - Use access_guest_memory_by_ipa() instead of direct memory mapping

 xen/arch/arm/tee/optee.c | 274 +++++++++++++++++++++++++++++++++++++++
 1 file changed, 274 insertions(+)

diff --git a/xen/arch/arm/tee/optee.c b/xen/arch/arm/tee/optee.c
index 771148e940..cfc3b34df7 100644
--- a/xen/arch/arm/tee/optee.c
+++ b/xen/arch/arm/tee/optee.c
@@ -37,6 +37,10 @@
  */
 #define MAX_RPC_SHMS    MAX_STD_CALLS
 
+/* Maximum total number of pages that guest can share with OP-TEE */
+#define MAX_TOTAL_SMH_BUF_PG    16384
+#define MAX_NONCONTIG_ENTRIES   5
+
 #define OPTEE_KNOWN_NSEC_CAPS OPTEE_SMC_NSEC_CAP_UNIPROCESSOR
 #define OPTEE_KNOWN_SEC_CAPS (OPTEE_SMC_SEC_CAP_HAVE_RESERVED_SHM | \
                               OPTEE_SMC_SEC_CAP_UNREGISTERED_SHM |  \
@@ -50,6 +54,9 @@ struct optee_std_call {
     struct list_head list;
     struct optee_msg_arg *xen_arg;
     paddr_t guest_arg_ipa;
+    /* Buffer for translated page addresses, shared with OP-TEE */
+    void *non_contig[MAX_NONCONTIG_ENTRIES];
+    int non_contig_order[MAX_NONCONTIG_ENTRIES];
     int optee_thread_id;
     int rpc_op;
     bool in_flight;
@@ -63,12 +70,23 @@ struct shm_rpc {
     uint64_t cookie;
 };
 
+/* Shared memory buffer for arbitrary data */
+struct optee_shm_buf {
+    struct list_head list;
+    uint64_t cookie;
+    unsigned int max_page_cnt;
+    unsigned int page_cnt;
+    struct page_info *pages[];
+};
+
 /* Domain context */
 struct optee_domain {
     struct list_head call_list;
     struct list_head shm_rpc_list;
+    struct list_head optee_shm_buf_list;
     atomic_t call_count;
     atomic_t shm_rpc_count;
+    atomic_t optee_shm_buf_pages;
     spinlock_t lock;
 };
 
@@ -125,9 +143,11 @@ static int optee_enable(struct domain *d)
 
     INIT_LIST_HEAD(&ctx->call_list);
     INIT_LIST_HEAD(&ctx->shm_rpc_list);
+    INIT_LIST_HEAD(&ctx->optee_shm_buf_list);
 
     atomic_set(&ctx->call_count, 0);
     atomic_set(&ctx->shm_rpc_count, 0);
+    atomic_set(&ctx->optee_shm_buf_pages, 0);
 
     spin_lock_init(&ctx->lock);
 
@@ -325,12 +345,91 @@ static void free_shm_rpc(struct optee_domain *ctx, uint64_t cookie)
     xfree(shm_rpc);
 }
 
+static struct optee_shm_buf *allocate_optee_shm_buf(struct optee_domain *ctx,
+                                                    uint64_t cookie,
+                                                    unsigned int pages_cnt)
+{
+    struct optee_shm_buf *optee_shm_buf, *optee_shm_buf_tmp;
+
+    while ( true )
+    {
+        int old = atomic_read(&ctx->optee_shm_buf_pages);
+        int new = old + pages_cnt;
+        if ( new >= MAX_TOTAL_SMH_BUF_PG )
+            return NULL;
+        if ( likely(old == atomic_cmpxchg(&ctx->optee_shm_buf_pages,
+                                          old, new)) )
+            break;
+    }
+
+    optee_shm_buf = xzalloc_bytes(sizeof(struct optee_shm_buf) +
+                            pages_cnt * sizeof(struct page *));
+    if ( !optee_shm_buf )
+        goto err;
+
+    optee_shm_buf->cookie = cookie;
+    optee_shm_buf->max_page_cnt = pages_cnt;
+
+    spin_lock(&ctx->lock);
+    /* Check if there is already SHM with the same cookie */
+    list_for_each_entry( optee_shm_buf_tmp, &ctx->optee_shm_buf_list, list )
+    {
+        if ( optee_shm_buf_tmp->cookie == cookie )
+        {
+            spin_unlock(&ctx->lock);
+            gprintk(XENLOG_WARNING, "Guest tries to use the same SHM buffer cookie");
+            goto err;
+        }
+    }
+
+    list_add_tail(&optee_shm_buf->list, &ctx->optee_shm_buf_list);
+    spin_unlock(&ctx->lock);
+
+    return optee_shm_buf;
+
+err:
+    atomic_sub(pages_cnt, &ctx->optee_shm_buf_pages);
+    xfree(optee_shm_buf);
+
+    return NULL;
+}
+
+static void free_optee_shm_buf(struct optee_domain *ctx, uint64_t cookie)
+{
+    struct optee_shm_buf *optee_shm_buf;
+    bool found = false;
+
+    spin_lock(&ctx->lock);
+    list_for_each_entry( optee_shm_buf, &ctx->optee_shm_buf_list, list )
+    {
+        if ( optee_shm_buf->cookie == cookie )
+        {
+            found = true;
+            list_del(&optee_shm_buf->list);
+            break;
+        }
+    }
+    spin_unlock(&ctx->lock);
+
+    if ( !found )
+        return;
+
+    for ( int i = 0; i < optee_shm_buf->page_cnt; i++ )
+        if ( optee_shm_buf->pages[i] )
+            put_page(optee_shm_buf->pages[i]);
+
+    atomic_sub(optee_shm_buf->max_page_cnt, &ctx->optee_shm_buf_pages);
+
+    xfree(optee_shm_buf);
+}
+
 static void optee_domain_destroy(struct domain *d)
 {
     struct arm_smccc_res resp;
     struct optee_std_call *call, *call_tmp;
     struct optee_domain *ctx = d->arch.tee;
     struct shm_rpc *shm_rpc, *shm_rpc_tmp;
+    struct optee_shm_buf *optee_shm_buf, *optee_shm_buf_tmp;
 
     /* At this time all domain VCPUs should be stopped */
 
@@ -349,12 +448,177 @@ static void optee_domain_destroy(struct domain *d)
     list_for_each_entry_safe( shm_rpc, shm_rpc_tmp, &ctx->shm_rpc_list, list )
         free_shm_rpc(ctx, shm_rpc->cookie);
 
+    list_for_each_entry_safe( optee_shm_buf, optee_shm_buf_tmp,
+                              &ctx->optee_shm_buf_list, list )
+        free_optee_shm_buf(ctx, optee_shm_buf->cookie);
+
     ASSERT(!atomic_read(&ctx->call_count));
     ASSERT(!atomic_read(&ctx->shm_rpc_count));
+    ASSERT(!atomic_read(&ctx->optee_shm_buf_pages));
 
     xfree(d->arch.tee);
 }
 
+#define PAGELIST_ENTRIES_PER_PAGE                       \
+    ((OPTEE_MSG_NONCONTIG_PAGE_SIZE / sizeof(u64)) - 1)
+
+static size_t get_pages_list_size(size_t num_entries)
+{
+    int pages = DIV_ROUND_UP(num_entries, PAGELIST_ENTRIES_PER_PAGE);
+
+    return pages * OPTEE_MSG_NONCONTIG_PAGE_SIZE;
+}
+
+static bool translate_noncontig(struct optee_domain *ctx,
+                                struct optee_std_call *call,
+                                struct optee_msg_param *param,
+                                int idx)
+{
+    /*
+     * Refer to OPTEE_MSG_ATTR_NONCONTIG description in optee_msg.h for details.
+     */
+    uint64_t size;
+    unsigned int page_offset;
+    unsigned int num_pages;
+    unsigned int order;
+    unsigned int entries_on_page = 0;
+    paddr_t gaddr;
+    struct page_info *guest_page;
+    struct {
+        uint64_t pages_list[PAGELIST_ENTRIES_PER_PAGE];
+        uint64_t next_page_data;
+    } *pages_data_guest, *pages_data_xen, *pages_data_xen_start;
+    struct optee_shm_buf *optee_shm_buf;
+
+    /* Offset of user buffer withing page */
+    page_offset = param->u.tmem.buf_ptr & (OPTEE_MSG_NONCONTIG_PAGE_SIZE - 1);
+
+    /* Size of the user buffer in bytes */
+    size = ROUNDUP(param->u.tmem.size + page_offset,
+                   OPTEE_MSG_NONCONTIG_PAGE_SIZE);
+
+    num_pages = DIV_ROUND_UP(size, OPTEE_MSG_NONCONTIG_PAGE_SIZE);
+
+    order = get_order_from_bytes(get_pages_list_size(num_pages));
+
+    pages_data_xen_start = alloc_xenheap_pages(order, 0);
+    if ( !pages_data_xen_start )
+        return false;
+
+    optee_shm_buf = allocate_optee_shm_buf(ctx, param->u.tmem.shm_ref,
+                                           num_pages);
+    if ( !optee_shm_buf )
+        goto err_free;
+
+    gaddr = param->u.tmem.buf_ptr & ~(OPTEE_MSG_NONCONTIG_PAGE_SIZE - 1);
+    guest_page = get_page_from_gfn(current->domain, paddr_to_pfn(gaddr), NULL,
+                                   P2M_ALLOC);
+
+    if ( !guest_page )
+        goto err_free;
+
+    pages_data_guest = map_domain_page(page_to_mfn(guest_page));
+    if ( !pages_data_guest )
+        goto err_free;
+
+    pages_data_xen = pages_data_xen_start;
+    while ( num_pages )
+    {
+        struct page_info *page;
+        page = get_page_from_gfn(current->domain,
+                  paddr_to_pfn(pages_data_guest->pages_list[entries_on_page]),
+                  NULL, P2M_ALLOC);
+
+        if ( !page )
+            goto err_unmap;
+
+        optee_shm_buf->pages[optee_shm_buf->page_cnt++] = page;
+        pages_data_xen->pages_list[entries_on_page] = page_to_maddr(page);
+        entries_on_page++;
+
+        if ( entries_on_page == PAGELIST_ENTRIES_PER_PAGE )
+        {
+            pages_data_xen->next_page_data = virt_to_maddr(pages_data_xen + 1);
+            pages_data_xen++;
+            gaddr = pages_data_guest->next_page_data;
+
+            unmap_domain_page(pages_data_guest);
+            put_page(guest_page);
+
+            guest_page = get_page_from_gfn(current->domain, paddr_to_pfn(gaddr),
+                                           NULL, P2M_ALLOC);
+            if ( !guest_page )
+                goto err_free;
+
+            pages_data_guest = map_domain_page(page_to_mfn(guest_page));
+            if ( !pages_data_guest )
+                goto err_free;
+            /* Roll over to the next page */
+            entries_on_page = 0;
+        }
+        num_pages--;
+    }
+
+    param->u.tmem.buf_ptr = virt_to_maddr(pages_data_xen_start) | page_offset;
+
+    call->non_contig[idx] = pages_data_xen_start;
+    call->non_contig_order[idx] = order;
+
+    unmap_domain_page(pages_data_guest);
+    put_page(guest_page);
+    return true;
+
+err_unmap:
+    unmap_domain_page(pages_data_guest);
+    put_page(guest_page);
+    free_optee_shm_buf(ctx, optee_shm_buf->cookie);
+
+err_free:
+    free_xenheap_pages(pages_data_xen_start, order);
+
+    return false;
+}
+
+static bool translate_params(struct optee_domain *ctx,
+                             struct optee_std_call *call)
+{
+    unsigned int i;
+    uint32_t attr;
+
+    for ( i = 0; i < call->xen_arg->num_params; i++ )
+    {
+        attr = call->xen_arg->params[i].attr;
+
+        switch ( attr & OPTEE_MSG_ATTR_TYPE_MASK )
+        {
+        case OPTEE_MSG_ATTR_TYPE_TMEM_INPUT:
+        case OPTEE_MSG_ATTR_TYPE_TMEM_OUTPUT:
+        case OPTEE_MSG_ATTR_TYPE_TMEM_INOUT:
+            if ( attr & OPTEE_MSG_ATTR_NONCONTIG )
+            {
+                if ( !translate_noncontig(ctx, call,
+                                          call->xen_arg->params + i, i) )
+                    return false;
+            }
+            else
+            {
+                gprintk(XENLOG_WARNING, "Guest tries to use old tmem arg\n");
+                return false;
+            }
+            break;
+        case OPTEE_MSG_ATTR_TYPE_NONE:
+        case OPTEE_MSG_ATTR_TYPE_VALUE_INPUT:
+        case OPTEE_MSG_ATTR_TYPE_VALUE_OUTPUT:
+        case OPTEE_MSG_ATTR_TYPE_VALUE_INOUT:
+        case OPTEE_MSG_ATTR_TYPE_RMEM_INPUT:
+        case OPTEE_MSG_ATTR_TYPE_RMEM_OUTPUT:
+        case OPTEE_MSG_ATTR_TYPE_RMEM_INOUT:
+            continue;
+        }
+    }
+    return true;
+}
+
 /*
  * Copy command buffer into xen memory to:
  * 1) Hide translated addresses from guest
@@ -469,6 +733,14 @@ static void execute_std_call(struct optee_domain *ctx,
 
     copy_std_request_back(ctx, regs, call);
 
+    /*
+     * If guest successfully unregistered own shared memory,
+     * then we can unpin it's pages
+     */
+    if ( call->xen_arg->cmd == OPTEE_MSG_CMD_UNREGISTER_SHM &&
+         call->xen_arg->ret == 0 )
+        free_optee_shm_buf(ctx, call->xen_arg->params[0].u.rmem.shm_ref);
+
     put_std_call(ctx, call);
     free_std_call(ctx, call);
 }
@@ -497,6 +769,8 @@ static bool handle_std_call(struct optee_domain *ctx,
     case OPTEE_MSG_CMD_CANCEL:
     case OPTEE_MSG_CMD_REGISTER_SHM:
     case OPTEE_MSG_CMD_UNREGISTER_SHM:
+        if( !translate_params(ctx, call) )
+            goto err;
         break;
     default:
         goto err;
-- 
2.20.0


* [PATCH v3 07/11] optee: add support for RPC SHM buffers
  2018-12-18 21:11 [PATCH v3 00/11] TEE mediator (and OP-TEE) support in XEN Volodymyr Babchuk
                   ` (5 preceding siblings ...)
  2018-12-18 21:11 ` [PATCH v3 06/11] optee: add std call handling Volodymyr Babchuk
@ 2018-12-18 21:11 ` Volodymyr Babchuk
  2019-01-17 18:49   ` Julien Grall
  2018-12-18 21:11 ` [PATCH v3 08/11] optee: add support for arbitrary shared memory Volodymyr Babchuk
                   ` (4 subsequent siblings)
  11 siblings, 1 reply; 47+ messages in thread
From: Volodymyr Babchuk @ 2018-12-18 21:11 UTC (permalink / raw)
  To: xen-devel, Julien Grall
  Cc: Volodymyr Babchuk, tee-dev, Stefano Stabellini, Volodymyr Babchuk

From: Volodymyr Babchuk <vlad.babchuk@gmail.com>

OP-TEE uses the same command-buffer idea (see the previous commit) to
issue RPC requests. The problem is that initially it has no buffer
where it can write a request. So the first RPC request it makes is
special: it asks NW to allocate a shared buffer for the subsequent RPC
requests. Usually this buffer is allocated only once for every OP-TEE
thread and remains allocated until shutdown.

The mediator needs to pin such buffers to make sure that the domain
can't transfer them to someone else.

The life cycle of this buffer is controlled by OP-TEE: it asks the
guest to create the buffer and later asks it to free it.
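
For reference, the register usage the mediator sees for this allocation
RPC is roughly the following (matching what handle_rpc_func_alloc() in
this patch reads and writes; simplified):

    /*
     * OPTEE_SMC_RPC_FUNC_ALLOC (sketch):
     *   OP-TEE -> NW:  a1 holds the requested buffer size
     *   NW -> OP-TEE (via OPTEE_SMC_CALL_RETURN_FROM_RPC):
     *     a1/a2 - 64-bit address of the allocated buffer (page aligned)
     *     a4/a5 - 64-bit cookie identifying the buffer
     *
     * The mediator intercepts the reply, pins the guest page and replaces
     * the guest address with the corresponding machine address before
     * forwarding the call to OP-TEE.
     */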

Signed-off-by: Volodymyr Babchuk <vlad.babchuk@gmail.com>
---

 Changes from v2:
  - Added a check to ensure that a guest does not return two SHM buffers
    with the same cookie
  - Fixed coding style
  - Store RPC parameters during RPC return to make sure that the guest
    will not change them during call continuation
  
 xen/arch/arm/tee/optee.c | 140 ++++++++++++++++++++++++++++++++++++++-
 1 file changed, 138 insertions(+), 2 deletions(-)

diff --git a/xen/arch/arm/tee/optee.c b/xen/arch/arm/tee/optee.c
index dc90e2ed8e..771148e940 100644
--- a/xen/arch/arm/tee/optee.c
+++ b/xen/arch/arm/tee/optee.c
@@ -30,6 +30,12 @@
  * OP-TEE spawns a thread for every standard call.
  */
 #define MAX_STD_CALLS   16
+/*
+ * Maximal number of pre-allocated SHM buffers. OP-TEE generally asks
+ * for one SHM buffer per thread, so this also corresponds to OP-TEE
+ * option CFG_NUM_THREADS
+ */
+#define MAX_RPC_SHMS    MAX_STD_CALLS
 
 #define OPTEE_KNOWN_NSEC_CAPS OPTEE_SMC_NSEC_CAP_UNIPROCESSOR
 #define OPTEE_KNOWN_SEC_CAPS (OPTEE_SMC_SEC_CAP_HAVE_RESERVED_SHM | \
@@ -47,12 +53,22 @@ struct optee_std_call {
     int optee_thread_id;
     int rpc_op;
     bool in_flight;
+    register_t rpc_params[2];
+};
+
+/* Pre-allocated SHM buffer for RPC commands */
+struct shm_rpc {
+    struct list_head list;
+    struct page_info *guest_page;
+    uint64_t cookie;
 };
 
 /* Domain context */
 struct optee_domain {
     struct list_head call_list;
+    struct list_head shm_rpc_list;
     atomic_t call_count;
+    atomic_t shm_rpc_count;
     spinlock_t lock;
 };
 
@@ -108,7 +124,11 @@ static int optee_enable(struct domain *d)
     }
 
     INIT_LIST_HEAD(&ctx->call_list);
+    INIT_LIST_HEAD(&ctx->shm_rpc_list);
+
     atomic_set(&ctx->call_count, 0);
+    atomic_set(&ctx->shm_rpc_count, 0);
+
     spin_lock_init(&ctx->lock);
 
     d->arch.tee = ctx;
@@ -227,11 +247,90 @@ static void put_std_call(struct optee_domain *ctx, struct optee_std_call *call)
     spin_unlock(&ctx->lock);
 }
 
+static struct shm_rpc *allocate_and_pin_shm_rpc(struct optee_domain *ctx,
+                                                paddr_t gaddr,
+                                                uint64_t cookie)
+{
+    struct shm_rpc *shm_rpc, *shm_rpc_tmp;
+    int count;
+
+    /* Make sure that guest does not allocate more than MAX_RPC_SHMS */
+    count = atomic_add_unless(&ctx->shm_rpc_count, 1, MAX_RPC_SHMS);
+    if ( count == MAX_RPC_SHMS )
+        return NULL;
+
+    shm_rpc = xzalloc(struct shm_rpc);
+    if ( !shm_rpc )
+        goto err;
+
+    /* This page will be shared with OP-TEE, so we need to pin it */
+    shm_rpc->guest_page = get_page_from_gfn(current->domain,
+                                            paddr_to_pfn(gaddr),
+                                            NULL,
+                                            P2M_ALLOC);
+    if ( !shm_rpc->guest_page )
+        goto err;
+
+    shm_rpc->cookie = cookie;
+
+    spin_lock(&ctx->lock);
+    /* Check if there is already SHM with the same cookie */
+    list_for_each_entry( shm_rpc_tmp, &ctx->shm_rpc_list, list )
+    {
+        if ( shm_rpc_tmp->cookie == cookie )
+        {
+            spin_unlock(&ctx->lock);
+            gprintk(XENLOG_WARNING, "Guest tries to use the same RPC SHM cookie");
+            goto err;
+        }
+    }
+
+    list_add_tail(&shm_rpc->list, &ctx->shm_rpc_list);
+    spin_unlock(&ctx->lock);
+
+    return shm_rpc;
+
+err:
+    atomic_dec(&ctx->shm_rpc_count);
+    put_page(shm_rpc->guest_page);
+    xfree(shm_rpc);
+
+    return NULL;
+}
+
+static void free_shm_rpc(struct optee_domain *ctx, uint64_t cookie)
+{
+    struct shm_rpc *shm_rpc;
+    bool found = false;
+
+    spin_lock(&ctx->lock);
+
+    list_for_each_entry( shm_rpc, &ctx->shm_rpc_list, list )
+    {
+        if ( shm_rpc->cookie == cookie )
+        {
+            found = true;
+            list_del(&shm_rpc->list);
+            break;
+        }
+    }
+    spin_unlock(&ctx->lock);
+
+    if ( !found )
+        return;
+
+    ASSERT(shm_rpc->guest_page);
+    put_page(shm_rpc->guest_page);
+
+    xfree(shm_rpc);
+}
+
 static void optee_domain_destroy(struct domain *d)
 {
     struct arm_smccc_res resp;
     struct optee_std_call *call, *call_tmp;
     struct optee_domain *ctx = d->arch.tee;
+    struct shm_rpc *shm_rpc, *shm_rpc_tmp;
 
     /* At this time all domain VCPUs should be stopped */
 
@@ -247,7 +346,11 @@ static void optee_domain_destroy(struct domain *d)
     list_for_each_entry_safe( call, call_tmp, &ctx->call_list, list )
         free_std_call(ctx, call);
 
+    list_for_each_entry_safe( shm_rpc, shm_rpc_tmp, &ctx->shm_rpc_list, list )
+        free_shm_rpc(ctx, shm_rpc->cookie);
+
     ASSERT(!atomic_read(&ctx->call_count));
+    ASSERT(!atomic_read(&ctx->shm_rpc_count));
 
     xfree(d->arch.tee);
 }
@@ -356,6 +459,8 @@ static void execute_std_call(struct optee_domain *ctx,
     optee_ret = get_user_reg(regs, 0);
     if ( OPTEE_SMC_RETURN_IS_RPC(optee_ret) )
     {
+        call->rpc_params[0] = get_user_reg(regs, 1);
+        call->rpc_params[1] = get_user_reg(regs, 2);
         call->optee_thread_id = get_user_reg(regs, 3);
         call->rpc_op = OPTEE_SMC_RETURN_GET_RPC_FUNC(optee_ret);
         put_std_call(ctx, call);
@@ -408,6 +513,33 @@ err:
     return false;
 }
 
+static void handle_rpc_func_alloc(struct optee_domain *ctx,
+                                  struct cpu_user_regs *regs)
+{
+    paddr_t ptr = get_user_reg(regs, 1) << 32 | get_user_reg(regs, 2);
+
+    if ( ptr & (OPTEE_MSG_NONCONTIG_PAGE_SIZE - 1) )
+        gprintk(XENLOG_WARNING, "Domain returned invalid RPC command buffer\n");
+
+    if ( ptr )
+    {
+        uint64_t cookie = get_user_reg(regs, 4) << 32 | get_user_reg(regs, 5);
+        struct shm_rpc *shm_rpc;
+
+        shm_rpc = allocate_and_pin_shm_rpc(ctx, ptr, cookie);
+        if ( !shm_rpc )
+        {
+            gprintk(XENLOG_WARNING, "Failed to allocate shm_rpc object\n");
+            ptr = 0;
+        }
+        else
+            ptr = page_to_maddr(shm_rpc->guest_page);
+
+        set_user_reg(regs, 1, ptr >> 32);
+        set_user_reg(regs, 2, ptr & 0xFFFFFFFF);
+    }
+}
+
 static bool handle_rpc(struct optee_domain *ctx, struct cpu_user_regs *regs)
 {
     struct optee_std_call *call;
@@ -421,11 +553,15 @@ static bool handle_rpc(struct optee_domain *ctx, struct cpu_user_regs *regs)
     switch ( call->rpc_op )
     {
     case OPTEE_SMC_RPC_FUNC_ALLOC:
-        /* TODO: Add handling */
+        handle_rpc_func_alloc(ctx, regs);
         break;
     case OPTEE_SMC_RPC_FUNC_FREE:
-        /* TODO: Add handling */
+    {
+        uint64_t cookie = call->rpc_params[0] << 32 |
+                            (uint32_t)call->rpc_params[1];
+        free_shm_rpc(ctx, cookie);
         break;
+    }
     case OPTEE_SMC_RPC_FUNC_FOREIGN_INTR:
         break;
     case OPTEE_SMC_RPC_FUNC_CMD:
-- 
2.20.0


* [PATCH v3 09/11] optee: add support for RPC commands
  2018-12-18 21:11 [PATCH v3 00/11] TEE mediator (and OP-TEE) support in XEN Volodymyr Babchuk
                   ` (7 preceding siblings ...)
  2018-12-18 21:11 ` [PATCH v3 08/11] optee: add support for arbitrary shared memory Volodymyr Babchuk
@ 2018-12-18 21:11 ` Volodymyr Babchuk
  2019-01-21 11:58   ` Julien Grall
  2018-12-18 21:11 ` [PATCH v3 11/11] libxl: arm: create optee firmware node in DT if tee=1 Volodymyr Babchuk
                   ` (2 subsequent siblings)
  11 siblings, 1 reply; 47+ messages in thread
From: Volodymyr Babchuk @ 2018-12-18 21:11 UTC (permalink / raw)
  To: xen-devel, Julien Grall
  Cc: Volodymyr Babchuk, tee-dev, Stefano Stabellini, Volodymyr Babchuk

From: Volodymyr Babchuk <vlad.babchuk@gmail.com>

OP-TEE can issue multiple kinds of RPC requests. We are mostly
interested in the requests that ask NW to allocate/free shared memory
for OP-TEE's needs, because the mediator needs to do address
translation in the same way as for shared buffers registered by NW.

As the mediator now accesses the shared command buffer, we need to
shadow it in the same way as we shadow the request buffers for
standard calls.
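
The bounds check on the shadowed buffer relies on the argument-size
helper from optee_msg.h, which is roughly (sketch, see the header for
the authoritative definition):

    #define OPTEE_MSG_GET_ARG_SIZE(num_params)       \
        (sizeof(struct optee_msg_arg) +              \
         sizeof(struct optee_msg_param) * (num_params))

So the mediator first copies only the header to learn num_params, checks
that the resulting size fits into one page, and only then copies the
whole command structure into its shadow buffer.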

Signed-off-by: Volodymyr Babchuk <vlad.babchuk@gmail.com>
---

 Changes from v2:
 - Use access_guest_memory_by_ipa() instead of direct mapping

 xen/arch/arm/tee/optee.c | 136 +++++++++++++++++++++++++++++++++++++--
 1 file changed, 130 insertions(+), 6 deletions(-)

diff --git a/xen/arch/arm/tee/optee.c b/xen/arch/arm/tee/optee.c
index cfc3b34df7..bf3535946d 100644
--- a/xen/arch/arm/tee/optee.c
+++ b/xen/arch/arm/tee/optee.c
@@ -67,6 +67,8 @@ struct optee_std_call {
 struct shm_rpc {
     struct list_head list;
     struct page_info *guest_page;
+    struct optee_msg_arg *xen_arg;
+    paddr_t guest_ipa;
     uint64_t cookie;
 };
 
@@ -290,6 +292,11 @@ static struct shm_rpc *allocate_and_pin_shm_rpc(struct optee_domain *ctx,
                                             P2M_ALLOC);
     if ( !shm_rpc->guest_page )
         goto err;
+    shm_rpc->guest_ipa = gaddr;
+
+    shm_rpc->xen_arg = alloc_xenheap_page();
+    if ( !shm_rpc->xen_arg )
+        goto err;
 
     shm_rpc->cookie = cookie;
 
@@ -313,6 +320,7 @@ static struct shm_rpc *allocate_and_pin_shm_rpc(struct optee_domain *ctx,
 err:
     atomic_dec(&ctx->shm_rpc_count);
     put_page(shm_rpc->guest_page);
+    free_xenheap_page(shm_rpc->xen_arg);
     xfree(shm_rpc);
 
     return NULL;
@@ -339,12 +347,32 @@ static void free_shm_rpc(struct optee_domain *ctx, uint64_t cookie)
     if ( !found )
         return;
 
+    free_xenheap_page(shm_rpc->xen_arg);
+
     ASSERT(shm_rpc->guest_page);
     put_page(shm_rpc->guest_page);
 
     xfree(shm_rpc);
 }
 
+static struct shm_rpc *find_shm_rpc(struct optee_domain *ctx, uint64_t cookie)
+{
+    struct shm_rpc *shm_rpc;
+
+    spin_lock(&ctx->lock);
+    list_for_each_entry( shm_rpc, &ctx->shm_rpc_list, list )
+    {
+        if ( shm_rpc->cookie == cookie )
+        {
+                spin_unlock(&ctx->lock);
+                return shm_rpc;
+        }
+    }
+    spin_unlock(&ctx->lock);
+
+    return NULL;
+}
+
 static struct optee_shm_buf *allocate_optee_shm_buf(struct optee_domain *ctx,
                                                     uint64_t cookie,
                                                     unsigned int pages_cnt)
@@ -712,6 +740,33 @@ static void copy_std_request_back(struct optee_domain *ctx,
     put_page(page);
 }
 
+static void handle_rpc_return(struct optee_domain *ctx,
+                             struct cpu_user_regs *regs,
+                             struct optee_std_call *call)
+{
+    call->rpc_params[0] = get_user_reg(regs, 1);
+    call->rpc_params[1] = get_user_reg(regs, 2);
+    call->optee_thread_id = get_user_reg(regs, 3);
+    call->rpc_op = OPTEE_SMC_RETURN_GET_RPC_FUNC(get_user_reg(regs, 0));
+
+    if ( call->rpc_op == OPTEE_SMC_RPC_FUNC_CMD )
+    {
+        /* Copy RPC request from shadowed buffer to guest */
+        uint64_t cookie = get_user_reg(regs, 1) << 32 | get_user_reg(regs, 2);
+        struct shm_rpc *shm_rpc = find_shm_rpc(ctx, cookie);
+        if ( !shm_rpc )
+        {
+            gprintk(XENLOG_ERR, "Can't find SHM-RPC with cookie %lx\n", cookie);
+            return;
+        }
+        access_guest_memory_by_ipa(current->domain,
+                        shm_rpc->guest_ipa,
+                        shm_rpc->xen_arg,
+                        OPTEE_MSG_GET_ARG_SIZE(shm_rpc->xen_arg->num_params),
+                        true);
+    }
+}
+
 static void execute_std_call(struct optee_domain *ctx,
                              struct cpu_user_regs *regs,
                              struct optee_std_call *call)
@@ -723,10 +778,7 @@ static void execute_std_call(struct optee_domain *ctx,
     optee_ret = get_user_reg(regs, 0);
     if ( OPTEE_SMC_RETURN_IS_RPC(optee_ret) )
     {
-        call->rpc_params[0] = get_user_reg(regs, 1);
-        call->rpc_params[1] = get_user_reg(regs, 2);
-        call->optee_thread_id = get_user_reg(regs, 3);
-        call->rpc_op = OPTEE_SMC_RETURN_GET_RPC_FUNC(optee_ret);
+        handle_rpc_return(ctx, regs, call);
         put_std_call(ctx, call);
         return;
     }
@@ -787,6 +839,78 @@ err:
     return false;
 }
 
+static void handle_rpc_cmd_alloc(struct optee_domain *ctx,
+                                 struct cpu_user_regs *regs,
+                                 struct optee_std_call *call,
+                                 struct shm_rpc *shm_rpc)
+{
+    if ( shm_rpc->xen_arg->params[0].attr != (OPTEE_MSG_ATTR_TYPE_TMEM_OUTPUT |
+                                            OPTEE_MSG_ATTR_NONCONTIG) )
+    {
+        gprintk(XENLOG_WARNING, "Invalid attrs for shared mem buffer\n");
+        return;
+    }
+
+    /* Last entry in non_contig array is used to hold RPC-allocated buffer */
+    if ( call->non_contig[MAX_NONCONTIG_ENTRIES - 1] )
+    {
+        free_xenheap_pages(call->non_contig[MAX_NONCONTIG_ENTRIES - 1],
+                           call->non_contig_order[MAX_NONCONTIG_ENTRIES - 1]);
+        call->non_contig[MAX_NONCONTIG_ENTRIES - 1] = NULL;
+    }
+    translate_noncontig(ctx, call, shm_rpc->xen_arg->params + 0,
+                        MAX_NONCONTIG_ENTRIES - 1);
+}
+
+static void handle_rpc_cmd(struct optee_domain *ctx, struct cpu_user_regs *regs,
+                           struct optee_std_call *call)
+{
+    struct shm_rpc *shm_rpc;
+    uint64_t cookie;
+    size_t arg_size;
+
+    cookie = get_user_reg(regs, 1) << 32 | get_user_reg(regs, 2);
+
+    shm_rpc = find_shm_rpc(ctx, cookie);
+
+    if ( !shm_rpc )
+    {
+        gprintk(XENLOG_ERR, "Can't find SHM-RPC with cookie %lx\n", cookie);
+        return;
+    }
+
+    /* First, copy only header to read number of arguments */
+    access_guest_memory_by_ipa(current->domain, shm_rpc->guest_ipa,
+                               shm_rpc->xen_arg, sizeof(struct optee_msg_arg),
+                               false);
+
+    arg_size = OPTEE_MSG_GET_ARG_SIZE(shm_rpc->xen_arg->num_params);
+    if ( arg_size > OPTEE_MSG_NONCONTIG_PAGE_SIZE )
+        return;
+
+    /* Read the whole command structure */
+    access_guest_memory_by_ipa(current->domain, shm_rpc->guest_ipa,
+                               shm_rpc->xen_arg, arg_size, false);
+
+    switch (shm_rpc->xen_arg->cmd)
+    {
+    case OPTEE_MSG_RPC_CMD_GET_TIME:
+        break;
+    case OPTEE_MSG_RPC_CMD_WAIT_QUEUE:
+        break;
+    case OPTEE_MSG_RPC_CMD_SUSPEND:
+        break;
+    case OPTEE_MSG_RPC_CMD_SHM_ALLOC:
+        handle_rpc_cmd_alloc(ctx, regs, call, shm_rpc);
+        break;
+    case OPTEE_MSG_RPC_CMD_SHM_FREE:
+        free_optee_shm_buf(ctx, shm_rpc->xen_arg->params[0].u.value.b);
+        break;
+    default:
+        break;
+    }
+}
+
 static void handle_rpc_func_alloc(struct optee_domain *ctx,
                                   struct cpu_user_regs *regs)
 {
@@ -807,7 +931,7 @@ static void handle_rpc_func_alloc(struct optee_domain *ctx,
             ptr = 0;
         }
         else
-            ptr = page_to_maddr(shm_rpc->guest_page);
+            ptr = virt_to_maddr(shm_rpc->xen_arg);
 
         set_user_reg(regs, 1, ptr >> 32);
         set_user_reg(regs, 2, ptr & 0xFFFFFFFF);
@@ -839,7 +963,7 @@ static bool handle_rpc(struct optee_domain *ctx, struct cpu_user_regs *regs)
     case OPTEE_SMC_RPC_FUNC_FOREIGN_INTR:
         break;
     case OPTEE_SMC_RPC_FUNC_CMD:
-        /* TODO: Add handling */
+        handle_rpc_cmd(ctx, regs, call);
         break;
     }
 
-- 
2.20.0


* [PATCH v3 10/11] xl: add "tee" option for xl.cfg
  2018-12-18 21:11 [PATCH v3 00/11] TEE mediator (and OP-TEE) support in XEN Volodymyr Babchuk
                   ` (9 preceding siblings ...)
  2018-12-18 21:11 ` [PATCH v3 11/11] libxl: arm: create optee firmware node in DT if tee=1 Volodymyr Babchuk
@ 2018-12-18 21:11 ` Volodymyr Babchuk
  2019-01-21 12:08   ` Julien Grall
  2018-12-19 13:25 ` [PATCH v3 00/11] TEE mediator (and OP-TEE) support in XEN Julien Grall
  11 siblings, 1 reply; 47+ messages in thread
From: Volodymyr Babchuk @ 2018-12-18 21:11 UTC (permalink / raw)
  To: xen-devel, Julien Grall; +Cc: Volodymyr Babchuk, Wei Liu, Ian Jackson, tee-dev

From: Volodymyr Babchuk <vlad.babchuk@gmail.com>

This boolean option controls whether TEE access is enabled for the domain.
If access is enabled, xl will set the appropriate flag in the architecture
configuration to ask the hypervisor to enable TEE support.
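
For example, enabling it from a guest config file could look like this
(an illustrative fragment only; the guest name is a placeholder):

    name = "domu-with-tee"
    tee = 1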

Signed-off-by: Volodymyr Babchuk <vlad.babchuk@gmail.com>
---
 Changes from v2:
 - Use arch.tee_enabled instead of separate domctl

 docs/man/xl.cfg.pod.5.in    | 10 ++++++++++
 tools/libxl/libxl_arm.c     |  2 ++
 tools/libxl/libxl_create.c  |  1 +
 tools/libxl/libxl_types.idl |  1 +
 tools/xl/xl_parse.c         |  1 +
 5 files changed, 15 insertions(+)

diff --git a/docs/man/xl.cfg.pod.5.in b/docs/man/xl.cfg.pod.5.in
index b1c0be14cd..9a7064c951 100644
--- a/docs/man/xl.cfg.pod.5.in
+++ b/docs/man/xl.cfg.pod.5.in
@@ -2793,6 +2793,16 @@ Currently, only the "sbsa_uart" model is supported for ARM.
 
 =back
 
+=over 4
+
+=item B<tee=BOOLEAN>
+
+Enable TEE support for the guest. Currently only OP-TEE is supported. If this
+option is enabled, xl will create a guest that can access TEE. An OP-TEE
+node will also be emitted into the guest's device tree.
+
+=back
+
 =head3 x86
 
 =over 4
diff --git a/tools/libxl/libxl_arm.c b/tools/libxl/libxl_arm.c
index 141e159043..f8b4ef55e0 100644
--- a/tools/libxl/libxl_arm.c
+++ b/tools/libxl/libxl_arm.c
@@ -89,6 +89,8 @@ int libxl__arch_domain_prepare_config(libxl__gc *gc,
         return ERROR_FAIL;
     }
 
+    config->arch.tee_enabled = libxl_defbool_val(d_config->b_info.tee);
+
     return 0;
 }
 
diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
index fa573344bc..bc9c4ee9ef 100644
--- a/tools/libxl/libxl_create.c
+++ b/tools/libxl/libxl_create.c
@@ -219,6 +219,7 @@ int libxl__domain_build_info_setdefault(libxl__gc *gc,
 
     libxl__arch_domain_build_info_setdefault(gc, b_info);
     libxl_defbool_setdefault(&b_info->dm_restrict, false);
+    libxl_defbool_setdefault(&b_info->tee, false);
 
     switch (b_info->type) {
     case LIBXL_DOMAIN_TYPE_HVM:
diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
index 51cf06a3a2..a634f6e704 100644
--- a/tools/libxl/libxl_types.idl
+++ b/tools/libxl/libxl_types.idl
@@ -613,6 +613,7 @@ libxl_domain_build_info = Struct("domain_build_info",[
     # Alternate p2m is not bound to any architecture or guest type, as it is
     # supported by x86 HVM and ARM support is planned.
     ("altp2m", libxl_altp2m_mode),
+    ("tee",  libxl_defbool),
 
     ], dir=DIR_IN,
        copy_deprecated_fn="libxl__domain_build_info_copy_deprecated",
diff --git a/tools/xl/xl_parse.c b/tools/xl/xl_parse.c
index 352cd214dd..af35f1cce0 100644
--- a/tools/xl/xl_parse.c
+++ b/tools/xl/xl_parse.c
@@ -2549,6 +2549,7 @@ skip_usbdev:
     }
 
     xlu_cfg_get_defbool(config, "dm_restrict", &b_info->dm_restrict, 0);
+    xlu_cfg_get_defbool(config, "tee", &b_info->tee, 0);
 
     if (c_info->type == LIBXL_DOMAIN_TYPE_HVM) {
         if (!xlu_cfg_get_string (config, "vga", &buf, 0)) {
-- 
2.20.0

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply related	[flat|nested] 47+ messages in thread

* [PATCH v3 11/11] libxl: arm: create optee firmware node in DT if tee=1
  2018-12-18 21:11 [PATCH v3 00/11] TEE mediator (and OP-TEE) support in XEN Volodymyr Babchuk
                   ` (8 preceding siblings ...)
  2018-12-18 21:11 ` [PATCH v3 09/11] optee: add support for RPC commands Volodymyr Babchuk
@ 2018-12-18 21:11 ` Volodymyr Babchuk
  2019-01-21 12:10   ` Julien Grall
  2018-12-18 21:11 ` [PATCH v3 10/11] xl: add "tee" option for xl.cfg Volodymyr Babchuk
  2018-12-19 13:25 ` [PATCH v3 00/11] TEE mediator (and OP-TEE) support in XEN Julien Grall
  11 siblings, 1 reply; 47+ messages in thread
From: Volodymyr Babchuk @ 2018-12-18 21:11 UTC (permalink / raw)
  To: xen-devel, Julien Grall; +Cc: Volodymyr Babchuk, Wei Liu, Ian Jackson, tee-dev

From: Volodymyr Babchuk <vlad.babchuk@gmail.com>

If TEE support is enabled with the "tee=1" option in xl.cfg,
then we need to inform the guest about the available TEE.

Currently only OP-TEE is supported, so we create the DT
node in the way expected by the optee driver in Linux.
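
With this change, the guest device tree gains a node equivalent to the
following fragment (shown here only for illustration):

    firmware {
        optee {
            compatible = "linaro,optee-tz";
            method = "smc";
        };
    };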

Signed-off-by: Volodymyr Babchuk <vlad.babchuk@gmail.com>
---
 tools/libxl/libxl_arm.c | 29 +++++++++++++++++++++++++++++
 1 file changed, 29 insertions(+)

diff --git a/tools/libxl/libxl_arm.c b/tools/libxl/libxl_arm.c
index f8b4ef55e0..befccebd19 100644
--- a/tools/libxl/libxl_arm.c
+++ b/tools/libxl/libxl_arm.c
@@ -409,6 +409,32 @@ static int make_psci_node(libxl__gc *gc, void *fdt)
     return 0;
 }
 
+static int make_optee_node(libxl__gc *gc, void *fdt)
+{
+    int res;
+    LOG(DEBUG, "Creating OP-TEE node in dtb");
+
+    res = fdt_begin_node(fdt, "firmware");
+    if (res) return res;
+
+    res = fdt_begin_node(fdt, "optee");
+    if (res) return res;
+
+    res = fdt_property_compat(gc, fdt, 1, "linaro,optee-tz");
+    if (res) return res;
+
+    res = fdt_property_string(fdt, "method", "smc");
+    if (res) return res;
+
+    res = fdt_end_node(fdt);
+    if (res) return res;
+
+    res = fdt_end_node(fdt);
+    if (res) return res;
+
+    return 0;
+}
+
 static int make_memory_nodes(libxl__gc *gc, void *fdt,
                              const struct xc_dom_image *dom)
 {
@@ -922,6 +948,9 @@ next_resize:
         if (info->arch_arm.vuart == LIBXL_VUART_TYPE_SBSA_UART)
             FDT( make_vpl011_uart_node(gc, fdt, ainfo, dom) );
 
+        if (libxl_defbool_val(info->tee))
+            FDT( make_optee_node(gc, fdt));
+
         if (pfdt)
             FDT( copy_partial_fdt(gc, fdt, pfdt) );
 
-- 
2.20.0

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply related	[flat|nested] 47+ messages in thread

* Re: [PATCH v3 00/11] TEE mediator (and OP-TEE) support in XEN
  2018-12-18 21:11 [PATCH v3 00/11] TEE mediator (and OP-TEE) support in XEN Volodymyr Babchuk
                   ` (10 preceding siblings ...)
  2018-12-18 21:11 ` [PATCH v3 10/11] xl: add "tee" option for xl.cfg Volodymyr Babchuk
@ 2018-12-19 13:25 ` Julien Grall
  11 siblings, 0 replies; 47+ messages in thread
From: Julien Grall @ 2018-12-19 13:25 UTC (permalink / raw)
  To: Volodymyr Babchuk, xen-devel
  Cc: Juergen Gross, Stefano Stabellini, Konrad Rzeszutek Wilk,
	Volodymyr Babchuk, Andrew Cooper, Ian Jackson, tee-dev,
	George Dunlap, Jan Beulich, Volodymyr Babchuk

(+Juergen)

On 18/12/2018 21:11, Volodymyr Babchuk wrote:
> From: Volodymyr Babchuk <vlad.babchuk@gmail.com>
> 
> Hello all,

Hi Volodymyr,

> 
> Sorry for late submussion. I was busy with other projects.
Thank you for posting a new version of the series. I am afraid neither Stefano 
nor I will have time to review the series by Friday (Code Freeze for Xen 4.12).

We can continue to review it during the code freeze and aim to merge it as soon as
the tree re-opens.

Cheers,

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH v3 02/11] arm: add tee_enabled flag to xen_arch_domainconfig
  2018-12-18 21:11 ` [PATCH v3 02/11] arm: add tee_enabled flag to xen_arch_domainconfig Volodymyr Babchuk
@ 2019-01-15 17:35   ` Julien Grall
  2019-01-16 17:22     ` Volodymyr Babchuk
  0 siblings, 1 reply; 47+ messages in thread
From: Julien Grall @ 2019-01-15 17:35 UTC (permalink / raw)
  To: Volodymyr Babchuk, xen-devel
  Cc: Volodymyr Babchuk, tee-dev, Stefano Stabellini

Hi,

On 12/18/18 9:11 PM, Volodymyr Babchuk wrote:
> From: Volodymyr Babchuk <vlad.babchuk@gmail.com>
> 
> This flag enables TEE support for a domain.
> 
> Signed-off-by: Volodymyr Babchuk <vlad.babchuk@gmail.com>
> ---
>   xen/arch/arm/domain.c         | 4 ++++
>   xen/arch/arm/domctl.c         | 1 +
>   xen/include/public/arch-arm.h | 3 +++
>   3 files changed, 8 insertions(+)
> ,
> diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
> index 11b618515b..f04041931d 100644
> --- a/xen/arch/arm/domain.c
> +++ b/xen/arch/arm/domain.c
> @@ -702,6 +702,10 @@ int arch_domain_create(struct domain *d,
>       if ( (rc = domain_vtimer_init(d, &config->arch)) != 0 )
>           goto fail;
>   
> +    if ( config->arch.tee_enabled )
> +        if ( (rc = tee_enable(d)) != 0 )

This function does not yet exist. But I think it would make sense to
fold this patch into the next one.

> +            goto fail;
> +
>       update_domain_wallclock_time(d);
>   
>       /*
> diff --git a/xen/arch/arm/domctl.c b/xen/arch/arm/domctl.c
> index 20691528a6..f019e035e8 100644
> --- a/xen/arch/arm/domctl.c
> +++ b/xen/arch/arm/domctl.c
> @@ -13,6 +13,7 @@
>   #include <xen/sched.h>
>   #include <xen/types.h>
>   #include <xsm/xsm.h>
> +#include <asm-arm/tee/tee.h>
>   #include <public/domctl.h>
>   
>   void arch_get_domain_info(const struct domain *d,
> diff --git a/xen/include/public/arch-arm.h b/xen/include/public/arch-arm.h
> index eb424e8286..b7a010e99e 100644
> --- a/xen/include/public/arch-arm.h
> +++ b/xen/include/public/arch-arm.h
> @@ -323,6 +323,9 @@ struct xen_arch_domainconfig {
>        *
>        */
>       uint32_t clock_frequency;
> +
> +    /* IN */
> +    uint8_t tee_enabled;

Can you move this after gic_version? So we don't introduce more padding.

>   };
>   #endif /* __XEN__ || __XEN_TOOLS__ */
>   
> 

Cheers,

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH v3 01/11] arm: add generic TEE mediator framework
  2018-12-18 21:11 ` [PATCH v3 01/11] arm: add generic TEE mediator framework Volodymyr Babchuk
@ 2019-01-15 17:50   ` Julien Grall
  0 siblings, 0 replies; 47+ messages in thread
From: Julien Grall @ 2019-01-15 17:50 UTC (permalink / raw)
  To: Volodymyr Babchuk, xen-devel
  Cc: Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk,
	George Dunlap, Andrew Cooper, Ian Jackson, Tim Deegan, tee-dev,
	Jan Beulich

Hi Volodymyr,

On 12/18/18 9:11 PM, Volodymyr Babchuk wrote:
> This patch adds basic framework for TEE mediators. Guests can't talk
> to TEE directly, we need some entity that will intercept request
> and decide what to do with them. "TEE mediator" is a such entity.
> 
> This is how it works: user can build XEN with multiple TEE mediators
> (see the next patches, where OP-TEE mediator is introduced).
> TEE mediator register self with REGISTER_TEE_MEDIATOR() macro in the
> same way, as device drivers use DT_DEVICE_START()/DT_DEVICE_END()
> macros.
> In run-time, during initialization, framework calls probe() function
> for each available mediator driver to find which TEE is installed
> on the platform. Then generic vSMC handler will call selected mediator
> when it intercept SMC/HVC that belongs to TEE OS or TEE application.
> 
> Currently TEE mediator is enabled only for Dom0.
> 
> Signed-off-by: Volodymyr Babchuk <volodymyr_babchuk@epam.com>
> ---
>   Changes from v2:
>    - Removed empty tee/Kconfig file
> 
>   Changes from v1:
>    - Removed tee_remove() function
>    - CONFIG_TEE depends on EXPERT
>    - tee_domain_created() converted to tee_enable()
>    - tee_init() is called using initcall() mechanism
>    - tee_handle_smc() renamed to tee_handle_call()
> 
>   Changes from "RFC" version:
>    - renamed CONFIG_ARM_TEE to CONFIG_TEE
>    - changed discovery mechanism: instead of UUID mathing, TEE-specific
>       probing is used
> 
>   MAINTAINERS                   |  6 +++
>   xen/arch/arm/Kconfig          |  7 +++
>   xen/arch/arm/Makefile         |  1 +
>   xen/arch/arm/domain.c         |  4 ++
>   xen/arch/arm/domain_build.c   |  4 ++
>   xen/arch/arm/setup.c          |  1 +
>   xen/arch/arm/shutdown.c       |  1 +
>   xen/arch/arm/tee/Makefile     |  1 +
>   xen/arch/arm/tee/tee.c        | 69 ++++++++++++++++++++++++++
>   xen/arch/arm/vsmc.c           |  5 ++
>   xen/arch/arm/xen.lds.S        |  7 +++
>   xen/include/asm-arm/tee/tee.h | 91 +++++++++++++++++++++++++++++++++++
>   12 files changed, 197 insertions(+)
>   create mode 100644 xen/arch/arm/tee/Makefile
>   create mode 100644 xen/arch/arm/tee/tee.c
>   create mode 100644 xen/include/asm-arm/tee/tee.h
> 
> diff --git a/MAINTAINERS b/MAINTAINERS
> index 96a0518f49..eac2b40fdf 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -376,6 +376,12 @@ F:	config/Stubdom.mk.in
>   F:	m4/stubdom.m4
>   F:	stubdom/
>   
> +TEE MEDIATORS
> +M:	Volodymyr Babchuk <volodymyr_babchuk@epam.com>
> +S:	Supported
> +F:	xen/arch/arm/tee/
> +F:	xen/include/asm-arm/tee
> +
>   TOOLSTACK
>   M:	Ian Jackson <ian.jackson@eu.citrix.com>
>   M:	Wei Liu <wei.liu2@citrix.com>
> diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig
> index 581de67b6b..e527b2f885 100644
> --- a/xen/arch/arm/Kconfig
> +++ b/xen/arch/arm/Kconfig
> @@ -105,6 +105,13 @@ config HARDEN_BRANCH_PREDICTOR
>   
>   	  If unsure, say Y.
>   
> +config TEE
> +	bool "Enable TEE mediators support" if EXPERT = "y"
> +	default n
> +	help
> +	  This option enables generic TEE mediators support. It allows guests
> +	  to access real TEE via one of TEE mediators implemented in XEN.
> +
>   endmenu
>   
>   menu "ARM errata workaround via the alternative framework"
> diff --git a/xen/arch/arm/Makefile b/xen/arch/arm/Makefile
> index cb902cb6fe..5c2aa34557 100644
> --- a/xen/arch/arm/Makefile
> +++ b/xen/arch/arm/Makefile
> @@ -5,6 +5,7 @@ subdir-$(CONFIG_ACPI) += acpi
>   ifneq ($(CONFIG_NO_PLAT),y)
>   subdir-y += platforms
>   endif
> +subdir-$(CONFIG_TEE) += tee
>   
>   obj-$(CONFIG_HAS_ALTERNATIVE) += alternative.o
>   obj-y += bootfdt.init.o
> diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
> index 1d926dcb29..11b618515b 100644
> --- a/xen/arch/arm/domain.c
> +++ b/xen/arch/arm/domain.c
> @@ -32,6 +32,7 @@
>   #include <asm/platform.h>
>   #include <asm/procinfo.h>
>   #include <asm/regs.h>
> +#include <asm/tee/tee.h>
>   #include <asm/vfp.h>
>   #include <asm/vgic.h>
>   #include <asm/vtimer.h>
> @@ -931,6 +932,9 @@ int domain_relinquish_resources(struct domain *d)
>            */
>           domain_vpl011_deinit(d);
>   
> +        /* Free TEE mediator resources */
> +        tee_domain_destroy(d);
> +

AFAIR, OP-TEE may have a lot of resources attached to it. So I think it 
would be best if we introduce a new RELMEM_* for it and make 
tee_domain_destroy return an int to handle preemption.
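
Something roughly along these lines (only a sketch of the suggestion, not
part of the review; the RELMEM_tee state and the tee_relinquish_resources()
helper are hypothetical names):

    case RELMEM_tee:
        ret = tee_relinquish_resources(d);
        if ( ret )      /* e.g. -ERESTART, so the call can be preempted */
            return ret;

        d->arch.relmem = RELMEM_xen;
        /* Fallthrough */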

>           d->arch.relmem = RELMEM_xen;
>           /* Fallthrough */
>   
> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> index b0ec3f0b72..39a887b505 100644
> --- a/xen/arch/arm/domain_build.c
> +++ b/xen/arch/arm/domain_build.c
> @@ -24,6 +24,7 @@
>   #include <asm/setup.h>
>   #include <asm/cpufeature.h>
>   #include <asm/domain_build.h>
> +#include <asm/tee/tee.h>
>   
>   #include <xen/irq.h>
>   #include <xen/grant_table.h>
> @@ -1962,6 +1963,9 @@ static int __init construct_domain(struct domain *d, struct kernel_info *kinfo)
>       set_current(saved_current);
>       p2m_restore_state(saved_current);
>   
> +    /* Enable TEE */
> +    tee_enable(d);

Why do you need to call TEE here? This should be done through 
arch_domain_create as for any other domain.

> +
>       memset(regs, 0, sizeof(*regs));
>   
>       regs->pc = (register_t)kinfo->entry;
> diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
> index e83221ab79..cad568d432 100644
> --- a/xen/arch/arm/setup.c
> +++ b/xen/arch/arm/setup.c
> @@ -48,6 +48,7 @@
>   #include <asm/platform.h>
>   #include <asm/procinfo.h>
>   #include <asm/setup.h>
> +#include <asm/tee/tee.h>

You don't add specific TEE code in that file. So why do you need to 
include it?

>   #include <xsm/xsm.h>
>   #include <asm/acpi.h>
>   
> diff --git a/xen/arch/arm/shutdown.c b/xen/arch/arm/shutdown.c
> index b32f07ec0e..30c69506ff 100644
> --- a/xen/arch/arm/shutdown.c
> +++ b/xen/arch/arm/shutdown.c
> @@ -5,6 +5,7 @@
>   #include <xen/smp.h>
>   #include <asm/platform.h>
>   #include <asm/psci.h>
> +#include <asm/tee/tee.h>

Ditto.

>   
>   static void noreturn halt_this_cpu(void *arg)
>   {
> diff --git a/xen/arch/arm/tee/Makefile b/xen/arch/arm/tee/Makefile
> new file mode 100644
> index 0000000000..c54d4796ff
> --- /dev/null
> +++ b/xen/arch/arm/tee/Makefile
> @@ -0,0 +1 @@
> +obj-y += tee.o
> diff --git a/xen/arch/arm/tee/tee.c b/xen/arch/arm/tee/tee.c
> new file mode 100644
> index 0000000000..7fd0148b8a
> --- /dev/null
> +++ b/xen/arch/arm/tee/tee.c
> @@ -0,0 +1,69 @@
> +/*
> + * xen/arch/arm/tee/tee.c
> + *
> + * Generic part of TEE mediator subsystem
> + *
> + * Volodymyr Babchuk <volodymyr_babchuk@epam.com>
> + * Copyright (c) 2018 EPAM Systems.
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License version 2 as
> + * published by the Free Software Foundation.
> + */
> +
> +#include <xen/init.h>

This should be after xen/errno.h

> +#include <xen/errno.h>
> +#include <xen/types.h>
> +#include <asm/tee/tee.h>
> +
> +extern const struct tee_mediator_desc _steemediator[], _eteemediator[];
> +static const struct tee_mediator_ops *mediator_ops;
> +
> +bool tee_handle_call(struct cpu_user_regs *regs)
> +{
> +    if ( !mediator_ops )
> +        return false;
> +
> +    return mediator_ops->handle_call(regs);
> +}
> +
> +int tee_enable(struct domain *d)

I would prefer if you stay consistent with the naming of 
tee_domain_destroy. So this should be called tee_domain_init.

> 
> +    if ( !mediator_ops )
> +        return -ENODEV;
> +
> +    return mediator_ops->enable(d);
> +}
> +
> +void tee_domain_destroy(struct domain *d)
> +{
> +    if ( !mediator_ops )
> +        return;
> +
> +    return mediator_ops->domain_destroy(d);
> +}
> +
> +static int __init tee_init(void)
> +{
> +    const struct tee_mediator_desc *desc;
> +
> +    for ( desc = _steemediator; desc != _eteemediator; desc++ )
For clarity:

{

> +        if ( desc->ops->probe() )
> +        {
> +            printk(XENLOG_INFO "Using TEE mediator for %s\n", desc->name);
> +            mediator_ops = desc->ops;
> +            return 0;
> +        }

}

And add a newline before return.

> +    return 0;
> +}
> +
> +__initcall(tee_init);
> +
> +/*
> + * Local variables:
> + * mode: C
> + * c-file-style: "BSD"
> + * c-basic-offset: 4
> + * indent-tabs-mode: nil
> + * End:
> + */
> diff --git a/xen/arch/arm/vsmc.c b/xen/arch/arm/vsmc.c
> index c4ccae6030..d0199c7874 100644
> --- a/xen/arch/arm/vsmc.c
> +++ b/xen/arch/arm/vsmc.c
> @@ -23,6 +23,7 @@
>   #include <asm/monitor.h>
>   #include <asm/regs.h>
>   #include <asm/smccc.h>
> +#include <asm/tee/tee.h>
>   #include <asm/traps.h>
>   #include <asm/vpsci.h>
>   
> @@ -272,6 +273,10 @@ static bool vsmccc_handle_call(struct cpu_user_regs *regs)
>           case ARM_SMCCC_OWNER_STANDARD:
>               handled = handle_sssc(regs);
>               break;
> +        case ARM_SMCCC_OWNER_TRUSTED_APP ... ARM_SMCCC_OWNER_TRUSTED_APP_END:
> +        case ARM_SMCCC_OWNER_TRUSTED_OS ... ARM_SMCCC_OWNER_TRUSTED_OS_END:
> +            handled = tee_handle_call(regs);
> +            break;
>           }
>       }
>   
> diff --git a/xen/arch/arm/xen.lds.S b/xen/arch/arm/xen.lds.S
> index 1e72906477..e664c4441a 100644
> --- a/xen/arch/arm/xen.lds.S
> +++ b/xen/arch/arm/xen.lds.S
> @@ -137,6 +137,13 @@ SECTIONS
>         _aedevice = .;
>     } :text
>   
> +  . = ALIGN(8);
> +  .teemediator.info : {
> +      _steemediator = .;
> +      *(.teemediator.info)
> +      _eteemediator = .;
> +  } :text
> +
>     . = ALIGN(PAGE_SIZE);             /* Init code and data */
>     __init_begin = .;
>     .init.text : {
> diff --git a/xen/include/asm-arm/tee/tee.h b/xen/include/asm-arm/tee/tee.h
> new file mode 100644
> index 0000000000..0e8b576372
> --- /dev/null
> +++ b/xen/include/asm-arm/tee/tee.h
> @@ -0,0 +1,91 @@
> +/*
> + * xen/include/asm-arm/tee/tee.h
> + *
> + * Generic part of TEE mediator subsystem
> + *
> + * Volodymyr Babchuk <volodymyr_babchuk@epam.com>
> + * Copyright (c) 2018 EPAM Systems.
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License version 2 as
> + * published by the Free Software Foundation.
> + */
> +
> +#ifndef __ARCH_ARM_TEE_TEE_H__
> +#define __ARCH_ARM_TEE_TEE_H__
> +
> +#include <xen/lib.h>
> +#include <xen/types.h>

Newline here please.

> +#include <asm/regs.h>
> +
> +#ifdef CONFIG_TEE
> +
> +struct tee_mediator_ops {

See my comments above regarding some of the prototypes.

> +    /*
> +     * Probe for TEE. Should return true if TEE found and
> +     * mediator is initialized.
> +     */
> +    bool (*probe)(void);
> +
> +    /*
> +     * Called during domain construction if toolstack requests to enable
> +     * TEE support so mediator can inform TEE about new
> +     * guest and create own structures for the new domain.
> +     */
> +    int (*enable)(struct domain *d);
> +
> +    /*
> +     * Called during domain destruction to inform TEE that guest is now dead
> +     * and to destroy all resources allocated for the domain being destroyed.
> +     */
> +    void (*domain_destroy)(struct domain *d);
> +
> +    /* Handle SMCCC call for current domain. */
> +    bool (*handle_call)(struct cpu_user_regs *regs);
> +};
> +
> +struct tee_mediator_desc {
> +    /* Name of the TEE. Just for debugging purposes. */
> +    const char *name;
> +
> +    /* Mediator callbacks as described above. */
> +    const struct tee_mediator_ops *ops;
> +};
> +
> +bool tee_handle_call(struct cpu_user_regs *regs);
> +int tee_enable(struct domain *d);
> +void tee_domain_destroy(struct domain *d);
> +
> +#define REGISTER_TEE_MEDIATOR(_name, _namestr, _ops)          \
> +static const struct tee_mediator_desc __tee_desc_##_name __used     \
> +__section(".teemediator.info") = {                                  \
> +    .name = _namestr,                                               \
> +    .ops = _ops                                                     \
> +}
> +
> +#else
> +
> +static inline bool tee_handle_call(struct cpu_user_regs *regs)
> +{
> +    return false;
> +}
> +
> +static inline int tee_enable(struct domain *d)
> +{
> +    return -ENODEV;
> +}
> +
> +static inline void tee_domain_destroy(struct domain *d) {}
> +
> +#endif  /* CONFIG_TEE */
> +
> +#endif /* __ARCH_ARM_TEE_TEE_H__ */
> +
> +/*
> + * Local variables:
> + * mode: C
> + * c-file-style: "BSD"
> + * c-basic-offset: 4
> + * indent-tabs-mode: nil
> + * End:
> + */
> 

Cheers,

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH v3 04/11] optee: add OP-TEE mediator skeleton
  2018-12-18 21:11 ` [PATCH v3 04/11] optee: add OP-TEE mediator skeleton Volodymyr Babchuk
@ 2019-01-15 18:15   ` Julien Grall
  2019-01-16 17:31     ` Volodymyr Babchuk
  2019-01-16 17:51   ` Julien Grall
  1 sibling, 1 reply; 47+ messages in thread
From: Julien Grall @ 2019-01-15 18:15 UTC (permalink / raw)
  To: Volodymyr Babchuk, xen-devel
  Cc: Volodymyr Babchuk, tee-dev, nd, Stefano Stabellini

Hi Volodymyr,

On 12/18/18 9:11 PM, Volodymyr Babchuk wrote:
> From: Volodymyr Babchuk <vlad.babchuk@gmail.com>
> 
> Add very basic OP-TEE mediator. It can probe for OP-TEE presence,
> tell it about domain creation/destruction and forward all known
> calls.
> 
> This is all what is needed for Dom0 to work with OP-TEE as long
> as Dom0 shares 1:1 mapped pages with OP-TEE. Any attempt to call
> OP-TEE from DomU will fail and can lead to spectacular results. Also,
> problems can arise if Dom0 uses pages mapped from other domains.
> So, this patch should not be merged without next patches in the series.

The last sentence is not necessary in the commit message. You can add it 
after "---". So it is stripped down when the patch is committed, yet we 
know it should not be merged :).

> 
> This code issues two non-preemptible calls to OP-TEE: to create and
> to destroy client context. They can't block in OP-TEE, but OP-TEE can
> wait on a splinlocks, so there is no maximal execution time
> guaranteed.

This paragraph is worrying. From the discussion we had on the previous 
version and some discussions at Linaro Connect with OP-TEE folks, I 
gathered this call should not take a long time.

In general, a spinlock should have no contention, or the wait should be limited.
In particular, in the context of OP-TEE I would expect something is 
really wrong if you wait on a spinlock for a really long time as you 
block the normal world. So maybe this is just a misunderstanding of the 
commit message?

> 
> Signed-off-by: Volodymyr Babchuk <vlad.babchuk@gmail.com>
> ---
> 
>   Changes from v2:
>    - Fixed coding style
>    - Introduced tee/Kconfig
>    - Fixed error messages
> 
>   xen/arch/arm/Kconfig                |   2 +
>   xen/arch/arm/tee/Kconfig            |   4 +
>   xen/arch/arm/tee/Makefile           |   1 +
>   xen/arch/arm/tee/optee.c            | 151 ++++++++++++++++++++++++++++
>   xen/include/asm-arm/tee/optee_smc.h |  50 +++++++++
>   5 files changed, 208 insertions(+)
>   create mode 100644 xen/arch/arm/tee/Kconfig
>   create mode 100644 xen/arch/arm/tee/optee.c
> 
> diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig
> index e527b2f885..99e6f0ebb2 100644
> --- a/xen/arch/arm/Kconfig
> +++ b/xen/arch/arm/Kconfig
> @@ -237,3 +237,5 @@ source "arch/arm/platforms/Kconfig"
>   source "common/Kconfig"
>   
>   source "drivers/Kconfig"
> +
> +source "arch/arm/tee/Kconfig"
> diff --git a/xen/arch/arm/tee/Kconfig b/xen/arch/arm/tee/Kconfig
> new file mode 100644
> index 0000000000..5b829db2e9
> --- /dev/null
> +++ b/xen/arch/arm/tee/Kconfig
> @@ -0,0 +1,4 @@
> +config OPTEE
> +	bool "Enable OP-TEE mediator"
> +	default n
> +	depends on TEE
> diff --git a/xen/arch/arm/tee/Makefile b/xen/arch/arm/tee/Makefile
> index c54d4796ff..982c879684 100644
> --- a/xen/arch/arm/tee/Makefile
> +++ b/xen/arch/arm/tee/Makefile
> @@ -1 +1,2 @@
>   obj-y += tee.o
> +obj-$(CONFIG_OPTEE) += optee.o
> diff --git a/xen/arch/arm/tee/optee.c b/xen/arch/arm/tee/optee.c
> new file mode 100644
> index 0000000000..73ad25ee0b
> --- /dev/null
> +++ b/xen/arch/arm/tee/optee.c
> @@ -0,0 +1,151 @@
> +/*
> + * xen/arch/arm/tee/optee.c
> + *
> + * OP-TEE mediator
> + *
> + * Volodymyr Babchuk <volodymyr_babchuk@epam.com>
> + * Copyright (c) 2018 EPAM Systems.
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License version 2 as
> + * published by the Free Software Foundation.
> + */
> +
> +#include <xen/device_tree.h>
> +#include <xen/sched.h>
> +#include <asm/smccc.h>
> +#include <asm/tee/tee.h>
> +
> +#include <asm/tee/optee_msg.h>
> +#include <asm/tee/optee_smc.h>
> +
> +/* Client ID 0 is reserved for hypervisor itself */
> +#define OPTEE_CLIENT_ID(domain) (domain->domain_id + 1)

The second 'domain' should be between () to avoid macro-expansion issue.

> +
> +static bool optee_probe(void)
> +{
> +    struct dt_device_node *node;
> +    struct arm_smccc_res resp;
> +
> +    /* Check for entry in dtb  */
> +    node = dt_find_compatible_node(NULL, NULL, "linaro,optee-tz");
> +    if ( !node )
> +        return false;
> +
> +    /* Check UID */
> +    arm_smccc_smc(ARM_SMCCC_CALL_UID_FID(TRUSTED_OS_END), &resp);
> +
> +    if ( (uint32_t)resp.a0 != OPTEE_MSG_UID_0 ||
> +         (uint32_t)resp.a1 != OPTEE_MSG_UID_1 ||
> +         (uint32_t)resp.a2 != OPTEE_MSG_UID_2 ||
> +         (uint32_t)resp.a3 != OPTEE_MSG_UID_3 )
> +        return false;
> +
> +    return true;
> +}
> +
> +static int optee_enable(struct domain *d)
> +{
> +    struct arm_smccc_res resp;
> +
> +    /*
> +     * Inform OP-TEE about a new guest.
> +     * This is a "Fast" call in terms of OP-TEE. This basically
> +     * means that it can't be preempted, because there is no
> +     * thread allocated for it in OP-TEE. It is close to atomic
> +     * context in linux kernel: E.g. no blocking calls can be issued.
> +     * Also, interrupts are disabled.
> +     * Right now OP-TEE just frees allocated memory, so it should be
> +     * really fast.

The last sentence looks wrong in this context.

> +     */
> +    arm_smccc_smc(OPTEE_SMC_VM_CREATED, OPTEE_CLIENT_ID(d), 0, 0, 0, 0, 0, 0,
> +                  &resp);
> +    if ( resp.a0 != OPTEE_SMC_RETURN_OK )
> +    {
> +        gprintk(XENLOG_WARNING, "Unable to create OPTEE client: rc = 0x%X\n",
> +                (uint32_t)resp.a0);
> +        return -ENODEV;
> +    }
> +
> +    return 0;
> +}
> +
> +static void forward_call(struct cpu_user_regs *regs)
> +{
> +    struct arm_smccc_res resp;
> +
> +    arm_smccc_smc(get_user_reg(regs, 0),
> +                  get_user_reg(regs, 1),
> +                  get_user_reg(regs, 2),
> +                  get_user_reg(regs, 3),
> +                  get_user_reg(regs, 4),
> +                  get_user_reg(regs, 5),
> +                  get_user_reg(regs, 6),
> +                  OPTEE_CLIENT_ID(current->domain),
> +                  &resp);
> +
> +    set_user_reg(regs, 0, resp.a0);
> +    set_user_reg(regs, 1, resp.a1);
> +    set_user_reg(regs, 2, resp.a2);
> +    set_user_reg(regs, 3, resp.a3);
> +    set_user_reg(regs, 4, 0);
> +    set_user_reg(regs, 5, 0);
> +    set_user_reg(regs, 6, 0);
> +    set_user_reg(regs, 7, 0);
> +}
> +
> +static void optee_domain_destroy(struct domain *d)
> +{
> +    struct arm_smccc_res resp;
> +
> +    /* At this time all domain VCPUs should be stopped */
> +
> +    /*
> +     * Inform OP-TEE that domain is shutting down. This is
> +     * also a fast SMC call, like OPTEE_SMC_VM_CREATED, so
> +     * it is also non-preemptible.
> +     */
> +    arm_smccc_smc(OPTEE_SMC_VM_DESTROYED, OPTEE_CLIENT_ID(d), 0, 0, 0, 0, 0, 0,
> +                  &resp);
> +}
> +
> +static bool optee_handle_call(struct cpu_user_regs *regs)
> +{
> +    switch ( get_user_reg(regs, 0) )
> +    {
> +    case OPTEE_SMC_CALLS_COUNT:
> +    case OPTEE_SMC_CALLS_UID:
> +    case OPTEE_SMC_CALLS_REVISION:
> +    case OPTEE_SMC_CALL_GET_OS_UUID:
> +    case OPTEE_SMC_FUNCID_GET_OS_REVISION:
> +    case OPTEE_SMC_ENABLE_SHM_CACHE:
> +    case OPTEE_SMC_DISABLE_SHM_CACHE:
> +    case OPTEE_SMC_GET_SHM_CONFIG:
> +    case OPTEE_SMC_EXCHANGE_CAPABILITIES:
> +    case OPTEE_SMC_CALL_WITH_ARG:
> +    case OPTEE_SMC_CALL_RETURN_FROM_RPC:
> +        forward_call(regs);
> +        return true;
> +    default:
> +        return false;
> +    }
> +}
> +
> +static const struct tee_mediator_ops optee_ops =
> +{
> +    .probe = optee_probe,
> +    .enable = optee_enable,
> +    .domain_destroy = optee_domain_destroy,
> +    .handle_call = optee_handle_call,
> +};
> +
> +REGISTER_TEE_MEDIATOR(optee, "OP-TEE", &optee_ops);
> +
> +/*
> + * Local variables:
> + * mode: C
> + * c-file-style: "BSD"
> + * c-basic-offset: 4
> + * indent-tabs-mode: nil
> + * End:
> + */
> diff --git a/xen/include/asm-arm/tee/optee_smc.h b/xen/include/asm-arm/tee/optee_smc.h
> index 26d100e215..1c5a2479e9 100644
> --- a/xen/include/asm-arm/tee/optee_smc.h
> +++ b/xen/include/asm-arm/tee/optee_smc.h
> @@ -304,6 +304,56 @@ struct optee_smc_disable_shm_cache_result {
>   #define OPTEE_SMC_ENABLE_SHM_CACHE \
>   	OPTEE_SMC_FAST_CALL_VAL(OPTEE_SMC_FUNCID_ENABLE_SHM_CACHE)
>   
> +/*
> + * Inform OP-TEE about a new virtual machine
> + *
> + * Hypervisor issues this call during virtual machine (guest) creation.
> + * OP-TEE records VM_ID of new virtual machine and makes self ready
> + * to receive requests from it.
> + *
> + * Call requests usage:
> + * a0	SMC Function ID, OPTEE_SMC_VM_CREATED
> + * a1	VM_ID of newly created virtual machine
> + * a2-6 Not used
> + * a7	Hypervisor Client ID register. Must be 0, because only hypervisor
> + *      can issue this call
> + *
> + * Normal return register usage:
> + * a0	OPTEE_SMC_RETURN_OK
> + * a1-7	Preserved
> + *
> + * Error return:
> + * a0	OPTEE_SMC_RETURN_ENOTAVAIL	OP-TEE has no resources for
> + *					another VM
> + * a1-7	Preserved
> + *
> + */
> +#define OPTEE_SMC_FUNCID_VM_CREATED	13
> +#define OPTEE_SMC_VM_CREATED \
> +	OPTEE_SMC_FAST_CALL_VAL(OPTEE_SMC_FUNCID_VM_CREATED)
> +
> +/*
> + * Inform OP-TEE about shutdown of a virtual machine
> + *
> + * Hypervisor issues this call during virtual machine (guest) destruction.
> + * OP-TEE will clean up all resources associated with this VM.
> + *
> + * Call requests usage:
> + * a0	SMC Function ID, OPTEE_SMC_VM_DESTROYED
> + * a1	VM_ID of virtual machine being shutted down
> + * a2-6 Not used
> + * a7	Hypervisor Client ID register. Must be 0, because only hypervisor
> + *      can issue this call
> + *
> + * Normal return register usage:
> + * a0	OPTEE_SMC_RETURN_OK
> + * a1-7	Preserved
> + *
> + */
> +#define OPTEE_SMC_FUNCID_VM_DESTROYED	14
> +#define OPTEE_SMC_VM_DESTROYED \
> +	OPTEE_SMC_FAST_CALL_VAL(OPTEE_SMC_FUNCID_VM_DESTROYED)
> +
>   /*
>    * Resume from RPC (for example after processing a foreign interrupt)
>    *
> 

Cheers,

-- 
Julien Grall
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH v3 02/11] arm: add tee_enabled flag to xen_arch_domainconfig
  2019-01-15 17:35   ` Julien Grall
@ 2019-01-16 17:22     ` Volodymyr Babchuk
  2019-01-16 17:27       ` Julien Grall
  0 siblings, 1 reply; 47+ messages in thread
From: Volodymyr Babchuk @ 2019-01-16 17:22 UTC (permalink / raw)
  To: Julien Grall
  Cc: Volodymyr Babchuk, xen-devel, Stefano Stabellini,
	Volodymyr Babchuk, tee-dev


Hello Julien,

Julien Grall writes:

> Hi,
>
> On 12/18/18 9:11 PM, Volodymyr Babchuk wrote:
>> From: Volodymyr Babchuk <vlad.babchuk@gmail.com>
>>
>> This flag enables TEE support for a domain.
>>
>> Signed-off-by: Volodymyr Babchuk <vlad.babchuk@gmail.com>
>> ---
>>   xen/arch/arm/domain.c         | 4 ++++
>>   xen/arch/arm/domctl.c         | 1 +
>>   xen/include/public/arch-arm.h | 3 +++
>>   3 files changed, 8 insertions(+)
>> ,
>> diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
>> index 11b618515b..f04041931d 100644
>> --- a/xen/arch/arm/domain.c
>> +++ b/xen/arch/arm/domain.c
>> @@ -702,6 +702,10 @@ int arch_domain_create(struct domain *d,
>>       if ( (rc = domain_vtimer_init(d, &config->arch)) != 0 )
>>           goto fail;
>>   +    if ( config->arch.tee_enabled )
>> +        if ( (rc = tee_enable(d)) != 0 )
>
> This function does not yet exist. But I think it would make sense to
> fold this patch in the next one.

If you were talking about tee_enable(), then it was introduced in the
previous patch.

Sure, I'll squash this patch into the previous one.

>> +            goto fail;
>> +
>>       update_domain_wallclock_time(d);
>>         /*
>> diff --git a/xen/arch/arm/domctl.c b/xen/arch/arm/domctl.c
>> index 20691528a6..f019e035e8 100644
>> --- a/xen/arch/arm/domctl.c
>> +++ b/xen/arch/arm/domctl.c
>> @@ -13,6 +13,7 @@
>>   #include <xen/sched.h>
>>   #include <xen/types.h>
>>   #include <xsm/xsm.h>
>> +#include <asm-arm/tee/tee.h>
>>   #include <public/domctl.h>
>>     void arch_get_domain_info(const struct domain *d,
>> diff --git a/xen/include/public/arch-arm.h b/xen/include/public/arch-arm.h
>> index eb424e8286..b7a010e99e 100644
>> --- a/xen/include/public/arch-arm.h
>> +++ b/xen/include/public/arch-arm.h
>> @@ -323,6 +323,9 @@ struct xen_arch_domainconfig {
>>        *
>>        */
>>       uint32_t clock_frequency;
>> +
>> +    /* IN */
>> +    uint8_t tee_enabled;
>
> Can you move this after gic_version? So we don't introduce more padding.
Sure.

>>   };
>>   #endif /* __XEN__ || __XEN_TOOLS__ */
>>
>>
>
> Cheers,


--
Best regards,Volodymyr Babchuk
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH v3 02/11] arm: add tee_enabled flag to xen_arch_domainconfig
  2019-01-16 17:22     ` Volodymyr Babchuk
@ 2019-01-16 17:27       ` Julien Grall
  0 siblings, 0 replies; 47+ messages in thread
From: Julien Grall @ 2019-01-16 17:27 UTC (permalink / raw)
  To: Volodymyr Babchuk
  Cc: Volodymyr Babchuk, xen-devel, Stefano Stabellini, tee-dev



On 16/01/2019 17:22, Volodymyr Babchuk wrote:
> 
> Hello Julien,
> 
> Julien Grall writes:
> 
>> Hi,
>>
>> On 12/18/18 9:11 PM, Volodymyr Babchuk wrote:
>>> From: Volodymyr Babchuk <vlad.babchuk@gmail.com>
>>>
>>> This flag enables TEE support for a domain.
>>>
>>> Signed-off-by: Volodymyr Babchuk <vlad.babchuk@gmail.com>
>>> ---
>>>    xen/arch/arm/domain.c         | 4 ++++
>>>    xen/arch/arm/domctl.c         | 1 +
>>>    xen/include/public/arch-arm.h | 3 +++
>>>    3 files changed, 8 insertions(+)
>>> ,
>>> diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
>>> index 11b618515b..f04041931d 100644
>>> --- a/xen/arch/arm/domain.c
>>> +++ b/xen/arch/arm/domain.c
>>> @@ -702,6 +702,10 @@ int arch_domain_create(struct domain *d,
>>>        if ( (rc = domain_vtimer_init(d, &config->arch)) != 0 )
>>>            goto fail;
>>>    +    if ( config->arch.tee_enabled )
>>> +        if ( (rc = tee_enable(d)) != 0 )
>>
>> This function does not yet exist. But I think it would make sense to
>> fold this patch in the next one.
> 
> If you were talking about tee_enable(), then it was introduced in the
> previous patch.

Oh, somehow the patches are not correctly ordered in my inbox. Sorry for the noise.

> 
> Sure, I'll squash this patch into the previous one.
Although, I still think squashing the two would be best.


> 
>>> +            goto fail;
>>> +
>>>        update_domain_wallclock_time(d);
>>>          /*
>>> diff --git a/xen/arch/arm/domctl.c b/xen/arch/arm/domctl.c
>>> index 20691528a6..f019e035e8 100644
>>> --- a/xen/arch/arm/domctl.c
>>> +++ b/xen/arch/arm/domctl.c
>>> @@ -13,6 +13,7 @@
>>>    #include <xen/sched.h>
>>>    #include <xen/types.h>
>>>    #include <xsm/xsm.h>
>>> +#include <asm-arm/tee/tee.h>
>>>    #include <public/domctl.h>
>>>      void arch_get_domain_info(const struct domain *d,
>>> diff --git a/xen/include/public/arch-arm.h b/xen/include/public/arch-arm.h
>>> index eb424e8286..b7a010e99e 100644
>>> --- a/xen/include/public/arch-arm.h
>>> +++ b/xen/include/public/arch-arm.h
>>> @@ -323,6 +323,9 @@ struct xen_arch_domainconfig {
>>>         *
>>>         */
>>>        uint32_t clock_frequency;
>>> +
>>> +    /* IN */
>>> +    uint8_t tee_enabled;
>>
>> Can you move this after gic_version? So we don't introduce more padding.
> Sure.
> 
>>>    };
>>>    #endif /* __XEN__ || __XEN_TOOLS__ */
>>>
>>>
>>
>> Cheers,
> 
> 
> --
> Best regards,Volodymyr Babchuk
> 
Cheers,

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH v3 04/11] optee: add OP-TEE mediator skeleton
  2019-01-15 18:15   ` Julien Grall
@ 2019-01-16 17:31     ` Volodymyr Babchuk
  0 siblings, 0 replies; 47+ messages in thread
From: Volodymyr Babchuk @ 2019-01-16 17:31 UTC (permalink / raw)
  To: Julien Grall
  Cc: Stefano Stabellini, Volodymyr Babchuk, tee-dev, xen-devel, nd,
	Volodymyr Babchuk


Hi Julien,

Julien Grall writes:

> Hi Volodymyr,
>
> On 12/18/18 9:11 PM, Volodymyr Babchuk wrote:
>> From: Volodymyr Babchuk <vlad.babchuk@gmail.com>
>>
>> Add very basic OP-TEE mediator. It can probe for OP-TEE presence,
>> tell it about domain creation/destruction and forward all known
>> calls.
>>
>> This is all what is needed for Dom0 to work with OP-TEE as long
>> as Dom0 shares 1:1 mapped pages with OP-TEE. Any attempt to call
>> OP-TEE from DomU will fail and can lead to spectacular results. Also,
>> problems can arise if Dom0 uses pages mapped from other domains.
>> So, this patch should not be merged without next patches in the series.
>
> The last sentence is not necessary in the commit message. You can add it
> after "---". So it is stripped down when the patch is committed, yet we
> know it should not be merged :).
Oh, okay.

>>
>> This code issues two non-preemptible calls to OP-TEE: to create and
>> to destroy client context. They can't block in OP-TEE, but OP-TEE can
>> wait on a splinlocks, so there is no maximal execution time
>> guaranteed.
>
> This paragraph is worrying. From the discussion we had on the previous
> version and some discussions at Linaro Connect with OP-TEE folks, I
> gathered this call should not take a long time.
Yes, this is right. It **should** not take a long time. And currently I
can't see why it would take a long time anyway. But, you know,
theoretically, there is no upper limit on waiting for a spinlock.

> In general spinlock should have no contention or wait should be limited.
> In particular, in the context of OP-TEE I would expect something is
> really wrong if you wait on a spinlock for a really long time as you
> block the normal world. So maybe this is just a misunderstanding of the
> commit message?
Yes, I think it is a misunderstanding. OP-TEE uses spinlocks in the same
way as Xen or the kernel - for short atomic operations. But as the spinlock()
function has no timeout argument, it could theoretically block for a
long time.

I think, I'll remove that clause from the commit message. Just for
clarity :)

>
>>
>> Signed-off-by: Volodymyr Babchuk <vlad.babchuk@gmail.com>
>> ---
>>
>>   Changes from v2:
>>    - Fixed coding style
>>    - Introduced tee/Kconfig
>>    - Fixed error messages
>>
>>   xen/arch/arm/Kconfig                |   2 +
>>   xen/arch/arm/tee/Kconfig            |   4 +
>>   xen/arch/arm/tee/Makefile           |   1 +
>>   xen/arch/arm/tee/optee.c            | 151 ++++++++++++++++++++++++++++
>>   xen/include/asm-arm/tee/optee_smc.h |  50 +++++++++
>>   5 files changed, 208 insertions(+)
>>   create mode 100644 xen/arch/arm/tee/Kconfig
>>   create mode 100644 xen/arch/arm/tee/optee.c
>>
>> diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig
>> index e527b2f885..99e6f0ebb2 100644
>> --- a/xen/arch/arm/Kconfig
>> +++ b/xen/arch/arm/Kconfig
>> @@ -237,3 +237,5 @@ source "arch/arm/platforms/Kconfig"
>>   source "common/Kconfig"
>>
>>   source "drivers/Kconfig"
>> +
>> +source "arch/arm/tee/Kconfig"
>> diff --git a/xen/arch/arm/tee/Kconfig b/xen/arch/arm/tee/Kconfig
>> new file mode 100644
>> index 0000000000..5b829db2e9
>> --- /dev/null
>> +++ b/xen/arch/arm/tee/Kconfig
>> @@ -0,0 +1,4 @@
>> +config OPTEE
>> +	bool "Enable OP-TEE mediator"
>> +	default n
>> +	depends on TEE
>> diff --git a/xen/arch/arm/tee/Makefile b/xen/arch/arm/tee/Makefile
>> index c54d4796ff..982c879684 100644
>> --- a/xen/arch/arm/tee/Makefile
>> +++ b/xen/arch/arm/tee/Makefile
>> @@ -1 +1,2 @@
>>   obj-y += tee.o
>> +obj-$(CONFIG_OPTEE) += optee.o
>> diff --git a/xen/arch/arm/tee/optee.c b/xen/arch/arm/tee/optee.c
>> new file mode 100644
>> index 0000000000..73ad25ee0b
>> --- /dev/null
>> +++ b/xen/arch/arm/tee/optee.c
>> @@ -0,0 +1,151 @@
>> +/*
>> + * xen/arch/arm/tee/optee.c
>> + *
>> + * OP-TEE mediator
>> + *
>> + * Volodymyr Babchuk <volodymyr_babchuk@epam.com>
>> + * Copyright (c) 2018 EPAM Systems.
>> + *
>> + * This program is free software; you can redistribute it and/or modify
>> + * it under the terms of the GNU General Public License version 2 as
>> + * published by the Free Software Foundation.
>> + */
>> +
>> +#include <xen/device_tree.h>
>> +#include <xen/sched.h>
>> +#include <asm/smccc.h>
>> +#include <asm/tee/tee.h>
>> +
>> +#include <asm/tee/optee_msg.h>
>> +#include <asm/tee/optee_smc.h>
>> +
>> +/* Client ID 0 is reserved for hypervisor itself */
>> +#define OPTEE_CLIENT_ID(domain) (domain->domain_id + 1)
>
> The second 'domain' should be between () to avoid macro-expansion issue.
>
>> +
>> +static bool optee_probe(void)
>> +{
>> +    struct dt_device_node *node;
>> +    struct arm_smccc_res resp;
>> +
>> +    /* Check for entry in dtb  */
>> +    node = dt_find_compatible_node(NULL, NULL, "linaro,optee-tz");
>> +    if ( !node )
>> +        return false;
>> +
>> +    /* Check UID */
>> +    arm_smccc_smc(ARM_SMCCC_CALL_UID_FID(TRUSTED_OS_END), &resp);
>> +
>> +    if ( (uint32_t)resp.a0 != OPTEE_MSG_UID_0 ||
>> +         (uint32_t)resp.a1 != OPTEE_MSG_UID_1 ||
>> +         (uint32_t)resp.a2 != OPTEE_MSG_UID_2 ||
>> +         (uint32_t)resp.a3 != OPTEE_MSG_UID_3 )
>> +        return false;
>> +
>> +    return true;
>> +}
>> +
>> +static int optee_enable(struct domain *d)
>> +{
>> +    struct arm_smccc_res resp;
>> +
>> +    /*
>> +     * Inform OP-TEE about a new guest.
>> +     * This is a "Fast" call in terms of OP-TEE. This basically
>> +     * means that it can't be preempted, because there is no
>> +     * thread allocated for it in OP-TEE. It is close to atomic
>> +     * context in linux kernel: E.g. no blocking calls can be issued.
>> +     * Also, interrupts are disabled.
>> +     * Right now OP-TEE just frees allocated memory, so it should be
>> +     * really fast.
>
> The last sentence looks wrong in this context.
>
>> +     */
>> +    arm_smccc_smc(OPTEE_SMC_VM_CREATED, OPTEE_CLIENT_ID(d), 0, 0, 0, 0, 0, 0,
>> +                  &resp);
>> +    if ( resp.a0 != OPTEE_SMC_RETURN_OK )
>> +    {
>> +        gprintk(XENLOG_WARNING, "Unable to create OPTEE client: rc = 0x%X\n",
>> +                (uint32_t)resp.a0);
>> +        return -ENODEV;
>> +    }
>> +
>> +    return 0;
>> +}
>> +
>> +static void forward_call(struct cpu_user_regs *regs)
>> +{
>> +    struct arm_smccc_res resp;
>> +
>> +    arm_smccc_smc(get_user_reg(regs, 0),
>> +                  get_user_reg(regs, 1),
>> +                  get_user_reg(regs, 2),
>> +                  get_user_reg(regs, 3),
>> +                  get_user_reg(regs, 4),
>> +                  get_user_reg(regs, 5),
>> +                  get_user_reg(regs, 6),
>> +                  OPTEE_CLIENT_ID(current->domain),
>> +                  &resp);
>> +
>> +    set_user_reg(regs, 0, resp.a0);
>> +    set_user_reg(regs, 1, resp.a1);
>> +    set_user_reg(regs, 2, resp.a2);
>> +    set_user_reg(regs, 3, resp.a3);
>> +    set_user_reg(regs, 4, 0);
>> +    set_user_reg(regs, 5, 0);
>> +    set_user_reg(regs, 6, 0);
>> +    set_user_reg(regs, 7, 0);
>> +}
>> +
>> +static void optee_domain_destroy(struct domain *d)
>> +{
>> +    struct arm_smccc_res resp;
>> +
>> +    /* At this time all domain VCPUs should be stopped */
>> +
>> +    /*
>> +     * Inform OP-TEE that domain is shutting down. This is
>> +     * also a fast SMC call, like OPTEE_SMC_VM_CREATED, so
>> +     * it is also non-preemptible.
>> +     */
>> +    arm_smccc_smc(OPTEE_SMC_VM_DESTROYED, OPTEE_CLIENT_ID(d), 0, 0, 0, 0, 0, 0,
>> +                  &resp);
>> +}
>> +
>> +static bool optee_handle_call(struct cpu_user_regs *regs)
>> +{
>> +    switch ( get_user_reg(regs, 0) )
>> +    {
>> +    case OPTEE_SMC_CALLS_COUNT:
>> +    case OPTEE_SMC_CALLS_UID:
>> +    case OPTEE_SMC_CALLS_REVISION:
>> +    case OPTEE_SMC_CALL_GET_OS_UUID:
>> +    case OPTEE_SMC_FUNCID_GET_OS_REVISION:
>> +    case OPTEE_SMC_ENABLE_SHM_CACHE:
>> +    case OPTEE_SMC_DISABLE_SHM_CACHE:
>> +    case OPTEE_SMC_GET_SHM_CONFIG:
>> +    case OPTEE_SMC_EXCHANGE_CAPABILITIES:
>> +    case OPTEE_SMC_CALL_WITH_ARG:
>> +    case OPTEE_SMC_CALL_RETURN_FROM_RPC:
>> +        forward_call(regs);
>> +        return true;
>> +    default:
>> +        return false;
>> +    }
>> +}
>> +
>> +static const struct tee_mediator_ops optee_ops =
>> +{
>> +    .probe = optee_probe,
>> +    .enable = optee_enable,
>> +    .domain_destroy = optee_domain_destroy,
>> +    .handle_call = optee_handle_call,
>> +};
>> +
>> +REGISTER_TEE_MEDIATOR(optee, "OP-TEE", &optee_ops);
>> +
>> +/*
>> + * Local variables:
>> + * mode: C
>> + * c-file-style: "BSD"
>> + * c-basic-offset: 4
>> + * indent-tabs-mode: nil
>> + * End:
>> + */
>> diff --git a/xen/include/asm-arm/tee/optee_smc.h b/xen/include/asm-arm/tee/optee_smc.h
>> index 26d100e215..1c5a2479e9 100644
>> --- a/xen/include/asm-arm/tee/optee_smc.h
>> +++ b/xen/include/asm-arm/tee/optee_smc.h
>> @@ -304,6 +304,56 @@ struct optee_smc_disable_shm_cache_result {
>>   #define OPTEE_SMC_ENABLE_SHM_CACHE \
>>   	OPTEE_SMC_FAST_CALL_VAL(OPTEE_SMC_FUNCID_ENABLE_SHM_CACHE)
>>
>> +/*
>> + * Inform OP-TEE about a new virtual machine
>> + *
>> + * Hypervisor issues this call during virtual machine (guest) creation.
>> + * OP-TEE records VM_ID of new virtual machine and makes self ready
>> + * to receive requests from it.
>> + *
>> + * Call requests usage:
>> + * a0	SMC Function ID, OPTEE_SMC_VM_CREATED
>> + * a1	VM_ID of newly created virtual machine
>> + * a2-6 Not used
>> + * a7	Hypervisor Client ID register. Must be 0, because only hypervisor
>> + *      can issue this call
>> + *
>> + * Normal return register usage:
>> + * a0	OPTEE_SMC_RETURN_OK
>> + * a1-7	Preserved
>> + *
>> + * Error return:
>> + * a0	OPTEE_SMC_RETURN_ENOTAVAIL	OP-TEE has no resources for
>> + *					another VM
>> + * a1-7	Preserved
>> + *
>> + */
>> +#define OPTEE_SMC_FUNCID_VM_CREATED	13
>> +#define OPTEE_SMC_VM_CREATED \
>> +	OPTEE_SMC_FAST_CALL_VAL(OPTEE_SMC_FUNCID_VM_CREATED)
>> +
>> +/*
>> + * Inform OP-TEE about shutdown of a virtual machine
>> + *
>> + * Hypervisor issues this call during virtual machine (guest) destruction.
>> + * OP-TEE will clean up all resources associated with this VM.
>> + *
>> + * Call requests usage:
>> + * a0	SMC Function ID, OPTEE_SMC_VM_DESTROYED
>> + * a1	VM_ID of virtual machine being shutted down
>> + * a2-6 Not used
>> + * a7	Hypervisor Client ID register. Must be 0, because only hypervisor
>> + *      can issue this call
>> + *
>> + * Normal return register usage:
>> + * a0	OPTEE_SMC_RETURN_OK
>> + * a1-7	Preserved
>> + *
>> + */
>> +#define OPTEE_SMC_FUNCID_VM_DESTROYED	14
>> +#define OPTEE_SMC_VM_DESTROYED \
>> +	OPTEE_SMC_FAST_CALL_VAL(OPTEE_SMC_FUNCID_VM_DESTROYED)
>> +
>>   /*
>>    * Resume from RPC (for example after processing a foreign interrupt)
>>    *
>>
>
> Cheers,


--
Best regards,Volodymyr Babchuk
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH v3 04/11] optee: add OP-TEE mediator skeleton
  2018-12-18 21:11 ` [PATCH v3 04/11] optee: add OP-TEE mediator skeleton Volodymyr Babchuk
  2019-01-15 18:15   ` Julien Grall
@ 2019-01-16 17:51   ` Julien Grall
  1 sibling, 0 replies; 47+ messages in thread
From: Julien Grall @ 2019-01-16 17:51 UTC (permalink / raw)
  To: Volodymyr Babchuk, xen-devel
  Cc: Volodymyr Babchuk, tee-dev, Stefano Stabellini

Hi Volodymyr,

On 18/12/2018 21:11, Volodymyr Babchuk wrote:
> +static void optee_domain_destroy(struct domain *d)
> +{
> +    struct arm_smccc_res resp;
> +
> +    /* At this time all domain VCPUs should be stopped */

This function is called unconditionally, i.e. it can be called even if
optee_enable() was never called. So you need to cater for that case.

Cheers,

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH v3 06/11] optee: add std call handling
  2018-12-18 21:11 ` [PATCH v3 06/11] optee: add std call handling Volodymyr Babchuk
@ 2019-01-16 18:45   ` Julien Grall
  2019-01-17 15:21     ` Volodymyr Babchuk
  0 siblings, 1 reply; 47+ messages in thread
From: Julien Grall @ 2019-01-16 18:45 UTC (permalink / raw)
  To: Volodymyr Babchuk, xen-devel
  Cc: Volodymyr Babchuk, tee-dev, Stefano Stabellini

Hi Volodymyr,

On 18/12/2018 21:11, Volodymyr Babchuk wrote:
> From: Volodymyr Babchuk <vlad.babchuk@gmail.com>
> 
> The main way to communicate with OP-TEE is to issue standard SMCCC
> call. "Standard" is a SMCCC term and it means that call can be
> interrupted and OP-TEE can return control to NW before completing
> the call.
> 
> In contrast with fast calls, where arguments and return values
> are passed in registers, standard calls use shared memory. Register
> pair a1,a2 holds 64-bit PA of command buffer, where all arguments
> are stored and which is used to return data. OP-TEE internally
> copies contents of this buffer into own secure memory before accessing
> and validating any data in command buffer. This is done to make sure
> that NW will not change contents of the validated parameters.
> 
> Mediator needs to do the same for number of reasons:
> 
> 1. To make sure that guest will not change data after validation.
> 2. To translate IPAs to PAs in the command buffer (this is not done
>     in this patch).
> 3. To hide translated address from guest, so it will not be able
>     to do IPA->PA translation by misusing mediator.
> 
> During standard call OP-TEE can issue multiple "RPC returns", asking
> NW to do some work for OP-TEE. NW then issues special call
> OPTEE_SMC_CALL_RETURN_FROM_RPC to resume handling of the original call.
> Thus, mediator needs to maintain context for original standard call
> during multiple SMCCC calls.
> 
> Standard call is considered complete, when returned value is
> not a RPC request.
> 
> Signed-off-by: Volodymyr Babchuk <vlad.babchuk@gmail.com>
> ---
> 
>   Changes from v2:
>    - renamed struct domain_ctx to struct optee_domain
>    - fixed coding style
>    - Now I use access_guest_memory_by_ipa() instead of mappings
>      to read command buffer
>    - Added tracking for in flight calls, so guest can't resume
>      the same call from two CPUs simultaniously
> 
>   xen/arch/arm/tee/optee.c     | 319 ++++++++++++++++++++++++++++++++++-
>   xen/include/asm-arm/domain.h |   3 +
>   2 files changed, 320 insertions(+), 2 deletions(-)
> 
> diff --git a/xen/arch/arm/tee/optee.c b/xen/arch/arm/tee/optee.c
> index 584241b03a..dc90e2ed8e 100644
> --- a/xen/arch/arm/tee/optee.c
> +++ b/xen/arch/arm/tee/optee.c
> @@ -12,6 +12,8 @@
>    */
>   
>   #include <xen/device_tree.h>
> +#include <xen/domain_page.h>
> +#include <xen/guest_access.h>
>   #include <xen/sched.h>
>   #include <asm/smccc.h>
>   #include <asm/tee/tee.h>
> @@ -22,11 +24,38 @@
>   /* Client ID 0 is reserved for hypervisor itself */
>   #define OPTEE_CLIENT_ID(domain) (domain->domain_id + 1)
>   
> +/*
> + * Maximal number of concurrent standard calls from one guest. This
> + * corresponds to OPTEE configuration option CFG_NUM_THREADS, because
> + * OP-TEE spawns a thread for every standard call.

Looking at OP-TEE, CFG_NUM_THREADS will vary depending on the platform. Is there 
any way to probe that number of threads from Xen?

In any case, I think we should update the comment to reflect that this seems to 
be the maximum CFG_NUM_THREADS supported by any upstream platform.
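
Maybe something along these lines (the wording is only a suggestion):

    /*
     * Maximum number of concurrent standard calls from one guest. This is
     * the biggest CFG_NUM_THREADS used by any upstream OP-TEE platform,
     * as OP-TEE spawns a thread for every standard call.
     */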

> + */
> +#define MAX_STD_CALLS   16
> +
>   #define OPTEE_KNOWN_NSEC_CAPS OPTEE_SMC_NSEC_CAP_UNIPROCESSOR
>   #define OPTEE_KNOWN_SEC_CAPS (OPTEE_SMC_SEC_CAP_HAVE_RESERVED_SHM | \
>                                 OPTEE_SMC_SEC_CAP_UNREGISTERED_SHM |  \
>                                 OPTEE_SMC_SEC_CAP_DYNAMIC_SHM)
>   
> +/*
> + * Call context. OP-TEE can issue multiple RPC returns during one call.
> + * We need to preserve context during them.
> + */
> +struct optee_std_call {
> +    struct list_head list;
> +    struct optee_msg_arg *xen_arg;
> +    paddr_t guest_arg_ipa;
> +    int optee_thread_id;
> +    int rpc_op;
> +    bool in_flight;
> +};
> +
> +/* Domain context */
> +struct optee_domain {
> +    struct list_head call_list;
> +    atomic_t call_count;
> +    spinlock_t lock;
> +};
> +
>   static bool optee_probe(void)
>   {
>       struct dt_device_node *node;
> @@ -52,6 +81,11 @@ static bool optee_probe(void)
>   static int optee_enable(struct domain *d)
>   {
>       struct arm_smccc_res resp;
> +    struct optee_domain *ctx;
> +
> +    ctx = xzalloc(struct optee_domain);
> +    if ( !ctx )
> +        return -ENOMEM;
>   
>       /*
>        * Inform OP-TEE about a new guest.
> @@ -69,9 +103,16 @@ static int optee_enable(struct domain *d)
>       {
>           gprintk(XENLOG_WARNING, "Unable to create OPTEE client: rc = 0x%X\n",
>                   (uint32_t)resp.a0);
> +        xfree(ctx);
>           return -ENODEV;
>       }
>   
> +    INIT_LIST_HEAD(&ctx->call_list);
> +    atomic_set(&ctx->call_count, 0);
> +    spin_lock_init(&ctx->lock);
> +
> +    d->arch.tee = ctx;
> +
>       return 0;
>   }
>   
> @@ -111,9 +152,86 @@ static void set_return(struct cpu_user_regs *regs, uint32_t ret)
>       set_user_reg(regs, 7, 0);
>   }
>   
> +static struct optee_std_call *allocate_std_call(struct optee_domain *ctx)
> +{
> +    struct optee_std_call *call;
> +    int count;
> +
> +    /* Make sure that guest does not execute more than MAX_STD_CALLS */
> +    count = atomic_add_unless(&ctx->call_count, 1, MAX_STD_CALLS);
> +    if ( count == MAX_STD_CALLS )
> +        return NULL;
> +
> +    call = xzalloc(struct optee_std_call);
> +    if ( !call )
> +    {
> +        atomic_dec(&ctx->call_count);
> +        return NULL;
> +    }
> +
> +    call->optee_thread_id = -1;
> +    call->in_flight = true;
> +
> +    spin_lock(&ctx->lock);
> +    list_add_tail(&call->list, &ctx->call_list);
> +    spin_unlock(&ctx->lock);
> +
> +    return call;
> +}
> +
> +static void free_std_call(struct optee_domain *ctx,
> +                          struct optee_std_call *call)
> +{
> +    atomic_dec(&ctx->call_count);
> +
> +    spin_lock(&ctx->lock);
> +    list_del(&call->list);
> +    spin_unlock(&ctx->lock);
> +
> +    ASSERT(!call->in_flight);
> +    xfree(call->xen_arg);
> +    xfree(call);
> +}
> +
> +static struct optee_std_call *get_std_call(struct optee_domain *ctx,
> +                                           int thread_id)
> +{
> +    struct optee_std_call *call;
> +
> +    spin_lock(&ctx->lock);
> +    list_for_each_entry( call, &ctx->call_list, list )
> +    {
> +        if ( call->optee_thread_id == thread_id )
> +        {
> +            if ( call->in_flight )
> +            {
> +                gprintk(XENLOG_WARNING, "Guest tries to execute call which is already in flight");
> +                goto out;
> +            }
> +            call->in_flight = true;
> +            spin_unlock(&ctx->lock);
> +            return call;
> +        }
> +    }
> +out:
> +    spin_unlock(&ctx->lock);
> +
> +    return NULL;
> +}
> +
> +static void put_std_call(struct optee_domain *ctx, struct optee_std_call *call)
> +{
> +    spin_lock(&ctx->lock);
> +    ASSERT(call->in_flight);
> +    call->in_flight = false;
> +    spin_unlock(&ctx->lock);
> +}
> +
>   static void optee_domain_destroy(struct domain *d)
>   {
>       struct arm_smccc_res resp;
> +    struct optee_std_call *call, *call_tmp;
> +    struct optee_domain *ctx = d->arch.tee;
>   	
>       /* At this time all domain VCPUs should be stopped */
>   
> @@ -124,6 +242,199 @@ static void optee_domain_destroy(struct domain *d)
>        */
>       arm_smccc_smc(OPTEE_SMC_VM_DESTROYED, OPTEE_CLIENT_ID(d), 0, 0, 0, 0, 0, 0,
>                     &resp);

This function can be called without enable and should be idempotent. So I would 
check d->arch.tee before and...

> +    ASSERT(!spin_is_locked(&ctx->lock));
> +
> +    list_for_each_entry_safe( call, call_tmp, &ctx->call_list, list )
> +        free_std_call(ctx, call);
> +
> +    ASSERT(!atomic_read(&ctx->call_count));
> +
> +    xfree(d->arch.tee);

use XFREE here.
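
Something along these lines is what I have in mind (only a sketch, the early
return and XFREE() are the important parts):

    static void optee_domain_destroy(struct domain *d)
    {
        struct optee_domain *ctx = d->arch.tee;

        /* The mediator may never have been enabled for this domain. */
        if ( !ctx )
            return;

        /* ... existing teardown ... */

        /* XFREE() also resets d->arch.tee, so a second call is harmless. */
        XFREE(d->arch.tee);
    }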

> +}
> +
> +/*
> + * Copy command buffer into xen memory to:
> + * 1) Hide translated addresses from guest
> + * 2) Make sure that guest wouldn't change data in command buffer during call
> + */
> +static bool copy_std_request(struct cpu_user_regs *regs,
> +                             struct optee_std_call *call)
> +{
> +    paddr_t xen_addr;
> +
> +    call->guest_arg_ipa = (paddr_t)get_user_reg(regs, 1) << 32 |
> +                            get_user_reg(regs, 2);

NIT: The indentation looks weird here.

> +
> +    /*
> +     * Command buffer should start at page boundary.
> +     * This is OP-TEE ABI requirement.
> +     */
> +    if ( call->guest_arg_ipa & (OPTEE_MSG_NONCONTIG_PAGE_SIZE - 1) )
> +        return false;
> +
> +    call->xen_arg = _xmalloc(OPTEE_MSG_NONCONTIG_PAGE_SIZE,
> +                             OPTEE_MSG_NONCONTIG_PAGE_SIZE);
> +    if ( !call->xen_arg )
> +        return false;
> +
> +    BUILD_BUG_ON(OPTEE_MSG_NONCONTIG_PAGE_SIZE > PAGE_SIZE);

As you use _xmalloc, you should not need this. This is only necessary if you use 
alloc_xenheap_page.

I am wondering whether it is wise to allocate the memory from xenheap and not 
domheap. While on Arm64 (for now) xenheap and domheap are the same, on Arm32 
they are different. The xenheap is at most 1GB, so pretty limited.

Furthermore, using domheap would have the advantage of allowing, in the future, 
the allocation to be accounted to the guest, and of adding more safety (there are 
discussions about making domheap per-domain).

> +
> +    access_guest_memory_by_ipa(current->domain, call->guest_arg_ipa,
> +                               call->xen_arg, OPTEE_MSG_NONCONTIG_PAGE_SIZE,
> +                               false);

You need to check the return of access_guest_memory_by_ipa as this function can 
fail.
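
E.g. something like (untested):

    if ( access_guest_memory_by_ipa(current->domain, call->guest_arg_ipa,
                                    call->xen_arg,
                                    OPTEE_MSG_NONCONTIG_PAGE_SIZE,
                                    false) )
        return false;   /* The caller's error path will free call->xen_arg. */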

> +
> +    xen_addr = virt_to_maddr(call->xen_arg);
> +
> +    set_user_reg(regs, 1, xen_addr >> 32);
> +    set_user_reg(regs, 2, xen_addr & 0xFFFFFFFF);
> +
> +    return true;
> +}
> +
> +static void copy_std_request_back(struct optee_domain *ctx,
> +                                  struct cpu_user_regs *regs,
> +                                  struct optee_std_call *call)

Can you add a comment on top of the function explaining what it does?

> +{
> +    struct optee_msg_arg *guest_arg;
> +    struct page_info *page;
> +    unsigned int i;
> +    uint32_t attr;
> +
> +    /* copy_std_request() validated IPA for us */

Not really, the guest is free to modify the stage-2 mapping on another vCPU 
while this is happening. I agree that the guest would only be shooting itself in 
the foot, but we at least need to avoid weird behavior happening.

In that case, I would check that the type is p2m_ram_rw as you don't want to 
write in read-only or foreign mapping.

Also, as copy_std_request() and copy_std_request_back() may not be called in the 
same "thread", it would be useful if you specified the interaction a bit more.
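
For the type check, I am thinking of something along those lines (sketch only):

    p2m_type_t t;

    page = get_page_from_gfn(current->domain, paddr_to_pfn(call->guest_arg_ipa),
                             &t, P2M_ALLOC);
    if ( !page || t != p2m_ram_rw )
    {
        if ( page )
            put_page(page);
        return;
    }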

> +    page = get_page_from_gfn(current->domain, paddr_to_pfn(call->guest_arg_ipa),

Please use gfn_x(gaddr_to_gfn(...)) to clarify this is a gfn. The gfn_x will be 
unnecessary soon with a cleanup that is currently under review.

> +                             NULL, P2M_ALLOC);
> +    if ( !page )
> +        return;
> +
> +    guest_arg = map_domain_page(page_to_mfn(page));

So here you assume that PAGE_SIZE == OPTEE_MSG_NONCONTIG_PAGE_SIZE. Can you add 
a BUILD_BUG_ON just above (with a comment) so we don't get some nasty surprise 
with 64K support.

Also, you should be able to use __map_domain_page(page) here.
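
I.e. something like:

    /*
     * The response is copied back by mapping a single Xen page, so the
     * guest argument buffer must fit exactly in one page.
     */
    BUILD_BUG_ON(PAGE_SIZE != OPTEE_MSG_NONCONTIG_PAGE_SIZE);

    guest_arg = __map_domain_page(page);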

> +
> +    guest_arg->ret = call->xen_arg->ret;
> +    guest_arg->ret_origin = call->xen_arg->ret_origin;
> +    guest_arg->session = call->xen_arg->session;

NIT: newline here please.

> +    for ( i = 0; i < call->xen_arg->num_params; i++ )
> +    {
> +        attr = call->xen_arg->params[i].attr;
> +
> +        switch ( attr & OPTEE_MSG_ATTR_TYPE_MASK )
> +        {
> +        case OPTEE_MSG_ATTR_TYPE_TMEM_OUTPUT:
> +        case OPTEE_MSG_ATTR_TYPE_TMEM_INOUT:
> +            guest_arg->params[i].u.tmem.size =
> +                call->xen_arg->params[i].u.tmem.size;
> +            continue;
> +        case OPTEE_MSG_ATTR_TYPE_VALUE_OUTPUT:
> +        case OPTEE_MSG_ATTR_TYPE_VALUE_INOUT:
> +            guest_arg->params[i].u.value.a =
> +                call->xen_arg->params[i].u.value.a;
> +            guest_arg->params[i].u.value.b =
> +                call->xen_arg->params[i].u.value.b;
> +            continue;
> +        case OPTEE_MSG_ATTR_TYPE_RMEM_OUTPUT:
> +        case OPTEE_MSG_ATTR_TYPE_RMEM_INOUT:
> +            guest_arg->params[i].u.rmem.size =
> +                call->xen_arg->params[i].u.rmem.size;
> +            continue;
> +        case OPTEE_MSG_ATTR_TYPE_NONE:
> +        case OPTEE_MSG_ATTR_TYPE_RMEM_INPUT:
> +        case OPTEE_MSG_ATTR_TYPE_TMEM_INPUT:
> +            continue;
> +        }
> +    }
> +
> +    unmap_domain_page(guest_arg);
> +    put_page(page);
> +}
> +
> +static void execute_std_call(struct optee_domain *ctx,
> +                             struct cpu_user_regs *regs,
> +                             struct optee_std_call *call)
> +{
> +    register_t optee_ret;
> +
> +    forward_call(regs);
> +
> +    optee_ret = get_user_reg(regs, 0);
> +    if ( OPTEE_SMC_RETURN_IS_RPC(optee_ret) )
> +    {
> +        call->optee_thread_id = get_user_reg(regs, 3);
> +        call->rpc_op = OPTEE_SMC_RETURN_GET_RPC_FUNC(optee_ret);
> +        put_std_call(ctx, call);
> +        return;
> +    }
> +
> +    copy_std_request_back(ctx, regs, call);
> +
> +    put_std_call(ctx, call);
> +    free_std_call(ctx, call);
> +}

Most of the code in this patch is self-explanatory, which is quite nice :). 
However, I think this function would require explaining the logic a bit. For 
instance, in which case the call will be freed.

> +
> +static bool handle_std_call(struct optee_domain *ctx,
> +                            struct cpu_user_regs *regs)
> +{
> +    struct optee_std_call *call = allocate_std_call(ctx);
> +
> +    if ( !call )
> +        return false;
> +
> +    if ( !copy_std_request(regs, call) )
> +        goto err;
> +
> +    /* Now we can safely examine contents of command buffer */
> +    if ( OPTEE_MSG_GET_ARG_SIZE(call->xen_arg->num_params) >
> +         OPTEE_MSG_NONCONTIG_PAGE_SIZE )
> +        goto err;
> +
> +    switch ( call->xen_arg->cmd )
> +    {
> +    case OPTEE_MSG_CMD_OPEN_SESSION:
> +    case OPTEE_MSG_CMD_CLOSE_SESSION:
> +    case OPTEE_MSG_CMD_INVOKE_COMMAND:
> +    case OPTEE_MSG_CMD_CANCEL:
> +    case OPTEE_MSG_CMD_REGISTER_SHM:
> +    case OPTEE_MSG_CMD_UNREGISTER_SHM:
> +        break;
> +    default:
> +        goto err;
> +    }
> +
> +    execute_std_call(ctx, regs, call);
> +
> +    return true;

This function is a bit odd to read. I think it would be more clear if you move 
this code before the break.

> +
> +err:
> +    put_std_call(ctx, call);
> +    free_std_call(ctx, call);
> +
> +    return false;
> +}
> +
> +static bool handle_rpc(struct optee_domain *ctx, struct cpu_user_regs *regs)
> +{
> +    struct optee_std_call *call;
> +    int optee_thread_id = get_user_reg(regs, 3);
> +
> +    call = get_std_call(ctx, optee_thread_id);
> +
> +    if ( !call )
> +        return false;
> +
> +    switch ( call->rpc_op )
> +    {
> +    case OPTEE_SMC_RPC_FUNC_ALLOC:
> +        /* TODO: Add handling */
> +        break;
> +    case OPTEE_SMC_RPC_FUNC_FREE:
> +        /* TODO: Add handling */
> +        break;
> +    case OPTEE_SMC_RPC_FUNC_FOREIGN_INTR:
> +        break;
> +    case OPTEE_SMC_RPC_FUNC_CMD:
> +        /* TODO: Add handling */
> +        break;
> +    }
> +
> +    execute_std_call(ctx, regs, call);
> +    return true;
>   }
>   
>   static bool handle_exchange_capabilities(struct cpu_user_regs *regs)
> @@ -161,6 +472,8 @@ static bool handle_exchange_capabilities(struct cpu_user_regs *regs)
>   
>   static bool optee_handle_call(struct cpu_user_regs *regs)
>   {
> +    struct optee_domain *ctx = current->domain->arch.tee;
> +
>       switch ( get_user_reg(regs, 0) )
>       {
>       case OPTEE_SMC_CALLS_COUNT:
> @@ -170,8 +483,6 @@ static bool optee_handle_call(struct cpu_user_regs *regs)
>       case OPTEE_SMC_FUNCID_GET_OS_REVISION:
>       case OPTEE_SMC_ENABLE_SHM_CACHE:
>       case OPTEE_SMC_DISABLE_SHM_CACHE:
> -    case OPTEE_SMC_CALL_WITH_ARG:
> -    case OPTEE_SMC_CALL_RETURN_FROM_RPC:
>           forward_call(regs);
>           return true;
>       case OPTEE_SMC_GET_SHM_CONFIG:
> @@ -180,6 +491,10 @@ static bool optee_handle_call(struct cpu_user_regs *regs)
>           return true;
>       case OPTEE_SMC_EXCHANGE_CAPABILITIES:
>           return handle_exchange_capabilities(regs);
> +    case OPTEE_SMC_CALL_WITH_ARG:
> +        return handle_std_call(ctx, regs);
> +    case OPTEE_SMC_CALL_RETURN_FROM_RPC:
> +        return handle_rpc(ctx, regs);
>       default:
>           return false;
>       }
> diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
> index 175de44927..88b48697bd 100644
> --- a/xen/include/asm-arm/domain.h
> +++ b/xen/include/asm-arm/domain.h
> @@ -97,6 +97,9 @@ struct arch_domain
>       struct vpl011 vpl011;
>   #endif
>   
> +#ifdef CONFIG_TEE
> +    void *tee;
> +#endif

Did you look whether there are any hole in arch_domain that could be re-used?

>   }  __cacheline_aligned;
>   
>   struct arch_vcpu
>

Cheers,

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH v3 06/11] optee: add std call handling
  2019-01-16 18:45   ` Julien Grall
@ 2019-01-17 15:21     ` Volodymyr Babchuk
  2019-01-17 18:18       ` Julien Grall
  0 siblings, 1 reply; 47+ messages in thread
From: Volodymyr Babchuk @ 2019-01-17 15:21 UTC (permalink / raw)
  To: Julien Grall, Jens Wiklander
  Cc: Volodymyr Babchuk, xen-devel, Stefano Stabellini,
	Volodymyr Babchuk, tee-dev


Hello Julien,

Julien Grall writes:

> Hi Volodymyr,
>
> On 18/12/2018 21:11, Volodymyr Babchuk wrote:
>> From: Volodymyr Babchuk <vlad.babchuk@gmail.com>
>>
>> The main way to communicate with OP-TEE is to issue standard SMCCC
>> call. "Standard" is a SMCCC term and it means that call can be
>> interrupted and OP-TEE can return control to NW before completing
>> the call.
>>
>> In contrast with fast calls, where arguments and return values
>> are passed in registers, standard calls use shared memory. Register
>> pair a1,a2 holds 64-bit PA of command buffer, where all arguments
>> are stored and which is used to return data. OP-TEE internally
>> copies contents of this buffer into own secure memory before accessing
>> and validating any data in command buffer. This is done to make sure
>> that NW will not change contents of the validated parameters.
>>
>> Mediator needs to do the same for number of reasons:
>>
>> 1. To make sure that guest will not change data after validation.
>> 2. To translate IPAs to PAs in the command buffer (this is not done
>>     in this patch).
>> 3. To hide translated address from guest, so it will not be able
>>     to do IPA->PA translation by misusing mediator.
>>
>> During standard call OP-TEE can issue multiple "RPC returns", asking
>> NW to do some work for OP-TEE. NW then issues special call
>> OPTEE_SMC_CALL_RETURN_FROM_RPC to resume handling of the original call.
>> Thus, mediator needs to maintain context for original standard call
>> during multiple SMCCC calls.
>>
>> Standard call is considered complete, when returned value is
>> not a RPC request.
>>
>> Signed-off-by: Volodymyr Babchuk <vlad.babchuk@gmail.com>
>> ---
>>
>>   Changes from v2:
>>    - renamed struct domain_ctx to struct optee_domain
>>    - fixed coding style
>>    - Now I use access_guest_memory_by_ipa() instead of mappings
>>      to read command buffer
>>    - Added tracking for in flight calls, so guest can't resume
>>      the same call from two CPUs simultaneously
>>
>>   xen/arch/arm/tee/optee.c     | 319 ++++++++++++++++++++++++++++++++++-
>>   xen/include/asm-arm/domain.h |   3 +
>>   2 files changed, 320 insertions(+), 2 deletions(-)
>>
>> diff --git a/xen/arch/arm/tee/optee.c b/xen/arch/arm/tee/optee.c
>> index 584241b03a..dc90e2ed8e 100644
>> --- a/xen/arch/arm/tee/optee.c
>> +++ b/xen/arch/arm/tee/optee.c
>> @@ -12,6 +12,8 @@
>>    */
>>     #include <xen/device_tree.h>
>> +#include <xen/domain_page.h>
>> +#include <xen/guest_access.h>
>>   #include <xen/sched.h>
>>   #include <asm/smccc.h>
>>   #include <asm/tee/tee.h>
>> @@ -22,11 +24,38 @@
>>   /* Client ID 0 is reserved for hypervisor itself */
>>   #define OPTEE_CLIENT_ID(domain) (domain->domain_id + 1)
>>   +/*
>> + * Maximal number of concurrent standard calls from one guest. This
>> + * corresponds to OPTEE configuration option CFG_NUM_THREADS, because
>> + * OP-TEE spawns a thread for every standard call.
>
> Looking at OP-TEE, CFG_NUM_THREADS will vary depending on the
> platform. Is there any way to probe that number of threads from Xen?
Yes, this is per-platform configuration.
The easiest way is to add a Fast SMC to OP-TEE that will report this
parameter.
Jens, what do you think about adding an additional call?
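
On the XEN side the probe could then be as simple as the following (purely
illustrative: OPTEE_SMC_GET_THREAD_COUNT does not exist today, the function ID
would have to be allocated in the OP-TEE ABI first):

    static int probe_num_threads(void)
    {
        struct arm_smccc_res resp;

        /* Hypothetical fast call reporting CFG_NUM_THREADS. */
        arm_smccc_smc(OPTEE_SMC_GET_THREAD_COUNT, 0, 0, 0, 0, 0, 0, 0, &resp);

        if ( resp.a0 != OPTEE_SMC_RETURN_OK )
            return -ENODEV;

        return resp.a1;
    }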

> In any case, I think we should update the comment to reflect that this
> seems to be the maximum CFG_NUM_THREADS supported by any upstream
> platform.

Actually, the OP-TEE protocol has a way to handle a limited number of
threads correctly. OP-TEE can report that all threads are busy and the
client will wait for a free one. XEN can do the same, although this is not
implemented in this patch series. But I can add this.

Basically this means that everything can work correctly even with
MAX_STD_CALLS==1. It just will not be as efficient.
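
For example, when allocate_std_call() fails, the mediator could mimic what
OP-TEE itself does when it runs out of threads (rough sketch):

    call = allocate_std_call(ctx);
    if ( !call )
    {
        /* All call slots are busy: ask the guest to retry later. */
        set_return(regs, OPTEE_SMC_RETURN_ETHREAD_LIMIT);
        return true;
    }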

>> + */
>> +#define MAX_STD_CALLS   16
>> +
>>   #define OPTEE_KNOWN_NSEC_CAPS OPTEE_SMC_NSEC_CAP_UNIPROCESSOR
>>   #define OPTEE_KNOWN_SEC_CAPS (OPTEE_SMC_SEC_CAP_HAVE_RESERVED_SHM | \
>>                                 OPTEE_SMC_SEC_CAP_UNREGISTERED_SHM |  \
>>                                 OPTEE_SMC_SEC_CAP_DYNAMIC_SHM)
>>   +/*
>> + * Call context. OP-TEE can issue multiple RPC returns during one call.
>> + * We need to preserve context during them.
>> + */
>> +struct optee_std_call {
>> +    struct list_head list;
>> +    struct optee_msg_arg *xen_arg;
>> +    paddr_t guest_arg_ipa;
>> +    int optee_thread_id;
>> +    int rpc_op;
>> +    bool in_flight;
>> +};
>> +
>> +/* Domain context */
>> +struct optee_domain {
>> +    struct list_head call_list;
>> +    atomic_t call_count;
>> +    spinlock_t lock;
>> +};
>> +
>>   static bool optee_probe(void)
>>   {
>>       struct dt_device_node *node;
>> @@ -52,6 +81,11 @@ static bool optee_probe(void)
>>   static int optee_enable(struct domain *d)
>>   {
>>       struct arm_smccc_res resp;
>> +    struct optee_domain *ctx;
>> +
>> +    ctx = xzalloc(struct optee_domain);
>> +    if ( !ctx )
>> +        return -ENOMEM;
>>         /*
>>        * Inform OP-TEE about a new guest.
>> @@ -69,9 +103,16 @@ static int optee_enable(struct domain *d)
>>       {
>>           gprintk(XENLOG_WARNING, "Unable to create OPTEE client: rc = 0x%X\n",
>>                   (uint32_t)resp.a0);
>> +        xfree(ctx);
>>           return -ENODEV;
>>       }
>>   +    INIT_LIST_HEAD(&ctx->call_list);
>> +    atomic_set(&ctx->call_count, 0);
>> +    spin_lock_init(&ctx->lock);
>> +
>> +    d->arch.tee = ctx;
>> +
>>       return 0;
>>   }
>>   @@ -111,9 +152,86 @@ static void set_return(struct cpu_user_regs
>> *regs, uint32_t ret)
>>       set_user_reg(regs, 7, 0);
>>   }
>>   +static struct optee_std_call *allocate_std_call(struct
>> optee_domain *ctx)
>> +{
>> +    struct optee_std_call *call;
>> +    int count;
>> +
>> +    /* Make sure that guest does not execute more than MAX_STD_CALLS */
>> +    count = atomic_add_unless(&ctx->call_count, 1, MAX_STD_CALLS);
>> +    if ( count == MAX_STD_CALLS )
>> +        return NULL;
>> +
>> +    call = xzalloc(struct optee_std_call);
>> +    if ( !call )
>> +    {
>> +        atomic_dec(&ctx->call_count);
>> +        return NULL;
>> +    }
>> +
>> +    call->optee_thread_id = -1;
>> +    call->in_flight = true;
>> +
>> +    spin_lock(&ctx->lock);
>> +    list_add_tail(&call->list, &ctx->call_list);
>> +    spin_unlock(&ctx->lock);
>> +
>> +    return call;
>> +}
>> +
>> +static void free_std_call(struct optee_domain *ctx,
>> +                          struct optee_std_call *call)
>> +{
>> +    atomic_dec(&ctx->call_count);
>> +
>> +    spin_lock(&ctx->lock);
>> +    list_del(&call->list);
>> +    spin_unlock(&ctx->lock);
>> +
>> +    ASSERT(!call->in_flight);
>> +    xfree(call->xen_arg);
>> +    xfree(call);
>> +}
>> +
>> +static struct optee_std_call *get_std_call(struct optee_domain *ctx,
>> +                                           int thread_id)
>> +{
>> +    struct optee_std_call *call;
>> +
>> +    spin_lock(&ctx->lock);
>> +    list_for_each_entry( call, &ctx->call_list, list )
>> +    {
>> +        if ( call->optee_thread_id == thread_id )
>> +        {
>> +            if ( call->in_flight )
>> +            {
>> +                gprintk(XENLOG_WARNING, "Guest tries to execute call which is already in flight");
>> +                goto out;
>> +            }
>> +            call->in_flight = true;
>> +            spin_unlock(&ctx->lock);
>> +            return call;
>> +        }
>> +    }
>> +out:
>> +    spin_unlock(&ctx->lock);
>> +
>> +    return NULL;
>> +}
>> +
>> +static void put_std_call(struct optee_domain *ctx, struct optee_std_call *call)
>> +{
>> +    spin_lock(&ctx->lock);
>> +    ASSERT(call->in_flight);
>> +    call->in_flight = false;
>> +    spin_unlock(&ctx->lock);
>> +}
>> +
>>   static void optee_domain_destroy(struct domain *d)
>>   {
>>       struct arm_smccc_res resp;
>> +    struct optee_std_call *call, *call_tmp;
>> +    struct optee_domain *ctx = d->arch.tee;
>>
>>       /* At this time all domain VCPUs should be stopped */
>>   @@ -124,6 +242,199 @@ static void optee_domain_destroy(struct
>> domain *d)
>>        */
>>       arm_smccc_smc(OPTEE_SMC_VM_DESTROYED, OPTEE_CLIENT_ID(d), 0, 0, 0, 0, 0, 0,
>>                     &resp);
>
> This function can be called without enable and should be idempotent. So I would
> idempotent. So I woudld check d->arch.tee before and...
Nothing bad will happen if it is called without a call to enable.

But yes, I need to check for ctx != NULL.

>> +    ASSERT(!spin_is_locked(&ctx->lock));
>> +
>> +    list_for_each_entry_safe( call, call_tmp, &ctx->call_list, list )
>> +        free_std_call(ctx, call);
>> +
>> +    ASSERT(!atomic_read(&ctx->call_count));
>> +
>> +    xfree(d->arch.tee);
>
> use XFREE here.
Oh, looks like I missed the introduction of this macro. Thank you for pointing
it out.

>> +}
>> +
>> +/*
>> + * Copy command buffer into xen memory to:
>> + * 1) Hide translated addresses from guest
>> + * 2) Make sure that guest wouldn't change data in command buffer during call
>> + */
>> +static bool copy_std_request(struct cpu_user_regs *regs,
>> +                             struct optee_std_call *call)
>> +{
>> +    paddr_t xen_addr;
>> +
>> +    call->guest_arg_ipa = (paddr_t)get_user_reg(regs, 1) << 32 |
>> +                            get_user_reg(regs, 2);
>
> NIT: The indentation looks weird here.
>
>> +
>> +    /*
>> +     * Command buffer should start at page boundary.
>> +     * This is OP-TEE ABI requirement.
>> +     */
>> +    if ( call->guest_arg_ipa & (OPTEE_MSG_NONCONTIG_PAGE_SIZE - 1) )
>> +        return false;
>> +
>> +    call->xen_arg = _xmalloc(OPTEE_MSG_NONCONTIG_PAGE_SIZE,
>> +                             OPTEE_MSG_NONCONTIG_PAGE_SIZE);
>> +    if ( !call->xen_arg )
>> +        return false;
>> +
>> +    BUILD_BUG_ON(OPTEE_MSG_NONCONTIG_PAGE_SIZE > PAGE_SIZE);
>
> As you use _xmalloc, you should not need this. This is only necessary
> if you use alloc_xenheap_page.
>
Yes, right. This is leftover from time when I mapped guest page into
XEN. I'll move it down, to place where I map that page.

> I am wondering whether it is wise to allocate the memory from xenheap
> and not domheap. While on Arm64 (for now) xenheap and domheap are the
> same, on Arm32 they are different. The xenheap is at most 1GB, so
> pretty limited.
Honestly, I have no opinion there. What are limitations of domheap in
this case?


> Furthermore, using domheap would have the advantage to allow in the
> future accounting the allocation to the guest and add more safety
> (there are discussion to make domheap per domain).
>
>> +
>> +    access_guest_memory_by_ipa(current->domain, call->guest_arg_ipa,
>> +                               call->xen_arg, OPTEE_MSG_NONCONTIG_PAGE_SIZE,
>> +                               false);
>
> You need to check the return of access_guest_memory_by_ipa as this
> function can fail.
Will do.

>> +
>> +    xen_addr = virt_to_maddr(call->xen_arg);
>> +
>> +    set_user_reg(regs, 1, xen_addr >> 32);
>> +    set_user_reg(regs, 2, xen_addr & 0xFFFFFFFF);
>> +
>> +    return true;
>> +}
>> +
>> +static void copy_std_request_back(struct optee_domain *ctx,
>> +                                  struct cpu_user_regs *regs,
>> +                                  struct optee_std_call *call)
>
> Can you add a comment on top of the function explaining what it does?
Yes, sure.

"Copy OP-TEE response back into guest's request buffer" will be sufficient?

>> +{
>> +    struct optee_msg_arg *guest_arg;
>> +    struct page_info *page;
>> +    unsigned int i;
>> +    uint32_t attr;
>> +
>> +    /* copy_std_request() validated IPA for us */
>
> Not really, the guest is free to modify the stage-2 mapping on another
> vCPU while this is happening. I agree that the guest will shoot
> himself, but we at least need to not have weird behavior happening.
>
> In that case, I would check that the type is p2m_ram_rw as you don't
> want to write in read-only or foreign mapping.
How can I do this atomically? I.e. what if the guest mangles the p2m right
after the check?

> Also, as copy_std_request() and copy_std_request_back may not be
> called in the same "thread" it would be useful if you specify a bit
> more the interaction.
I'm not sure what you mean there.

>
>> +    page = get_page_from_gfn(current->domain, paddr_to_pfn(call->guest_arg_ipa),
>
> Please use gfn_x(gaddr_to_gfn(...)) to clarify this is a gfn. The
> gfn_x will be unnecessary soon with a cleanup that is currently under
> review.
So there is a chance that it will be removed by the time this patch
series gets merged? :)

>
>> +                             NULL, P2M_ALLOC);
>> +    if ( !page )
>> +        return;
>> +
>> +    guest_arg = map_domain_page(page_to_mfn(page));
>
> So here you assume that PAGE_SIZE ==
> OPTEE_MSG_NONCONTIG_PAGE_SIZE. Can you add a BUILD_BUG_ON just above
> (with a comment) so we don't get some nasty surprise with 64K support.
Yes, sure.

> Also, you should be able to use __map_domain_page(page) here.
Oh, thank you for pointing to this macro. Not the best descriptive name,
I must say.

>
>> +
>> +    guest_arg->ret = call->xen_arg->ret;
>> +    guest_arg->ret_origin = call->xen_arg->ret_origin;
>> +    guest_arg->session = call->xen_arg->session;
>
> NIT: newline here please.
>
>> +    for ( i = 0; i < call->xen_arg->num_params; i++ )
>> +    {
>> +        attr = call->xen_arg->params[i].attr;
>> +
>> +        switch ( attr & OPTEE_MSG_ATTR_TYPE_MASK )
>> +        {
>> +        case OPTEE_MSG_ATTR_TYPE_TMEM_OUTPUT:
>> +        case OPTEE_MSG_ATTR_TYPE_TMEM_INOUT:
>> +            guest_arg->params[i].u.tmem.size =
>> +                call->xen_arg->params[i].u.tmem.size;
>> +            continue;
>> +        case OPTEE_MSG_ATTR_TYPE_VALUE_OUTPUT:
>> +        case OPTEE_MSG_ATTR_TYPE_VALUE_INOUT:
>> +            guest_arg->params[i].u.value.a =
>> +                call->xen_arg->params[i].u.value.a;
>> +            guest_arg->params[i].u.value.b =
>> +                call->xen_arg->params[i].u.value.b;
>> +            continue;
>> +        case OPTEE_MSG_ATTR_TYPE_RMEM_OUTPUT:
>> +        case OPTEE_MSG_ATTR_TYPE_RMEM_INOUT:
>> +            guest_arg->params[i].u.rmem.size =
>> +                call->xen_arg->params[i].u.rmem.size;
>> +            continue;
>> +        case OPTEE_MSG_ATTR_TYPE_NONE:
>> +        case OPTEE_MSG_ATTR_TYPE_RMEM_INPUT:
>> +        case OPTEE_MSG_ATTR_TYPE_TMEM_INPUT:
>> +            continue;
>> +        }
>> +    }
>> +
>> +    unmap_domain_page(guest_arg);
>> +    put_page(page);
>> +}
>> +
>> +static void execute_std_call(struct optee_domain *ctx,
>> +                             struct cpu_user_regs *regs,
>> +                             struct optee_std_call *call)
>> +{
>> +    register_t optee_ret;
>> +
>> +    forward_call(regs);
>> +
>> +    optee_ret = get_user_reg(regs, 0);
>> +    if ( OPTEE_SMC_RETURN_IS_RPC(optee_ret) )
>> +    {
>> +        call->optee_thread_id = get_user_reg(regs, 3);
>> +        call->rpc_op = OPTEE_SMC_RETURN_GET_RPC_FUNC(optee_ret);
>> +        put_std_call(ctx, call);
>> +        return;
>> +    }
>> +
>> +    copy_std_request_back(ctx, regs, call);
>> +
>> +    put_std_call(ctx, call);
>> +    free_std_call(ctx, call);
>> +}
>
> Most of the code in this patch is self-explaining, which is quite nice
> :). However, I think this function would require explaining a bit the
> logic. For instance in which case the call will be freed.
Thank you :)
Yes, this depends on whether the call was completed or was interrupted due to
an RPC, in which case it will be resumed later. I'll add a comment.
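
Something like this, perhaps:

    /*
     * Forward the call to OP-TEE. If OP-TEE returns an RPC request, the
     * call is only preempted: remember the OP-TEE thread id and the
     * requested RPC function, and keep the context so the guest can
     * resume it later with OPTEE_SMC_CALL_RETURN_FROM_RPC. Otherwise the
     * call is complete: copy the response back to the guest and free the
     * context.
     */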

>> +
>> +static bool handle_std_call(struct optee_domain *ctx,
>> +                            struct cpu_user_regs *regs)
>> +{
>> +    struct optee_std_call *call = allocate_std_call(ctx);
>> +
>> +    if ( !call )
>> +        return false;
>> +
>> +    if ( !copy_std_request(regs, call) )
>> +        goto err;
>> +
>> +    /* Now we can safely examine contents of command buffer */
>> +    if ( OPTEE_MSG_GET_ARG_SIZE(call->xen_arg->num_params) >
>> +         OPTEE_MSG_NONCONTIG_PAGE_SIZE )
>> +        goto err;
>> +
>> +    switch ( call->xen_arg->cmd )
>> +    {
>> +    case OPTEE_MSG_CMD_OPEN_SESSION:
>> +    case OPTEE_MSG_CMD_CLOSE_SESSION:
>> +    case OPTEE_MSG_CMD_INVOKE_COMMAND:
>> +    case OPTEE_MSG_CMD_CANCEL:
>> +    case OPTEE_MSG_CMD_REGISTER_SHM:
>> +    case OPTEE_MSG_CMD_UNREGISTER_SHM:
>> +        break;
>> +    default:
>> +        goto err;
>> +    }
>> +
>> +    execute_std_call(ctx, regs, call);
>> +
>> +    return true;
>
> This function is a bit odd to read. I think it would be more clear if
> you move this code before the break.
Yes, you are right.

>> +
>> +err:
>> +    put_std_call(ctx, call);
>> +    free_std_call(ctx, call);
>> +
>> +    return false;
>> +}
>> +
>> +static bool handle_rpc(struct optee_domain *ctx, struct cpu_user_regs *regs)
>> +{
>> +    struct optee_std_call *call;
>> +    int optee_thread_id = get_user_reg(regs, 3);
>> +
>> +    call = get_std_call(ctx, optee_thread_id);
>> +
>> +    if ( !call )
>> +        return false;
>> +
>> +    switch ( call->rpc_op )
>> +    {
>> +    case OPTEE_SMC_RPC_FUNC_ALLOC:
>> +        /* TODO: Add handling */
>> +        break;
>> +    case OPTEE_SMC_RPC_FUNC_FREE:
>> +        /* TODO: Add handling */
>> +        break;
>> +    case OPTEE_SMC_RPC_FUNC_FOREIGN_INTR:
>> +        break;
>> +    case OPTEE_SMC_RPC_FUNC_CMD:
>> +        /* TODO: Add handling */
>> +        break;
>> +    }
>> +
>> +    execute_std_call(ctx, regs, call);
>> +    return true;
>>   }
>>     static bool handle_exchange_capabilities(struct cpu_user_regs
>> *regs)
>> @@ -161,6 +472,8 @@ static bool handle_exchange_capabilities(struct cpu_user_regs *regs)
>>     static bool optee_handle_call(struct cpu_user_regs *regs)
>>   {
>> +    struct optee_domain *ctx = current->domain->arch.tee;
>> +
>>       switch ( get_user_reg(regs, 0) )
>>       {
>>       case OPTEE_SMC_CALLS_COUNT:
>> @@ -170,8 +483,6 @@ static bool optee_handle_call(struct cpu_user_regs *regs)
>>       case OPTEE_SMC_FUNCID_GET_OS_REVISION:
>>       case OPTEE_SMC_ENABLE_SHM_CACHE:
>>       case OPTEE_SMC_DISABLE_SHM_CACHE:
>> -    case OPTEE_SMC_CALL_WITH_ARG:
>> -    case OPTEE_SMC_CALL_RETURN_FROM_RPC:
>>           forward_call(regs);
>>           return true;
>>       case OPTEE_SMC_GET_SHM_CONFIG:
>> @@ -180,6 +491,10 @@ static bool optee_handle_call(struct cpu_user_regs *regs)
>>           return true;
>>       case OPTEE_SMC_EXCHANGE_CAPABILITIES:
>>           return handle_exchange_capabilities(regs);
>> +    case OPTEE_SMC_CALL_WITH_ARG:
>> +        return handle_std_call(ctx, regs);
>> +    case OPTEE_SMC_CALL_RETURN_FROM_RPC:
>> +        return handle_rpc(ctx, regs);
>>       default:
>>           return false;
>>       }
>> diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
>> index 175de44927..88b48697bd 100644
>> --- a/xen/include/asm-arm/domain.h
>> +++ b/xen/include/asm-arm/domain.h
>> @@ -97,6 +97,9 @@ struct arch_domain
>>       struct vpl011 vpl011;
>>   #endif
>>   +#ifdef CONFIG_TEE
>> +    void *tee;
>> +#endif
>
> Did you look whether there are any hole in arch_domain that could be re-used?
I thought about that. But what are the chances of finding a 64-bit-wide,
64-bit-aligned hole in a structure? If I remember the C standard correctly,
there is no reason for the compiler to leave such holes.

I'm talking about aarch64 here, but I assume that this is true for aarch32/armv7.

>>   }  __cacheline_aligned;
>>     struct arch_vcpu
>>
>
> Cheers,


--
Best regards,Volodymyr Babchuk
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH v3 06/11] optee: add std call handling
  2019-01-17 15:21     ` Volodymyr Babchuk
@ 2019-01-17 18:18       ` Julien Grall
  2019-01-17 19:13         ` Volodymyr Babchuk
  0 siblings, 1 reply; 47+ messages in thread
From: Julien Grall @ 2019-01-17 18:18 UTC (permalink / raw)
  To: Volodymyr Babchuk, Jens Wiklander
  Cc: Volodymyr Babchuk, xen-devel, Stefano Stabellini, tee-dev

Hi Volodymyr,

On 17/01/2019 15:21, Volodymyr Babchuk wrote:
> Julien Grall writes:
> 
>> Hi Volodymyr,
>>
>> On 18/12/2018 21:11, Volodymyr Babchuk wrote:
>>> From: Volodymyr Babchuk <vlad.babchuk@gmail.com>
>>>
>>> The main way to communicate with OP-TEE is to issue standard SMCCC
>>> call. "Standard" is a SMCCC term and it means that call can be
>>> interrupted and OP-TEE can return control to NW before completing
>>> the call.
>>>
>>> In contrast with fast calls, where arguments and return values
>>> are passed in registers, standard calls use shared memory. Register
>>> pair a1,a2 holds 64-bit PA of command buffer, where all arguments
>>> are stored and which is used to return data. OP-TEE internally
>>> copies contents of this buffer into own secure memory before accessing
>>> and validating any data in command buffer. This is done to make sure
>>> that NW will not change contents of the validated parameters.
>>>
>>> Mediator needs to do the same for number of reasons:
>>>
>>> 1. To make sure that guest will not change data after validation.
>>> 2. To translate IPAs to PAs in the command buffer (this is not done
>>>      in this patch).
>>> 3. To hide translated address from guest, so it will not be able
>>>      to do IPA->PA translation by misusing mediator.
>>>
>>> During standard call OP-TEE can issue multiple "RPC returns", asking
>>> NW to do some work for OP-TEE. NW then issues special call
>>> OPTEE_SMC_CALL_RETURN_FROM_RPC to resume handling of the original call.
>>> Thus, mediator needs to maintain context for original standard call
>>> during multiple SMCCC calls.
>>>
>>> Standard call is considered complete, when returned value is
>>> not a RPC request.
>>>
>>> Signed-off-by: Volodymyr Babchuk <vlad.babchuk@gmail.com>
>>> ---
>>>
>>>    Changes from v2:
>>>     - renamed struct domain_ctx to struct optee_domain
>>>     - fixed coding style
>>>     - Now I use access_guest_memory_by_ipa() instead of mappings
>>>       to read command buffer
>>>     - Added tracking for in flight calls, so guest can't resume
>>>       the same call from two CPUs simultaneously
>>>
>>>    xen/arch/arm/tee/optee.c     | 319 ++++++++++++++++++++++++++++++++++-
>>>    xen/include/asm-arm/domain.h |   3 +
>>>    2 files changed, 320 insertions(+), 2 deletions(-)
>>>
>>> diff --git a/xen/arch/arm/tee/optee.c b/xen/arch/arm/tee/optee.c
>>> index 584241b03a..dc90e2ed8e 100644
>>> --- a/xen/arch/arm/tee/optee.c
>>> +++ b/xen/arch/arm/tee/optee.c
>>> @@ -12,6 +12,8 @@
>>>     */
>>>      #include <xen/device_tree.h>
>>> +#include <xen/domain_page.h>
>>> +#include <xen/guest_access.h>
>>>    #include <xen/sched.h>
>>>    #include <asm/smccc.h>
>>>    #include <asm/tee/tee.h>
>>> @@ -22,11 +24,38 @@
>>>    /* Client ID 0 is reserved for hypervisor itself */
>>>    #define OPTEE_CLIENT_ID(domain) (domain->domain_id + 1)
>>>    +/*
>>> + * Maximal number of concurrent standard calls from one guest. This
>>> + * corresponds to OPTEE configuration option CFG_NUM_THREADS, because
>>> + * OP-TEE spawns a thread for every standard call.
>>
>> Looking at OP-TEE, CFG_NUM_THREADS will vary depending on the
>> platform. Is there any way to probe that number of threads from Xen?
> Yes, this is per-platform configuration.
> Easiest way is to add Fast SMC to OP-TEE that will report this
> parameter.
> Jens, what do you think about adding additional call?
> 
>> In any case, I think we should update the comment to reflect that this
>> seems to be the maximum CFG_NUM_THREADS supported by any upstream
>> platform.
> 
> Actually, OP-TEE protocol have possibility to handle limited number of
> threads correctly. OP-TEE can report that all threads are busy and
> client will wait for a free one. XEN can do the same, although this is not
> implemented in this patch series. But I can add this.

Could you expand on "wait"? Will it block in OP-TEE/Xen or does it return to the 
guest?

> 
> Basically this means that all can work correctly even with
> MAX_STD_CALLS==1. It just will be not so efficient.

Given the OS is not aware of the number of threads, the problem would be the 
same without Xen in between. Am I right?

[...]

>>> +
>>> +    /*
>>> +     * Command buffer should start at page boundary.
>>> +     * This is OP-TEE ABI requirement.
>>> +     */
>>> +    if ( call->guest_arg_ipa & (OPTEE_MSG_NONCONTIG_PAGE_SIZE - 1) )
>>> +        return false;
>>> +
>>> +    call->xen_arg = _xmalloc(OPTEE_MSG_NONCONTIG_PAGE_SIZE,
>>> +                             OPTEE_MSG_NONCONTIG_PAGE_SIZE);
>>> +    if ( !call->xen_arg )
>>> +        return false;
>>> +
>>> +    BUILD_BUG_ON(OPTEE_MSG_NONCONTIG_PAGE_SIZE > PAGE_SIZE);
>>
>> As you use _xmalloc, you should not need this. This is only necessary
>> if you use alloc_xenheap_page.
>>
> Yes, right. This is leftover from time when I mapped guest page into
> XEN. I'll move it down, to place where I map that page.
> 
>> I am wondering whether it is wise to allocate the memory from xenheap
>> and not domheap. While on Arm64 (for now) xenheap and domheap are the
>> same, on Arm32 they are different. The xenheap is at most 1GB, so
>> pretty limited.
> Honestly, I have no opinion there. What are limitations of domheap in
> this case?

domheap pages may not always be mapped in Xen page-tables. So you have to call 
map_domain_page/unmap_domain_page at every use.

In practice, on Arm64, those operations are today a NOP because the memory is 
always mapped. On Arm32, domheap is never mapped so those operations will 
require to modify the page-tables.

There would potentially be ways to optimize the Arm32 case. So I think this is 
not a big concern, as it would allow accounting the memory to the domain and taking 
advantage of potential new safety features around domheap.

BTW, I am not asking to implement the accounting :). You can still allocate 
domheap memory without a domain assigned. I am only giving the advantages of 
using domheap over xenheap :).
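
For illustration, the allocation could then look roughly like this (untested
sketch, without any accounting):

    struct page_info *pg = alloc_domheap_page(NULL, 0);
    struct optee_msg_arg *xen_arg;

    if ( !pg )
        return false;

    /* domheap pages are not guaranteed to be mapped, so map around each access. */
    xen_arg = __map_domain_page(pg);
    /* ... copy and inspect the command buffer ... */
    unmap_domain_page(xen_arg);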

>>> +
>>> +    xen_addr = virt_to_maddr(call->xen_arg);
>>> +
>>> +    set_user_reg(regs, 1, xen_addr >> 32);
>>> +    set_user_reg(regs, 2, xen_addr & 0xFFFFFFFF);
>>> +
>>> +    return true;
>>> +}
>>> +
>>> +static void copy_std_request_back(struct optee_domain *ctx,
>>> +                                  struct cpu_user_regs *regs,
>>> +                                  struct optee_std_call *call)
>>
>> Can you add a comment on top of the function explaining what it does?
> Yes, sure.
> 
> "Copy OP-TEE response back into guest's request buffer" will be sufficient?

See more below.

> 
>>> +{
>>> +    struct optee_msg_arg *guest_arg;
>>> +    struct page_info *page;
>>> +    unsigned int i;
>>> +    uint32_t attr;
>>> +
>>> +    /* copy_std_request() validated IPA for us */
>>
>> Not really, the guest is free to modify the stage-2 mapping on another
>> vCPU while this is happening. I agree that the guest will shoot
>> himself, but we at least need to not have weird behavior happening.
>>
>> In that case, I would check that the type is p2m_ram_rw as you don't
>> want to write in read-only or foreign mapping.
> How can I do this atomically? I.e. what if the guest mangles the p2m right
> after the check?

What you want to prevent is Xen writing to the wrong page. The guest should not 
play with pages that are shared with a higher exception level.

get_page_from_gfn() takes a reference on the current page, which will be released 
by put_page(). In between, you are sure the page cannot disappear under your feet.

Furthermore, AFAIK, there is no way for an Arm guest to modify the p2m type of 
a page once inserted. It can only remove the mapping or replace it with a newly 
allocated page. If the guest instructs to
	- remove the page, as you have a reference that page will not disappear.
	- replace the page with a new one, then the guest will not be able to see the 
result. Tough luck, but it was not meant to do that :).

> 
>> Also, as copy_std_request() and copy_std_request_back may not be
>> called in the same "thread" it would be useful if you specify a bit
>> more the interaction.
> I'm not sure what you mean there.

What I meant is the following can happen:

      guest vCPU A         |   Xen
			
      Initiate call 1
	                      copy_std_request()
                               call OP-TEE
				-> Call "preempted"
                      	      return to guest
      Resume call 1
			      resume call in OP-TEE
			      copy_std_back_request()

AFAICT, the call could even resume from a different vCPU. It is not entirely 
trivial to understand this from just reading the code, and the comment 
"copy_std_request() validated IPA for us" leads one to think copy_std_request() was 
called right before. This may not be the case. So can you detail the 
interaction a bit more in the code?

> 
>>
>>> +    page = get_page_from_gfn(current->domain, paddr_to_pfn(call->guest_arg_ipa),
>>
>> Please use gfn_x(gaddr_to_gfn(...)) to clarify this is a gfn. The
>> gfn_x will be unnecessary soon with a cleanup that is currently under
>> review.
> So there are chances, that it will be removed when time will come to
> merge this patch series? :)

There is a chance it will be merged. Even if it isn't, I would prefer to use 
gaddr_to_gfn(..) as you are dealing with a guest physical address. Ideally I 
would like paddr_to_pfn to disappear completely from Arm code as the return is not 
typesafe (GFN vs MFN).

[...]

>>> diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
>>> index 175de44927..88b48697bd 100644
>>> --- a/xen/include/asm-arm/domain.h
>>> +++ b/xen/include/asm-arm/domain.h
>>> @@ -97,6 +97,9 @@ struct arch_domain
>>>        struct vpl011 vpl011;
>>>    #endif
>>>    +#ifdef CONFIG_TEE
>>> +    void *tee;
>>> +#endif
>>
>> Did you look whether there are any hole in arch_domain that could be re-used?
> I thought about that. But what are chances to find 64bit-wide,
> 64bit-aligned hole in a structure? If I remember C standard correctly,
> there are no reasons for compiler to leave such holes.

It depends on the alignment requested for each structure. Have a look at pahole 
to see the number (and size) of holes we have in some structures ;).

Anyway, I can't see anything promising in p2m_domain so far.

Cheers,

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH v3 05/11] optee: add fast calls handling
  2018-12-18 21:11 ` [PATCH v3 05/11] optee: add fast calls handling Volodymyr Babchuk
@ 2019-01-17 18:31   ` Julien Grall
  2019-01-17 19:22     ` Volodymyr Babchuk
  0 siblings, 1 reply; 47+ messages in thread
From: Julien Grall @ 2019-01-17 18:31 UTC (permalink / raw)
  To: Volodymyr Babchuk, xen-devel
  Cc: Volodymyr Babchuk, tee-dev, Stefano Stabellini

Hi Volodymyr,

On 18/12/2018 21:11, Volodymyr Babchuk wrote:
> From: Volodymyr Babchuk <vlad.babchuk@gmail.com>
> 
> Some fast SMCCC calls to OP-TEE should be handled in a special way.
> Capabilities exchange should be filtered out, so only caps
> known to mediator are used. Also mediator disables static SHM
> memory capability, because it can't share OP-TEE memory with a domain.
> Only domain can share memory with OP-TEE, so it ensures that OP-TEE
> supports dynamic SHM.
> 
> Basically, static SHM is a reserved memory region which is always
> mapped into OP-TEE address space. It belongs to OP-TEE. Normally,
> NW is allowed to access there, so it can communicate with OP-TEE.
> 
> On other hand, dynamic SHM is NW's own memory, which it can share
> with OP-TEE. OP-TEE maps this memory dynamically, when it wants to
> access it.
> 
> Because mediator can't share one static SHM region with all guests,
> it just disables it for all.

I haven't seen an answer to my question on the previous version. As a reminder:

Would it make sense to still allow the hardware domain to access static SHM?

Cheers,

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH v3 07/11] optee: add support for RPC SHM buffers
  2018-12-18 21:11 ` [PATCH v3 07/11] optee: add support for RPC SHM buffers Volodymyr Babchuk
@ 2019-01-17 18:49   ` Julien Grall
  2019-01-17 19:48     ` Volodymyr Babchuk
  0 siblings, 1 reply; 47+ messages in thread
From: Julien Grall @ 2019-01-17 18:49 UTC (permalink / raw)
  To: Volodymyr Babchuk, xen-devel
  Cc: Volodymyr Babchuk, tee-dev, Stefano Stabellini

Hi Volodymyr,

On 18/12/2018 21:11, Volodymyr Babchuk wrote:
> From: Volodymyr Babchuk <vlad.babchuk@gmail.com>
> 
> OP-TEE usually uses the same idea with command buffers (see
> previous commit) to issue RPC requests. The problem is that initially
> it has no buffer where it can write a request. So the first RPC
> request it makes is special: it requests NW to allocate shared
> buffer for other RPC requests. Usually this buffer is allocated
> only once for every OP-TEE thread and it remains allocated all
> the time until shutdown.

By shutdown, do you mean the OS or the thread?

> 
> Mediator needs to pin this buffer(s) to make sure that domain can't
> transfer it to someone else.
> 
> Life cycle of this buffer is controlled by OP-TEE. It asks guest
> to create buffer and it asks it to free it.
> 
> Signed-off-by: Volodymyr Babchuk <vlad.babchuk@gmail.com>
> ---
> 
>   Changes from v2:
>    - Added check to ensure that guests does not return two SHM buffers
>      with the same cookie
>    - Fixed coding style
>    - Storing RPC parameters during RPC return to make sure, that guest
>      will not change them during call continuation
>    
>   xen/arch/arm/tee/optee.c | 140 ++++++++++++++++++++++++++++++++++++++-
>   1 file changed, 138 insertions(+), 2 deletions(-)
> 
> diff --git a/xen/arch/arm/tee/optee.c b/xen/arch/arm/tee/optee.c
> index dc90e2ed8e..771148e940 100644
> --- a/xen/arch/arm/tee/optee.c
> +++ b/xen/arch/arm/tee/optee.c
> @@ -30,6 +30,12 @@
>    * OP-TEE spawns a thread for every standard call.
>    */
>   #define MAX_STD_CALLS   16
> +/*
> + * Maximal number of pre-allocated SHM buffers. OP-TEE generally asks
> + * for one SHM buffer per thread, so this also corresponds to OP-TEE
> + * option CFG_NUM_THREADS
> + */

Same as patch #6 regarding CFG_NUM_THREADS.

NIT: Missing full stop

> +#define MAX_RPC_SHMS    MAX_STD_CALLS
>   
>   #define OPTEE_KNOWN_NSEC_CAPS OPTEE_SMC_NSEC_CAP_UNIPROCESSOR
>   #define OPTEE_KNOWN_SEC_CAPS (OPTEE_SMC_SEC_CAP_HAVE_RESERVED_SHM | \
> @@ -47,12 +53,22 @@ struct optee_std_call {
>       int optee_thread_id;
>       int rpc_op;
>       bool in_flight;
> +    register_t rpc_params[2];
> +};
> +
> +/* Pre-allocated SHM buffer for RPC commands */
> +struct shm_rpc {
> +    struct list_head list;
> +    struct page_info *guest_page;
> +    uint64_t cookie;
>   };
>   
>   /* Domain context */
>   struct optee_domain {
>       struct list_head call_list;
> +    struct list_head shm_rpc_list;
>       atomic_t call_count;
> +    atomic_t shm_rpc_count;
>       spinlock_t lock;
>   };
>   
> @@ -108,7 +124,11 @@ static int optee_enable(struct domain *d)
>       }
>   
>       INIT_LIST_HEAD(&ctx->call_list);
> +    INIT_LIST_HEAD(&ctx->shm_rpc_list);
> +
>       atomic_set(&ctx->call_count, 0);
> +    atomic_set(&ctx->shm_rpc_count, 0);
> +
>       spin_lock_init(&ctx->lock);
>   
>       d->arch.tee = ctx;
> @@ -227,11 +247,90 @@ static void put_std_call(struct optee_domain *ctx, struct optee_std_call *call)
>       spin_unlock(&ctx->lock);
>   }
>   
> +static struct shm_rpc *allocate_and_pin_shm_rpc(struct optee_domain *ctx,
> +                                                paddr_t gaddr,

As I said on v3, I would prefer if you use gfn_t here. This would introduce more 
safety.

> +                                                uint64_t cookie)
> +{
> +    struct shm_rpc *shm_rpc, *shm_rpc_tmp;
> +    int count;
> +
> +    /* Make sure that guest does not allocate more than MAX_RPC_SHMS */

NIT: Missing full stop.

> +    count = atomic_add_unless(&ctx->shm_rpc_count, 1, MAX_RPC_SHMS);
> +    if ( count == MAX_RPC_SHMS )
> +        return NULL;
> +
> +    shm_rpc = xzalloc(struct shm_rpc);
> +    if ( !shm_rpc )
> +        goto err;
> +
> +    /* This page will be shared with OP-TEE, so we need to pin it */

Ditto.

> +    shm_rpc->guest_page = get_page_from_gfn(current->domain,
> +                                            paddr_to_pfn(gaddr),
> +                                            NULL,
> +                                            P2M_ALLOC);

I think it would be wrong to share any page other than p2m_ram_rw with OP-TEE.

> +    if ( !shm_rpc->guest_page )
> +        goto err;
> +
> +    shm_rpc->cookie = cookie;
> +
> +    spin_lock(&ctx->lock);
> +    /* Check if there is already SHM with the same cookie */

"already a SHM"

NIT: Missing full stop.

> +    list_for_each_entry( shm_rpc_tmp, &ctx->shm_rpc_list, list )
> +    {
> +        if ( shm_rpc_tmp->cookie == cookie )
> +        {
> +            spin_unlock(&ctx->lock);
> +            gprintk(XENLOG_WARNING, "Guest tries to use the same RPC SHM cookie");

I would use gdprintk(...) and add the cookie number for debug.

> +            goto err;
> +        }
> +    }
> +
> +    list_add_tail(&shm_rpc->list, &ctx->shm_rpc_list);
> +    spin_unlock(&ctx->lock);
> +
> +    return shm_rpc;
> +
> +err:
> +    atomic_dec(&ctx->shm_rpc_count);
> +    put_page(shm_rpc->guest_page);
> +    xfree(shm_rpc);
> +
> +    return NULL;
> +}
> +
> +static void free_shm_rpc(struct optee_domain *ctx, uint64_t cookie)
> +{
> +    struct shm_rpc *shm_rpc;
> +    bool found = false;
> +
> +    spin_lock(&ctx->lock);
> +
> +    list_for_each_entry( shm_rpc, &ctx->shm_rpc_list, list )
> +    {
> +        if ( shm_rpc->cookie == cookie )
> +        {
> +            found = true;
> +            list_del(&shm_rpc->list);
> +            break;
> +        }
> +    }
> +    spin_unlock(&ctx->lock);

I think you are missing an atomic_dec(&ctx->shm_rpc_count) here.
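
I.e. only drop the counter for a buffer that was actually found and removed,
something like:

    spin_unlock(&ctx->lock);

    if ( !found )
        return;

    atomic_dec(&ctx->shm_rpc_count);

    ASSERT(shm_rpc->guest_page);
    put_page(shm_rpc->guest_page);

    xfree(shm_rpc);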

> +
> +    if ( !found )
> +        return;
> +
> +    ASSERT(shm_rpc->guest_page);
> +    put_page(shm_rpc->guest_page);
> +
> +    xfree(shm_rpc);
> +}
> +
>   static void optee_domain_destroy(struct domain *d)
>   {
>       struct arm_smccc_res resp;
>       struct optee_std_call *call, *call_tmp;
>       struct optee_domain *ctx = d->arch.tee;
> +    struct shm_rpc *shm_rpc, *shm_rpc_tmp;
>   
>       /* At this time all domain VCPUs should be stopped */
>   
> @@ -247,7 +346,11 @@ static void optee_domain_destroy(struct domain *d)
>       list_for_each_entry_safe( call, call_tmp, &ctx->call_list, list )
>           free_std_call(ctx, call);
>   
> +    list_for_each_entry_safe( shm_rpc, shm_rpc_tmp, &ctx->shm_rpc_list, list )
> +        free_shm_rpc(ctx, shm_rpc->cookie);
> +
>       ASSERT(!atomic_read(&ctx->call_count));
> +    ASSERT(!atomic_read(&ctx->shm_rpc_count));
>   
>       xfree(d->arch.tee);
>   }
> @@ -356,6 +459,8 @@ static void execute_std_call(struct optee_domain *ctx,
>       optee_ret = get_user_reg(regs, 0);
>       if ( OPTEE_SMC_RETURN_IS_RPC(optee_ret) )
>       {
> +        call->rpc_params[0] = get_user_reg(regs, 1);
> +        call->rpc_params[1] = get_user_reg(regs, 2);
>           call->optee_thread_id = get_user_reg(regs, 3);
>           call->rpc_op = OPTEE_SMC_RETURN_GET_RPC_FUNC(optee_ret);
>           put_std_call(ctx, call);
> @@ -408,6 +513,33 @@ err:
>       return false;
>   }
>   
> +static void handle_rpc_func_alloc(struct optee_domain *ctx,
> +                                  struct cpu_user_regs *regs)
> +{
> +    paddr_t ptr = get_user_reg(regs, 1) << 32 | get_user_reg(regs, 2);
> +
> +    if ( ptr & (OPTEE_MSG_NONCONTIG_PAGE_SIZE - 1) )
> +        gprintk(XENLOG_WARNING, "Domain returned invalid RPC command buffer\n");

Shouldn't you bail out in that case? Also, I would turn it into a gdprintk.
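
Something along those lines (sketch):

    if ( ptr & (OPTEE_MSG_NONCONTIG_PAGE_SIZE - 1) )
    {
        gdprintk(XENLOG_WARNING,
                 "Domain returned invalid RPC command buffer\n");
        /*
         * Report that no buffer was allocated, the same as a guest would
         * do on a real allocation failure, rather than forwarding a bogus
         * address.
         */
        set_user_reg(regs, 1, 0);
        set_user_reg(regs, 2, 0);
        return;
    }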

> +
> +    if ( ptr )
> +    {
> +        uint64_t cookie = get_user_reg(regs, 4) << 32 | get_user_reg(regs, 5);
> +        struct shm_rpc *shm_rpc;
> +
> +        shm_rpc = allocate_and_pin_shm_rpc(ctx, ptr, cookie);
> +        if ( !shm_rpc )
> +        {
> +            gprintk(XENLOG_WARNING, "Failed to allocate shm_rpc object\n");
> +            ptr = 0;
> +        }
> +        else
> +            ptr = page_to_maddr(shm_rpc->guest_page);
> +
> +        set_user_reg(regs, 1, ptr >> 32);
> +        set_user_reg(regs, 2, ptr & 0xFFFFFFFF);
> +    }
> +}
> +
>   static bool handle_rpc(struct optee_domain *ctx, struct cpu_user_regs *regs)
>   {
>       struct optee_std_call *call;
> @@ -421,11 +553,15 @@ static bool handle_rpc(struct optee_domain *ctx, struct cpu_user_regs *regs)
>       switch ( call->rpc_op )
>       {
>       case OPTEE_SMC_RPC_FUNC_ALLOC:
> -        /* TODO: Add handling */
> +        handle_rpc_func_alloc(ctx, regs);
>           break;
>       case OPTEE_SMC_RPC_FUNC_FREE:
> -        /* TODO: Add handling */
> +    {
> +        uint64_t cookie = call->rpc_params[0] << 32 |
> +                            (uint32_t)call->rpc_params[1];

The indentation looks weird here.

> +        free_shm_rpc(ctx, cookie);
>           break;
> +    }
>       case OPTEE_SMC_RPC_FUNC_FOREIGN_INTR:
>           break;
>       case OPTEE_SMC_RPC_FUNC_CMD:
> 

Cheers,

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH v3 06/11] optee: add std call handling
  2019-01-17 18:18       ` Julien Grall
@ 2019-01-17 19:13         ` Volodymyr Babchuk
  2019-01-17 20:00           ` Julien Grall
  0 siblings, 1 reply; 47+ messages in thread
From: Volodymyr Babchuk @ 2019-01-17 19:13 UTC (permalink / raw)
  To: Julien Grall
  Cc: Volodymyr Babchuk, xen-devel, Stefano Stabellini,
	Volodymyr Babchuk, tee-dev


Hi Julien,

Julien Grall writes:

> Hi Volodymyr,
>
> On 17/01/2019 15:21, Volodymyr Babchuk wrote:
>> Julien Grall writes:
>>
>>> Hi Volodymyr,
>>>
>>> On 18/12/2018 21:11, Volodymyr Babchuk wrote:
>>>> From: Volodymyr Babchuk <vlad.babchuk@gmail.com>
>>>>
>>>> The main way to communicate with OP-TEE is to issue standard SMCCC
>>>> call. "Standard" is a SMCCC term and it means that call can be
>>>> interrupted and OP-TEE can return control to NW before completing
>>>> the call.
>>>>
>>>> In contrast with fast calls, where arguments and return values
>>>> are passed in registers, standard calls use shared memory. Register
>>>> pair a1,a2 holds 64-bit PA of command buffer, where all arguments
>>>> are stored and which is used to return data. OP-TEE internally
>>>> copies contents of this buffer into own secure memory before accessing
>>>> and validating any data in command buffer. This is done to make sure
>>>> that NW will not change contents of the validated parameters.
>>>>
>>>> Mediator needs to do the same for number of reasons:
>>>>
>>>> 1. To make sure that guest will not change data after validation.
>>>> 2. To translate IPAs to PAs in the command buffer (this is not done
>>>>      in this patch).
>>>> 3. To hide translated address from guest, so it will not be able
>>>>      to do IPA->PA translation by misusing mediator.
>>>>
>>>> During standard call OP-TEE can issue multiple "RPC returns", asking
>>>> NW to do some work for OP-TEE. NW then issues special call
>>>> OPTEE_SMC_CALL_RETURN_FROM_RPC to resume handling of the original call.
>>>> Thus, mediator needs to maintain context for original standard call
>>>> during multiple SMCCC calls.
>>>>
>>>> Standard call is considered complete, when returned value is
>>>> not a RPC request.
>>>>
>>>> Signed-off-by: Volodymyr Babchuk <vlad.babchuk@gmail.com>
>>>> ---
>>>>
>>>>    Changes from v2:
>>>>     - renamed struct domain_ctx to struct optee_domain
>>>>     - fixed coding style
>>>>     - Now I use access_guest_memory_by_ipa() instead of mappings
>>>>       to read command buffer
>>>>     - Added tracking for in flight calls, so guest can't resume
>>>>       the same call from two CPUs simultaniously
>>>>
>>>>    xen/arch/arm/tee/optee.c     | 319 ++++++++++++++++++++++++++++++++++-
>>>>    xen/include/asm-arm/domain.h |   3 +
>>>>    2 files changed, 320 insertions(+), 2 deletions(-)
>>>>
>>>> diff --git a/xen/arch/arm/tee/optee.c b/xen/arch/arm/tee/optee.c
>>>> index 584241b03a..dc90e2ed8e 100644
>>>> --- a/xen/arch/arm/tee/optee.c
>>>> +++ b/xen/arch/arm/tee/optee.c
>>>> @@ -12,6 +12,8 @@
>>>>     */
>>>>      #include <xen/device_tree.h>
>>>> +#include <xen/domain_page.h>
>>>> +#include <xen/guest_access.h>
>>>>    #include <xen/sched.h>
>>>>    #include <asm/smccc.h>
>>>>    #include <asm/tee/tee.h>
>>>> @@ -22,11 +24,38 @@
>>>>    /* Client ID 0 is reserved for hypervisor itself */
>>>>    #define OPTEE_CLIENT_ID(domain) (domain->domain_id + 1)
>>>>    +/*
>>>> + * Maximal number of concurrent standard calls from one guest. This
>>>> + * corresponds to OPTEE configuration option CFG_NUM_THREADS, because
>>>> + * OP-TEE spawns a thread for every standard call.
>>>
>>> Looking at OP-TEE, CFG_NUM_THREADS will vary depending on the
>>> platform. Is there any way to probe that number of threads from Xen?
>> Yes, this is per-platform configuration.
>> Easiest way is to add Fast SMC to OP-TEE that will report this
>> parameter.
>> Jens, what do you think about adding additional call?
>>
>>> In any case, I think we should update the comment to reflect that this
>>> seems to be the maximum CFG_NUM_THREADS supported by any upstream
>>> platform.
>>
>> Actually, OP-TEE protocol have possibility to handle limited number of
>> threads correctly. OP-TEE can report that all threads are busy and
>> client will wait for a free one. XEN can do the same, although this is not
>> implemented in this patch series. But I can add this.
>
> Could you expand by wait? Will it block in OP-TEE/Xen or does it
> return to the guest?
It returns to the guest with response code
OPTEE_SMC_RETURN_ETHREAD_LIMIT. Linux driver blocks calling application
thread until one of the calls to OP-TEE is finished. Then driver awakens
one of the blocked threads, so it can perform the call.
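
If Xen mimics that behaviour, the check in the mediator would look
roughly like this (just a sketch; allocate_std_call() stands for
whatever helper ends up tracking in-flight calls, and the return code
comes from optee_smc.h):

    call = allocate_std_call(ctx);
    if ( !call )
    {
        /* All per-guest call contexts are busy - tell the guest to retry */
        set_user_reg(regs, 0, OPTEE_SMC_RETURN_ETHREAD_LIMIT);
        return true;
    }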

>>
>> Basically this means that all can work correctly even with
>> MAX_STD_CALLS==1. It just will be not so efficient.
>
> Given the OS is not aware of the number of threads, The problem would
> be the same without Xen between. Am I right?
Exactly.

> [...]
>
>>>> +
>>>> +    /*
>>>> +     * Command buffer should start at page boundary.
>>>> +     * This is OP-TEE ABI requirement.
>>>> +     */
>>>> +    if ( call->guest_arg_ipa & (OPTEE_MSG_NONCONTIG_PAGE_SIZE - 1) )
>>>> +        return false;
>>>> +
>>>> +    call->xen_arg = _xmalloc(OPTEE_MSG_NONCONTIG_PAGE_SIZE,
>>>> +                             OPTEE_MSG_NONCONTIG_PAGE_SIZE);
>>>> +    if ( !call->xen_arg )
>>>> +        return false;
>>>> +
>>>> +    BUILD_BUG_ON(OPTEE_MSG_NONCONTIG_PAGE_SIZE > PAGE_SIZE);
>>>
>>> As you use _xmalloc, you should not need this. This is only necessary
>>> if you use alloc_xenheap_page.
>>>
>> Yes, right. This is leftover from time when I mapped guest page into
>> XEN. I'll move it down, to place where I map that page.
>>
>>> I am wondering whether it is wise to allocate the memory from xenheap
>>> and not domheap. While on Arm64 (for now) xenheap and domheap are the
>>> same, on Arm32 they are different. The xenheap is at most 1GB, so
>>> pretty limited.
>> Honestly, I have no opinion there. What are limitations of domheap in
>> this case?
>
> domheap pages may not always be mapped in Xen page-tables. So you have
> to call map_domain_page/unmap_domain_page at every use.

> In practice, on Arm64, those operations are today a NOP because the
> memory is always mapped. On Arm32, domheap is never mapped so those
> operations will require to modify the page-tables.

> There would be potentially ways to optimize the Arm32 case. So I think
> this is not a big concern as it would allow to account the memory to
> the domain and take advantage of potential new safety feature around
> domheap.
>
> BTW, I am not asking to implement the accounting :). You can still
> allocate domheap memory without a domain assigned. I am only giving
> the advantages of using domheap over xenheap :).
Actually, I considered using domheap in the beginning. But for some
reason I thought that domheap is limited by the total memory assigned
to a domain. Looks like I was mistaken.
So yes, I'll use domheap for large allocations.
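
Something along these lines, I assume (sketch only, without accounting
the pages to the domain yet):

    struct page_info *pg = alloc_domheap_page(NULL, 0);
    void *va;

    if ( !pg )
        return -ENOMEM;

    /*
     * Domheap pages are not guaranteed to be mapped in Xen page-tables
     * (a no-op on Arm64 today), so map them around each access.
     */
    va = __map_domain_page(pg);
    /* ... read/write the buffer via va ... */
    unmap_domain_page(va);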

[...]
>>>> +{
>>>> +    struct optee_msg_arg *guest_arg;
>>>> +    struct page_info *page;
>>>> +    unsigned int i;
>>>> +    uint32_t attr;
>>>> +
>>>> +    /* copy_std_request() validated IPA for us */
>>>
>>> Not really, the guest is free to modify the stage-2 mapping on another
>>> vCPU while this is happening. I agree that the guest will shoot
>>> himself, but we at least need to not have weird behavior happening.
>>>
>>> In that case, I would check that the type is p2m_ram_rw as you don't
>>> want to write in read-only or foreign mapping.
>> How I can do this atomically? I.e. what if guest will mangle p2m right
>> after the check?
>
> What you want to prevent is Xen writing to the wrong page. The guest
> should not play with page that are shared with an higher exception
> level.
>
> get_page_from_gfn() takes a reference on the current page, that will
> be release by put_page(). Between that you are sure the page can not
> disappear under your feet.
Ahhh, right. Thank you.
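
So for copy_std_request_back() the pattern would be roughly (just a
sketch, using the guest_arg_ipa field from this series):

    p2m_type_t t;
    struct page_info *page = get_page_from_gfn(current->domain,
                                               paddr_to_pfn(call->guest_arg_ipa),
                                               &t, P2M_ALLOC);

    if ( !page || t != p2m_ram_rw )
    {
        if ( page )
            put_page(page);
        return false;
    }

    /* ... write the results back: the reference keeps the page alive ... */

    put_page(page);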

>
> Furthermore, AFAIK, there are no way for an Arm guest to modify the
> p2m type of a page once inserted. It can only remove or replace with a
> newly allocated page the mapping. If the guest instructs to
> 	- remove the page, as you have a reference that page will not disappear.
> 	- replace the page with a new one, then the guest will not be
> able to see the result. Tough luck, but it was not meant to do that
> :).
>
>>
>>> Also, as copy_std_request() and copy_std_request_back may not be
>>> called in the same "thread" it would be useful if you specify a bit
>>> more the interaction.
>> I not sure what do you mean there.
>
> What I meant is the following can happen:
>
>      guest vCPU A         |   Xen
>
>      Initiate call 1
> 	                      copy_std_request()
>                               call OP-TEE
> 				-> Call "preempted"
>                      	      return to guest
>      Resume call 1
> 			      resume call in OP-TEE
> 			      copy_std_back_request()
>
Yes, this is right.

> AFAICT, the call could even resume from a different vCPU.
This is also true.

> It is not
> entirely trivial to understand this from just reading the code and the
> comment "copy_std_request() validated IPA for us" leads to think
> copy_std_request() was called right before. This may not be the
> case. So can you detail a bit more the interaction in the code?
Yes, there can be an issue. My previous approach pinned the guest page
for the duration of the call, so I was sure that the IPA was still
valid. But now this is not true anymore. So yes, this comment is
misleading. I'll fix both the code and the comment.
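
For example, the write-back path could go through
access_guest_memory_by_ipa() as well, so the IPA is translated again at
the time of the copy and a stale mapping simply fails the call (sketch,
reusing the fields from this series):

    /* Copy results back into the guest's command buffer at resume time */
    if ( access_guest_memory_by_ipa(current->domain, call->guest_arg_ipa,
                                    call->xen_arg,
                                    OPTEE_MSG_GET_ARG_SIZE(call->xen_arg->num_params),
                                    true /* write to guest */) )
        return false;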

[...]
>>>> diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
>>>> index 175de44927..88b48697bd 100644
>>>> --- a/xen/include/asm-arm/domain.h
>>>> +++ b/xen/include/asm-arm/domain.h
>>>> @@ -97,6 +97,9 @@ struct arch_domain
>>>>        struct vpl011 vpl011;
>>>>    #endif
>>>>    +#ifdef CONFIG_TEE
>>>> +    void *tee;
>>>> +#endif
>>>
>>> Did you look whether there are any hole in arch_domain that could be re-used?
>> I thought about that. But what are chances to find 64bit-wide,
>> 64bit-aligned hole in a structure? If I remember C standard correctly,
>> there are no reasons for compiler to leave such holes.
>
> It depends on the alignment requested for each structure. Have a look
> at pahole to see the number (and size) of holes we have in some
> structures ;).
Wow, 108 bytes hole in rcu_ctrlblk :) And yes, only two 8 byte holes in
the whole xen.
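(For the record, that can be checked with something like
"pahole -C arch_domain xen-syms" on a build with debug info.)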


--
Best regards, Volodymyr Babchuk

* Re: [PATCH v3 05/11] optee: add fast calls handling
  2019-01-17 18:31   ` Julien Grall
@ 2019-01-17 19:22     ` Volodymyr Babchuk
  2019-01-17 20:02       ` Julien Grall
  0 siblings, 1 reply; 47+ messages in thread
From: Volodymyr Babchuk @ 2019-01-17 19:22 UTC (permalink / raw)
  To: Julien Grall
  Cc: Volodymyr Babchuk, xen-devel, Stefano Stabellini,
	Volodymyr Babchuk, tee-dev


Hello Julien,

Julien Grall writes:

[...]
>> Because mediator can't share one static SHM region with all guests,
>> it just disables it for all.
>
> I haven't seen an answer to my question on the previous version. For reminder:
I'm sorry, I missed this somehow.


> Would it make sense to still allow the hardware domain to access static SHM?
Not really. If linux drivers sees that OP-TEE support dynamic SHM, it
uses it by default. And all new versions of OP-TEE support dynamic SHM.

So, it will add unnecessary code to enable feature which will not be used.

But, actually, there is a caveat. Right now linux driver requires static SHM,
even if it will not use it. I have submitted patch that fixes it, but
it still not merged.

So... what do you think? Should I add piece of code which is needed
right now, but will not be used later?

--
Best regards, Volodymyr Babchuk

* Re: [PATCH v3 07/11] optee: add support for RPC SHM buffers
  2019-01-17 18:49   ` Julien Grall
@ 2019-01-17 19:48     ` Volodymyr Babchuk
  2019-01-17 20:08       ` Julien Grall
  0 siblings, 1 reply; 47+ messages in thread
From: Volodymyr Babchuk @ 2019-01-17 19:48 UTC (permalink / raw)
  To: Julien Grall
  Cc: Volodymyr Babchuk, xen-devel, Stefano Stabellini,
	Volodymyr Babchuk, tee-dev


Julien Grall writes:

> Hi Volodymyr,
>
> On 18/12/2018 21:11, Volodymyr Babchuk wrote:
>> From: Volodymyr Babchuk <vlad.babchuk@gmail.com>
>>
>> OP-TEE usually uses the same idea with command buffers (see
>> previous commit) to issue RPC requests. Problem is that initially
>> it has no buffer, where it can write request. So the first RPC
>> request it makes is special: it requests NW to allocate shared
>> buffer for other RPC requests. Usually this buffer is allocated
>> only once for every OP-TEE thread and it remains allocated all
>> the time until shutdown.
>
> By shutdown you mean for the OS or the thread?
Shutdown of OP-TEE actually. But guest can ask OP-TEE to de-allocate this
buffers. And this is what linux drivers does when it unloads.
So, basically, linux drivers says "I want to disable RPC buffer caching"
and then OP-TEE issues number of RPCs to free those buffers.

>>
>> Mediator needs to pin this buffer(s) to make sure that domain can't
>> transfer it to someone else.
>>
>> Life cycle of this buffer is controlled by OP-TEE. It asks guest
>> to create buffer and it asks it to free it.
>>
>> Signed-off-by: Volodymyr Babchuk <vlad.babchuk@gmail.com>
>> ---
>>
>>   Changes from v2:
>>    - Added check to ensure that guests does not return two SHM buffers
>>      with the same cookie
>>    - Fixed coding style
>>    - Storing RPC parameters during RPC return to make sure, that guest
>>      will not change them during call continuation
>>      xen/arch/arm/tee/optee.c | 140
>> ++++++++++++++++++++++++++++++++++++++-
>>   1 file changed, 138 insertions(+), 2 deletions(-)
>>
>> diff --git a/xen/arch/arm/tee/optee.c b/xen/arch/arm/tee/optee.c
>> index dc90e2ed8e..771148e940 100644
>> --- a/xen/arch/arm/tee/optee.c
>> +++ b/xen/arch/arm/tee/optee.c
>> @@ -30,6 +30,12 @@
>>    * OP-TEE spawns a thread for every standard call.
>>    */
>>   #define MAX_STD_CALLS   16
>> +/*
>> + * Maximal number of pre-allocated SHM buffers. OP-TEE generally asks
>> + * for one SHM buffer per thread, so this also corresponds to OP-TEE
>> + * option CFG_NUM_THREADS
>> + */
>
> Same as patch #6 regarding CFG_NUM_THREADS.
Right now OP-TEE will not allocate more than one buffer per OP-TEE
thread. And I can see no reason why it would change. So, basically I can
remove this MAX_RPC_SHMS at all and use MAX_STD_CALLS instead. But then
it will be not so obvious, why I compare number of SHM buffers with
number of std calls. Thus, I think it is good to have separate
define and comment.

[...]
>> @@ -227,11 +247,90 @@ static void put_std_call(struct optee_domain *ctx, struct optee_std_call *call)
>>       spin_unlock(&ctx->lock);
>>   }
>>   +static struct shm_rpc *allocate_and_pin_shm_rpc(struct
>> optee_domain *ctx,
>> +                                                paddr_t gaddr,
>
> As I said on v3, I would prefer if you use gfn_t here. This would
> introduce more safety.
Sure, will do.

[...]

>> +    shm_rpc->guest_page = get_page_from_gfn(current->domain,
>> +                                            paddr_to_pfn(gaddr),
>> +                                            NULL,
>> +                                            P2M_ALLOC);
>
> I think it would be wrong to share any page other than p2m_ram_rw with OP-TEE.
>
So it should be like this:

    shm_rpc->guest_page = get_page_from_gfn(current->domain,
                                            paddr_to_pfn(gaddr),
                                            &p2m,
                                            P2M_ALLOC);
    if ( !shm_rpc->guest_page || p2m != p2m_ram_rw)
        goto err;

?

[...]

>> +static void free_shm_rpc(struct optee_domain *ctx, uint64_t cookie)
>> +{
>> +    struct shm_rpc *shm_rpc;
>> +    bool found = false;
>> +
>> +    spin_lock(&ctx->lock);
>> +
>> +    list_for_each_entry( shm_rpc, &ctx->shm_rpc_list, list )
>> +    {
>> +        if ( shm_rpc->cookie == cookie )
>> +        {
>> +            found = true;
>> +            list_del(&shm_rpc->list);
>> +            break;
>> +        }
>> +    }
>> +    spin_unlock(&ctx->lock);
>
> I think you are missing an atomic_dec(&ctx->shm_rpc_count) here.
Good catch. Thank you.
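
So the tail of free_shm_rpc() becomes something like (sketch, assuming
the page was pinned in allocate_and_pin_shm_rpc()):

    spin_unlock(&ctx->lock);

    if ( !found )
        return;

    atomic_dec(&ctx->shm_rpc_count);
    put_page(shm_rpc->guest_page);
    xfree(shm_rpc);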

[...]
>>   +static void handle_rpc_func_alloc(struct optee_domain *ctx,
>> +                                  struct cpu_user_regs *regs)
>> +{
>> +    paddr_t ptr = get_user_reg(regs, 1) << 32 | get_user_reg(regs, 2);
>> +
>> +    if ( ptr & (OPTEE_MSG_NONCONTIG_PAGE_SIZE - 1) )
>> +        gprintk(XENLOG_WARNING, "Domain returned invalid RPC command buffer\n");
>
> Should not you bail-out in that case? Also, I would turn it to a gdprintk.
OP-TEE does own checks and that check will fail also. Then OP-TEE will
issue request to free this SHM.

But you have a point. I need to rework error path there.

[...]
>>       case OPTEE_SMC_RPC_FUNC_FREE:
>> -        /* TODO: Add handling */
>> +    {
>> +        uint64_t cookie = call->rpc_params[0] << 32 |
>> +                            (uint32_t)call->rpc_params[1];
>
> The indentation looks weird here.
You are right. How it should look? Would this be okay?

        uint64_t cookie = call->rpc_params[0] << 32 |
                (uint32_t)call->rpc_params[1];


>> +        free_shm_rpc(ctx, cookie);
>>           break;
>> +    }
>>       case OPTEE_SMC_RPC_FUNC_FOREIGN_INTR:
>>           break;
>>       case OPTEE_SMC_RPC_FUNC_CMD:
>>
>
> Cheers,


--
Best regards,Volodymyr Babchuk

* Re: [PATCH v3 06/11] optee: add std call handling
  2019-01-17 19:13         ` Volodymyr Babchuk
@ 2019-01-17 20:00           ` Julien Grall
  2019-01-17 20:14             ` Volodymyr Babchuk
  0 siblings, 1 reply; 47+ messages in thread
From: Julien Grall @ 2019-01-17 20:00 UTC (permalink / raw)
  To: Volodymyr Babchuk
  Cc: Volodymyr Babchuk, xen-devel, Stefano Stabellini, tee-dev



On 1/17/19 7:13 PM, Volodymyr Babchuk wrote:
>>> Actually, OP-TEE protocol have possibility to handle limited number of
>>> threads correctly. OP-TEE can report that all threads are busy and
>>> client will wait for a free one. XEN can do the same, although this is not
>>> implemented in this patch series. But I can add this.
>>
>> Could you expand by wait? Will it block in OP-TEE/Xen or does it
>> return to the guest?
> It returns to the guest with response code
> OPTEE_SMC_RETURN_ETHREAD_LIMIT. Linux driver blocks calling application
> thread until one of the calls to OP-TEE is finished. Then driver awakens
> one of the blocked threads, so it can perform the call.

Shouldn't you return this value when you reach MAX_STD_CALLS?

Actually, looking at the code, you don't seem to return an error code 
when there is a failure in the mediator. Instead you seem to always 
return "false", which means the virtual SMCCC framework thinks the call 
was never handled. However, this is not true: you handled the call, but 
there was a failure during it.

In general, handle_call should return false only if the call is 
non-existent. In all the other cases, you should provide a proper return 
value yourself.

Cheers,

-- 
Julien Grall


* Re: [PATCH v3 05/11] optee: add fast calls handling
  2019-01-17 19:22     ` Volodymyr Babchuk
@ 2019-01-17 20:02       ` Julien Grall
  2019-01-17 20:10         ` Volodymyr Babchuk
  0 siblings, 1 reply; 47+ messages in thread
From: Julien Grall @ 2019-01-17 20:02 UTC (permalink / raw)
  To: Volodymyr Babchuk
  Cc: Volodymyr Babchuk, xen-devel, Stefano Stabellini, tee-dev

Hi,

On 1/17/19 7:22 PM, Volodymyr Babchuk wrote:
> Julien Grall writes:
> 
> [...]
>>> Because mediator can't share one static SHM region with all guests,
>>> it just disables it for all.
>>
>> I haven't seen an answer to my question on the previous version. For reminder:
> I'm sorry, I missed this somehow.
> 
> 
>> Would it make sense to still allow the hardware domain to access static SHM?
> Not really. If linux drivers sees that OP-TEE support dynamic SHM, it
> uses it by default. And all new versions of OP-TEE support dynamic SHM.
> 
> So, it will add unnecessary code to enable feature which will not be used.
> 
> But, actually, there is a caveat. Right now linux driver requires static SHM,
> even if it will not use it. I have submitted patch that fixes it, but
> it still not merged.
> 
> So... what do you think? Should I add piece of code which is needed
> right now, but will not be used later?

I think it would be nice to have OP-TEE working in Dom0 with current 
Linux. For the guest, we can require a more recent Linux if there are no 
other solution.

Cheers,

-- 
Julien Grall


* Re: [PATCH v3 07/11] optee: add support for RPC SHM buffers
  2019-01-17 19:48     ` Volodymyr Babchuk
@ 2019-01-17 20:08       ` Julien Grall
  2019-01-17 21:01         ` Volodymyr Babchuk
  0 siblings, 1 reply; 47+ messages in thread
From: Julien Grall @ 2019-01-17 20:08 UTC (permalink / raw)
  To: Volodymyr Babchuk
  Cc: Volodymyr Babchuk, xen-devel, Stefano Stabellini, tee-dev

HI,

On 1/17/19 7:48 PM, Volodymyr Babchuk wrote:
> 
> Julien Grall writes:
> 
>> Hi Volodymyr,
>>
>> On 18/12/2018 21:11, Volodymyr Babchuk wrote:
>>> From: Volodymyr Babchuk <vlad.babchuk@gmail.com>
>>>
>>> OP-TEE usually uses the same idea with command buffers (see
>>> previous commit) to issue RPC requests. Problem is that initially
>>> it has no buffer, where it can write request. So the first RPC
>>> request it makes is special: it requests NW to allocate shared
>>> buffer for other RPC requests. Usually this buffer is allocated
>>> only once for every OP-TEE thread and it remains allocated all
>>> the time until shutdown.
>>
>> By shutdown you mean for the OS or the thread?
> Shutdown of OP-TEE actually. But guest can ask OP-TEE to de-allocate this
> buffers. And this is what linux drivers does when it unloads.
> So, basically, linux drivers says "I want to disable RPC buffer caching"
> and then OP-TEE issues number of RPCs to free those buffers.
> 
>>>
>>> Mediator needs to pin this buffer(s) to make sure that domain can't
>>> transfer it to someone else.
>>>
>>> Life cycle of this buffer is controlled by OP-TEE. It asks guest
>>> to create buffer and it asks it to free it.
>>>
>>> Signed-off-by: Volodymyr Babchuk <vlad.babchuk@gmail.com>
>>> ---
>>>
>>>    Changes from v2:
>>>     - Added check to ensure that guests does not return two SHM buffers
>>>       with the same cookie
>>>     - Fixed coding style
>>>     - Storing RPC parameters during RPC return to make sure, that guest
>>>       will not change them during call continuation
>>>       xen/arch/arm/tee/optee.c | 140
>>> ++++++++++++++++++++++++++++++++++++++-
>>>    1 file changed, 138 insertions(+), 2 deletions(-)
>>>
>>> diff --git a/xen/arch/arm/tee/optee.c b/xen/arch/arm/tee/optee.c
>>> index dc90e2ed8e..771148e940 100644
>>> --- a/xen/arch/arm/tee/optee.c
>>> +++ b/xen/arch/arm/tee/optee.c
>>> @@ -30,6 +30,12 @@
>>>     * OP-TEE spawns a thread for every standard call.
>>>     */
>>>    #define MAX_STD_CALLS   16
>>> +/*
>>> + * Maximal number of pre-allocated SHM buffers. OP-TEE generally asks
>>> + * for one SHM buffer per thread, so this also corresponds to OP-TEE
>>> + * option CFG_NUM_THREADS
>>> + */
>>
>> Same as patch #6 regarding CFG_NUM_THREADS.
> Right now OP-TEE will not allocate more than one buffer per OP-TEE
> thread. And I can see no reason why it would change. So, basically I can
> remove this MAX_RPC_SHMS at all and use MAX_STD_CALLS instead. But then
> it will be not so obvious, why I compare number of SHM buffers with
> number of std calls. Thus, I think it is good to have separate
> define and comment.

I am not against have the 2 defines, what I was pointed out with the 
documentation on top is incorrect as patch #6.

If you happen to make MAX_STD_CALLS dynamic, then this should also be 
dynamic.

> 
> [...]
>>> @@ -227,11 +247,90 @@ static void put_std_call(struct optee_domain *ctx, struct optee_std_call *call)
>>>        spin_unlock(&ctx->lock);
>>>    }
>>>    +static struct shm_rpc *allocate_and_pin_shm_rpc(struct
>>> optee_domain *ctx,
>>> +                                                paddr_t gaddr,
>>
>> As I said on v3, I would prefer if you use gfn_t here. This would
>> introduce more safety.
> Sure, will do.
> 
> [...]
> 
>>> +    shm_rpc->guest_page = get_page_from_gfn(current->domain,
>>> +                                            paddr_to_pfn(gaddr),
>>> +                                            NULL,
>>> +                                            P2M_ALLOC);
>>
>> I think it would be wrong to share any page other than p2m_ram_rw with OP-TEE.
>>
> So it should be like this:
> 
>      shm_rpc->guest_page = get_page_from_gfn(current->domain,
>                                              paddr_to_pfn(gaddr),
>                                              &p2m,
>                                              P2M_ALLOC);
>      if ( !shm_rpc->guest_page || p2m != p2m_ram_rw)
>          goto err;
> 
> ?

Sounds good to me.

> 
> [...]
> 
>>> +static void free_shm_rpc(struct optee_domain *ctx, uint64_t cookie)
>>> +{
>>> +    struct shm_rpc *shm_rpc;
>>> +    bool found = false;
>>> +
>>> +    spin_lock(&ctx->lock);
>>> +
>>> +    list_for_each_entry( shm_rpc, &ctx->shm_rpc_list, list )
>>> +    {
>>> +        if ( shm_rpc->cookie == cookie )
>>> +        {
>>> +            found = true;
>>> +            list_del(&shm_rpc->list);
>>> +            break;
>>> +        }
>>> +    }
>>> +    spin_unlock(&ctx->lock);
>>
>> I think you are missing an atomic_dec(&ctx->shm_rpc_count) here.
> Good catch. Thank you.
> 
> [...]
>>>    +static void handle_rpc_func_alloc(struct optee_domain *ctx,
>>> +                                  struct cpu_user_regs *regs)
>>> +{
>>> +    paddr_t ptr = get_user_reg(regs, 1) << 32 | get_user_reg(regs, 2);
>>> +
>>> +    if ( ptr & (OPTEE_MSG_NONCONTIG_PAGE_SIZE - 1) )
>>> +        gprintk(XENLOG_WARNING, "Domain returned invalid RPC command buffer\n");
>>
>> Should not you bail-out in that case? Also, I would turn it to a gdprintk.
> OP-TEE does own checks and that check will fail also. Then OP-TEE will
> issue request to free this SHM.

I think it is better if we go on the safe-side. I.e if we know there 
would be an error (like here), then you need to return an error in from 
Xen rather than calling OP-TEE. Otherwise, you may end up to nasty problem.

Also, I think you want a comment (maybe in the commit message) 
explaining that OPTEE_MSG_NONCONTIG_PAGE_SIZE will always be equal to 
PAGE_SIZE.

> 
> But you have a point. I need to rework error path there.
> 
> [...]
>>>        case OPTEE_SMC_RPC_FUNC_FREE:
>>> -        /* TODO: Add handling */
>>> +    {
>>> +        uint64_t cookie = call->rpc_params[0] << 32 |
>>> +                            (uint32_t)call->rpc_params[1];
>>
>> The indentation looks weird here.
> You are right. How it should look? Would this be okay?
> 
>          uint64_t cookie = call->rpc_params[0] << 32 |
>                  (uint32_t)call->rpc_params[1];

call and (uint32_t) should be aligned:

uint64_t cookie = call->rcp... |
                   (uint32_t)call..

Cheers,

-- 
Julien Grall


* Re: [PATCH v3 05/11] optee: add fast calls handling
  2019-01-17 20:02       ` Julien Grall
@ 2019-01-17 20:10         ` Volodymyr Babchuk
  2019-01-18 15:29           ` Julien Grall
  0 siblings, 1 reply; 47+ messages in thread
From: Volodymyr Babchuk @ 2019-01-17 20:10 UTC (permalink / raw)
  To: Julien Grall
  Cc: Volodymyr Babchuk, xen-devel, Stefano Stabellini,
	Volodymyr Babchuk, tee-dev


Hi,

Julien Grall writes:

> Hi,
>
> On 1/17/19 7:22 PM, Volodymyr Babchuk wrote:
>> Julien Grall writes:
>>
>> [...]
>>>> Because mediator can't share one static SHM region with all guests,
>>>> it just disables it for all.
>>>
>>> I haven't seen an answer to my question on the previous version. For reminder:
>> I'm sorry, I missed this somehow.
>>
>>
>>> Would it make sense to still allow the hardware domain to access static SHM?
>> Not really. If linux drivers sees that OP-TEE support dynamic SHM, it
>> uses it by default. And all new versions of OP-TEE support dynamic SHM.
>>
>> So, it will add unnecessary code to enable feature which will not be used.
>>
>> But, actually, there is a caveat. Right now linux driver requires static SHM,
>> even if it will not use it. I have submitted patch that fixes it, but
>> it still not merged.
>>
>> So... what do you think? Should I add piece of code which is needed
>> right now, but will not be used later?
>
> I think it would be nice to have OP-TEE working in Dom0 with current
> Linux. For the guest, we can require a more recent Linux if there are
> no other solution.

Okay, I'll take a look what can be done. Can I do this in a separate patch?


-- 
Best regards,Volodymyr Babchuk

* Re: [PATCH v3 06/11] optee: add std call handling
  2019-01-17 20:00           ` Julien Grall
@ 2019-01-17 20:14             ` Volodymyr Babchuk
  0 siblings, 0 replies; 47+ messages in thread
From: Volodymyr Babchuk @ 2019-01-17 20:14 UTC (permalink / raw)
  To: Julien Grall
  Cc: Volodymyr Babchuk, xen-devel, Stefano Stabellini,
	Volodymyr Babchuk, tee-dev


Julien Grall writes:

> On 1/17/19 7:13 PM, Volodymyr Babchuk wrote:
>>>> Actually, OP-TEE protocol have possibility to handle limited number of
>>>> threads correctly. OP-TEE can report that all threads are busy and
>>>> client will wait for a free one. XEN can do the same, although this is not
>>>> implemented in this patch series. But I can add this.
>>>
>>> Could you expand by wait? Will it block in OP-TEE/Xen or does it
>>> return to the guest?
>> It returns to the guest with response code
>> OPTEE_SMC_RETURN_ETHREAD_LIMIT. Linux driver blocks calling application
>> thread until one of the calls to OP-TEE is finished. Then driver awakens
>> one of the blocked threads, so it can perform the call.
>
> Shouldn't you return this value when you reach MAX_STD_CALLS?
Yes, I should. As I said earlier, this isn't done right now, but it
will be done in the next version.

> Actually, looking at the code, you don't seem to return an error code
> when there is a failure in the mediator. Instead you seem to always
> return "false", which means the virtual SMCCC framework thinks the
> call was never handled. However, this is not true: you handled the
> call, but there was a failure during it.
>
> In general, handle_call should return false only if the call is
> non-existent. In all the other cases, you should provide a proper
> return value yourself.

Agreed. At first I saw the OP-TEE mediator as a relatively thin shim
between the guest and OP-TEE, and I expected that it should not return
errors to a well-behaved guest. But the logic has become more complex
and indeed, there are cases where even a well-behaved guest can get an
error.
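
So on a mediator-side failure the call should be completed with a
proper code instead of returning false, something like (sketch; the
error codes come from optee_smc.h):

    if ( !copy_std_request(regs, call) )
    {
        /* ... release/free the call context here ... */
        set_user_reg(regs, 0, OPTEE_SMC_RETURN_EBADADDR);
        return true; /* handled, even though it failed */
    }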

-- 
Best regards,Volodymyr Babchuk

* Re: [PATCH v3 07/11] optee: add support for RPC SHM buffers
  2019-01-17 20:08       ` Julien Grall
@ 2019-01-17 21:01         ` Volodymyr Babchuk
  2019-01-18 15:38           ` Julien Grall
  0 siblings, 1 reply; 47+ messages in thread
From: Volodymyr Babchuk @ 2019-01-17 21:01 UTC (permalink / raw)
  To: Julien Grall
  Cc: Volodymyr Babchuk, xen-devel, Stefano Stabellini,
	Volodymyr Babchuk, tee-dev


Hi,

Julien Grall writes:

> HI,
>
> On 1/17/19 7:48 PM, Volodymyr Babchuk wrote:
>>
>> Julien Grall writes:
>>
>>> Hi Volodymyr,
>>>
>>> On 18/12/2018 21:11, Volodymyr Babchuk wrote:
>>>> From: Volodymyr Babchuk <vlad.babchuk@gmail.com>
>>>>
[...]
>>>>
>>>> diff --git a/xen/arch/arm/tee/optee.c b/xen/arch/arm/tee/optee.c
>>>> index dc90e2ed8e..771148e940 100644
>>>> --- a/xen/arch/arm/tee/optee.c
>>>> +++ b/xen/arch/arm/tee/optee.c
>>>> @@ -30,6 +30,12 @@
>>>>     * OP-TEE spawns a thread for every standard call.
>>>>     */
>>>>    #define MAX_STD_CALLS   16
>>>> +/*
>>>> + * Maximal number of pre-allocated SHM buffers. OP-TEE generally asks
>>>> + * for one SHM buffer per thread, so this also corresponds to OP-TEE
>>>> + * option CFG_NUM_THREADS
>>>> + */
>>>
>>> Same as patch #6 regarding CFG_NUM_THREADS.
>> Right now OP-TEE will not allocate more than one buffer per OP-TEE
>> thread. And I can see no reason why it would change. So, basically I can
>> remove this MAX_RPC_SHMS at all and use MAX_STD_CALLS instead. But then
>> it will be not so obvious, why I compare number of SHM buffers with
>> number of std calls. Thus, I think it is good to have separate
>> define and comment.
>
> I am not against have the 2 defines, what I was pointed out with the
> documentation on top is incorrect as patch #6.
I'm sorry, I don't quite understand. In patch #6 your concern was that
CFG_NUM_THREADS depends on platform, right?

> If you happen to make MAX_STD_CALLS dynamic, then this should also be
> dynamic.
Of course.

[...]
>>>>    +static void handle_rpc_func_alloc(struct optee_domain *ctx,
>>>> +                                  struct cpu_user_regs *regs)
>>>> +{
>>>> +    paddr_t ptr = get_user_reg(regs, 1) << 32 | get_user_reg(regs, 2);
>>>> +
>>>> +    if ( ptr & (OPTEE_MSG_NONCONTIG_PAGE_SIZE - 1) )
>>>> +        gprintk(XENLOG_WARNING, "Domain returned invalid RPC command buffer\n");
>>>
>>> Should not you bail-out in that case? Also, I would turn it to a gdprintk.
>> OP-TEE does own checks and that check will fail also. Then OP-TEE will
>> issue request to free this SHM.
>
> I think it is better if we go on the safe-side. I.e if we know there
> would be an error (like here), then you need to return an error in
> from Xen rather than calling OP-TEE. Otherwise, you may end up to
> nasty problem.
Actually, I don't see how I can do this. This is an RPC response. I
can't return error to RPC response. All I can do is to mangle RPC
response in a such way, that OP-TEE surely will treat it as an error and
act accordingly.

> Also, I think you want a comment (maybe in the commit message)
> explaining that OPTEE_MSG_NONCONTIG_PAGE_SIZE will always be equal to
> PAGE_SIZE.
It is always equal to 4096 bytes. But, AFAIK, XEN can work with other
PAGE_SIZEs, isn't? Actually, linux driver support other page sizes, it
just splits those pages into 4k chunks. The same can be done in XEN, but
I don't want to introduce this logic right now. The patches are complex
enough. Right now we have
BUILD_BUG_ON(PAGE_SIZE != OPTEE_MSG_NONCONTIG_PAGE_SIZE) and this is
enough, I hope.


--
Best regards, Volodymyr Babchuk

* Re: [PATCH v3 05/11] optee: add fast calls handling
  2019-01-17 20:10         ` Volodymyr Babchuk
@ 2019-01-18 15:29           ` Julien Grall
  0 siblings, 0 replies; 47+ messages in thread
From: Julien Grall @ 2019-01-18 15:29 UTC (permalink / raw)
  To: Volodymyr Babchuk
  Cc: Volodymyr Babchuk, xen-devel, Stefano Stabellini, tee-dev



On 17/01/2019 20:10, Volodymyr Babchuk wrote:
> 
> Hi,
> 
> Julien Grall writes:
> 
>> Hi,
>>
>> On 1/17/19 7:22 PM, Volodymyr Babchuk wrote:
>>> Julien Grall writes:
>>>
>>> [...]
>>>>> Because mediator can't share one static SHM region with all guests,
>>>>> it just disables it for all.
>>>>
>>>> I haven't seen an answer to my question on the previous version. For reminder:
>>> I'm sorry, I missed this somehow.
>>>
>>>
>>>> Would it make sense to still allow the hardware domain to access static SHM?
>>> Not really. If linux drivers sees that OP-TEE support dynamic SHM, it
>>> uses it by default. And all new versions of OP-TEE support dynamic SHM.
>>>
>>> So, it will add unnecessary code to enable feature which will not be used.
>>>
>>> But, actually, there is a caveat. Right now linux driver requires static SHM,
>>> even if it will not use it. I have submitted patch that fixes it, but
>>> it still not merged.
>>>
>>> So... what do you think? Should I add piece of code which is needed
>>> right now, but will not be used later?
>>
>> I think it would be nice to have OP-TEE working in Dom0 with current
>> Linux. For the guest, we can require a more recent Linux if there are
>> no other solution.
> 
> Okay, I'll take a look what can be done. Can I do this in a separate patch?

I am fine with that. Please mention it in the commit message though.

Cheers,

-- 
Julien Grall


* Re: [PATCH v3 07/11] optee: add support for RPC SHM buffers
  2019-01-17 21:01         ` Volodymyr Babchuk
@ 2019-01-18 15:38           ` Julien Grall
  2019-01-18 16:05             ` Volodymyr Babchuk
  0 siblings, 1 reply; 47+ messages in thread
From: Julien Grall @ 2019-01-18 15:38 UTC (permalink / raw)
  To: Volodymyr Babchuk
  Cc: Volodymyr Babchuk, xen-devel, Stefano Stabellini, tee-dev



On 17/01/2019 21:01, Volodymyr Babchuk wrote:
> 
> Hi,

Hi Volodymyr,

> Julien Grall writes:
> 
>> HI,
>>
>> On 1/17/19 7:48 PM, Volodymyr Babchuk wrote:
>>>
>>> Julien Grall writes:
>>>
>>>> Hi Volodymyr,
>>>>
>>>> On 18/12/2018 21:11, Volodymyr Babchuk wrote:
>>>>> From: Volodymyr Babchuk <vlad.babchuk@gmail.com>
>>>>>
> [...]
>>>>>
>>>>> diff --git a/xen/arch/arm/tee/optee.c b/xen/arch/arm/tee/optee.c
>>>>> index dc90e2ed8e..771148e940 100644
>>>>> --- a/xen/arch/arm/tee/optee.c
>>>>> +++ b/xen/arch/arm/tee/optee.c
>>>>> @@ -30,6 +30,12 @@
>>>>>      * OP-TEE spawns a thread for every standard call.
>>>>>      */
>>>>>     #define MAX_STD_CALLS   16
>>>>> +/*
>>>>> + * Maximal number of pre-allocated SHM buffers. OP-TEE generally asks
>>>>> + * for one SHM buffer per thread, so this also corresponds to OP-TEE
>>>>> + * option CFG_NUM_THREADS
>>>>> + */
>>>>
>>>> Same as patch #6 regarding CFG_NUM_THREADS.
>>> Right now OP-TEE will not allocate more than one buffer per OP-TEE
>>> thread. And I can see no reason why it would change. So, basically I can
>>> remove this MAX_RPC_SHMS at all and use MAX_STD_CALLS instead. But then
>>> it will be not so obvious, why I compare number of SHM buffers with
>>> number of std calls. Thus, I think it is good to have separate
>>> define and comment.
>>
>> I am not against have the 2 defines, what I was pointed out with the
>> documentation on top is incorrect as patch #6.
> I'm sorry, I don't quite understand. In patch #6 your concern was that
> CFG_NUM_THREADS depends on platform, right?

My first concern was the documentation does not reflect the reality because 
CFG_NUM_THREADS is not always equal to 16.

Ideally we should be able to know the number of threads supported. But that 
could be a follow-up patch (or potentially ignored) if nothing bad can happen 
when Xen handles more threads than OP-TEE does.

In any case, the documentation in Xen should reflect the reality.

> 
>> If you happen to make MAX_STD_CALLS dynamic, then this should also be
>> dynamic.
> Of course.
> 
> [...]
>>>>>     +static void handle_rpc_func_alloc(struct optee_domain *ctx,
>>>>> +                                  struct cpu_user_regs *regs)
>>>>> +{
>>>>> +    paddr_t ptr = get_user_reg(regs, 1) << 32 | get_user_reg(regs, 2);
>>>>> +
>>>>> +    if ( ptr & (OPTEE_MSG_NONCONTIG_PAGE_SIZE - 1) )
>>>>> +        gprintk(XENLOG_WARNING, "Domain returned invalid RPC command buffer\n");
>>>>
>>>> Should not you bail-out in that case? Also, I would turn it to a gdprintk.
>>> OP-TEE does own checks and that check will fail also. Then OP-TEE will
>>> issue request to free this SHM.
>>
>> I think it is better if we go on the safe-side. I.e if we know there
>> would be an error (like here), then you need to return an error in
>> from Xen rather than calling OP-TEE. Otherwise, you may end up to
>> nasty problem.
> Actually, I don't see how I can do this. This is an RPC response. I
> can't return error to RPC response. All I can do is to mangle RPC
> response in a such way, that OP-TEE surely will treat it as an error and
> act accordingly.

I am not sure to understand what you mean here. Surely if the address is not 
aligned, then OP-TEE will return an error too, right? So can't you emulate 
OP-TEE behavior in that case?

> 
>> Also, I think you want a comment (maybe in the commit message)
>> explaining that OPTEE_MSG_NONCONTIG_PAGE_SIZE will always be equal to
>> PAGE_SIZE.
> It is always equal to 4096 bytes. But, AFAIK, XEN can work with other
> PAGE_SIZEs, isn't? Actually, linux driver support other page sizes, it
> just splits those pages into 4k chunks. The same can be done in XEN, but
> I don't want to introduce this logic right now. The patches are complex
> enough. Right now we have
> BUILD_BUG_ON(PAGE_SIZE != OPTEE_MSG_NONCONTIG_PAGE_SIZE) and this is
> enough, I hope.

Xen only support 4KB page, but I wouldn't bet that in the future on Arm as we 
will require 64KB page for some features (i.e 52-bit support).

I wasn't asking you to handle a different PAGE_SIZE, just to clarify in a 
comment on top that you expect the both defined to be valid. Another 
BUILD_BUG_ON is not necessary assuming you add the one in patch #6.

Cheers,

-- 
Julien Grall


* Re: [PATCH v3 07/11] optee: add support for RPC SHM buffers
  2019-01-18 15:38           ` Julien Grall
@ 2019-01-18 16:05             ` Volodymyr Babchuk
  2019-01-18 17:51               ` Julien Grall
  0 siblings, 1 reply; 47+ messages in thread
From: Volodymyr Babchuk @ 2019-01-18 16:05 UTC (permalink / raw)
  To: Julien Grall
  Cc: Volodymyr Babchuk, xen-devel, Stefano Stabellini,
	Volodymyr Babchuk, tee-dev


Hi Julien,

Julien Grall writes:

[...]
>>>>>> diff --git a/xen/arch/arm/tee/optee.c b/xen/arch/arm/tee/optee.c
>>>>>> index dc90e2ed8e..771148e940 100644
>>>>>> --- a/xen/arch/arm/tee/optee.c
>>>>>> +++ b/xen/arch/arm/tee/optee.c
>>>>>> @@ -30,6 +30,12 @@
>>>>>>      * OP-TEE spawns a thread for every standard call.
>>>>>>      */
>>>>>>     #define MAX_STD_CALLS   16
>>>>>> +/*
>>>>>> + * Maximal number of pre-allocated SHM buffers. OP-TEE generally asks
>>>>>> + * for one SHM buffer per thread, so this also corresponds to OP-TEE
>>>>>> + * option CFG_NUM_THREADS
>>>>>> + */
>>>>>
>>>>> Same as patch #6 regarding CFG_NUM_THREADS.
>>>> Right now OP-TEE will not allocate more than one buffer per OP-TEE
>>>> thread. And I can see no reason why it would change. So, basically I can
>>>> remove this MAX_RPC_SHMS at all and use MAX_STD_CALLS instead. But then
>>>> it will be not so obvious, why I compare number of SHM buffers with
>>>> number of std calls. Thus, I think it is good to have separate
>>>> define and comment.
>>>
>>> I am not against have the 2 defines, what I was pointed out with the
>>> documentation on top is incorrect as patch #6.
>> I'm sorry, I don't quite understand. In patch #6 your concern was that
>> CFG_NUM_THREADS depends on platform, right?
>
> My first concern was the documentation does not reflect the reality
> because CFG_NUM_THREADS is not always equal to 16.

> Ideally we should be able to know the number of threads supported. But
> that could be a follow-up patch (or potentially ignored) if nothing
> bad can happen when Xen handles more threads than OP-TEE does.
No, it is a valid situation. The opposite (when OP-TEE can handle more
threads than Xen) can also be handled correctly.

> In any case, the documentation in Xen should reflect the reality.
Ah, okay, got it.

>>
>>> If you happen to make MAX_STD_CALLS dynamic, then this should also be
>>> dynamic.
>> Of course.
>>
>> [...]
>>>>>>     +static void handle_rpc_func_alloc(struct optee_domain *ctx,
>>>>>> +                                  struct cpu_user_regs *regs)
>>>>>> +{
>>>>>> +    paddr_t ptr = get_user_reg(regs, 1) << 32 | get_user_reg(regs, 2);
>>>>>> +
>>>>>> +    if ( ptr & (OPTEE_MSG_NONCONTIG_PAGE_SIZE - 1) )
>>>>>> +        gprintk(XENLOG_WARNING, "Domain returned invalid RPC command buffer\n");
>>>>>
>>>>> Should not you bail-out in that case? Also, I would turn it to a gdprintk.
>>>> OP-TEE does own checks and that check will fail also. Then OP-TEE will
>>>> issue request to free this SHM.
>>>
>>> I think it is better if we go on the safe-side. I.e if we know there
>>> would be an error (like here), then you need to return an error in
>>> from Xen rather than calling OP-TEE. Otherwise, you may end up to
>>> nasty problem.
>> Actually, I don't see how I can do this. This is an RPC response. I
>> can't return error to RPC response. All I can do is to mangle RPC
>> response in a such way, that OP-TEE surely will treat it as an error and
>> act accordingly.
>
> I am not sure to understand what you mean here. Surely if the address
> is not aligned, then OP-TEE will return an error too, right? So can't
> you emulate OP-TEE behavior in that case?
No, OP-TEE will not return an error. OP-TEE will handle it as an error.
For OP-TEE any RPC looks like a function call, so it expects some return
value - a buffer pointer in this case. If it gets an invalid pointer, it
issues another RPC to free this buffer and then propagates the error
back to the caller.

So, if I return an error back to the guest (or rather issue an RPC to
free the buffer), OP-TEE will still be blocked, waiting for a pointer
from the guest. Of course I can go further and emulate a new buffer
allocation RPC from the Xen side in the hope that the guest will provide
a valid buffer this time. But what if not? And why should I decide
instead of OP-TEE?

>
>>
>>> Also, I think you want a comment (maybe in the commit message)
>>> explaining that OPTEE_MSG_NONCONTIG_PAGE_SIZE will always be equal to
>>> PAGE_SIZE.
>> It is always equal to 4096 bytes. But, AFAIK, XEN can work with other
>> PAGE_SIZEs, isn't? Actually, linux driver support other page sizes, it
>> just splits those pages into 4k chunks. The same can be done in XEN, but
>> I don't want to introduce this logic right now. The patches are complex
>> enough. Right now we have
>> BUILD_BUG_ON(PAGE_SIZE != OPTEE_MSG_NONCONTIG_PAGE_SIZE) and this is
>> enough, I hope.
>
> Xen only support 4KB page, but I wouldn't bet that in the future on
> Arm as we will require 64KB page for some features (i.e 52-bit
> support).
So, I was mistaken. For some reason I thought that Xen could handle
other page sizes as well.

> I wasn't asking you to handle a different PAGE_SIZE, just to clarify
> in a comment on top that you expect the both defined to be
> valid. Another BUILD_BUG_ON is not necessary assuming you add the one
> in patch #6.
Okay, I'll add such comment.


--
Best regards,Volodymyr Babchuk

* Re: [PATCH v3 07/11] optee: add support for RPC SHM buffers
  2019-01-18 16:05             ` Volodymyr Babchuk
@ 2019-01-18 17:51               ` Julien Grall
  2019-01-18 18:29                 ` Volodymyr Babchuk
  0 siblings, 1 reply; 47+ messages in thread
From: Julien Grall @ 2019-01-18 17:51 UTC (permalink / raw)
  To: Volodymyr Babchuk
  Cc: Volodymyr Babchuk, xen-devel, Stefano Stabellini, tee-dev

Hi Volodymyr,

On 18/01/2019 16:05, Volodymyr Babchuk wrote:
> Julien Grall writes:
>>>> If you happen to make MAX_STD_CALLS dynamic, then this should also be
>>>> dynamic.
>>> Of course.
>>>
>>> [...]
>>>>>>>      +static void handle_rpc_func_alloc(struct optee_domain *ctx,
>>>>>>> +                                  struct cpu_user_regs *regs)
>>>>>>> +{
>>>>>>> +    paddr_t ptr = get_user_reg(regs, 1) << 32 | get_user_reg(regs, 2);
>>>>>>> +
>>>>>>> +    if ( ptr & (OPTEE_MSG_NONCONTIG_PAGE_SIZE - 1) )
>>>>>>> +        gprintk(XENLOG_WARNING, "Domain returned invalid RPC command buffer\n");
>>>>>>
>>>>>> Should not you bail-out in that case? Also, I would turn it to a gdprintk.
>>>>> OP-TEE does own checks and that check will fail also. Then OP-TEE will
>>>>> issue request to free this SHM.
>>>>
>>>> I think it is better if we go on the safe-side. I.e if we know there
>>>> would be an error (like here), then you need to return an error in
>>>> from Xen rather than calling OP-TEE. Otherwise, you may end up to
>>>> nasty problem.
>>> Actually, I don't see how I can do this. This is an RPC response. I
>>> can't return error to RPC response. All I can do is to mangle RPC
>>> response in a such way, that OP-TEE surely will treat it as an error and
>>> act accordingly.
>>
>> I am not sure to understand what you mean here. Surely if the address
>> is not aligned, then OP-TEE will return an error too, right? So can't
>> you emulate OP-TEE behavior in that case?
> No, OP-TEE will not return an error. OP-TEE will handle it as an error.

Will it? Looking at the code again, you will pass either 0 (when not able to 
translate the address) or page_to_maddr(...) value. So the value will get 
truncated by Xen with just a warning.

Is it the expected behavior?

> For OP-TEE any RPC looks like a function call, so it expects some return
> value - a buffer pointer in this case. If it gets an invalid pointer, it
> issues another RPC to free this buffer and then propagates the error
> back to the caller.
> 
> So, if I return an error back to the guest (or rather issue an RPC to
> free the buffer), OP-TEE will still be blocked, waiting for a pointer
> from the guest. Of course I can go further and emulate a new buffer
> allocation RPC from the Xen side in the hope that the guest will provide
> a valid buffer this time. But what if not? And why should I decide
> instead of OP-TEE?

My main concern here is that you print a warning and continue like nothing 
happened. As a reviewer, this is worrying because it invites weird behavior 
in Xen. We don't want that.

Looking at the code, why can't you pass 0 as you do if you fail to pin the pages?

Cheers,

-- 
Julien Grall


* Re: [PATCH v3 07/11] optee: add support for RPC SHM buffers
  2019-01-18 17:51               ` Julien Grall
@ 2019-01-18 18:29                 ` Volodymyr Babchuk
  0 siblings, 0 replies; 47+ messages in thread
From: Volodymyr Babchuk @ 2019-01-18 18:29 UTC (permalink / raw)
  To: Julien Grall
  Cc: Volodymyr Babchuk, xen-devel, Stefano Stabellini,
	Volodymyr Babchuk, tee-dev


Hi Julien,

Julien Grall writes:

> Hi Volodymyr,
>
> On 18/01/2019 16:05, Volodymyr Babchuk wrote:
>> Julien Grall writes:
>>>>> If you happen to make MAX_STD_CALLS dynamic, then this should also be
>>>>> dynamic.
>>>> Of course.
>>>>
>>>> [...]
>>>>>>>>      +static void handle_rpc_func_alloc(struct optee_domain *ctx,
>>>>>>>> +                                  struct cpu_user_regs *regs)
>>>>>>>> +{
>>>>>>>> +    paddr_t ptr = get_user_reg(regs, 1) << 32 | get_user_reg(regs, 2);
>>>>>>>> +
>>>>>>>> +    if ( ptr & (OPTEE_MSG_NONCONTIG_PAGE_SIZE - 1) )
>>>>>>>> +        gprintk(XENLOG_WARNING, "Domain returned invalid RPC command buffer\n");
>>>>>>>
>>>>>>> Should not you bail-out in that case? Also, I would turn it to a gdprintk.
>>>>>> OP-TEE does own checks and that check will fail also. Then OP-TEE will
>>>>>> issue request to free this SHM.
>>>>>
>>>>> I think it is better if we go on the safe-side. I.e if we know there
>>>>> would be an error (like here), then you need to return an error in
>>>>> from Xen rather than calling OP-TEE. Otherwise, you may end up to
>>>>> nasty problem.
>>>> Actually, I don't see how I can do this. This is an RPC response. I
>>>> can't return error to RPC response. All I can do is to mangle RPC
>>>> response in a such way, that OP-TEE surely will treat it as an error and
>>>> act accordingly.
>>>
>>> I am not sure to understand what you mean here. Surely if the address
>>> is not aligned, then OP-TEE will return an error too, right? So can't
>>> you emulate OP-TEE behavior in that case?
>> No, OP-TEE will not return an error. OP-TEE will handle it as an error.
>
> Will it?
Yes. It checks for page alignment in the same way as Xen does.

> Looking at the code again, you will pass either 0 (when not
> able to translate the address) or page_to_maddr(...) value. So the
> value will get truncated by Xen with just a warning.
>
> Is it the expected behavior?
No, this is a bug and I'll fix it.

>> For OP-TEE any RPC looks like a function call, so it expects some return
>> value - a buffer pointer in this case. If it gets an invalid pointer, it
>> issues another RPC to free this buffer and then propagates the error
>> back to the caller.
>>
>> So, if I return an error back to the guest (or rather issue an RPC to
>> free the buffer), OP-TEE will still be blocked, waiting for a pointer
>> from the guest. Of course I can go further and emulate a new buffer
>> allocation RPC from the Xen side in the hope that the guest will provide
>> a valid buffer this time. But what if not? And why should I decide
>> instead of OP-TEE?
>
> My main concern here is that you print a warning and continue like
> nothing happened. As a reviewer, this is worrying because it invites
> weird behavior in Xen. We don't want that.
Yep, I made a mistake there. I need to pass an invalid value to OP-TEE
to make sure that it will not mistake it for a semi-valid address.

> Looking at the code, why can't you pass 0 as you do if you fail to pin the pages?
This is exactly what I'm going to do.
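
I.e. handle_rpc_func_alloc() ends up roughly like this (just a sketch
of the intended fix, with the gdprintk change folded in):

    struct shm_rpc *shm_rpc;

    if ( ptr & (OPTEE_MSG_NONCONTIG_PAGE_SIZE - 1) )
    {
        gdprintk(XENLOG_WARNING,
                 "Domain returned invalid RPC command buffer\n");
        ptr = 0;
    }
    else
    {
        shm_rpc = allocate_and_pin_shm_rpc(ctx, ptr, cookie);
        ptr = shm_rpc ? page_to_maddr(shm_rpc->guest_page) : 0;
    }

    set_user_reg(regs, 1, ptr >> 32);
    set_user_reg(regs, 2, ptr & 0xFFFFFFFF);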

-- 
Best regards, Volodymyr Babchuk

* Re: [PATCH v3 08/11] optee: add support for arbitrary shared memory
  2018-12-18 21:11 ` [PATCH v3 08/11] optee: add support for arbitrary shared memory Volodymyr Babchuk
@ 2019-01-21 11:42   ` Julien Grall
  2019-02-19 15:59     ` Volodymyr Babchuk
  0 siblings, 1 reply; 47+ messages in thread
From: Julien Grall @ 2019-01-21 11:42 UTC (permalink / raw)
  To: Volodymyr Babchuk, xen-devel
  Cc: Volodymyr Babchuk, tee-dev, Stefano Stabellini

Hi Volodymyr,

On 18/12/2018 21:11, Volodymyr Babchuk wrote:
> From: Volodymyr Babchuk <vlad.babchuk@gmail.com>
> 
> Shared memory is widely used by NW to communicate with
> TAs in OP-TEE. NW can share part of its own memory with
> a TA or the OP-TEE core, by registering it in OP-TEE, or by providing
> a temporary reference. Either way, information about such memory
> buffers is sent to OP-TEE as a list of pages. This mechanism
> is described in optee_msg.h.
> 
> Mediator should step in when NW tries to share memory with
> OP-TEE for two reasons:
> 
> 1. Do address translation from IPA to PA.
> 2. Pin domain pages till they are mapped into OP-TEE or TA

Looking at the code, I think the pages are mapped while OP-TEE is using them. If 
so, it should be s/till/while/.

>     address space, so the domain can't transfer these pages to
>     another domain or balloon them out.
>
> Address translation is done by the translate_noncontig(...) function.
> It allocates a new buffer from the xenheap and then walks the
> guest-provided list of pages, translates addresses and stores PAs into
> the newly allocated buffer. This buffer will be provided to OP-TEE
> instead of the original buffer from the guest. This buffer will
> be freed at the end of the standard call.
> 
> At the same time this function pins the pages and stores them in a
> struct optee_shm_buf object. This object will live for as long as
> the given SHM buffer is known to OP-TEE. It will be freed
> after the guest unregisters the shared buffer. At this time the pages
> will be unpinned.
> 
> We don't need to do any special reference counting because OP-TEE
> tracks buffer on its side. So, mediator will unpin pages only
> when OP-TEE returns successfully from OPTEE_MSG_CMD_UNREGISTER_SHM
> call.
> 
> Signed-off-by: Volodymyr Babchuk <vlad.babchuk@gmail.com>
> ---
> 
>   Changes from v2:
>   - Made sure that guest does not tries to register shared buffer with
>     the same cookie twice
>   - Fixed coding style
>   - Use access_guest_memory_by_ipa() instead of direct memory mapping
> 
>   xen/arch/arm/tee/optee.c | 274 +++++++++++++++++++++++++++++++++++++++
>   1 file changed, 274 insertions(+)
> 
> diff --git a/xen/arch/arm/tee/optee.c b/xen/arch/arm/tee/optee.c
> index 771148e940..cfc3b34df7 100644
> --- a/xen/arch/arm/tee/optee.c
> +++ b/xen/arch/arm/tee/optee.c
> @@ -37,6 +37,10 @@
>    */
>   #define MAX_RPC_SHMS    MAX_STD_CALLS
>   
> +/* Maximum total number of pages that guest can share with OP-TEE */
> +#define MAX_TOTAL_SMH_BUF_PG    16384

Please explain in the commit message and code how you came up with this value.
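For what it's worth, 16384 pages of 4KB works out to 64MB of guest 
memory shared with OP-TEE at any one time; if that is the rationale, it 
should be spelled out.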

> +#define MAX_NONCONTIG_ENTRIES   5

If I understand the code below correctly, MAX_NONCONTIG_ENTRIES is basically 
linked to the number of parameters. The maximum number of parameters is 5.

I see in patch #6 you check have the following check

OPTEE_MSG_GET_ARG_SIZE(num_params) > OPTEE_MSG_NONCONTIG_PAGE_SIZE

It is not entirely obvious how this ensure you have no more than 5 parameters 
neither how you came up to this value. Can you please at least provide more 
documentation in the code explaining the origin of the value?

You might also want a

BUILD_BUG_ON(OPTEE_MSG_GET_ARG_SIZE(MAX_NONCONTIG_ENTRIES) == 
OPTEE_MSG_NONCONTIG_PAGE_SIZE).

> +
>   #define OPTEE_KNOWN_NSEC_CAPS OPTEE_SMC_NSEC_CAP_UNIPROCESSOR
>   #define OPTEE_KNOWN_SEC_CAPS (OPTEE_SMC_SEC_CAP_HAVE_RESERVED_SHM | \
>                                 OPTEE_SMC_SEC_CAP_UNREGISTERED_SHM |  \
> @@ -50,6 +54,9 @@ struct optee_std_call {
>       struct list_head list;
>       struct optee_msg_arg *xen_arg;
>       paddr_t guest_arg_ipa;
> +    /* Buffer for translated page addresses, shared with OP-TEE */
> +    void *non_contig[MAX_NONCONTIG_ENTRIES];
> +    int non_contig_order[MAX_NONCONTIG_ENTRIES];
>       int optee_thread_id;
>       int rpc_op;
>       bool in_flight;
> @@ -63,12 +70,23 @@ struct shm_rpc {
>       uint64_t cookie;
>   };
>   
> +/* Shared memory buffer for arbitrary data */
> +struct optee_shm_buf {
> +    struct list_head list;
> +    uint64_t cookie;
> +    unsigned int max_page_cnt;
> +    unsigned int page_cnt;

I am not convinced the two variables are useful. They actually add more confusion
because of the close naming.

Looking at the code, you add the pages contiguously in the array. So I think
page_cnt could easily be replaced by checking whether pages[N] is NULL.

> +    struct page_info *pages[];
> +};
> +
>   /* Domain context */
>   struct optee_domain {
>       struct list_head call_list;
>       struct list_head shm_rpc_list;
> +    struct list_head optee_shm_buf_list;
>       atomic_t call_count;
>       atomic_t shm_rpc_count;
> +    atomic_t optee_shm_buf_pages;
>       spinlock_t lock;
>   };
>   
> @@ -125,9 +143,11 @@ static int optee_enable(struct domain *d)
>   
>       INIT_LIST_HEAD(&ctx->call_list);
>       INIT_LIST_HEAD(&ctx->shm_rpc_list);
> +    INIT_LIST_HEAD(&ctx->optee_shm_buf_list);
>   
>       atomic_set(&ctx->call_count, 0);
>       atomic_set(&ctx->shm_rpc_count, 0);
> +    atomic_set(&ctx->optee_shm_buf_pages, 0);
>   
>       spin_lock_init(&ctx->lock);
>   
> @@ -325,12 +345,91 @@ static void free_shm_rpc(struct optee_domain *ctx, uint64_t cookie)
>       xfree(shm_rpc);
>   }
>   
> +static struct optee_shm_buf *allocate_optee_shm_buf(struct optee_domain *ctx,
> +                                                    uint64_t cookie,
> +                                                    unsigned int pages_cnt)
> +{
> +    struct optee_shm_buf *optee_shm_buf, *optee_shm_buf_tmp;
> +
> +    while ( true )

I would prefer if we avoid while ( true ) and use

do { } while ( atomic...(...) != old)
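
i.e. something like this (untested, just to show the shape):

    int old, new;

    do
    {
        old = atomic_read(&ctx->optee_shm_buf_pages);
        new = old + pages_cnt;
        if ( new >= MAX_TOTAL_SMH_BUF_PG )
            return NULL;
    } while ( atomic_cmpxchg(&ctx->optee_shm_buf_pages, old, new) != old );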

> +    {
> +        int old = atomic_read(&ctx->optee_shm_buf_pages);
> +        int new = old + pages_cnt;

Newline here please.

> +        if ( new >= MAX_TOTAL_SMH_BUF_PG )
> +            return NULL;
The limitation is in number of pages and quite high. What would prevent a guest
from registering shared memory page by page? If nothing, then I think you can end
up with some interesting issues in Xen because of the growing list and memory used.


> +        if ( likely(old == atomic_cmpxchg(&ctx->optee_shm_buf_pages,
> +                                          old, new)) )
> +            break;
> +    }
> +
> +    optee_shm_buf = xzalloc_bytes(sizeof(struct optee_shm_buf) +
> +                            pages_cnt * sizeof(struct page *));

Coding style: pages_cnt should be aligned with sizeof.

> +    if ( !optee_shm_buf )
> +        goto err;
> +
> +    optee_shm_buf->cookie = cookie;
> +    optee_shm_buf->max_page_cnt = pages_cnt;
> +
> +    spin_lock(&ctx->lock);
> +    /* Check if there is already SHM with the same cookie */
> +    list_for_each_entry( optee_shm_buf_tmp, &ctx->optee_shm_buf_list, list )
> +    {
> +        if ( optee_shm_buf_tmp->cookie == cookie )
> +        {
> +            spin_unlock(&ctx->lock);
> +            gprintk(XENLOG_WARNING, "Guest tries to use the same SHM buffer cookie");

I would use gdprintk(...) and add the cookie number for debug.

> +            goto err;
> +        }
> +    }
> +
> +    list_add_tail(&optee_shm_buf->list, &ctx->optee_shm_buf_list);
> +    spin_unlock(&ctx->lock);
> +
> +    return optee_shm_buf;
> +
> +err:
> +    atomic_sub(pages_cnt, &ctx->optee_shm_buf_pages);
> +    xfree(optee_shm_buf);
> +
> +    return NULL;
> +}
> +
> +static void free_optee_shm_buf(struct optee_domain *ctx, uint64_t cookie)
> +{
> +    struct optee_shm_buf *optee_shm_buf;
> +    bool found = false;
> +
> +    spin_lock(&ctx->lock);
> +    list_for_each_entry( optee_shm_buf, &ctx->optee_shm_buf_list, list )
> +    {
> +        if ( optee_shm_buf->cookie == cookie )
> +        {
> +            found = true;
> +            list_del(&optee_shm_buf->list);
> +            break;
> +        }
> +    }
> +    spin_unlock(&ctx->lock);
> +
> +    if ( !found )
> +        return;
> +
> +    for ( int i = 0; i < optee_shm_buf->page_cnt; i++ )

The variable should be declared at the beginning of the function. It should also 
be unsigned.

> +        if ( optee_shm_buf->pages[i] )
> +            put_page(optee_shm_buf->pages[i]);
> +
> +    atomic_sub(optee_shm_buf->max_page_cnt, &ctx->optee_shm_buf_pages);
> +
> +    xfree(optee_shm_buf);
> +}
> +
>   static void optee_domain_destroy(struct domain *d)
>   {
>       struct arm_smccc_res resp;
>       struct optee_std_call *call, *call_tmp;
>       struct optee_domain *ctx = d->arch.tee;
>       struct shm_rpc *shm_rpc, *shm_rpc_tmp;
> +    struct optee_shm_buf *optee_shm_buf, *optee_shm_buf_tmp;
>   
>       /* At this time all domain VCPUs should be stopped */
>   
> @@ -349,12 +448,177 @@ static void optee_domain_destroy(struct domain *d)
>       list_for_each_entry_safe( shm_rpc, shm_rpc_tmp, &ctx->shm_rpc_list, list )
>           free_shm_rpc(ctx, shm_rpc->cookie);
>   
> +    list_for_each_entry_safe( optee_shm_buf, optee_shm_buf_tmp,
> +                              &ctx->optee_shm_buf_list, list )
> +        free_optee_shm_buf(ctx, optee_shm_buf->cookie);
> +
>       ASSERT(!atomic_read(&ctx->call_count));
>       ASSERT(!atomic_read(&ctx->shm_rpc_count));
> +    ASSERT(!atomic_read(&ctx->optee_shm_buf_pages));
>   
>       xfree(d->arch.tee);
>   }
>   
> +#define PAGELIST_ENTRIES_PER_PAGE                       \
> +    ((OPTEE_MSG_NONCONTIG_PAGE_SIZE / sizeof(u64)) - 1)
> +
> +static size_t get_pages_list_size(size_t num_entries)
> +{
> +    int pages = DIV_ROUND_UP(num_entries, PAGELIST_ENTRIES_PER_PAGE);
> +
> +    return pages * OPTEE_MSG_NONCONTIG_PAGE_SIZE;
> +}
> +
> +static bool translate_noncontig(struct optee_domain *ctx,
> +                                struct optee_std_call *call,
> +                                struct optee_msg_param *param,
> +                                int idx)

The parameter idx should be unsigned int.

> +{
> +    /*
> +     * Refer to OPTEE_MSG_ATTR_NONCONTIG description in optee_msg.h for details.
> +     */
> +    uint64_t size;
> +    unsigned int page_offset;
> +    unsigned int num_pages;
> +    unsigned int order;
> +    unsigned int entries_on_page = 0;
> +    paddr_t gaddr;

Given the way you use this variable, I would prefer you to store a frame using gfn_t.

> +    struct page_info *guest_page;
> +    struct {
> +        uint64_t pages_list[PAGELIST_ENTRIES_PER_PAGE];
> +        uint64_t next_page_data;
> +    } *pages_data_guest, *pages_data_xen, *pages_data_xen_start;

The structure is the internal layout of OPTEE_MSG_ATTR_NONCONTIG, am I correct? If so,
the comment at the beginning of the function should be on top of the structure
with further clarification (i.e. this is the layout of the memory).

> +    struct optee_shm_buf *optee_shm_buf;
> +
> +    /* Offset of user buffer withing page */

NIT: s/withing/within/

> +    page_offset = param->u.tmem.buf_ptr & (OPTEE_MSG_NONCONTIG_PAGE_SIZE - 1);
> +
> +    /* Size of the user buffer in bytes */
> +    size = ROUNDUP(param->u.tmem.size + page_offset,
> +                   OPTEE_MSG_NONCONTIG_PAGE_SIZE);
> +
> +    num_pages = DIV_ROUND_UP(size, OPTEE_MSG_NONCONTIG_PAGE_SIZE);
> +
> +    order = get_order_from_bytes(get_pages_list_size(num_pages));
> +
> +    pages_data_xen_start = alloc_xenheap_pages(order, 0);

By using alloc_xenheap_pages, you may end up allocating more memory than
necessary when the order gets bigger. What is the biggest order you expect
here?

[...]

>   /*
>    * Copy command buffer into xen memory to:
>    * 1) Hide translated addresses from guest
> @@ -469,6 +733,14 @@ static void execute_std_call(struct optee_domain *ctx,
>   
>       copy_std_request_back(ctx, regs, call);
>   
> +    /*
> +     * If guest successfully unregistered own shared memory,
> +     * then we can unpin it's pages

NIT: Missing full stop.

> +     */
> +    if ( call->xen_arg->cmd == OPTEE_MSG_CMD_UNREGISTER_SHM &&
> +         call->xen_arg->ret == 0 )
> +        free_optee_shm_buf(ctx, call->xen_arg->params[0].u.rmem.shm_ref);
> +
>       put_std_call(ctx, call);
>       free_std_call(ctx, call);
>   }

Cheers,

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH v3 09/11] optee: add support for RPC commands
  2018-12-18 21:11 ` [PATCH v3 09/11] optee: add support for RPC commands Volodymyr Babchuk
@ 2019-01-21 11:58   ` Julien Grall
  2019-02-19 16:14     ` Volodymyr Babchuk
  0 siblings, 1 reply; 47+ messages in thread
From: Julien Grall @ 2019-01-21 11:58 UTC (permalink / raw)
  To: Volodymyr Babchuk, xen-devel
  Cc: Volodymyr Babchuk, tee-dev, Stefano Stabellini

Hi Volodymyr,

On 18/12/2018 21:11, Volodymyr Babchuk wrote:
> From: Volodymyr Babchuk <vlad.babchuk@gmail.com>
> 
> OP-TEE can issue multiple RPC requests. We are interested mostly in
> request that asks NW to allocate/free shared memory for OP-TEE
> needs, because mediator need to do address translation in the same

NIT: the mediator needs

> way as it was done for shared buffers registered by NW.
> 
> As mediator now accesses shared command buffer, we need to shadow
> it in the same way, as we shadow request buffers for STD calls.

This is a bit confusing, does it mean patch #8 is not doing the right thing?

> 
> Signed-off-by: Volodymyr Babchuk <vlad.babchuk@gmail.com>
> ---
> 
>   Changes from v2:
>   - Use access_guest_memory_by_ipa() instead of direct mapping
> 
>   xen/arch/arm/tee/optee.c | 136 +++++++++++++++++++++++++++++++++++++--
>   1 file changed, 130 insertions(+), 6 deletions(-)
> 
> diff --git a/xen/arch/arm/tee/optee.c b/xen/arch/arm/tee/optee.c
> index cfc3b34df7..bf3535946d 100644
> --- a/xen/arch/arm/tee/optee.c
> +++ b/xen/arch/arm/tee/optee.c
> @@ -67,6 +67,8 @@ struct optee_std_call {
>   struct shm_rpc {
>       struct list_head list;
>       struct page_info *guest_page;
> +    struct optee_msg_arg *xen_arg;
> +    paddr_t guest_ipa;
>       uint64_t cookie;
>   };
>   
> @@ -290,6 +292,11 @@ static struct shm_rpc *allocate_and_pin_shm_rpc(struct optee_domain *ctx,
>                                               P2M_ALLOC);
>       if ( !shm_rpc->guest_page )
>           goto err;
> +    shm_rpc->guest_ipa = gaddr;
> +
> +    shm_rpc->xen_arg = alloc_xenheap_page();

Based on the discussion in patch #6, I think you want to allocate the
memory from domheap.

> +    if ( !shm_rpc->xen_arg )
> +        goto err;
>   
>       shm_rpc->cookie = cookie;
>   
> @@ -313,6 +320,7 @@ static struct shm_rpc *allocate_and_pin_shm_rpc(struct optee_domain *ctx,
>   err:
>       atomic_dec(&ctx->shm_rpc_count);
>       put_page(shm_rpc->guest_page);
> +    free_xenheap_page(shm_rpc->xen_arg);
>       xfree(shm_rpc);
>   
>       return NULL;
> @@ -339,12 +347,32 @@ static void free_shm_rpc(struct optee_domain *ctx, uint64_t cookie)
>       if ( !found )
>           return;
>   
> +    free_xenheap_page(shm_rpc->xen_arg);
> +
>       ASSERT(shm_rpc->guest_page);
>       put_page(shm_rpc->guest_page);
>   
>       xfree(shm_rpc);
>   }
>   
> +static struct shm_rpc *find_shm_rpc(struct optee_domain *ctx, uint64_t cookie)
> +{
> +    struct shm_rpc *shm_rpc;
> +
> +    spin_lock(&ctx->lock);
> +    list_for_each_entry( shm_rpc, &ctx->shm_rpc_list, list )
> +    {
> +        if ( shm_rpc->cookie == cookie )
> +        {
> +                spin_unlock(&ctx->lock);
> +                return shm_rpc;
> +        }
> +    }
> +    spin_unlock(&ctx->lock);
> +
> +    return NULL;
> +}
> +
>   static struct optee_shm_buf *allocate_optee_shm_buf(struct optee_domain *ctx,
>                                                       uint64_t cookie,
>                                                       unsigned int pages_cnt)
> @@ -712,6 +740,33 @@ static void copy_std_request_back(struct optee_domain *ctx,
>       put_page(page);
>   }
>   
> +static void handle_rpc_return(struct optee_domain *ctx,
> +                             struct cpu_user_regs *regs,
> +                             struct optee_std_call *call)
> +{
> +    call->rpc_params[0] = get_user_reg(regs, 1);
> +    call->rpc_params[1] = get_user_reg(regs, 2);
> +    call->optee_thread_id = get_user_reg(regs, 3);
> +    call->rpc_op = OPTEE_SMC_RETURN_GET_RPC_FUNC(get_user_reg(regs, 0));
> +
> +    if ( call->rpc_op == OPTEE_SMC_RPC_FUNC_CMD )
> +    {
> +        /* Copy RPC request from shadowed buffer to guest */
> +        uint64_t cookie = get_user_reg(regs, 1) << 32 | get_user_reg(regs, 2);

I am afraid this is not going to work correctly. get_user_reg returns a 64-bit
value, so if the top bits are non-zero then the cookie value will actually be
wrong. It is not clear to me whether the bits [63:32] should be 0, so the best
is to ignore them.

Given you use a similar construction in a few places in the code, I would
introduce a new helper that takes 2 register indexes and returns the 64-bit value.
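
Something along these lines, purely as an illustration (helper name made up,
untested):

    static uint64_t regpair_to_uint64(struct cpu_user_regs *regs,
                                      unsigned int idx_hi, unsigned int idx_lo)
    {
        /* Only bits [31:0] of each register are used for the 64-bit value. */
        return ((uint64_t)(uint32_t)get_user_reg(regs, idx_hi) << 32) |
               (uint32_t)get_user_reg(regs, idx_lo);
    }

so the call site would become something like:

    uint64_t cookie = regpair_to_uint64(regs, 1, 2);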

> +        struct shm_rpc *shm_rpc = find_shm_rpc(ctx, cookie);

Newline here please.

> +        if ( !shm_rpc )
> +        {
> +            gprintk(XENLOG_ERR, "Can't find SHM-RPC with cookie %lx\n", cookie);
> +            return;
> +        }
> +        access_guest_memory_by_ipa(current->domain,
> +                        shm_rpc->guest_ipa,
> +                        shm_rpc->xen_arg,
> +                        OPTEE_MSG_GET_ARG_SIZE(shm_rpc->xen_arg->num_params),
> +                        true);
> +    }
> +}
> +
>   static void execute_std_call(struct optee_domain *ctx,
>                                struct cpu_user_regs *regs,
>                                struct optee_std_call *call)
> @@ -723,10 +778,7 @@ static void execute_std_call(struct optee_domain *ctx,
>       optee_ret = get_user_reg(regs, 0);
>       if ( OPTEE_SMC_RETURN_IS_RPC(optee_ret) )
>       {
> -        call->rpc_params[0] = get_user_reg(regs, 1);
> -        call->rpc_params[1] = get_user_reg(regs, 2);
> -        call->optee_thread_id = get_user_reg(regs, 3);
> -        call->rpc_op = OPTEE_SMC_RETURN_GET_RPC_FUNC(optee_ret);
> +        handle_rpc_return(ctx, regs, call);

Please avoid code movement and introduce the function in the patch that was 
adding this code.

>           put_std_call(ctx, call);
>           return;
>       }
> @@ -787,6 +839,78 @@ err:
>       return false;
>   }
>   
> +static void handle_rpc_cmd_alloc(struct optee_domain *ctx,
> +                                 struct cpu_user_regs *regs,
> +                                 struct optee_std_call *call,
> +                                 struct shm_rpc *shm_rpc)
> +{
> +    if ( shm_rpc->xen_arg->params[0].attr != (OPTEE_MSG_ATTR_TYPE_TMEM_OUTPUT |
> +                                            OPTEE_MSG_ATTR_NONCONTIG) )
> +    {
> +        gprintk(XENLOG_WARNING, "Invalid attrs for shared mem buffer\n");

I would use gdprintk here and print the attributes to help the developer.

> +        return;
> +    }
> +
> +    /* Last entry in non_contig array is used to hold RPC-allocated buffer */
> +    if ( call->non_contig[MAX_NONCONTIG_ENTRIES - 1] )
> +    {
> +        free_xenheap_pages(call->non_contig[MAX_NONCONTIG_ENTRIES - 1],
> +                           call->non_contig_order[MAX_NONCONTIG_ENTRIES - 1]);
> +        call->non_contig[MAX_NONCONTIG_ENTRIES - 1] = NULL;
> +    }
> +    translate_noncontig(ctx, call, shm_rpc->xen_arg->params + 0,
> +                        MAX_NONCONTIG_ENTRIES - 1);
> +}
> +
> +static void handle_rpc_cmd(struct optee_domain *ctx, struct cpu_user_regs *regs,
> +                           struct optee_std_call *call)
> +{
> +    struct shm_rpc *shm_rpc;
> +    uint64_t cookie;
> +    size_t arg_size;
> +
> +    cookie = get_user_reg(regs, 1) << 32 | get_user_reg(regs, 2);

See above.

> +
> +    shm_rpc = find_shm_rpc(ctx, cookie);
> +
> +    if ( !shm_rpc )
> +    {
> +        gprintk(XENLOG_ERR, "Can't find SHM-RPC with cookie %lx\n", cookie);
> +        return;
> +    }
> +
> +    /* First, copy only header to read number of arguments */
> +    access_guest_memory_by_ipa(current->domain, shm_rpc->guest_ipa,
> +                               shm_rpc->xen_arg, sizeof(struct optee_msg_arg),
> +                               false);

This can return an error.
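
e.g. (untested sketch):

    if ( access_guest_memory_by_ipa(current->domain, shm_rpc->guest_ipa,
                                    shm_rpc->xen_arg,
                                    sizeof(struct optee_msg_arg),
                                    false) )
        return;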

> +
> +    arg_size = OPTEE_MSG_GET_ARG_SIZE(shm_rpc->xen_arg->num_params);
> +    if ( arg_size > OPTEE_MSG_NONCONTIG_PAGE_SIZE )
> +        return;
> +
> +    /* Read the whole command structure */
> +    access_guest_memory_by_ipa(current->domain, shm_rpc->guest_ipa,
> +                               shm_rpc->xen_arg, arg_size, false);

Ditto.

> +
> +    switch (shm_rpc->xen_arg->cmd)
> +    {
> +    case OPTEE_MSG_RPC_CMD_GET_TIME:
> +        break;
> +    case OPTEE_MSG_RPC_CMD_WAIT_QUEUE:
> +        break;
> +    case OPTEE_MSG_RPC_CMD_SUSPEND:
> +        break;
> +    case OPTEE_MSG_RPC_CMD_SHM_ALLOC:
> +        handle_rpc_cmd_alloc(ctx, regs, call, shm_rpc);
> +        break;
> +    case OPTEE_MSG_RPC_CMD_SHM_FREE:
> +        free_optee_shm_buf(ctx, shm_rpc->xen_arg->params[0].u.value.b);
> +        break;
> +    default:
> +        break;
> +    }
> +}
> +
>   static void handle_rpc_func_alloc(struct optee_domain *ctx,
>                                     struct cpu_user_regs *regs)
>   {
> @@ -807,7 +931,7 @@ static void handle_rpc_func_alloc(struct optee_domain *ctx,
>               ptr = 0;
>           }
>           else
> -            ptr = page_to_maddr(shm_rpc->guest_page);
> +            ptr = virt_to_maddr(shm_rpc->xen_arg);
>   
>           set_user_reg(regs, 1, ptr >> 32);
>           set_user_reg(regs, 2, ptr & 0xFFFFFFFF);
> @@ -839,7 +963,7 @@ static bool handle_rpc(struct optee_domain *ctx, struct cpu_user_regs *regs)
>       case OPTEE_SMC_RPC_FUNC_FOREIGN_INTR:
>           break;
>       case OPTEE_SMC_RPC_FUNC_CMD:
> -        /* TODO: Add handling */
> +        handle_rpc_cmd(ctx, regs, call);
>           break;
>       }
>   
> 

Cheers,

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH v3 10/11] xl: add "tee" option for xl.cfg
  2018-12-18 21:11 ` [PATCH v3 10/11] xl: add "tee" option for xl.cfg Volodymyr Babchuk
@ 2019-01-21 12:08   ` Julien Grall
  0 siblings, 0 replies; 47+ messages in thread
From: Julien Grall @ 2019-01-21 12:08 UTC (permalink / raw)
  To: Volodymyr Babchuk, xen-devel
  Cc: Volodymyr Babchuk, Wei Liu, Ian Jackson, tee-dev

Hi Volodymyr,

On 18/12/2018 21:11, Volodymyr Babchuk wrote:
> From: Volodymyr Babchuk <vlad.babchuk@gmail.com>
> 
> This boolean option controls if TEE access is enabled for the domain.
> If access is enabled, xl will set appropriate flag in architecture
> configuration to ask hypervisor to enable TEE support.
> 
> Signed-off-by: Volodymyr Babchuk <vlad.babchuk@gmail.com>
> ---
>   Changes from v2:
>   - Use arch.tee_enabled instead of separate domctl
> 
>   docs/man/xl.cfg.pod.5.in    | 10 ++++++++++
>   tools/libxl/libxl_arm.c     |  2 ++
>   tools/libxl/libxl_create.c  |  1 +
>   tools/libxl/libxl_types.idl |  1 +
>   tools/xl/xl_parse.c         |  1 +
>   5 files changed, 15 insertions(+)
> 
> diff --git a/docs/man/xl.cfg.pod.5.in b/docs/man/xl.cfg.pod.5.in
> index b1c0be14cd..9a7064c951 100644
> --- a/docs/man/xl.cfg.pod.5.in
> +++ b/docs/man/xl.cfg.pod.5.in
> @@ -2793,6 +2793,16 @@ Currently, only the "sbsa_uart" model is supported for ARM.
>   
>   =back
>   
> +=over 4
> +
> +=item B<tee=BOOLEAN>
> +
> +Enable TEE support for the guest. Currently only OP-TEE is supported. If this
> +option is enabled, xl will create guest, which can access TEE. Also
> +OP-TEE node will be emitted into guest's device tree.
> +
> +=back
> +
>   =head3 x86
>   
>   =over 4
> diff --git a/tools/libxl/libxl_arm.c b/tools/libxl/libxl_arm.c
> index 141e159043..f8b4ef55e0 100644
> --- a/tools/libxl/libxl_arm.c
> +++ b/tools/libxl/libxl_arm.c
> @@ -89,6 +89,8 @@ int libxl__arch_domain_prepare_config(libxl__gc *gc,
>           return ERROR_FAIL;
>       }
>   
> +    config->arch.tee_enabled = libxl_defbool_val(d_config->b_info.tee);
> +
>       return 0;
>   }
>   
> diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
> index fa573344bc..bc9c4ee9ef 100644
> --- a/tools/libxl/libxl_create.c
> +++ b/tools/libxl/libxl_create.c
> @@ -219,6 +219,7 @@ int libxl__domain_build_info_setdefault(libxl__gc *gc,
>   
>       libxl__arch_domain_build_info_setdefault(gc, b_info);
>       libxl_defbool_setdefault(&b_info->dm_restrict, false);
> +    libxl_defbool_setdefault(&b_info->tee, false);
>   
>       switch (b_info->type) {
>       case LIBXL_DOMAIN_TYPE_HVM:
> diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
> index 51cf06a3a2..a634f6e704 100644
> --- a/tools/libxl/libxl_types.idl
> +++ b/tools/libxl/libxl_types.idl
> @@ -613,6 +613,7 @@ libxl_domain_build_info = Struct("domain_build_info",[
>       # Alternate p2m is not bound to any architecture or guest type, as it is
>       # supported by x86 HVM and ARM support is planned.
>       ("altp2m", libxl_altp2m_mode),
> +    ("tee",  libxl_defbool),

I would appreciate it if you addressed or at least replied to my comments on v2 [1].
As a reminder:

The option TEE is described in the Arm section, but the option is declared for
all architectures here. Do you see any use on x86?

Also, a new option should be accompanied by a define (e.g.
LIBXL_HAVE_BUILDINFO_TEE) in libxl.h informing toolstacks (e.g. libvirt) that the
option is available.

Lastly, I would consider introducing an enum here with, for now, only the
options NONE and NATIVE. This would give us some freedom to potentially emulate
a TEE in the future or even choose the TEE (i.e. in the Secure EL2 case) without
defining a new parameter.

Cheers,

[1] https://lists.xenproject.org/archives/html/xen-devel/2018-09/msg01004.html

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH v3 11/11] libxl: arm: create optee firmware node in DT if tee=1
  2018-12-18 21:11 ` [PATCH v3 11/11] libxl: arm: create optee firmware node in DT if tee=1 Volodymyr Babchuk
@ 2019-01-21 12:10   ` Julien Grall
  0 siblings, 0 replies; 47+ messages in thread
From: Julien Grall @ 2019-01-21 12:10 UTC (permalink / raw)
  To: Volodymyr Babchuk, xen-devel
  Cc: Volodymyr Babchuk, Wei Liu, Ian Jackson, tee-dev

Hi,

On 18/12/2018 21:11, Volodymyr Babchuk wrote:
> From: Volodymyr Babchuk <vlad.babchuk@gmail.com>
> 
> If TEE support is enabled with "tee=1" option in xl.cfg,
> then we need to inform guest about available TEE.
> 
> Currently only OP-TEE is supported, so we'll create DT
> node in a way that is expected by optee driver in linux.
> 
> Signed-off-by: Volodymyr Babchuk <vlad.babchuk@gmail.com>
> ---
>   tools/libxl/libxl_arm.c | 29 +++++++++++++++++++++++++++++
>   1 file changed, 29 insertions(+)
> 
> diff --git a/tools/libxl/libxl_arm.c b/tools/libxl/libxl_arm.c
> index f8b4ef55e0..befccebd19 100644
> --- a/tools/libxl/libxl_arm.c
> +++ b/tools/libxl/libxl_arm.c
> @@ -409,6 +409,32 @@ static int make_psci_node(libxl__gc *gc, void *fdt)
>       return 0;
>   }
>   
> +static int make_optee_node(libxl__gc *gc, void *fdt)
> +{
> +    int res;
> +    LOG(DEBUG, "Creating OP-TEE node in dtb");
> +
> +    res = fdt_begin_node(fdt, "firmware");
> +    if (res) return res;
> +
> +    res = fdt_begin_node(fdt, "optee");
> +    if (res) return res;
> +
> +    res = fdt_property_compat(gc, fdt, 1, "linaro,optee-tz");
> +    if (res) return res;
> +
> +    res = fdt_property_string(fdt, "method", "smc");

As said on v2, we want to use HVC interface and not SMC.

> +    if (res) return res;
> +
> +    res = fdt_end_node(fdt);
> +    if (res) return res;
> +
> +    res = fdt_end_node(fdt);
> +    if (res) return res;
> +
> +    return 0;
> +}
> +
>   static int make_memory_nodes(libxl__gc *gc, void *fdt,
>                                const struct xc_dom_image *dom)
>   {
> @@ -922,6 +948,9 @@ next_resize:
>           if (info->arch_arm.vuart == LIBXL_VUART_TYPE_SBSA_UART)
>               FDT( make_vpl011_uart_node(gc, fdt, ainfo, dom) );
>   
> +        if (libxl_defbool_val(info->tee))
> +            FDT( make_optee_node(gc, fdt));

As said on v2, there is a missing space before the last ).

> +
>           if (pfdt)
>               FDT( copy_partial_fdt(gc, fdt, pfdt) );
>   
> 

Cheers,

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH v3 08/11] optee: add support for arbitrary shared memory
  2019-01-21 11:42   ` Julien Grall
@ 2019-02-19 15:59     ` Volodymyr Babchuk
  2019-02-20 13:26       ` Julien Grall
  0 siblings, 1 reply; 47+ messages in thread
From: Volodymyr Babchuk @ 2019-02-19 15:59 UTC (permalink / raw)
  To: Julien Grall
  Cc: Volodymyr Babchuk, xen-devel, Stefano Stabellini,
	Volodymyr Babchuk, tee-dev

Hello Julien,

Julien Grall writes:

> Hi Volodymyr,
>
> On 18/12/2018 21:11, Volodymyr Babchuk wrote:
>> From: Volodymyr Babchuk <vlad.babchuk@gmail.com>
>>
>> Shared memory is widely used by NW to communicate with
>> TAs in OP-TEE. NW can share part of own memory with
>> TA or OP-TEE core, by registering it OP-TEE, or by providing
>> a temporal reference. Anyways, information about such memory
>> buffers are sent to OP-TEE as a list of pages. This mechanism
>> is described in optee_msg.h.
>>
>> Mediator should step in when NW tries to share memory with
>> OP-TEE for two reasons:
>>
>> 1. Do address translation from IPA to PA.
>> 2. Pin domain pages till they are mapped into OP-TEE or TA
>
> Looking at the code, I think the pages are mapped while OP-TEE is using
> them. If so, it should be s/till/while/.
>
>>     address space, so domain can't transfer this pages to
>>     other domain or balloon out them. >
>> Address translation is done by translate_noncontig(...) function.
>> It allocates new buffer from xenheap and then walks on guest
>> provided list of pages, translates addresses and stores PAs into
>> newly allocated buffer. This buffer will be provided to OP-TEE
>> instead of original buffer from the guest. This buffer will
>> be free at the end of standard call.
>>
>> In the same time this function pins pages and stores them in
>> struct optee_shm_buf object. This object will live all the time,
>> when given SHM buffer is known to OP-TEE. It will be freed
>> after guest unregisters shared buffer. At this time pages
>> will be unpinned.
>>
>> We don't need to do any special reference counting because OP-TEE
>> tracks buffer on its side. So, mediator will unpin pages only
>> when OP-TEE returns successfully from OPTEE_MSG_CMD_UNREGISTER_SHM
>> call.
>>
>> Signed-off-by: Volodymyr Babchuk <vlad.babchuk@gmail.com>
>> ---
>>
>>   Changes from v2:
>>   - Made sure that guest does not tries to register shared buffer with
>>     the same cookie twice
>>   - Fixed coding style
>>   - Use access_guest_memory_by_ipa() instead of direct memory mapping
>>
>>   xen/arch/arm/tee/optee.c | 274 +++++++++++++++++++++++++++++++++++++++
>>   1 file changed, 274 insertions(+)
>>
>> diff --git a/xen/arch/arm/tee/optee.c b/xen/arch/arm/tee/optee.c
>> index 771148e940..cfc3b34df7 100644
>> --- a/xen/arch/arm/tee/optee.c
>> +++ b/xen/arch/arm/tee/optee.c
>> @@ -37,6 +37,10 @@
>>    */
>>   #define MAX_RPC_SHMS    MAX_STD_CALLS
>>   +/* Maximum total number of pages that guest can share with OP-TEE
>> */
>> +#define MAX_TOTAL_SMH_BUF_PG    16384
>
> Please explain in the commit message and code how you came up with this value.

Basically it is taken off the top of my head. I just needed some number. You see,
the number of registered shared memory buffers depends solely on the free heap in
OP-TEE. But the same heap is used for other purposes, so it is hard to
tell how many pages can be shared with OP-TEE. I assumed that 64M ought
to be enough for anybody.
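
For reference, assuming 4 KiB pages: 16384 pages * 4 KiB = 64 MiB.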

Probably it is worth making this configurable via the domctl interface.

>> +#define MAX_NONCONTIG_ENTRIES   5
>
> If I understand correctly the code below, MAX_NONCONTIG_ENTRIES is
> basically linked to the number of parameters. The maximum number of
> parameters is 5.
> I see in patch #6 you have the following check
>
> OPTEE_MSG_GET_ARG_SIZE(num_params) > OPTEE_MSG_NONCONTIG_PAGE_SIZE
> It is not entirely obvious how this ensures you have no more than 5
> parameters, nor how you came up with this value. Can you please at
> least provide more documentation in the code explaining the origin of
> the value?
Sadly, it is not mentioned in the standard. But it is defined locally in
OP-TEE, in the "tee_api_defines.h" file, in the following way:

/* Not specified in the standard */
#define TEE_NUM_PARAMS  4

The fifth parameter is added to handle the RPC buffer in the same way as the other parameters.
Actually, I think I need to rework this, because the only hard limit on
the number of parameters is the check mentioned above:

OPTEE_MSG_GET_ARG_SIZE(num_params) > OPTEE_MSG_NONCONTIG_PAGE_SIZE


> You might also want a
>
> BUILD_BUG_ON(OPTEE_MSG_GET_ARG_SIZE(MAX_NONCONTIG_ENTRIES) ==
> OPTEE_MSG_NONCONTIG_PAGE_SIZE).
I think that

 BUILD_BUG_ON(OPTEE_MSG_GET_ARG_SIZE(MAX_NONCONTIG_ENTRIES) <=
 OPTEE_MSG_NONCONTIG_PAGE_SIZE).

would be better in this case. But yes, I think I need to revisit this part.

[...]

>> +        if ( new >= MAX_TOTAL_SMH_BUF_PG )
>> +            return NULL;
> The limitation is in number of pages and quite high. What would prevent
> a guest from registering shared memory page by page? If nothing, then I
> think you can end up with some interesting issues in Xen because of
> the growing list and memory used.
Are you suggesting to introduce limits both on the number of buffers and
on the total number of pages?

[...]
>> +    struct page_info *guest_page;
>> +    struct {
>> +        uint64_t pages_list[PAGELIST_ENTRIES_PER_PAGE];
>> +        uint64_t next_page_data;
>> +    } *pages_data_guest, *pages_data_xen, *pages_data_xen_start;
>
> The structure is the internal layout of OPTEE_MSG_ATTR_NONCONTIG, am I
> correct? If so, the comment at the beginning of the function should be
> on top of the structure with further clarification (i.e. this is the
> layout of the memory).
Yes, this is the layout of memory which is described in
optee_msg.h. I'll add a reference to this file in the comment.

[...]
>> +    page_offset = param->u.tmem.buf_ptr & (OPTEE_MSG_NONCONTIG_PAGE_SIZE - 1);
>> +
>> +    /* Size of the user buffer in bytes */
>> +    size = ROUNDUP(param->u.tmem.size + page_offset,
>> +                   OPTEE_MSG_NONCONTIG_PAGE_SIZE);
>> +
>> +    num_pages = DIV_ROUND_UP(size, OPTEE_MSG_NONCONTIG_PAGE_SIZE);
>> +
>> +    order = get_order_from_bytes(get_pages_list_size(num_pages));
>> +
>> +    pages_data_xen_start = alloc_xenheap_pages(order, 0);
>
> By using alloc_xenheap_pages, you may end up allocating more memory
> than necessary when the order gets bigger. What is the biggest
> order you expect here?
>
It depends on MAX_TOTAL_SMH_BUF_PG. One page can contain up to 511
entries, plus a reference to the next one. So, basically at max I will
need about 32 pages, which gives order 5-6.
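
To spell out the worst case (assuming OPTEE_MSG_NONCONTIG_PAGE_SIZE is 4096 bytes):

  entries to translate  = MAX_TOTAL_SMH_BUF_PG            = 16384
  entries per list page = PAGELIST_ENTRIES_PER_PAGE       = 511
  list pages needed     = DIV_ROUND_UP(16384, 511)        = 33
  order                 = get_order_from_bytes(33 * 4096) = 6
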
I think I can try xzalloc or domheap there. But it looks like there is no
xmem_pool for the domheap. So, I will still be forced to use
alloc_domheap_pages().

[...]

-- 
Best regards,
Volodymyr Babchuk
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH v3 09/11] optee: add support for RPC commands
  2019-01-21 11:58   ` Julien Grall
@ 2019-02-19 16:14     ` Volodymyr Babchuk
  2019-02-20 13:46       ` Julien Grall
  0 siblings, 1 reply; 47+ messages in thread
From: Volodymyr Babchuk @ 2019-02-19 16:14 UTC (permalink / raw)
  To: Julien Grall
  Cc: Volodymyr Babchuk, xen-devel, Stefano Stabellini,
	Volodymyr Babchuk, tee-dev


Hi Julien,

Julien Grall writes:

> Hi Volodymyr,
>
> On 18/12/2018 21:11, Volodymyr Babchuk wrote:
>> From: Volodymyr Babchuk <vlad.babchuk@gmail.com>
>>
>> OP-TEE can issue multiple RPC requests. We are interested mostly in
>> request that asks NW to allocate/free shared memory for OP-TEE
>> needs, because mediator need to do address translation in the same
>
> NIT: the mediator needs
>
>> way as it was done for shared buffers registered by NW.
>>
>> As mediator now accesses shared command buffer, we need to shadow
>> it in the same way, as we shadow request buffers for STD calls.
>
> This is a bit confusing, does it mean patch #8 is not doing the right thing?
No, it was patch #6 :)

And I can't say that it did something wrong. Remember that prior to the
last patch in the series a DomU can't use the mediator. And for Dom0 it is okay to
map the RPC command buffer directly. The description of patch #4 mentions that
we need all the patches in the series for a complete mediator.

>>
>> Signed-off-by: Volodymyr Babchuk <vlad.babchuk@gmail.com>
>> ---
>>
>>   Changes from v2:
>>   - Use access_guest_memory_by_ipa() instead of direct mapping
>>
>>   xen/arch/arm/tee/optee.c | 136 +++++++++++++++++++++++++++++++++++++--
>>   1 file changed, 130 insertions(+), 6 deletions(-)
>>
>> diff --git a/xen/arch/arm/tee/optee.c b/xen/arch/arm/tee/optee.c
>> index cfc3b34df7..bf3535946d 100644
>> --- a/xen/arch/arm/tee/optee.c
>> +++ b/xen/arch/arm/tee/optee.c
>> @@ -67,6 +67,8 @@ struct optee_std_call {
>>   struct shm_rpc {
>>       struct list_head list;
>>       struct page_info *guest_page;
>> +    struct optee_msg_arg *xen_arg;
>> +    paddr_t guest_ipa;
>>       uint64_t cookie;
>>   };
>>   @@ -290,6 +292,11 @@ static struct shm_rpc
>> *allocate_and_pin_shm_rpc(struct optee_domain *ctx,
>>                                               P2M_ALLOC);
>>       if ( !shm_rpc->guest_page )
>>           goto err;
>> +    shm_rpc->guest_ipa = gaddr;
>> +
>> +    shm_rpc->xen_arg = alloc_xenheap_page();
>
> Based on the discussion in patch #6, I think you want to
> allocate the memory from domheap.
Yes, will do.
I already did it for #6 and now there are a lot of places
where I am mapping/unmapping that page. So, it is somewhat
inconvenient...

It just occurred to me that I can map those pages when entering the
mediator and unmap them when leaving.
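
Roughly like this (untested sketch; it assumes shm_rpc keeps a
struct page_info *xen_arg_pg for the shadow page instead of a raw
xenheap pointer):

    /* On entry to the mediator: */
    struct optee_msg_arg *xen_arg =
        map_domain_page(page_to_mfn(shm_rpc->xen_arg_pg));

    /* ... work with xen_arg ... */

    /* On exit from the mediator: */
    unmap_domain_page(xen_arg);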

>> +    if ( !shm_rpc->xen_arg )
>> +        goto err;
>>         shm_rpc->cookie = cookie;
>>   @@ -313,6 +320,7 @@ static struct shm_rpc
>> *allocate_and_pin_shm_rpc(struct optee_domain *ctx,
>>   err:
>>       atomic_dec(&ctx->shm_rpc_count);
>>       put_page(shm_rpc->guest_page);
>> +    free_xenheap_page(shm_rpc->xen_arg);
>>       xfree(shm_rpc);
>>         return NULL;
>> @@ -339,12 +347,32 @@ static void free_shm_rpc(struct optee_domain *ctx, uint64_t cookie)
>>       if ( !found )
>>           return;
>>   +    free_xenheap_page(shm_rpc->xen_arg);
>> +
>>       ASSERT(shm_rpc->guest_page);
>>       put_page(shm_rpc->guest_page);
>>         xfree(shm_rpc);
>>   }
>>   +static struct shm_rpc *find_shm_rpc(struct optee_domain *ctx,
>> uint64_t cookie)
>> +{
>> +    struct shm_rpc *shm_rpc;
>> +
>> +    spin_lock(&ctx->lock);
>> +    list_for_each_entry( shm_rpc, &ctx->shm_rpc_list, list )
>> +    {
>> +        if ( shm_rpc->cookie == cookie )
>> +        {
>> +                spin_unlock(&ctx->lock);
>> +                return shm_rpc;
>> +        }
>> +    }
>> +    spin_unlock(&ctx->lock);
>> +
>> +    return NULL;
>> +}
>> +
>>   static struct optee_shm_buf *allocate_optee_shm_buf(struct optee_domain *ctx,
>>                                                       uint64_t cookie,
>>                                                       unsigned int pages_cnt)
>> @@ -712,6 +740,33 @@ static void copy_std_request_back(struct optee_domain *ctx,
>>       put_page(page);
>>   }
>>   +static void handle_rpc_return(struct optee_domain *ctx,
>> +                             struct cpu_user_regs *regs,
>> +                             struct optee_std_call *call)
>> +{
>> +    call->rpc_params[0] = get_user_reg(regs, 1);
>> +    call->rpc_params[1] = get_user_reg(regs, 2);
>> +    call->optee_thread_id = get_user_reg(regs, 3);
>> +    call->rpc_op = OPTEE_SMC_RETURN_GET_RPC_FUNC(get_user_reg(regs, 0));
>> +
>> +    if ( call->rpc_op == OPTEE_SMC_RPC_FUNC_CMD )
>> +    {
>> +        /* Copy RPC request from shadowed buffer to guest */
>> +        uint64_t cookie = get_user_reg(regs, 1) << 32 | get_user_reg(regs, 2);
>
> I am afraid this is not going to work correctly. get_user_reg returns a
> 64-bit value, so if the top bits are non-zero then the cookie value
> will actually be wrong. It is not clear to me whether the bits [63:32]
> should be 0, so the best is to ignore them.
> Given you use a similar construction in a few places in the code, I
> would introduce a new helper that takes 2 register indexes and returns
> the 64-bit value.
Good idea, thanks.

[...]

-- 
Best regards,
Volodymyr Babchuk
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH v3 08/11] optee: add support for arbitrary shared memory
  2019-02-19 15:59     ` Volodymyr Babchuk
@ 2019-02-20 13:26       ` Julien Grall
  0 siblings, 0 replies; 47+ messages in thread
From: Julien Grall @ 2019-02-20 13:26 UTC (permalink / raw)
  To: Volodymyr Babchuk
  Cc: Volodymyr Babchuk, xen-devel, Stefano Stabellini, tee-dev

On 19/02/2019 15:59, Volodymyr Babchuk wrote:
> Hello Julien,

Hi Volodymyr,

> Julien Grall writes:
> 
>> Hi Volodymyr,
>>
>> On 18/12/2018 21:11, Volodymyr Babchuk wrote:
>>> From: Volodymyr Babchuk <vlad.babchuk@gmail.com>
>>>
>>> Shared memory is widely used by NW to communicate with
>>> TAs in OP-TEE. NW can share part of own memory with
>>> TA or OP-TEE core, by registering it OP-TEE, or by providing
>>> a temporal reference. Anyways, information about such memory
>>> buffers are sent to OP-TEE as a list of pages. This mechanism
>>> is described in optee_msg.h.
>>>
>>> Mediator should step in when NW tries to share memory with
>>> OP-TEE for two reasons:
>>>
>>> 1. Do address translation from IPA to PA.
>>> 2. Pin domain pages till they are mapped into OP-TEE or TA
>>
>> Looking at the code, I think the pages are mapped while OP-TEE is using
>> them. If so, it should be s/till/while/.
>>
>>>      address space, so domain can't transfer this pages to
>>>      other domain or balloon out them. >
>>> Address translation is done by translate_noncontig(...) function.
>>> It allocates new buffer from xenheap and then walks on guest
>>> provided list of pages, translates addresses and stores PAs into
>>> newly allocated buffer. This buffer will be provided to OP-TEE
>>> instead of original buffer from the guest. This buffer will
>>> be free at the end of standard call.
>>>
>>> In the same time this function pins pages and stores them in
>>> struct optee_shm_buf object. This object will live all the time,
>>> when given SHM buffer is known to OP-TEE. It will be freed
>>> after guest unregisters shared buffer. At this time pages
>>> will be unpinned.
>>>
>>> We don't need to do any special reference counting because OP-TEE
>>> tracks buffer on its side. So, mediator will unpin pages only
>>> when OP-TEE returns successfully from OPTEE_MSG_CMD_UNREGISTER_SHM
>>> call.
>>>
>>> Signed-off-by: Volodymyr Babchuk <vlad.babchuk@gmail.com>
>>> ---
>>>
>>>    Changes from v2:
>>>    - Made sure that guest does not tries to register shared buffer with
>>>      the same cookie twice
>>>    - Fixed coding style
>>>    - Use access_guest_memory_by_ipa() instead of direct memory mapping
>>>
>>>    xen/arch/arm/tee/optee.c | 274 +++++++++++++++++++++++++++++++++++++++
>>>    1 file changed, 274 insertions(+)
>>>
>>> diff --git a/xen/arch/arm/tee/optee.c b/xen/arch/arm/tee/optee.c
>>> index 771148e940..cfc3b34df7 100644
>>> --- a/xen/arch/arm/tee/optee.c
>>> +++ b/xen/arch/arm/tee/optee.c
>>> @@ -37,6 +37,10 @@
>>>     */
>>>    #define MAX_RPC_SHMS    MAX_STD_CALLS
>>>    +/* Maximum total number of pages that guest can share with OP-TEE
>>> */
>>> +#define MAX_TOTAL_SMH_BUF_PG    16384
>>
>> Please explain in the commit message and code how you came up with this value.
> 
> Basically it is taken off the top of my head. I just needed some number. You see,
> the number of registered shared memory buffers depends solely on the free heap in
> OP-TEE. But the same heap is used for other purposes, so it is hard to
> tell how many pages can be shared with OP-TEE. I assumed that 64M ought
> to be enough for anybody.

Such an explanation would be fine for me in the commit message and the code. The
goal is to log how we came up with the value (even if it is random). This would
help us if we need to refine the value in the future.

> 
> Probably it is worth making this configurable via the domctl interface.
It is not strictly necessary for now. We can refine it in the future if needed.

[...]

>>> +    page_offset = param->u.tmem.buf_ptr & (OPTEE_MSG_NONCONTIG_PAGE_SIZE - 1);
>>> +
>>> +    /* Size of the user buffer in bytes */
>>> +    size = ROUNDUP(param->u.tmem.size + page_offset,
>>> +                   OPTEE_MSG_NONCONTIG_PAGE_SIZE);
>>> +
>>> +    num_pages = DIV_ROUND_UP(size, OPTEE_MSG_NONCONTIG_PAGE_SIZE);
>>> +
>>> +    order = get_order_from_bytes(get_pages_list_size(num_pages));
>>> +
>>> +    pages_data_xen_start = alloc_xenheap_pages(order, 0);
>>
>> By using alloc_xenheap_pages, you may end up allocating more memory
>> than necessary when the order gets bigger. What is the biggest
>> order you expect here?
>>
> It depends on MAX_TOTAL_SMH_BUF_PG. One page can contain up to 511
> entries, plus a reference to the next one. So, basically at max I will
> need about 32 pages, which gives order 5-6.
> I think I can try xzalloc or domheap there. But it looks like there is no
> xmem_pool for the domheap. So, I will still be forced to use
> alloc_domheap_pages().
xmem_pool is not used when you allocate a buffer larger than PAGE_SIZE. Instead,
xmalloc will allocate a bigger order and free the unused pages (see
xmalloc_whole_pages).

It is not overly critical to allocate more for now, but it would be nice to add
it as a TODO in the code.

Cheers,

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH v3 09/11] optee: add support for RPC commands
  2019-02-19 16:14     ` Volodymyr Babchuk
@ 2019-02-20 13:46       ` Julien Grall
  0 siblings, 0 replies; 47+ messages in thread
From: Julien Grall @ 2019-02-20 13:46 UTC (permalink / raw)
  To: Volodymyr Babchuk
  Cc: Volodymyr Babchuk, xen-devel, Stefano Stabellini, tee-dev

Hi,

On 19/02/2019 16:14, Volodymyr Babchuk wrote:
> 
> Hi Julien,
> 
> Julien Grall writes:
> 
>> Hi Volodymyr,
>>
>> On 18/12/2018 21:11, Volodymyr Babchuk wrote:
>>> From: Volodymyr Babchuk <vlad.babchuk@gmail.com>
>>>
>>> OP-TEE can issue multiple RPC requests. We are interested mostly in
>>> request that asks NW to allocate/free shared memory for OP-TEE
>>> needs, because mediator need to do address translation in the same
>>
>> NIT: the mediator needs
>>
>>> way as it was done for shared buffers registered by NW.
>>>
>>> As mediator now accesses shared command buffer, we need to shadow
>>> it in the same way, as we shadow request buffers for STD calls.
>>
>> This is a bit confusing, does it mean patch #8 is not doing the right thing?
> No, it was patch #6 :)
> 
> And I can't say that it did something wrong. Remember that prior to the
> last patch in the series a DomU can't use the mediator. And for Dom0 it is okay to
> map the RPC command buffer directly. The description of patch #4 mentions that
> we need all the patches in the series for a complete mediator.

Not all the memory in Dom0 is 1:1 mapped. So you may end up using the wrong
address here.

But, it is not very intuitive to have to read the commit message of patch #4 to
understand that patch #8 is fixing a flaw in patch #6.

Technically, an earlier patch should not have allowed the shared command buffer
to be used until now. While I appreciate it is hard to split a big series, we at
least need to write clear commit messages. In that case "now accesses" clearly
leads one to think something wrong has been done before. So a reminder in the
commit message would help the reviewer here.

> 
>>>
>>> Signed-off-by: Volodymyr Babchuk <vlad.babchuk@gmail.com>
>>> ---
>>>
>>>    Changes from v2:
>>>    - Use access_guest_memory_by_ipa() instead of direct mapping
>>>
>>>    xen/arch/arm/tee/optee.c | 136 +++++++++++++++++++++++++++++++++++++--
>>>    1 file changed, 130 insertions(+), 6 deletions(-)
>>>
>>> diff --git a/xen/arch/arm/tee/optee.c b/xen/arch/arm/tee/optee.c
>>> index cfc3b34df7..bf3535946d 100644
>>> --- a/xen/arch/arm/tee/optee.c
>>> +++ b/xen/arch/arm/tee/optee.c
>>> @@ -67,6 +67,8 @@ struct optee_std_call {
>>>    struct shm_rpc {
>>>        struct list_head list;
>>>        struct page_info *guest_page;
>>> +    struct optee_msg_arg *xen_arg;
>>> +    paddr_t guest_ipa;
>>>        uint64_t cookie;
>>>    };
>>>    @@ -290,6 +292,11 @@ static struct shm_rpc
>>> *allocate_and_pin_shm_rpc(struct optee_domain *ctx,
>>>                                                P2M_ALLOC);
>>>        if ( !shm_rpc->guest_page )
>>>            goto err;
>>> +    shm_rpc->guest_ipa = gaddr;
>>> +
>>> +    shm_rpc->xen_arg = alloc_xenheap_page();
>>
>> Based on the discussion in patch #6, I think you want to
>> allocate the memory from domheap.
> Yes, will do.
> I already did it for #6 and now there are a lot of places
> where I am mapping/unmapping that page. So, it is somewhat
> inconvenient...

Well, security is always inconvenient until we find a flaw ;).

Cheers,

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 47+ messages in thread

end of thread, other threads:[~2019-02-20 13:47 UTC | newest]

Thread overview: 47+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-12-18 21:11 [PATCH v3 00/11] TEE mediator (and OP-TEE) support in XEN Volodymyr Babchuk
2018-12-18 21:11 ` [PATCH v3 01/11] arm: add generic TEE mediator framework Volodymyr Babchuk
2019-01-15 17:50   ` Julien Grall
2018-12-18 21:11 ` [PATCH v3 03/11] arm: tee: add OP-TEE header files Volodymyr Babchuk
2018-12-18 21:11 ` [PATCH v3 02/11] arm: add tee_enabled flag to xen_arch_domainconfig Volodymyr Babchuk
2019-01-15 17:35   ` Julien Grall
2019-01-16 17:22     ` Volodymyr Babchuk
2019-01-16 17:27       ` Julien Grall
2018-12-18 21:11 ` [PATCH v3 04/11] optee: add OP-TEE mediator skeleton Volodymyr Babchuk
2019-01-15 18:15   ` Julien Grall
2019-01-16 17:31     ` Volodymyr Babchuk
2019-01-16 17:51   ` Julien Grall
2018-12-18 21:11 ` [PATCH v3 05/11] optee: add fast calls handling Volodymyr Babchuk
2019-01-17 18:31   ` Julien Grall
2019-01-17 19:22     ` Volodymyr Babchuk
2019-01-17 20:02       ` Julien Grall
2019-01-17 20:10         ` Volodymyr Babchuk
2019-01-18 15:29           ` Julien Grall
2018-12-18 21:11 ` [PATCH v3 06/11] optee: add std call handling Volodymyr Babchuk
2019-01-16 18:45   ` Julien Grall
2019-01-17 15:21     ` Volodymyr Babchuk
2019-01-17 18:18       ` Julien Grall
2019-01-17 19:13         ` Volodymyr Babchuk
2019-01-17 20:00           ` Julien Grall
2019-01-17 20:14             ` Volodymyr Babchuk
2018-12-18 21:11 ` [PATCH v3 07/11] optee: add support for RPC SHM buffers Volodymyr Babchuk
2019-01-17 18:49   ` Julien Grall
2019-01-17 19:48     ` Volodymyr Babchuk
2019-01-17 20:08       ` Julien Grall
2019-01-17 21:01         ` Volodymyr Babchuk
2019-01-18 15:38           ` Julien Grall
2019-01-18 16:05             ` Volodymyr Babchuk
2019-01-18 17:51               ` Julien Grall
2019-01-18 18:29                 ` Volodymyr Babchuk
2018-12-18 21:11 ` [PATCH v3 08/11] optee: add support for arbitrary shared memory Volodymyr Babchuk
2019-01-21 11:42   ` Julien Grall
2019-02-19 15:59     ` Volodymyr Babchuk
2019-02-20 13:26       ` Julien Grall
2018-12-18 21:11 ` [PATCH v3 09/11] optee: add support for RPC commands Volodymyr Babchuk
2019-01-21 11:58   ` Julien Grall
2019-02-19 16:14     ` Volodymyr Babchuk
2019-02-20 13:46       ` Julien Grall
2018-12-18 21:11 ` [PATCH v3 11/11] libxl: arm: create optee firmware node in DT if tee=1 Volodymyr Babchuk
2019-01-21 12:10   ` Julien Grall
2018-12-18 21:11 ` [PATCH v3 10/11] xl: add "tee" option for xl.cfg Volodymyr Babchuk
2019-01-21 12:08   ` Julien Grall
2018-12-19 13:25 ` [PATCH v3 00/11] TEE mediator (and OP-TEE) support in XEN Julien Grall
