* [PATCH 00/18] arm/altp2m: Introducing altp2m to ARM.
@ 2016-07-04 11:45 Sergej Proskurin
  2016-07-04 11:45 ` [PATCH 01/18] arm/altp2m: Add cmd-line support for altp2m on ARM Sergej Proskurin
                   ` (36 more replies)
  0 siblings, 37 replies; 126+ messages in thread
From: Sergej Proskurin @ 2016-07-04 11:45 UTC (permalink / raw)
  To: xen-devel; +Cc: Sergej Proskurin

Hello everyone,

Since this is my first contribution to the Xen development mailing list, I
would like to briefly introduce myself. My name is Sergej Proskurin. I am a
PhD student at the Technical University of Munich. My research focuses on
Virtual Machine Introspection, Hypervisor/OS Security, and Reverse Engineering.

The following patch series can be found on Github[0] and is part of my
contribution to this year's Google Summer of Code (GSoC)[1]. My project is
managed by the organization The Honeynet Project. As part of GSoC, I am being
supervised by the Xen developer Tamas K. Lengyel <tamas@tklengyel.com>, George
D. Webster, and Steven Maresca.

In this patch series, we provide an implementation of the altp2m subsystem for
ARM. Our implementation is based on the altp2m subsystem for x86 and provides
additional (alternate) views on the guest's physical memory by means of the
ARM second stage translation mechanism. The patches introduce new HVMOPs and
extend the p2m subsystem. We also extend libxl to support altp2m on ARM and
modify xen-access to test the proposed functionality.

[0] https://github.com/sergej-proskurin/xen (Branch arm-altp2m-patch)
[1] https://summerofcode.withgoogle.com/projects/#4970052843470848

Sergej Proskurin (18):
  arm/altp2m: Add cmd-line support for altp2m on ARM.
  arm/altp2m: Add first altp2m HVMOP stubs.
  arm/altp2m: Add HVMOP_altp2m_get_domain_state.
  arm/altp2m: Add altp2m init/teardown routines.
  arm/altp2m: Add HVMOP_altp2m_set_domain_state.
  arm/altp2m: Add a(p2m) table flushing routines.
  arm/altp2m: Add HVMOP_altp2m_create_p2m.
  arm/altp2m: Add HVMOP_altp2m_destroy_p2m.
  arm/altp2m: Add HVMOP_altp2m_switch_p2m.
  arm/altp2m: Renamed and extended p2m_alloc_table.
  arm/altp2m: Make flush_tlb_domain ready for altp2m.
  arm/altp2m: Cosmetic fixes - function prototypes.
  arm/altp2m: Make get_page_from_gva ready for altp2m.
  arm/altp2m: Add HVMOP_altp2m_set_mem_access.
  arm/altp2m: Add altp2m paging mechanism.
  arm/altp2m: Extended libxl to activate altp2m on ARM.
  arm/altp2m: Adjust debug information to altp2m.
  arm/altp2m: Extend xen-access for altp2m on ARM.

 tools/libxl/libxl_create.c          |   1 +
 tools/libxl/libxl_dom.c             |  14 +
 tools/libxl/libxl_types.idl         |   1 +
 tools/libxl/xl_cmdimpl.c            |   5 +
 tools/tests/xen-access/xen-access.c |  11 +-
 xen/arch/arm/Makefile               |   1 +
 xen/arch/arm/altp2m.c               |  68 +++
 xen/arch/arm/domain.c               |   2 +-
 xen/arch/arm/guestcopy.c            |   6 +-
 xen/arch/arm/hvm.c                  | 145 ++++++
 xen/arch/arm/p2m.c                  | 930 ++++++++++++++++++++++++++++++++----
 xen/arch/arm/traps.c                | 104 +++-
 xen/include/asm-arm/altp2m.h        |  12 +-
 xen/include/asm-arm/domain.h        |  18 +
 xen/include/asm-arm/hvm/hvm.h       |  59 +++
 xen/include/asm-arm/mm.h            |   2 +-
 xen/include/asm-arm/p2m.h           |  72 ++-
 17 files changed, 1330 insertions(+), 121 deletions(-)
 create mode 100644 xen/arch/arm/altp2m.c
 create mode 100644 xen/include/asm-arm/hvm/hvm.h

-- 
2.8.3


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 126+ messages in thread

* [PATCH 01/18] arm/altp2m: Add cmd-line support for altp2m on ARM.
  2016-07-04 11:45 [PATCH 00/18] arm/altp2m: Introducing altp2m to ARM Sergej Proskurin
@ 2016-07-04 11:45 ` Sergej Proskurin
  2016-07-04 12:15   ` Andrew Cooper
                     ` (2 more replies)
  2016-07-04 11:45 ` [PATCH 02/18] arm/altp2m: Add first altp2m HVMOP stubs Sergej Proskurin
                   ` (35 subsequent siblings)
  36 siblings, 3 replies; 126+ messages in thread
From: Sergej Proskurin @ 2016-07-04 11:45 UTC (permalink / raw)
  To: xen-devel; +Cc: Sergej Proskurin, Julien Grall, Stefano Stabellini

The Xen altp2m subsystem is currently supported only on x86-64
architectures. By utilizing ARM's virtualization extensions, we intend
to implement altp2m support for ARM and thus further extend the current
Virtual Machine Introspection (VMI) capabilities on ARM.

With this commit, Xen can activate altp2m support on ARM by means of
the boolean command-line argument 'altp2m'.
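For context, on ARM the Xen command line is typically supplied through the
device tree. A hypothetical U-Boot sequence enabling the new option is shown
below; the addresses are omitted and the remaining command-line tokens are
placeholders, not taken from this series — only the `altp2m=1` token is the
option this patch introduces:

```shell
# Hypothetical U-Boot setup: append the new boolean option to Xen's bootargs.
# Everything besides "altp2m=1" is a placeholder.
setenv xen_bootargs "altp2m=1 console=dtuart dtuart=serial0"
fdt set /chosen xen,xen-bootargs "${xen_bootargs}"
```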

Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
---
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Julien Grall <julien.grall@arm.com>
---
 xen/arch/arm/hvm.c            | 22 ++++++++++++++++++++
 xen/include/asm-arm/hvm/hvm.h | 47 +++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 69 insertions(+)
 create mode 100644 xen/include/asm-arm/hvm/hvm.h

diff --git a/xen/arch/arm/hvm.c b/xen/arch/arm/hvm.c
index d999bde..3615036 100644
--- a/xen/arch/arm/hvm.c
+++ b/xen/arch/arm/hvm.c
@@ -32,6 +32,28 @@
 
 #include <asm/hypercall.h>
 
+#include <asm/hvm/hvm.h>
+
+/* Xen command-line option enabling altp2m */
+static bool_t __initdata opt_altp2m_enabled = 0;
+boolean_param("altp2m", opt_altp2m_enabled);
+
+struct hvm_function_table hvm_funcs __read_mostly = {
+    .name = "ARM_HVM",
+};
+
+/* Initcall enabling hvm functionality. */
+static int __init hvm_enable(void)
+{
+    if ( opt_altp2m_enabled )
+        hvm_funcs.altp2m_supported = 1;
+    else
+        hvm_funcs.altp2m_supported = 0;
+
+    return 0;
+}
+presmp_initcall(hvm_enable);
+
 long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     long rc = 0;
diff --git a/xen/include/asm-arm/hvm/hvm.h b/xen/include/asm-arm/hvm/hvm.h
new file mode 100644
index 0000000..96c455c
--- /dev/null
+++ b/xen/include/asm-arm/hvm/hvm.h
@@ -0,0 +1,47 @@
+/*
+ * include/asm-arm/hvm/hvm.h
+ *
+ * Copyright (c) 2016, Sergej Proskurin <proskurin@sec.in.tum.de>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License, version 2,
+ * as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT ANY
+ * WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
+ * FOR A PARTICULAR PURPOSE.  See the GNU General Public License for more
+ * details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef __ASM_ARM_HVM_HVM_H__
+#define __ASM_ARM_HVM_HVM_H__
+
+struct hvm_function_table {
+    char *name;
+
+    /* Necessary hardware support for alternate p2m's. */
+    bool_t altp2m_supported;
+};
+
+extern struct hvm_function_table hvm_funcs;
+
+/* Returns true if hardware supports alternate p2m's */
+static inline bool_t hvm_altp2m_supported(void)
+{
+    return hvm_funcs.altp2m_supported;
+}
+
+#endif /* __ASM_ARM_HVM_HVM_H__ */
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
-- 
2.8.3



* [PATCH 02/18] arm/altp2m: Add first altp2m HVMOP stubs.
  2016-07-04 11:45 [PATCH 00/18] arm/altp2m: Introducing altp2m to ARM Sergej Proskurin
  2016-07-04 11:45 ` [PATCH 01/18] arm/altp2m: Add cmd-line support for altp2m on ARM Sergej Proskurin
@ 2016-07-04 11:45 ` Sergej Proskurin
  2016-07-04 13:36   ` Julien Grall
  2016-07-05 10:19   ` Julien Grall
  2016-07-04 11:45 ` [PATCH 03/18] arm/altp2m: Add HVMOP_altp2m_get_domain_state Sergej Proskurin
                   ` (34 subsequent siblings)
  36 siblings, 2 replies; 126+ messages in thread
From: Sergej Proskurin @ 2016-07-04 11:45 UTC (permalink / raw)
  To: xen-devel; +Cc: Sergej Proskurin, Julien Grall, Stefano Stabellini

This commit moves the altp2m-related code from x86 to ARM. Functions
that are not yet supported return an error to the caller or print a BUG
message stating their absence.

Also, the struct arch_domain is extended with the altp2m_active
attribute, representing the domain's current altp2m activity
configuration.

Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
---
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Julien Grall <julien.grall@arm.com>
---
 xen/arch/arm/hvm.c           | 82 ++++++++++++++++++++++++++++++++++++++++++++
 xen/include/asm-arm/altp2m.h | 22 ++++++++++--
 xen/include/asm-arm/domain.h |  3 ++
 3 files changed, 105 insertions(+), 2 deletions(-)

diff --git a/xen/arch/arm/hvm.c b/xen/arch/arm/hvm.c
index 3615036..1118f22 100644
--- a/xen/arch/arm/hvm.c
+++ b/xen/arch/arm/hvm.c
@@ -32,6 +32,7 @@
 
 #include <asm/hypercall.h>
 
+#include <asm/altp2m.h>
 #include <asm/hvm/hvm.h>
 
 /* Xen command-line option enabling altp2m */
@@ -54,6 +55,83 @@ static int __init hvm_enable(void)
 }
 presmp_initcall(hvm_enable);
 
+static int do_altp2m_op(XEN_GUEST_HANDLE_PARAM(void) arg)
+{
+    struct xen_hvm_altp2m_op a;
+    struct domain *d = NULL;
+    int rc = 0;
+
+    if ( !hvm_altp2m_supported() )
+        return -EOPNOTSUPP;
+
+    if ( copy_from_guest(&a, arg, 1) )
+        return -EFAULT;
+
+    if ( a.pad1 || a.pad2 ||
+         (a.version != HVMOP_ALTP2M_INTERFACE_VERSION) ||
+         (a.cmd < HVMOP_altp2m_get_domain_state) ||
+         (a.cmd > HVMOP_altp2m_change_gfn) )
+        return -EINVAL;
+
+    d = (a.cmd != HVMOP_altp2m_vcpu_enable_notify) ?
+        rcu_lock_domain_by_any_id(a.domain) : rcu_lock_current_domain();
+
+    if ( d == NULL )
+        return -ESRCH;
+
+    if ( (a.cmd != HVMOP_altp2m_get_domain_state) &&
+         (a.cmd != HVMOP_altp2m_set_domain_state) &&
+         !d->arch.altp2m_active )
+    {
+        rc = -EOPNOTSUPP;
+        goto out;
+    }
+
+    if ( (rc = xsm_hvm_altp2mhvm_op(XSM_TARGET, d)) )
+        goto out;
+
+    switch ( a.cmd )
+    {
+    case HVMOP_altp2m_get_domain_state:
+        rc = -EOPNOTSUPP;
+        break;
+
+    case HVMOP_altp2m_set_domain_state:
+        rc = -EOPNOTSUPP;
+        break;
+
+    case HVMOP_altp2m_vcpu_enable_notify:
+        rc = -EOPNOTSUPP;
+        break;
+
+    case HVMOP_altp2m_create_p2m:
+        rc = -EOPNOTSUPP;
+        break;
+
+    case HVMOP_altp2m_destroy_p2m:
+        rc = -EOPNOTSUPP;
+        break;
+
+    case HVMOP_altp2m_switch_p2m:
+        rc = -EOPNOTSUPP;
+        break;
+
+    case HVMOP_altp2m_set_mem_access:
+        rc = -EOPNOTSUPP;
+        break;
+
+    case HVMOP_altp2m_change_gfn:
+        rc = -EOPNOTSUPP;
+        break;
+    }
+
+out:
+    rcu_unlock_domain(d);
+
+    return rc;
+}
+
+
 long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     long rc = 0;
@@ -102,6 +180,10 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
             rc = -EINVAL;
         break;
 
+    case HVMOP_altp2m:
+        rc = do_altp2m_op(arg);
+        break;
+
     default:
     {
         gdprintk(XENLOG_DEBUG, "HVMOP op=%lu: not implemented\n", op);
diff --git a/xen/include/asm-arm/altp2m.h b/xen/include/asm-arm/altp2m.h
index a87747a..16ae9d6 100644
--- a/xen/include/asm-arm/altp2m.h
+++ b/xen/include/asm-arm/altp2m.h
@@ -2,6 +2,7 @@
  * Alternate p2m
  *
  * Copyright (c) 2014, Intel Corporation.
+ * Copyright (c) 2016, Sergej Proskurin <proskurin@sec.in.tum.de>.
  *
  * This program is free software; you can redistribute it and/or modify it
  * under the terms and conditions of the GNU General Public License,
@@ -24,8 +25,7 @@
 /* Alternate p2m on/off per domain */
 static inline bool_t altp2m_active(const struct domain *d)
 {
-    /* Not implemented on ARM. */
-    return 0;
+    return d->arch.altp2m_active;
 }
 
 /* Alternate p2m VCPU */
@@ -36,4 +36,22 @@ static inline uint16_t altp2m_vcpu_idx(const struct vcpu *v)
     return 0;
 }
 
+static inline void altp2m_vcpu_initialise(struct vcpu *v)
+{
+    /* Not implemented on ARM, should not be reached. */
+    BUG();
+}
+
+static inline void altp2m_vcpu_destroy(struct vcpu *v)
+{
+    /* Not implemented on ARM, should not be reached. */
+    BUG();
+}
+
+static inline void altp2m_vcpu_reset(struct vcpu *v)
+{
+    /* Not implemented on ARM, should not be reached. */
+    BUG();
+}
+
 #endif /* __ASM_ARM_ALTP2M_H */
diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
index 979f7de..2039f16 100644
--- a/xen/include/asm-arm/domain.h
+++ b/xen/include/asm-arm/domain.h
@@ -127,6 +127,9 @@ struct arch_domain
     paddr_t efi_acpi_gpa;
     paddr_t efi_acpi_len;
 #endif
+
+    /* altp2m: allow multiple copies of host p2m */
+    bool_t altp2m_active;
 }  __cacheline_aligned;
 
 struct arch_vcpu
-- 
2.8.3



* [PATCH 03/18] arm/altp2m: Add HVMOP_altp2m_get_domain_state.
  2016-07-04 11:45 [PATCH 00/18] arm/altp2m: Introducing altp2m to ARM Sergej Proskurin
  2016-07-04 11:45 ` [PATCH 01/18] arm/altp2m: Add cmd-line support for altp2m on ARM Sergej Proskurin
  2016-07-04 11:45 ` [PATCH 02/18] arm/altp2m: Add first altp2m HVMOP stubs Sergej Proskurin
@ 2016-07-04 11:45 ` Sergej Proskurin
  2016-07-04 11:45 ` [PATCH 04/18] arm/altp2m: Add altp2m init/teardown routines Sergej Proskurin
                   ` (33 subsequent siblings)
  36 siblings, 0 replies; 126+ messages in thread
From: Sergej Proskurin @ 2016-07-04 11:45 UTC (permalink / raw)
  To: xen-devel; +Cc: Sergej Proskurin, Julien Grall, Stefano Stabellini

This commit adopts the x86 HVMOP_altp2m_get_domain_state implementation.

Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
---
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Julien Grall <julien.grall@arm.com>
---
 xen/arch/arm/hvm.c | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/hvm.c b/xen/arch/arm/hvm.c
index 1118f22..8e8e0f7 100644
--- a/xen/arch/arm/hvm.c
+++ b/xen/arch/arm/hvm.c
@@ -93,7 +93,14 @@ static int do_altp2m_op(XEN_GUEST_HANDLE_PARAM(void) arg)
     switch ( a.cmd )
     {
     case HVMOP_altp2m_get_domain_state:
-        rc = -EOPNOTSUPP;
+        if ( !d->arch.hvm_domain.params[HVM_PARAM_ALTP2M] )
+        {
+            rc = -EINVAL;
+            break;
+        }
+
+        a.u.domain_state.state = altp2m_active(d);
+        rc = __copy_to_guest(arg, &a, 1) ? -EFAULT : 0;
         break;
 
     case HVMOP_altp2m_set_domain_state:
-- 
2.8.3



* [PATCH 04/18] arm/altp2m: Add altp2m init/teardown routines.
  2016-07-04 11:45 [PATCH 00/18] arm/altp2m: Introducing altp2m to ARM Sergej Proskurin
                   ` (2 preceding siblings ...)
  2016-07-04 11:45 ` [PATCH 03/18] arm/altp2m: Add HVMOP_altp2m_get_domain_state Sergej Proskurin
@ 2016-07-04 11:45 ` Sergej Proskurin
  2016-07-04 15:17   ` Julien Grall
  2016-07-04 16:15   ` Julien Grall
  2016-07-04 11:45 ` [PATCH 05/18] arm/altp2m: Add HVMOP_altp2m_set_domain_state Sergej Proskurin
                   ` (32 subsequent siblings)
  36 siblings, 2 replies; 126+ messages in thread
From: Sergej Proskurin @ 2016-07-04 11:45 UTC (permalink / raw)
  To: xen-devel; +Cc: Sergej Proskurin, Julien Grall, Stefano Stabellini

The p2m initialization now invokes initialization routines responsible
for the allocation and initialization of altp2m structures. The same
applies to the teardown routines. The functionality has been adopted
from the x86 altp2m implementation.

Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
---
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Julien Grall <julien.grall@arm.com>
---
 xen/arch/arm/p2m.c            | 166 ++++++++++++++++++++++++++++++++++++++++--
 xen/include/asm-arm/domain.h  |   6 ++
 xen/include/asm-arm/hvm/hvm.h |  12 +++
 xen/include/asm-arm/p2m.h     |  20 +++++
 4 files changed, 198 insertions(+), 6 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index aa4e774..e72ca7a 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1400,19 +1400,103 @@ static void p2m_free_vmid(struct domain *d)
     spin_unlock(&vmid_alloc_lock);
 }
 
-void p2m_teardown(struct domain *d)
+static int p2m_initialise(struct domain *d, struct p2m_domain *p2m)
 {
-    struct p2m_domain *p2m = &d->arch.p2m;
+    int ret = 0;
+
+    spin_lock_init(&p2m->lock);
+    INIT_PAGE_LIST_HEAD(&p2m->pages);
+
+    spin_lock(&p2m->lock);
+
+    p2m->domain = d;
+    p2m->access_required = false;
+    p2m->mem_access_enabled = false;
+    p2m->default_access = p2m_access_rwx;
+    p2m->p2m_class = p2m_host;
+    p2m->root = NULL;
+
+    /* Adopt VMID of the associated domain */
+    p2m->vmid = d->arch.p2m.vmid;
+    p2m->vttbr.vttbr = 0;
+    p2m->vttbr.vttbr_vmid = p2m->vmid;
+
+    p2m->max_mapped_gfn = 0;
+    p2m->lowest_mapped_gfn = ULONG_MAX;
+    radix_tree_init(&p2m->mem_access_settings);
+
+    spin_unlock(&p2m->lock);
+
+    return ret;
+}
+
+static void p2m_free_one(struct p2m_domain *p2m)
+{
+    mfn_t mfn;
+    unsigned int i;
     struct page_info *pg;
 
     spin_lock(&p2m->lock);
 
     while ( (pg = page_list_remove_head(&p2m->pages)) )
-        free_domheap_page(pg);
+        if ( pg != p2m->root )
+            free_domheap_page(pg);
+
+    for ( i = 0; i < P2M_ROOT_PAGES; i++ )
+    {
+        mfn = _mfn(page_to_mfn(p2m->root) + i);
+        clear_domain_page(mfn);
+    }
+    free_domheap_pages(p2m->root, P2M_ROOT_ORDER);
+    p2m->root = NULL;
+
+    radix_tree_destroy(&p2m->mem_access_settings, NULL);
+
+    spin_unlock(&p2m->lock);
+
+    xfree(p2m);
+}
+
+static struct p2m_domain *p2m_init_one(struct domain *d)
+{
+    struct p2m_domain *p2m = xzalloc(struct p2m_domain);
+
+    if ( !p2m )
+        return NULL;
+
+    if ( p2m_initialise(d, p2m) )
+        goto free_p2m;
+
+    return p2m;
+
+free_p2m:
+    xfree(p2m);
+    return NULL;
+}
+
+static void p2m_teardown_hostp2m(struct domain *d)
+{
+    struct p2m_domain *p2m = p2m_get_hostp2m(d);
+    struct page_info *pg = NULL;
+    mfn_t mfn;
+    unsigned int i;
+
+    spin_lock(&p2m->lock);
 
-    if ( p2m->root )
-        free_domheap_pages(p2m->root, P2M_ROOT_ORDER);
+    while ( (pg = page_list_remove_head(&p2m->pages)) )
+        if ( pg != p2m->root )
+        {
+            mfn = _mfn(page_to_mfn(pg));
+            clear_domain_page(mfn);
+            free_domheap_page(pg);
+        }
 
+    for ( i = 0; i < P2M_ROOT_PAGES; i++ )
+    {
+        mfn = _mfn(page_to_mfn(p2m->root) + i);
+        clear_domain_page(mfn);
+    }
+    free_domheap_pages(p2m->root, P2M_ROOT_ORDER);
     p2m->root = NULL;
 
     p2m_free_vmid(d);
@@ -1422,7 +1506,7 @@ void p2m_teardown(struct domain *d)
     spin_unlock(&p2m->lock);
 }
 
-int p2m_init(struct domain *d)
+static int p2m_init_hostp2m(struct domain *d)
 {
     struct p2m_domain *p2m = &d->arch.p2m;
     int rc = 0;
@@ -1437,6 +1521,8 @@ int p2m_init(struct domain *d)
     if ( rc != 0 )
         goto err;
 
+    p2m->vttbr.vttbr_vmid = p2m->vmid;
+
     d->arch.vttbr = 0;
 
     p2m->root = NULL;
@@ -1454,6 +1540,74 @@ err:
     return rc;
 }
 
+static void p2m_teardown_altp2m(struct domain *d)
+{
+    unsigned int i;
+    struct p2m_domain *p2m;
+
+    for ( i = 0; i < MAX_ALTP2M; i++ )
+    {
+        if ( !d->arch.altp2m_p2m[i] )
+            continue;
+
+        p2m = d->arch.altp2m_p2m[i];
+        p2m_free_one(p2m);
+        d->arch.altp2m_vttbr[i] = INVALID_MFN;
+        d->arch.altp2m_p2m[i] = NULL;
+    }
+
+    d->arch.altp2m_active = false;
+}
+
+static int p2m_init_altp2m(struct domain *d)
+{
+    unsigned int i;
+    struct p2m_domain *p2m;
+
+    spin_lock_init(&d->arch.altp2m_lock);
+    for ( i = 0; i < MAX_ALTP2M; i++ )
+    {
+        d->arch.altp2m_vttbr[i] = INVALID_MFN;
+        d->arch.altp2m_p2m[i] = p2m = p2m_init_one(d);
+        if ( p2m == NULL )
+        {
+            p2m_teardown_altp2m(d);
+            return -ENOMEM;
+        }
+        p2m->p2m_class = p2m_alternate;
+        p2m->access_required = 1;
+        _atomic_set(&p2m->active_vcpus, 0);
+    }
+
+    return 0;
+}
+
+void p2m_teardown(struct domain *d)
+{
+    /*
+     * We must teardown altp2m unconditionally because
+     * we initialise it unconditionally.
+     */
+    p2m_teardown_altp2m(d);
+
+    p2m_teardown_hostp2m(d);
+}
+
+int p2m_init(struct domain *d)
+{
+    int rc = 0;
+
+    rc = p2m_init_hostp2m(d);
+    if ( rc )
+        return rc;
+
+    rc = p2m_init_altp2m(d);
+    if ( rc )
+        p2m_teardown_hostp2m(d);
+
+    return rc;
+}
+
 int relinquish_p2m_mapping(struct domain *d)
 {
     struct p2m_domain *p2m = &d->arch.p2m;
diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
index 2039f16..6b9770f 100644
--- a/xen/include/asm-arm/domain.h
+++ b/xen/include/asm-arm/domain.h
@@ -29,6 +29,9 @@ enum domain_type {
 #define is_64bit_domain(d) (0)
 #endif
 
+#define MAX_ALTP2M      10 /* arbitrary */
+#define INVALID_ALTP2M  0xffff
+
 extern int dom0_11_mapping;
 #define is_domain_direct_mapped(d) ((d) == hardware_domain && dom0_11_mapping)
 
@@ -130,6 +133,9 @@ struct arch_domain
 
     /* altp2m: allow multiple copies of host p2m */
     bool_t altp2m_active;
+    struct p2m_domain *altp2m_p2m[MAX_ALTP2M];
+    spinlock_t altp2m_lock;
+    uint64_t altp2m_vttbr[MAX_ALTP2M];
 }  __cacheline_aligned;
 
 struct arch_vcpu
diff --git a/xen/include/asm-arm/hvm/hvm.h b/xen/include/asm-arm/hvm/hvm.h
index 96c455c..28d5298 100644
--- a/xen/include/asm-arm/hvm/hvm.h
+++ b/xen/include/asm-arm/hvm/hvm.h
@@ -19,6 +19,18 @@
 #ifndef __ASM_ARM_HVM_HVM_H__
 #define __ASM_ARM_HVM_HVM_H__
 
+struct vttbr_data {
+    union {
+        struct {
+            u64 vttbr_baddr :40, /* variable res0: from 0-(x-1) bit */
+                res1        :8,
+                vttbr_vmid  :8,
+                res2        :8;
+        };
+        u64 vttbr;
+    };
+};
+
 struct hvm_function_table {
     char *name;
 
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index 0d1e61e..a78d547 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -8,6 +8,9 @@
 #include <xen/p2m-common.h>
 #include <public/memory.h>
 
+#include <asm/atomic.h>
+#include <asm/hvm/hvm.h>
+
 #define paddr_bits PADDR_BITS
 
 /* Holds the bit size of IPAs in p2m tables.  */
@@ -17,6 +20,11 @@ struct domain;
 
 extern void memory_type_changed(struct domain *);
 
+typedef enum {
+    p2m_host,
+    p2m_alternate,
+} p2m_class_t;
+
 /* Per-p2m-table state */
 struct p2m_domain {
     /* Lock that protects updates to the p2m */
@@ -66,6 +74,18 @@ struct p2m_domain {
     /* Radix tree to store the p2m_access_t settings as the pte's don't have
      * enough available bits to store this information. */
     struct radix_tree_root mem_access_settings;
+
+    /* Alternate p2m: count of vcpu's currently using this p2m. */
+    atomic_t active_vcpus;
+
+    /* Choose between: host/alternate */
+    p2m_class_t p2m_class;
+
+    /* Back pointer to domain */
+    struct domain *domain;
+
+    /* VTTBR information */
+    struct vttbr_data vttbr;
 };
 
 /* List of possible type for each page in the p2m entry.
-- 
2.8.3



* [PATCH 05/18] arm/altp2m: Add HVMOP_altp2m_set_domain_state.
  2016-07-04 11:45 [PATCH 00/18] arm/altp2m: Introducing altp2m to ARM Sergej Proskurin
                   ` (3 preceding siblings ...)
  2016-07-04 11:45 ` [PATCH 04/18] arm/altp2m: Add altp2m init/teardown routines Sergej Proskurin
@ 2016-07-04 11:45 ` Sergej Proskurin
  2016-07-04 15:39   ` Julien Grall
  2016-07-04 11:45 ` [PATCH 06/18] arm/altp2m: Add a(p2m) table flushing routines Sergej Proskurin
                   ` (31 subsequent siblings)
  36 siblings, 1 reply; 126+ messages in thread
From: Sergej Proskurin @ 2016-07-04 11:45 UTC (permalink / raw)
  To: xen-devel; +Cc: Sergej Proskurin, Julien Grall, Stefano Stabellini

The HVMOP_altp2m_set_domain_state HVMOP allows activating altp2m on a
specific domain. This commit adopts the x86
HVMOP_altp2m_set_domain_state implementation. The function
p2m_flush_altp2m is currently implemented as a stub.

Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
---
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Julien Grall <julien.grall@arm.com>
---
 xen/arch/arm/Makefile        |  1 +
 xen/arch/arm/altp2m.c        | 68 ++++++++++++++++++++++++++++++++++++++++++++
 xen/arch/arm/hvm.c           | 30 ++++++++++++++++++-
 xen/arch/arm/p2m.c           | 46 ++++++++++++++++++++++++++++++
 xen/include/asm-arm/altp2m.h | 20 ++-----------
 xen/include/asm-arm/domain.h |  9 ++++++
 xen/include/asm-arm/p2m.h    | 19 +++++++++++++
 7 files changed, 175 insertions(+), 18 deletions(-)
 create mode 100644 xen/arch/arm/altp2m.c

diff --git a/xen/arch/arm/Makefile b/xen/arch/arm/Makefile
index 9e38da3..abd6f1a 100644
--- a/xen/arch/arm/Makefile
+++ b/xen/arch/arm/Makefile
@@ -41,6 +41,7 @@ obj-y += decode.o
 obj-y += processor.o
 obj-y += smc.o
 obj-$(CONFIG_LIVEPATCH) += livepatch.o
+obj-y += altp2m.o
 
 #obj-bin-y += ....o
 
diff --git a/xen/arch/arm/altp2m.c b/xen/arch/arm/altp2m.c
new file mode 100644
index 0000000..1d2505f
--- /dev/null
+++ b/xen/arch/arm/altp2m.c
@@ -0,0 +1,68 @@
+/*
+ * arch/arm/altp2m.c
+ *
+ * Alternate p2m
+ * Copyright (c) 2016 Sergej Proskurin <proskurin@sec.in.tum.de>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License, version 2,
+ * as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT ANY
+ * WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
+ * FOR A PARTICULAR PURPOSE.  See the GNU General Public License for more
+ * details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <asm/p2m.h>
+#include <asm/altp2m.h>
+#include <asm/hvm/hvm.h>
+
+void altp2m_vcpu_reset(struct vcpu *v)
+{
+    struct altp2mvcpu *av = &vcpu_altp2m(v);
+
+    av->p2midx = INVALID_ALTP2M;
+}
+
+void altp2m_vcpu_initialise(struct vcpu *v)
+{
+    if ( v != current )
+        vcpu_pause(v);
+
+    altp2m_vcpu_reset(v);
+    vcpu_altp2m(v).p2midx = 0;
+    atomic_inc(&p2m_get_altp2m(v)->active_vcpus);
+
+    if ( v != current )
+        vcpu_unpause(v);
+}
+
+void altp2m_vcpu_destroy(struct vcpu *v)
+{
+    struct p2m_domain *p2m;
+
+    if ( v != current )
+        vcpu_pause(v);
+
+    if ( (p2m = p2m_get_altp2m(v)) )
+        atomic_dec(&p2m->active_vcpus);
+
+    altp2m_vcpu_reset(v);
+
+    if ( v != current )
+        vcpu_unpause(v);
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/arch/arm/hvm.c b/xen/arch/arm/hvm.c
index 8e8e0f7..cb90a55 100644
--- a/xen/arch/arm/hvm.c
+++ b/xen/arch/arm/hvm.c
@@ -104,8 +104,36 @@ static int do_altp2m_op(XEN_GUEST_HANDLE_PARAM(void) arg)
         break;
 
     case HVMOP_altp2m_set_domain_state:
-        rc = -EOPNOTSUPP;
+    {
+        struct vcpu *v;
+        bool_t ostate;
+
+        if ( !d->arch.hvm_domain.params[HVM_PARAM_ALTP2M] )
+        {
+            rc = -EINVAL;
+            break;
+        }
+
+        ostate = d->arch.altp2m_active;
+        d->arch.altp2m_active = !!a.u.domain_state.state;
+
+        /* If the alternate p2m state has changed, handle appropriately */
+        if ( d->arch.altp2m_active != ostate &&
+             (ostate || !(rc = p2m_init_altp2m_by_id(d, 0))) )
+        {
+            for_each_vcpu( d, v )
+            {
+                if ( !ostate )
+                    altp2m_vcpu_initialise(v);
+                else
+                    altp2m_vcpu_destroy(v);
+            }
+
+            if ( ostate )
+                p2m_flush_altp2m(d);
+        }
         break;
+    }
 
     case HVMOP_altp2m_vcpu_enable_notify:
         rc = -EOPNOTSUPP;
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index e72ca7a..4a745fd 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -2064,6 +2064,52 @@ int p2m_get_mem_access(struct domain *d, gfn_t gfn,
     return ret;
 }
 
+struct p2m_domain *p2m_get_altp2m(struct vcpu *v)
+{
+    unsigned int index = vcpu_altp2m(v).p2midx;
+
+    if ( index == INVALID_ALTP2M )
+        return NULL;
+
+    BUG_ON(index >= MAX_ALTP2M);
+
+    return v->domain->arch.altp2m_p2m[index];
+}
+
+static void p2m_init_altp2m_helper(struct domain *d, unsigned int i)
+{
+    struct p2m_domain *p2m = d->arch.altp2m_p2m[i];
+    struct vttbr_data *vttbr = &p2m->vttbr;
+
+    p2m->lowest_mapped_gfn = INVALID_GFN;
+    p2m->max_mapped_gfn = 0;
+
+    vttbr->vttbr_baddr = page_to_maddr(p2m->root);
+    vttbr->vttbr_vmid = p2m->vmid;
+
+    d->arch.altp2m_vttbr[i] = vttbr->vttbr;
+}
+
+int p2m_init_altp2m_by_id(struct domain *d, unsigned int idx)
+{
+    int rc = -EINVAL;
+
+    if ( idx >= MAX_ALTP2M )
+        return rc;
+
+    altp2m_lock(d);
+
+    if ( d->arch.altp2m_vttbr[idx] == INVALID_MFN )
+    {
+        p2m_init_altp2m_helper(d, idx);
+        rc = 0;
+    }
+
+    altp2m_unlock(d);
+
+    return rc;
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/include/asm-arm/altp2m.h b/xen/include/asm-arm/altp2m.h
index 16ae9d6..ec4aa09 100644
--- a/xen/include/asm-arm/altp2m.h
+++ b/xen/include/asm-arm/altp2m.h
@@ -36,22 +36,8 @@ static inline uint16_t altp2m_vcpu_idx(const struct vcpu *v)
     return 0;
 }
 
-static inline void altp2m_vcpu_initialise(struct vcpu *v)
-{
-    /* Not implemented on ARM, should not be reached. */
-    BUG();
-}
-
-static inline void altp2m_vcpu_destroy(struct vcpu *v)
-{
-    /* Not implemented on ARM, should not be reached. */
-    BUG();
-}
-
-static inline void altp2m_vcpu_reset(struct vcpu *v)
-{
-    /* Not implemented on ARM, should not be reached. */
-    BUG();
-}
+void altp2m_vcpu_initialise(struct vcpu *v);
+void altp2m_vcpu_destroy(struct vcpu *v);
+void altp2m_vcpu_reset(struct vcpu *v);
 
 #endif /* __ASM_ARM_ALTP2M_H */
diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
index 6b9770f..8bcd618 100644
--- a/xen/include/asm-arm/domain.h
+++ b/xen/include/asm-arm/domain.h
@@ -138,6 +138,12 @@ struct arch_domain
     uint64_t altp2m_vttbr[MAX_ALTP2M];
 }  __cacheline_aligned;
 
+struct altp2mvcpu {
+    uint16_t p2midx; /* alternate p2m index */
+};
+
+#define vcpu_altp2m(v) ((v)->arch.avcpu)
+
 struct arch_vcpu
 {
     struct {
@@ -267,6 +273,9 @@ struct arch_vcpu
     struct vtimer phys_timer;
     struct vtimer virt_timer;
     bool_t vtimer_initialized;
+
+    /* Alternate p2m context */
+    struct altp2mvcpu avcpu;
 }  __cacheline_aligned;
 
 void vcpu_show_execution_state(struct vcpu *);
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index a78d547..8ee78e0 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -121,6 +121,25 @@ void p2m_altp2m_check(struct vcpu *v, uint16_t idx)
     /* Not supported on ARM. */
 }
 
+/*
+ * Alternate p2m: shadow p2m tables used for alternate memory views.
+ */
+
+#define altp2m_lock(d)      spin_lock(&(d)->arch.altp2m_lock)
+#define altp2m_unlock(d)    spin_unlock(&(d)->arch.altp2m_lock)
+
+/* Get current alternate p2m table */
+struct p2m_domain *p2m_get_altp2m(struct vcpu *v);
+
+/* Flush all the alternate p2m's for a domain */
+static inline void p2m_flush_altp2m(struct domain *d)
+{
+    /* Not supported on ARM. */
+}
+
+/* Make a specific alternate p2m valid */
+int p2m_init_altp2m_by_id(struct domain *d, unsigned int idx);
+
 #define p2m_is_foreign(_t)  ((_t) == p2m_map_foreign)
 #define p2m_is_ram(_t)      ((_t) == p2m_ram_rw || (_t) == p2m_ram_ro)
 
-- 
2.8.3


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply related	[flat|nested] 126+ messages in thread

* [PATCH 06/18] arm/altp2m: Add a(p2m) table flushing routines.
  2016-07-04 11:45 [PATCH 00/18] arm/altp2m: Introducing altp2m to ARM Sergej Proskurin
                   ` (4 preceding siblings ...)
  2016-07-04 11:45 ` [PATCH 05/18] arm/altp2m: Add HVMOP_altp2m_set_domain_state Sergej Proskurin
@ 2016-07-04 11:45 ` Sergej Proskurin
  2016-07-04 12:12   ` Sergej Proskurin
                     ` (2 more replies)
  2016-07-04 11:45 ` [PATCH 07/18] arm/altp2m: Add HVMOP_altp2m_create_p2m Sergej Proskurin
                   ` (30 subsequent siblings)
  36 siblings, 3 replies; 126+ messages in thread
From: Sergej Proskurin @ 2016-07-04 11:45 UTC (permalink / raw)
  To: xen-devel; +Cc: Sergej Proskurin, Julien Grall, Stefano Stabellini

The current implementation differentiates between flushing and
destroying altp2m views. This commit adds the functions
p2m_flush_altp2m and p2m_flush_table, which allow flushing all or
individual altp2m views without destroying the entire table. This
way, altp2m views can be reused at a later point in time.

In addition, the implementation clears all altp2m entries during
flushing. The same applies to hostp2m entries when the hostp2m is
destroyed. This ensures that future domain and p2m allocations do
not unintentionally reuse stale p2m mappings.
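
The flush policy described above (keep the root pages but zero them;
clear and free every other trie page) can be modelled outside of Xen.
The following is a minimal, self-contained C sketch, not Xen code:
page_list, domheap pages and the p2m lock are simplified to plain
structures, and the names are hypothetical.

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

#define PAGE_SIZE  64   /* toy page size */
#define ROOT_PAGES 2    /* stands in for P2M_ROOT_PAGES */

struct page {
    unsigned char data[PAGE_SIZE];
    struct page *next;  /* stands in for Xen's page_list */
};

struct toy_p2m {
    struct page root[ROOT_PAGES]; /* concatenated first-level pages */
    struct page *pages;           /* remaining trie pages */
};

/*
 * Mirrors p2m_flush_table's policy: zero the concatenated root pages
 * in place, then clear and free every other trie page, so that stale
 * entries cannot leak into future p2m allocations.
 */
static void toy_flush_table(struct toy_p2m *p2m)
{
    struct page *pg;
    int i;

    for ( i = 0; i < ROOT_PAGES; i++ )
        memset(p2m->root[i].data, 0, PAGE_SIZE);

    while ( (pg = p2m->pages) != NULL )
    {
        p2m->pages = pg->next;
        memset(pg->data, 0, PAGE_SIZE); /* clear before freeing */
        free(pg);
    }
}
```

After a flush, the view's root pages remain allocated and empty, so
the view can be re-initialized and reused without a fresh allocation.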

Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
---
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Julien Grall <julien.grall@arm.com>
---
 xen/arch/arm/p2m.c        | 67 +++++++++++++++++++++++++++++++++++++++++++++++
 xen/include/asm-arm/p2m.h | 15 ++++++++---
 2 files changed, 78 insertions(+), 4 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 4a745fd..ae789e6 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -2110,6 +2110,73 @@ int p2m_init_altp2m_by_id(struct domain *d, unsigned int idx)
     return rc;
 }
 
+/* Reset this p2m table to be empty */
+static void p2m_flush_table(struct p2m_domain *p2m)
+{
+    struct page_info *top, *pg;
+    mfn_t mfn;
+    unsigned int i;
+
+    /* Check whether the p2m table has already been flushed before. */
+    if ( p2m->root == NULL )
+        return;
+
+    spin_lock(&p2m->lock);
+
+    /*
+     * "Host" p2m tables can have shared entries &c that need a bit more care
+     * when discarding them
+     */
+    ASSERT(!p2m_is_hostp2m(p2m));
+
+    /* Zap the top level of the trie */
+    top = p2m->root;
+
+    /* Clear all concatenated first level pages */
+    for ( i = 0; i < P2M_ROOT_PAGES; i++ )
+    {
+        mfn = _mfn(page_to_mfn(top + i));
+        clear_domain_page(mfn);
+    }
+
+    /* Free the rest of the trie pages back to the paging pool */
+    while ( (pg = page_list_remove_head(&p2m->pages)) )
+        if ( pg != top )
+        {
+            /*
+             * Before freeing the individual pages, we clear them to prevent
+             * reusing old table entries in future p2m allocations.
+             */
+            mfn = _mfn(page_to_mfn(pg));
+            clear_domain_page(mfn);
+            free_domheap_page(pg);
+        }
+
+    page_list_add(top, &p2m->pages);
+
+    /* Invalidate VTTBR */
+    p2m->vttbr.vttbr = 0;
+    p2m->vttbr.vttbr_baddr = INVALID_MFN;
+
+    spin_unlock(&p2m->lock);
+}
+
+void p2m_flush_altp2m(struct domain *d)
+{
+    unsigned int i;
+
+    altp2m_lock(d);
+
+    for ( i = 0; i < MAX_ALTP2M; i++ )
+    {
+        p2m_flush_table(d->arch.altp2m_p2m[i]);
+        flush_tlb();
+        d->arch.altp2m_vttbr[i] = INVALID_MFN;
+    }
+
+    altp2m_unlock(d);
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index 8ee78e0..51d784f 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -132,10 +132,7 @@ void p2m_altp2m_check(struct vcpu *v, uint16_t idx)
 struct p2m_domain *p2m_get_altp2m(struct vcpu *v);
 
 /* Flush all the alternate p2m's for a domain */
-static inline void p2m_flush_altp2m(struct domain *d)
-{
-    /* Not supported on ARM. */
-}
+void p2m_flush_altp2m(struct domain *d);
 
 /* Make a specific alternate p2m valid */
 int p2m_init_altp2m_by_id(struct domain *d, unsigned int idx);
@@ -289,6 +286,16 @@ static inline int get_page_and_type(struct page_info *page,
 /* get host p2m table */
 #define p2m_get_hostp2m(d) (&(d)->arch.p2m)
 
+static inline bool_t p2m_is_hostp2m(const struct p2m_domain *p2m)
+{
+    return p2m->p2m_class == p2m_host;
+}
+
+static inline bool_t p2m_is_altp2m(const struct p2m_domain *p2m)
+{
+    return p2m->p2m_class == p2m_alternate;
+}
+
 /* vm_event and mem_access are supported on any ARM guest */
 static inline bool_t p2m_mem_access_sanity_check(struct domain *d)
 {
-- 
2.8.3



* [PATCH 07/18] arm/altp2m: Add HVMOP_altp2m_create_p2m.
  2016-07-04 11:45 [PATCH 00/18] arm/altp2m: Introducing altp2m to ARM Sergej Proskurin
                   ` (5 preceding siblings ...)
  2016-07-04 11:45 ` [PATCH 06/18] arm/altp2m: Add a(p2m) table flushing routines Sergej Proskurin
@ 2016-07-04 11:45 ` Sergej Proskurin
  2016-07-04 11:45 ` [PATCH 08/18] arm/altp2m: Add HVMOP_altp2m_destroy_p2m Sergej Proskurin
                   ` (29 subsequent siblings)
  36 siblings, 0 replies; 126+ messages in thread
From: Sergej Proskurin @ 2016-07-04 11:45 UTC (permalink / raw)
  To: xen-devel; +Cc: Sergej Proskurin, Julien Grall, Stefano Stabellini

Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
---
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Julien Grall <julien.grall@arm.com>
---
 xen/arch/arm/hvm.c        |  3 ++-
 xen/arch/arm/p2m.c        | 23 +++++++++++++++++++++++
 xen/include/asm-arm/p2m.h |  3 +++
 3 files changed, 28 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/hvm.c b/xen/arch/arm/hvm.c
index cb90a55..005d7c6 100644
--- a/xen/arch/arm/hvm.c
+++ b/xen/arch/arm/hvm.c
@@ -140,7 +140,8 @@ static int do_altp2m_op(XEN_GUEST_HANDLE_PARAM(void) arg)
         break;
 
     case HVMOP_altp2m_create_p2m:
-        rc = -EOPNOTSUPP;
+        if ( !(rc = p2m_init_next_altp2m(d, &a.u.view.view)) )
+            rc = __copy_to_guest(arg, &a, 1) ? -EFAULT : 0;
         break;
 
     case HVMOP_altp2m_destroy_p2m:
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index ae789e6..6c41b98 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -2110,6 +2110,29 @@ int p2m_init_altp2m_by_id(struct domain *d, unsigned int idx)
     return rc;
 }
 
+int p2m_init_next_altp2m(struct domain *d, uint16_t *idx)
+{
+    int rc = -EINVAL;
+    unsigned int i;
+
+    altp2m_lock(d);
+
+    for ( i = 0; i < MAX_ALTP2M; i++ )
+    {
+        if ( d->arch.altp2m_vttbr[i] != INVALID_MFN )
+            continue;
+
+        p2m_init_altp2m_helper(d, i);
+        *idx = i;
+        rc = 0;
+
+        break;
+    }
+
+    altp2m_unlock(d);
+    return rc;
+}
+
 /* Reset this p2m table to be empty */
 static void p2m_flush_table(struct p2m_domain *p2m)
 {
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index 51d784f..c51532a 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -137,6 +137,9 @@ void p2m_flush_altp2m(struct domain *d);
 /* Make a specific alternate p2m valid */
 int p2m_init_altp2m_by_id(struct domain *d, unsigned int idx);
 
+/* Find an available alternate p2m and make it valid */
+int p2m_init_next_altp2m(struct domain *d, uint16_t *idx);
+
 #define p2m_is_foreign(_t)  ((_t) == p2m_map_foreign)
 #define p2m_is_ram(_t)      ((_t) == p2m_ram_rw || (_t) == p2m_ram_ro)
 
-- 
2.8.3



* [PATCH 08/18] arm/altp2m: Add HVMOP_altp2m_destroy_p2m.
  2016-07-04 11:45 [PATCH 00/18] arm/altp2m: Introducing altp2m to ARM Sergej Proskurin
                   ` (6 preceding siblings ...)
  2016-07-04 11:45 ` [PATCH 07/18] arm/altp2m: Add HVMOP_altp2m_create_p2m Sergej Proskurin
@ 2016-07-04 11:45 ` Sergej Proskurin
  2016-07-04 16:32   ` Julien Grall
  2016-07-04 11:45 ` [PATCH 09/18] arm/altp2m: Add HVMOP_altp2m_switch_p2m Sergej Proskurin
                   ` (28 subsequent siblings)
  36 siblings, 1 reply; 126+ messages in thread
From: Sergej Proskurin @ 2016-07-04 11:45 UTC (permalink / raw)
  To: xen-devel; +Cc: Sergej Proskurin, Julien Grall, Stefano Stabellini

Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
---
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Julien Grall <julien.grall@arm.com>
---
 xen/arch/arm/hvm.c        |  2 +-
 xen/arch/arm/p2m.c        | 32 ++++++++++++++++++++++++++++++++
 xen/include/asm-arm/p2m.h |  3 +++
 3 files changed, 36 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/hvm.c b/xen/arch/arm/hvm.c
index 005d7c6..f4ec5cf 100644
--- a/xen/arch/arm/hvm.c
+++ b/xen/arch/arm/hvm.c
@@ -145,7 +145,7 @@ static int do_altp2m_op(XEN_GUEST_HANDLE_PARAM(void) arg)
         break;
 
     case HVMOP_altp2m_destroy_p2m:
-        rc = -EOPNOTSUPP;
+        rc = p2m_destroy_altp2m_by_id(d, a.u.view.view);
         break;
 
     case HVMOP_altp2m_switch_p2m:
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 6c41b98..f82f1ea 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -2200,6 +2200,38 @@ void p2m_flush_altp2m(struct domain *d)
     altp2m_unlock(d);
 }
 
+int p2m_destroy_altp2m_by_id(struct domain *d, unsigned int idx)
+{
+    struct p2m_domain *p2m;
+    int rc = -EBUSY;
+
+    if ( !idx || idx >= MAX_ALTP2M )
+        return rc;
+
+    domain_pause_except_self(d);
+
+    altp2m_lock(d);
+
+    if ( d->arch.altp2m_vttbr[idx] != INVALID_MFN )
+    {
+        p2m = d->arch.altp2m_p2m[idx];
+
+        if ( !_atomic_read(p2m->active_vcpus) )
+        {
+            p2m_flush_table(p2m);
+            flush_tlb();
+            d->arch.altp2m_vttbr[idx] = INVALID_MFN;
+            rc = 0;
+        }
+    }
+
+    altp2m_unlock(d);
+
+    domain_unpause_except_self(d);
+
+    return rc;
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index c51532a..255a282 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -140,6 +140,9 @@ int p2m_init_altp2m_by_id(struct domain *d, unsigned int idx);
 /* Find an available alternate p2m and make it valid */
 int p2m_init_next_altp2m(struct domain *d, uint16_t *idx);
 
+/* Make a specific alternate p2m invalid */
+int p2m_destroy_altp2m_by_id(struct domain *d, unsigned int idx);
+
 #define p2m_is_foreign(_t)  ((_t) == p2m_map_foreign)
 #define p2m_is_ram(_t)      ((_t) == p2m_ram_rw || (_t) == p2m_ram_ro)
 
-- 
2.8.3



* [PATCH 09/18] arm/altp2m: Add HVMOP_altp2m_switch_p2m.
  2016-07-04 11:45 [PATCH 00/18] arm/altp2m: Introducing altp2m to ARM Sergej Proskurin
                   ` (7 preceding siblings ...)
  2016-07-04 11:45 ` [PATCH 08/18] arm/altp2m: Add HVMOP_altp2m_destroy_p2m Sergej Proskurin
@ 2016-07-04 11:45 ` Sergej Proskurin
  2016-07-04 11:45 ` [PATCH 10/18] arm/altp2m: Renamed and extended p2m_alloc_table Sergej Proskurin
                   ` (27 subsequent siblings)
  36 siblings, 0 replies; 126+ messages in thread
From: Sergej Proskurin @ 2016-07-04 11:45 UTC (permalink / raw)
  To: xen-devel; +Cc: Sergej Proskurin, Julien Grall, Stefano Stabellini

Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
---
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Julien Grall <julien.grall@arm.com>
---
 xen/arch/arm/hvm.c        |  2 +-
 xen/arch/arm/p2m.c        | 32 ++++++++++++++++++++++++++++++++
 xen/include/asm-arm/p2m.h |  3 +++
 3 files changed, 36 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/hvm.c b/xen/arch/arm/hvm.c
index f4ec5cf..9a536b2 100644
--- a/xen/arch/arm/hvm.c
+++ b/xen/arch/arm/hvm.c
@@ -149,7 +149,7 @@ static int do_altp2m_op(XEN_GUEST_HANDLE_PARAM(void) arg)
         break;
 
     case HVMOP_altp2m_switch_p2m:
-        rc = -EOPNOTSUPP;
+        rc = p2m_switch_domain_altp2m_by_id(d, a.u.view.view);
         break;
 
     case HVMOP_altp2m_set_mem_access:
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index f82f1ea..8bf23ee 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -2232,6 +2232,38 @@ int p2m_destroy_altp2m_by_id(struct domain *d, unsigned int idx)
     return rc;
 }
 
+int p2m_switch_domain_altp2m_by_id(struct domain *d, unsigned int idx)
+{
+    struct vcpu *v;
+    int rc = -EINVAL;
+
+    if ( idx >= MAX_ALTP2M )
+        return rc;
+
+    domain_pause_except_self(d);
+
+    altp2m_lock(d);
+
+    if ( d->arch.altp2m_vttbr[idx] != INVALID_MFN )
+    {
+        for_each_vcpu( d, v )
+            if ( idx != vcpu_altp2m(v).p2midx )
+            {
+                atomic_dec(&p2m_get_altp2m(v)->active_vcpus);
+                vcpu_altp2m(v).p2midx = idx;
+                atomic_inc(&p2m_get_altp2m(v)->active_vcpus);
+            }
+
+        rc = 0;
+    }
+
+    altp2m_unlock(d);
+
+    domain_unpause_except_self(d);
+
+    return rc;
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index 255a282..783db5c 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -143,6 +143,9 @@ int p2m_init_next_altp2m(struct domain *d, uint16_t *idx);
 /* Make a specific alternate p2m invalid */
 int p2m_destroy_altp2m_by_id(struct domain *d, unsigned int idx);
 
+/* Switch alternate p2m for entire domain */
+int p2m_switch_domain_altp2m_by_id(struct domain *d, unsigned int idx);
+
 #define p2m_is_foreign(_t)  ((_t) == p2m_map_foreign)
 #define p2m_is_ram(_t)      ((_t) == p2m_ram_rw || (_t) == p2m_ram_ro)
 
-- 
2.8.3



* [PATCH 10/18] arm/altp2m: Renamed and extended p2m_alloc_table.
  2016-07-04 11:45 [PATCH 00/18] arm/altp2m: Introducing altp2m to ARM Sergej Proskurin
                   ` (8 preceding siblings ...)
  2016-07-04 11:45 ` [PATCH 09/18] arm/altp2m: Add HVMOP_altp2m_switch_p2m Sergej Proskurin
@ 2016-07-04 11:45 ` Sergej Proskurin
  2016-07-04 18:43   ` Julien Grall
  2016-07-04 11:45 ` [PATCH 11/18] arm/altp2m: Make flush_tlb_domain ready for altp2m Sergej Proskurin
                   ` (26 subsequent siblings)
  36 siblings, 1 reply; 126+ messages in thread
From: Sergej Proskurin @ 2016-07-04 11:45 UTC (permalink / raw)
  To: xen-devel; +Cc: Sergej Proskurin, Julien Grall, Stefano Stabellini

The function "p2m_alloc_table" originally allocated only the pages
required for the host's p2m. The new implementation keeps the p2m
allocation-related parts inside this function (which is now static)
and provides a wrapper function "p2m_table_init" that can be called
externally to initialize p2m tables in general. It thereby
distinguishes between the domain's hostp2m and altp2m mappings,
which are allocated in the same way.

NOTE: Inside the function "p2m_alloc_table" we no longer take the
p2m lock. We also flush the TLBs outside of "p2m_alloc_table".
Instead, the associated locking and TLB flushing are performed as
part of the function p2m_table_init. This provides a uniform
interface for p2m-related table allocation, which can be reused for
altp2m (and potentially for nested p2m tables in a future
implementation), as is done in the x86 implementation.
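
For reference, the VTTBR value this patch builds field by field was
previously open-coded as page_to_maddr(p2m->root) |
((uint64_t)p2m->vmid & 0xff) << 48: the 8-bit VMID sits in bits
[55:48] above the stage-2 root-table base address. A standalone
sketch of that encoding (helper names are illustrative, not Xen's):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Pack a stage-2 translation table base address and an 8-bit VMID
 * into a VTTBR value, matching the open-coded expression removed
 * from p2m_alloc_table.
 */
static uint64_t make_vttbr(uint64_t baddr, uint8_t vmid)
{
    return baddr | ((uint64_t)vmid & 0xff) << 48;
}

/* Extract the VMID from bits [55:48]. */
static uint8_t vttbr_vmid(uint64_t vttbr)
{
    return (vttbr >> 48) & 0xff;
}

/* Extract the base address from the low 48 bits. */
static uint64_t vttbr_baddr(uint64_t vttbr)
{
    return vttbr & ((1ULL << 48) - 1);
}
```

Keeping the fields in a structured vttbr type, as the patch does,
avoids repeating this shift-and-mask arithmetic at every use site.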

Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
---
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Julien Grall <julien.grall@arm.com>
---
 xen/arch/arm/domain.c     |  2 +-
 xen/arch/arm/p2m.c        | 53 +++++++++++++++++++++++++++++++++++++----------
 xen/include/asm-arm/p2m.h |  2 +-
 3 files changed, 44 insertions(+), 13 deletions(-)

diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 6ce4645..6102ed0 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -573,7 +573,7 @@ int arch_domain_create(struct domain *d, unsigned int domcr_flags,
     if ( (rc = domain_io_init(d)) != 0 )
         goto fail;
 
-    if ( (rc = p2m_alloc_table(d)) != 0 )
+    if ( (rc = p2m_table_init(d)) != 0 )
         goto fail;
 
     switch ( config->gic_version )
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 8bf23ee..7e721f9 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1315,35 +1315,66 @@ void guest_physmap_remove_page(struct domain *d,
                       d->arch.p2m.default_access);
 }
 
-int p2m_alloc_table(struct domain *d)
+static int p2m_alloc_table(struct p2m_domain *p2m)
 {
-    struct p2m_domain *p2m = &d->arch.p2m;
-    struct page_info *page;
+    struct page_info *page = NULL;
     unsigned int i;
 
     page = alloc_domheap_pages(NULL, P2M_ROOT_ORDER, 0);
     if ( page == NULL )
         return -ENOMEM;
 
-    spin_lock(&p2m->lock);
-
-    /* Clear both first level pages */
+    /* Clear all first level pages */
     for ( i = 0; i < P2M_ROOT_PAGES; i++ )
         clear_and_clean_page(page + i);
 
     p2m->root = page;
 
-    d->arch.vttbr = page_to_maddr(p2m->root)
-        | ((uint64_t)p2m->vmid&0xff)<<48;
+    p2m->vttbr.vttbr = 0;
+    p2m->vttbr.vttbr_vmid = p2m->vmid & 0xff;
+    p2m->vttbr.vttbr_baddr = page_to_maddr(p2m->root);
 
-    /* Make sure that all TLBs corresponding to the new VMID are flushed
-     * before using it
+    return 0;
+}
+
+int p2m_table_init(struct domain *d)
+{
+    int i = 0;
+    int rc = -ENOMEM;
+    struct p2m_domain *p2m = p2m_get_hostp2m(d);
+
+    spin_lock(&p2m->lock);
+
+    rc = p2m_alloc_table(p2m);
+    if ( rc != 0 )
+        goto out;
+
+    d->arch.vttbr = d->arch.p2m.vttbr.vttbr;
+
+    /*
+     * Make sure that all TLBs corresponding to the new VMID are flushed
+     * before using it.
      */
     flush_tlb_domain(d);
 
     spin_unlock(&p2m->lock);
 
-    return 0;
+    if ( hvm_altp2m_supported() )
+    {
+        /* Init alternate p2m data */
+        for ( i = 0; i < MAX_ALTP2M; i++ )
+        {
+            d->arch.altp2m_vttbr[i] = INVALID_MFN;
+            rc = p2m_alloc_table(d->arch.altp2m_p2m[i]);
+            if ( rc != 0 )
+                goto out;
+        }
+
+        d->arch.altp2m_active = 0;
+    }
+
+out:
+    return rc;
 }
 
 #define MAX_VMID 256
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index 783db5c..451b097 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -171,7 +171,7 @@ int relinquish_p2m_mapping(struct domain *d);
  *
  * Returns 0 for success or -errno.
  */
-int p2m_alloc_table(struct domain *d);
+int p2m_table_init(struct domain *d);
 
 /* Context switch */
 void p2m_save_state(struct vcpu *p);
-- 
2.8.3



* [PATCH 11/18] arm/altp2m: Make flush_tlb_domain ready for altp2m.
  2016-07-04 11:45 [PATCH 00/18] arm/altp2m: Introducing altp2m to ARM Sergej Proskurin
                   ` (9 preceding siblings ...)
  2016-07-04 11:45 ` [PATCH 10/18] arm/altp2m: Renamed and extended p2m_alloc_table Sergej Proskurin
@ 2016-07-04 11:45 ` Sergej Proskurin
  2016-07-04 12:30   ` Sergej Proskurin
  2016-07-04 20:32   ` Julien Grall
  2016-07-04 11:45 ` [PATCH 12/18] arm/altp2m: Cosmetic fixes - function prototypes Sergej Proskurin
                   ` (25 subsequent siblings)
  36 siblings, 2 replies; 126+ messages in thread
From: Sergej Proskurin @ 2016-07-04 11:45 UTC (permalink / raw)
  To: xen-devel; +Cc: Sergej Proskurin, Julien Grall, Stefano Stabellini

This commit makes sure that flushing a domain's TLBs considers all
of the associated altp2m views. If a domain other than the currently
active one needs its TLBs flushed, the implementation loops over the
VTTBRs of the domain's altp2m mappings (one per vCPU) and flushes
the TLBs for each of them. This way, a change to any one of the
altp2m mappings is taken into account. Note that, at this point, the
domain whose TLBs are to be flushed is not locked.

Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
---
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Julien Grall <julien.grall@arm.com>
---
 xen/arch/arm/p2m.c | 71 ++++++++++++++++++++++++++++++++++++++++++++++++------
 1 file changed, 63 insertions(+), 8 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 7e721f9..019f10e 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -15,6 +15,8 @@
 #include <asm/hardirq.h>
 #include <asm/page.h>
 
+#include <asm/altp2m.h>
+
 #ifdef CONFIG_ARM_64
 static unsigned int __read_mostly p2m_root_order;
 static unsigned int __read_mostly p2m_root_level;
@@ -79,12 +81,41 @@ void dump_p2m_lookup(struct domain *d, paddr_t addr)
                  P2M_ROOT_LEVEL, P2M_ROOT_PAGES);
 }
 
+static uint64_t p2m_get_altp2m_vttbr(struct vcpu *v)
+{
+    struct domain *d = v->domain;
+    uint16_t index = vcpu_altp2m(v).p2midx;
+
+    if ( index == INVALID_ALTP2M )
+        return INVALID_MFN;
+
+    BUG_ON(index >= MAX_ALTP2M);
+
+    return d->arch.altp2m_vttbr[index];
+}
+
+static void p2m_load_altp2m_VTTBR(struct vcpu *v)
+{
+    struct domain *d = v->domain;
+    uint64_t vttbr = p2m_get_altp2m_vttbr(v);
+
+    if ( is_idle_domain(d) )
+        return;
+
+    BUG_ON(vttbr == INVALID_MFN);
+    WRITE_SYSREG64(vttbr, VTTBR_EL2);
+
+    isb(); /* Ensure update is visible */
+}
+
 static void p2m_load_VTTBR(struct domain *d)
 {
     if ( is_idle_domain(d) )
         return;
+
     BUG_ON(!d->arch.vttbr);
     WRITE_SYSREG64(d->arch.vttbr, VTTBR_EL2);
+
     isb(); /* Ensure update is visible */
 }
 
@@ -101,7 +132,11 @@ void p2m_restore_state(struct vcpu *n)
     WRITE_SYSREG(hcr & ~HCR_VM, HCR_EL2);
     isb();
 
-    p2m_load_VTTBR(n->domain);
+    if ( altp2m_active(n->domain) )
+        p2m_load_altp2m_VTTBR(n);
+    else
+        p2m_load_VTTBR(n->domain);
+
     isb();
 
     if ( is_32bit_domain(n->domain) )
@@ -119,22 +154,42 @@ void p2m_restore_state(struct vcpu *n)
 void flush_tlb_domain(struct domain *d)
 {
     unsigned long flags = 0;
+    struct vcpu *v = NULL;
 
-    /* Update the VTTBR if necessary with the domain d. In this case,
-     * it's only necessary to flush TLBs on every CPUs with the current VMID
-     * (our domain).
+    /*
+     * Update the VTTBR if necessary with the domain d. In this case, it is only
+     * necessary to flush TLBs on every CPUs with the current VMID (our
+     * domain).
      */
     if ( d != current->domain )
     {
         local_irq_save(flags);
-        p2m_load_VTTBR(d);
-    }
 
-    flush_tlb();
+        /* If altp2m is active, update VTTBR and flush TLBs of every VCPU */
+        if ( altp2m_active(d) )
+        {
+            for_each_vcpu( d, v )
+            {
+                p2m_load_altp2m_VTTBR(v);
+                flush_tlb();
+            }
+        }
+        else
+        {
+            p2m_load_VTTBR(d);
+            flush_tlb();
+        }
+    }
+    else
+        flush_tlb();
 
     if ( d != current->domain )
     {
-        p2m_load_VTTBR(current->domain);
+        /* Make sure altp2m mapping is valid. */
+        if ( altp2m_active(current->domain) )
+            p2m_load_altp2m_VTTBR(current);
+        else
+            p2m_load_VTTBR(current->domain);
         local_irq_restore(flags);
     }
 }
-- 
2.8.3



* [PATCH 12/18] arm/altp2m: Cosmetic fixes - function prototypes.
  2016-07-04 11:45 [PATCH 00/18] arm/altp2m: Introducing altp2m to ARM Sergej Proskurin
                   ` (10 preceding siblings ...)
  2016-07-04 11:45 ` [PATCH 11/18] arm/altp2m: Make flush_tlb_domain ready for altp2m Sergej Proskurin
@ 2016-07-04 11:45 ` Sergej Proskurin
  2016-07-15 13:45   ` Julien Grall
  2016-07-04 11:45 ` [PATCH 13/18] arm/altp2m: Make get_page_from_gva ready for altp2m Sergej Proskurin
                   ` (24 subsequent siblings)
  36 siblings, 1 reply; 126+ messages in thread
From: Sergej Proskurin @ 2016-07-04 11:45 UTC (permalink / raw)
  To: xen-devel; +Cc: Sergej Proskurin, Julien Grall, Stefano Stabellini

This commit changes the prototype of the following functions:
- apply_p2m_changes
- apply_one_level
- p2m_shatter_page
- p2m_create_table
- __p2m_lookup
- __p2m_get_mem_access

These changes are required as our implementation reuses most of the
existing ARM p2m implementation to set the page table attributes of
the individual altp2m views. The existing function prototypes have
therefore been extended to take an additional argument (of type
struct p2m_domain *). This allows the caller to specify the
p2m/altp2m domain to be processed by each function, instead of the
function always accessing the host's default p2m domain.
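
The refactoring pattern is the usual one of threading an explicit
context parameter instead of deriving it inside the callee. A toy
illustration (the structures and names here are stand-ins, not Xen
types) of before vs. after:

```c
#include <assert.h>

/* Toy stand-ins for struct p2m_domain / struct domain. */
struct toy_p2m {
    int default_access;
};

struct toy_domain {
    struct toy_p2m hostp2m;
    struct toy_p2m altp2m[2];
};

/*
 * After the refactoring: the worker no longer derives its p2m from
 * the domain (the old pattern was `p2m = &d->arch.p2m;`) but
 * operates on whichever view the caller passes in.
 */
static int set_access(struct toy_domain *d, struct toy_p2m *p2m,
                      int access)
{
    (void)d; /* the real code still needs d for other state */
    p2m->default_access = access;
    return 0;
}
```

Callers wanting the old behavior pass the host view (in Xen,
p2m_get_hostp2m(d)); altp2m callers pass the view to be modified.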

Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
---
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Julien Grall <julien.grall@arm.com>
---
 xen/arch/arm/p2m.c | 80 +++++++++++++++++++++++++++++-------------------------
 1 file changed, 43 insertions(+), 37 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 019f10e..9c8fefd 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -200,9 +200,8 @@ void flush_tlb_domain(struct domain *d)
  * There are no processor functions to do a stage 2 only lookup therefore we
  * do a a software walk.
  */
-static paddr_t __p2m_lookup(struct domain *d, paddr_t paddr, p2m_type_t *t)
+static paddr_t __p2m_lookup(struct p2m_domain *p2m, paddr_t paddr, p2m_type_t *t)
 {
-    struct p2m_domain *p2m = &d->arch.p2m;
     const unsigned int offsets[4] = {
         zeroeth_table_offset(paddr),
         first_table_offset(paddr),
@@ -282,10 +281,11 @@ err:
 paddr_t p2m_lookup(struct domain *d, paddr_t paddr, p2m_type_t *t)
 {
     paddr_t ret;
-    struct p2m_domain *p2m = &d->arch.p2m;
+    struct vcpu *v = current;
+    struct p2m_domain *p2m = altp2m_active(d) ? p2m_get_altp2m(v) : p2m_get_hostp2m(d);
 
     spin_lock(&p2m->lock);
-    ret = __p2m_lookup(d, paddr, t);
+    ret = __p2m_lookup(p2m, paddr, t);
     spin_unlock(&p2m->lock);
 
     return ret;
@@ -441,10 +441,12 @@ static inline void p2m_remove_pte(lpae_t *p, bool_t flush_cache)
  *
  * level_shift is the number of bits at the level we want to create.
  */
-static int p2m_create_table(struct domain *d, lpae_t *entry,
-                            int level_shift, bool_t flush_cache)
+static int p2m_create_table(struct domain *d,
+                            struct p2m_domain *p2m,
+                            lpae_t *entry,
+                            int level_shift,
+                            bool_t flush_cache)
 {
-    struct p2m_domain *p2m = &d->arch.p2m;
     struct page_info *page;
     lpae_t *p;
     lpae_t pte;
@@ -502,10 +504,9 @@ static int p2m_create_table(struct domain *d, lpae_t *entry,
     return 0;
 }
 
-static int __p2m_get_mem_access(struct domain *d, gfn_t gfn,
+static int __p2m_get_mem_access(struct p2m_domain *p2m, gfn_t gfn,
                                 xenmem_access_t *access)
 {
-    struct p2m_domain *p2m = p2m_get_hostp2m(d);
     void *i;
     unsigned int index;
 
@@ -548,7 +549,7 @@ static int __p2m_get_mem_access(struct domain *d, gfn_t gfn,
          * No setting was found in the Radix tree. Check if the
          * entry exists in the page-tables.
          */
-        paddr_t maddr = __p2m_lookup(d, gfn_x(gfn) << PAGE_SHIFT, NULL);
+        paddr_t maddr = __p2m_lookup(p2m, gfn_x(gfn) << PAGE_SHIFT, NULL);
         if ( INVALID_PADDR == maddr )
             return -ESRCH;
 
@@ -677,17 +678,17 @@ static const paddr_t level_shifts[] =
     { ZEROETH_SHIFT, FIRST_SHIFT, SECOND_SHIFT, THIRD_SHIFT };
 
 static int p2m_shatter_page(struct domain *d,
+                            struct p2m_domain *p2m,
                             lpae_t *entry,
                             unsigned int level,
                             bool_t flush_cache)
 {
     const paddr_t level_shift = level_shifts[level];
-    int rc = p2m_create_table(d, entry,
+    int rc = p2m_create_table(d, p2m, entry,
                               level_shift - PAGE_SHIFT, flush_cache);
 
     if ( !rc )
     {
-        struct p2m_domain *p2m = &d->arch.p2m;
         p2m->stats.shattered[level]++;
         p2m->stats.mappings[level]--;
         p2m->stats.mappings[level+1] += LPAE_ENTRIES;
@@ -704,6 +705,7 @@ static int p2m_shatter_page(struct domain *d,
  * -ve == (-Exxx) error.
  */
 static int apply_one_level(struct domain *d,
+                           struct p2m_domain *p2m,
                            lpae_t *entry,
                            unsigned int level,
                            bool_t flush_cache,
@@ -721,7 +723,6 @@ static int apply_one_level(struct domain *d,
     const paddr_t level_mask = level_masks[level];
     const paddr_t level_shift = level_shifts[level];
 
-    struct p2m_domain *p2m = &d->arch.p2m;
     lpae_t pte;
     const lpae_t orig_pte = *entry;
     int rc;
@@ -776,7 +777,7 @@ static int apply_one_level(struct domain *d,
          * L3) or mem_access is in use. Create a page table and
          * continue to descend so we try smaller allocations.
          */
-        rc = p2m_create_table(d, entry, 0, flush_cache);
+        rc = p2m_create_table(d, p2m, entry, 0, flush_cache);
         if ( rc < 0 )
             return rc;
 
@@ -834,7 +835,7 @@ static int apply_one_level(struct domain *d,
             /* Not present -> create table entry and descend */
             if ( !p2m_valid(orig_pte) )
             {
-                rc = p2m_create_table(d, entry, 0, flush_cache);
+                rc = p2m_create_table(d, p2m, entry, 0, flush_cache);
                 if ( rc < 0 )
                     return rc;
                 return P2M_ONE_DESCEND;
@@ -844,7 +845,7 @@ static int apply_one_level(struct domain *d,
             if ( p2m_mapping(orig_pte) )
             {
                 *flush = true;
-                rc = p2m_shatter_page(d, entry, level, flush_cache);
+                rc = p2m_shatter_page(d, p2m, entry, level, flush_cache);
                 if ( rc < 0 )
                     return rc;
             } /* else: an existing table mapping -> descend */
@@ -881,7 +882,7 @@ static int apply_one_level(struct domain *d,
                  * and descend.
                  */
                 *flush = true;
-                rc = p2m_shatter_page(d, entry, level, flush_cache);
+                rc = p2m_shatter_page(d, p2m, entry, level, flush_cache);
                 if ( rc < 0 )
                     return rc;
 
@@ -966,7 +967,7 @@ static int apply_one_level(struct domain *d,
             /* Shatter large pages as we descend */
             if ( p2m_mapping(orig_pte) )
             {
-                rc = p2m_shatter_page(d, entry, level, flush_cache);
+                rc = p2m_shatter_page(d, p2m, entry, level, flush_cache);
                 if ( rc < 0 )
                     return rc;
             } /* else: an existing table mapping -> descend */
@@ -1011,6 +1012,7 @@ static void update_reference_mapping(struct page_info *page,
 }
 
 static int apply_p2m_changes(struct domain *d,
+                     struct p2m_domain *p2m,
                      enum p2m_operation op,
                      paddr_t start_gpaddr,
                      paddr_t end_gpaddr,
@@ -1021,7 +1023,6 @@ static int apply_p2m_changes(struct domain *d,
                      p2m_access_t a)
 {
     int rc, ret;
-    struct p2m_domain *p2m = &d->arch.p2m;
     lpae_t *mappings[4] = { NULL, NULL, NULL, NULL };
     struct page_info *pages[4] = { NULL, NULL, NULL, NULL };
     paddr_t addr, orig_maddr = maddr;
@@ -1148,7 +1149,7 @@ static int apply_p2m_changes(struct domain *d,
             lpae_t *entry = &mappings[level][offset];
             lpae_t old_entry = *entry;
 
-            ret = apply_one_level(d, entry,
+            ret = apply_one_level(d, p2m, entry,
                                   level, flush_pt, op,
                                   start_gpaddr, end_gpaddr,
                                   &addr, &maddr, &flush,
@@ -1257,7 +1258,7 @@ out:
          * addr keeps the address of the end of the last successfully-inserted
          * mapping.
          */
-        apply_p2m_changes(d, REMOVE, start_gpaddr, addr, orig_maddr,
+        apply_p2m_changes(d, p2m, REMOVE, start_gpaddr, addr, orig_maddr,
                           mattr, 0, p2m_invalid, d->arch.p2m.default_access);
     }
 
@@ -1268,7 +1269,8 @@ int p2m_populate_ram(struct domain *d,
                      paddr_t start,
                      paddr_t end)
 {
-    return apply_p2m_changes(d, ALLOCATE, start, end,
+    return apply_p2m_changes(d, p2m_get_hostp2m(d), ALLOCATE,
+                             start, end,
                              0, MATTR_MEM, 0, p2m_ram_rw,
                              d->arch.p2m.default_access);
 }
@@ -1278,7 +1280,7 @@ int map_regions_rw_cache(struct domain *d,
                          unsigned long nr,
                          unsigned long mfn)
 {
-    return apply_p2m_changes(d, INSERT,
+    return apply_p2m_changes(d, p2m_get_hostp2m(d), INSERT,
                              pfn_to_paddr(start_gfn),
                              pfn_to_paddr(start_gfn + nr),
                              pfn_to_paddr(mfn),
@@ -1291,7 +1293,7 @@ int unmap_regions_rw_cache(struct domain *d,
                            unsigned long nr,
                            unsigned long mfn)
 {
-    return apply_p2m_changes(d, REMOVE,
+    return apply_p2m_changes(d, p2m_get_hostp2m(d), REMOVE,
                              pfn_to_paddr(start_gfn),
                              pfn_to_paddr(start_gfn + nr),
                              pfn_to_paddr(mfn),
@@ -1304,7 +1306,7 @@ int map_mmio_regions(struct domain *d,
                      unsigned long nr,
                      unsigned long mfn)
 {
-    return apply_p2m_changes(d, INSERT,
+    return apply_p2m_changes(d, p2m_get_hostp2m(d), INSERT,
                              pfn_to_paddr(start_gfn),
                              pfn_to_paddr(start_gfn + nr),
                              pfn_to_paddr(mfn),
@@ -1317,7 +1319,7 @@ int unmap_mmio_regions(struct domain *d,
                        unsigned long nr,
                        unsigned long mfn)
 {
-    return apply_p2m_changes(d, REMOVE,
+    return apply_p2m_changes(d, p2m_get_hostp2m(d), REMOVE,
                              pfn_to_paddr(start_gfn),
                              pfn_to_paddr(start_gfn + nr),
                              pfn_to_paddr(mfn),
@@ -1352,7 +1354,7 @@ int guest_physmap_add_entry(struct domain *d,
                             unsigned long page_order,
                             p2m_type_t t)
 {
-    return apply_p2m_changes(d, INSERT,
+    return apply_p2m_changes(d, p2m_get_hostp2m(d), INSERT,
                              pfn_to_paddr(gfn_x(gfn)),
                              pfn_to_paddr(gfn_x(gfn) + (1 << page_order)),
                              pfn_to_paddr(mfn_x(mfn)), MATTR_MEM, 0, t,
@@ -1363,7 +1365,7 @@ void guest_physmap_remove_page(struct domain *d,
                                gfn_t gfn,
                                mfn_t mfn, unsigned int page_order)
 {
-    apply_p2m_changes(d, REMOVE,
+    apply_p2m_changes(d, p2m_get_hostp2m(d), REMOVE,
                       pfn_to_paddr(gfn_x(gfn)),
                       pfn_to_paddr(gfn_x(gfn) + (1<<page_order)),
                       pfn_to_paddr(mfn_x(mfn)), MATTR_MEM, 0, p2m_invalid,
@@ -1696,9 +1698,9 @@ int p2m_init(struct domain *d)
 
 int relinquish_p2m_mapping(struct domain *d)
 {
-    struct p2m_domain *p2m = &d->arch.p2m;
+    struct p2m_domain *p2m = p2m_get_hostp2m(d);
 
-    return apply_p2m_changes(d, RELINQUISH,
+    return apply_p2m_changes(d, p2m, RELINQUISH,
                               pfn_to_paddr(p2m->lowest_mapped_gfn),
                               pfn_to_paddr(p2m->max_mapped_gfn),
                               pfn_to_paddr(INVALID_MFN),
@@ -1708,12 +1710,12 @@ int relinquish_p2m_mapping(struct domain *d)
 
 int p2m_cache_flush(struct domain *d, xen_pfn_t start_mfn, xen_pfn_t end_mfn)
 {
-    struct p2m_domain *p2m = &d->arch.p2m;
+    struct p2m_domain *p2m = p2m_get_hostp2m(d);
 
     start_mfn = MAX(start_mfn, p2m->lowest_mapped_gfn);
     end_mfn = MIN(end_mfn, p2m->max_mapped_gfn);
 
-    return apply_p2m_changes(d, CACHEFLUSH,
+    return apply_p2m_changes(d, p2m, CACHEFLUSH,
                              pfn_to_paddr(start_mfn),
                              pfn_to_paddr(end_mfn),
                              pfn_to_paddr(INVALID_MFN),
@@ -1743,6 +1745,9 @@ p2m_mem_access_check_and_get_page(vaddr_t gva, unsigned long flag)
     xenmem_access_t xma;
     p2m_type_t t;
     struct page_info *page = NULL;
+    struct vcpu *v = current;
+    struct domain *d = v->domain;
+    struct p2m_domain *p2m = altp2m_active(d) ? p2m_get_altp2m(v) : p2m_get_hostp2m(d);
 
     rc = gva_to_ipa(gva, &ipa, flag);
     if ( rc < 0 )
@@ -1752,7 +1757,7 @@ p2m_mem_access_check_and_get_page(vaddr_t gva, unsigned long flag)
      * We do this first as this is faster in the default case when no
      * permission is set on the page.
      */
-    rc = __p2m_get_mem_access(current->domain, _gfn(paddr_to_pfn(ipa)), &xma);
+    rc = __p2m_get_mem_access(p2m, _gfn(paddr_to_pfn(ipa)), &xma);
     if ( rc < 0 )
         goto err;
 
@@ -1801,7 +1806,7 @@ p2m_mem_access_check_and_get_page(vaddr_t gva, unsigned long flag)
      * We had a mem_access permission limiting the access, but the page type
      * could also be limiting, so we need to check that as well.
      */
-    maddr = __p2m_lookup(current->domain, ipa, &t);
+    maddr = __p2m_lookup(p2m, ipa, &t);
     if ( maddr == INVALID_PADDR )
         goto err;
 
@@ -2125,7 +2130,7 @@ long p2m_set_mem_access(struct domain *d, gfn_t gfn, uint32_t nr,
         return 0;
     }
 
-    rc = apply_p2m_changes(d, MEMACCESS,
+    rc = apply_p2m_changes(d, p2m, MEMACCESS,
                            pfn_to_paddr(gfn_x(gfn) + start),
                            pfn_to_paddr(gfn_x(gfn) + nr),
                            0, MATTR_MEM, mask, 0, a);
@@ -2141,10 +2146,11 @@ int p2m_get_mem_access(struct domain *d, gfn_t gfn,
                        xenmem_access_t *access)
 {
     int ret;
-    struct p2m_domain *p2m = p2m_get_hostp2m(d);
+    struct vcpu *v = current;
+    struct p2m_domain *p2m = altp2m_active(d) ? p2m_get_altp2m(v) : p2m_get_hostp2m(d);
 
     spin_lock(&p2m->lock);
-    ret = __p2m_get_mem_access(d, gfn, access);
+    ret = __p2m_get_mem_access(p2m, gfn, access);
     spin_unlock(&p2m->lock);
 
     return ret;
-- 
2.8.3


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply related	[flat|nested] 126+ messages in thread

* [PATCH 13/18] arm/altp2m: Make get_page_from_gva ready for altp2m.
  2016-07-04 11:45 [PATCH 00/18] arm/altp2m: Introducing altp2m to ARM Sergej Proskurin
                   ` (11 preceding siblings ...)
  2016-07-04 11:45 ` [PATCH 12/18] arm/altp2m: Cosmetic fixes - function prototypes Sergej Proskurin
@ 2016-07-04 11:45 ` Sergej Proskurin
  2016-07-04 20:34   ` Julien Grall
  2016-07-04 11:45 ` [PATCH 14/18] arm/altp2m: Add HVMOP_altp2m_set_mem_access Sergej Proskurin
                   ` (23 subsequent siblings)
  36 siblings, 1 reply; 126+ messages in thread
From: Sergej Proskurin @ 2016-07-04 11:45 UTC (permalink / raw)
  To: xen-devel; +Cc: Sergej Proskurin, Julien Grall, Stefano Stabellini

This commit adapts get_page_from_gva to consider the currently mapped
altp2m view during address translation. We also adapt the function's
prototype to take a "struct vcpu *" instead of a "struct domain *". This
change is required because the function indirectly calls gva_to_ma_par,
which requires the MMU to use the current p2m mapping. Thus, if the
caller is interested in a page that must be claimed from a vCPU other
than current, it must temporarily load the altp2m view used by the vCPU
in question. Therefore, we need to provide the particular vCPU to this
function.

Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
---
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Julien Grall <julien.grall@arm.com>
---
 xen/arch/arm/guestcopy.c |  6 +++---
 xen/arch/arm/p2m.c       | 19 +++++++++++++------
 xen/arch/arm/traps.c     |  2 +-
 xen/include/asm-arm/mm.h |  2 +-
 4 files changed, 18 insertions(+), 11 deletions(-)

diff --git a/xen/arch/arm/guestcopy.c b/xen/arch/arm/guestcopy.c
index ce1c3c3..413125f 100644
--- a/xen/arch/arm/guestcopy.c
+++ b/xen/arch/arm/guestcopy.c
@@ -17,7 +17,7 @@ static unsigned long raw_copy_to_guest_helper(void *to, const void *from,
         unsigned size = min(len, (unsigned)PAGE_SIZE - offset);
         struct page_info *page;
 
-        page = get_page_from_gva(current->domain, (vaddr_t) to, GV2M_WRITE);
+        page = get_page_from_gva(current, (vaddr_t) to, GV2M_WRITE);
         if ( page == NULL )
             return len;
 
@@ -64,7 +64,7 @@ unsigned long raw_clear_guest(void *to, unsigned len)
         unsigned size = min(len, (unsigned)PAGE_SIZE - offset);
         struct page_info *page;
 
-        page = get_page_from_gva(current->domain, (vaddr_t) to, GV2M_WRITE);
+        page = get_page_from_gva(current, (vaddr_t) to, GV2M_WRITE);
         if ( page == NULL )
             return len;
 
@@ -96,7 +96,7 @@ unsigned long raw_copy_from_guest(void *to, const void __user *from, unsigned le
         unsigned size = min(len, (unsigned)(PAGE_SIZE - offset));
         struct page_info *page;
 
-        page = get_page_from_gva(current->domain, (vaddr_t) from, GV2M_READ);
+        page = get_page_from_gva(current, (vaddr_t) from, GV2M_READ);
         if ( page == NULL )
             return len;
 
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 9c8fefd..23b482f 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1829,10 +1829,11 @@ err:
     return page;
 }
 
-struct page_info *get_page_from_gva(struct domain *d, vaddr_t va,
+struct page_info *get_page_from_gva(struct vcpu *v, vaddr_t va,
                                     unsigned long flags)
 {
-    struct p2m_domain *p2m = &d->arch.p2m;
+    struct domain *d = v->domain;
+    struct p2m_domain *p2m = altp2m_active(d) ? p2m_get_altp2m(v) : p2m_get_hostp2m(d);
     struct page_info *page = NULL;
     paddr_t maddr = 0;
     int rc;
@@ -1844,17 +1845,23 @@ struct page_info *get_page_from_gva(struct domain *d, vaddr_t va,
         unsigned long irq_flags;
 
         local_irq_save(irq_flags);
-        p2m_load_VTTBR(d);
+
+        if ( altp2m_active(d) )
+            p2m_load_altp2m_VTTBR(v);
+        else
+            p2m_load_VTTBR(d);
 
         rc = gvirt_to_maddr(va, &maddr, flags);
 
-        p2m_load_VTTBR(current->domain);
+        if ( altp2m_active(current->domain) )
+            p2m_load_altp2m_VTTBR(current);
+        else
+            p2m_load_VTTBR(current->domain);
+
         local_irq_restore(irq_flags);
     }
     else
-    {
         rc = gvirt_to_maddr(va, &maddr, flags);
-    }
 
     if ( rc )
         goto err;
diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index 44926ca..6995971 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -957,7 +957,7 @@ static void show_guest_stack(struct vcpu *v, struct cpu_user_regs *regs)
         return;
     }
 
-    page = get_page_from_gva(v->domain, sp, GV2M_READ);
+    page = get_page_from_gva(v, sp, GV2M_READ);
     if ( page == NULL )
     {
         printk("Failed to convert stack to physical address\n");
diff --git a/xen/include/asm-arm/mm.h b/xen/include/asm-arm/mm.h
index 68cf203..19eadd2 100644
--- a/xen/include/asm-arm/mm.h
+++ b/xen/include/asm-arm/mm.h
@@ -281,7 +281,7 @@ static inline void *page_to_virt(const struct page_info *pg)
     return mfn_to_virt(page_to_mfn(pg));
 }
 
-struct page_info *get_page_from_gva(struct domain *d, vaddr_t va,
+struct page_info *get_page_from_gva(struct vcpu *v, vaddr_t va,
                                     unsigned long flags);
 
 /*
-- 
2.8.3



* [PATCH 14/18] arm/altp2m: Add HVMOP_altp2m_set_mem_access.
  2016-07-04 11:45 [PATCH 00/18] arm/altp2m: Introducing altp2m to ARM Sergej Proskurin
                   ` (12 preceding siblings ...)
  2016-07-04 11:45 ` [PATCH 13/18] arm/altp2m: Make get_page_from_gva ready for altp2m Sergej Proskurin
@ 2016-07-04 11:45 ` Sergej Proskurin
  2016-07-05 12:49   ` Julien Grall
  2016-07-06 17:08   ` Julien Grall
  2016-07-04 11:45 ` [PATCH 15/18] arm/altp2m: Add altp2m paging mechanism Sergej Proskurin
                   ` (22 subsequent siblings)
  36 siblings, 2 replies; 126+ messages in thread
From: Sergej Proskurin @ 2016-07-04 11:45 UTC (permalink / raw)
  To: xen-devel; +Cc: Sergej Proskurin, Julien Grall, Stefano Stabellini

The HVMOP HVMOP_altp2m_set_mem_access allows setting the gfn permissions
(currently one page at a time) of a specific altp2m view. If the view
does not hold the requested gfn entry, the entry is first copied from
the hostp2m table and then modified as requested.

Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
---
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Julien Grall <julien.grall@arm.com>
---
 xen/arch/arm/hvm.c |   7 +-
 xen/arch/arm/p2m.c | 207 +++++++++++++++++++++++++++++++++++++++++++++++++----
 2 files changed, 200 insertions(+), 14 deletions(-)

diff --git a/xen/arch/arm/hvm.c b/xen/arch/arm/hvm.c
index 9a536b2..8218737 100644
--- a/xen/arch/arm/hvm.c
+++ b/xen/arch/arm/hvm.c
@@ -153,7 +153,12 @@ static int do_altp2m_op(XEN_GUEST_HANDLE_PARAM(void) arg)
         break;
 
     case HVMOP_altp2m_set_mem_access:
-        rc = -EOPNOTSUPP;
+        if ( a.u.set_mem_access.pad )
+            rc = -EINVAL;
+        else
+            rc = p2m_set_mem_access(d, _gfn(a.u.set_mem_access.gfn), 1, 0, 0,
+                                    a.u.set_mem_access.hvmmem_access,
+                                    a.u.set_mem_access.view);
         break;
 
     case HVMOP_altp2m_change_gfn:
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 23b482f..395ea0f 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -2085,6 +2085,159 @@ bool_t p2m_mem_access_check(paddr_t gpa, vaddr_t gla, const struct npfec npfec)
     return false;
 }
 
+static int p2m_get_gfn_level_and_attr(struct p2m_domain *p2m,
+                                      paddr_t paddr, unsigned int *level,
+                                      unsigned long *mattr)
+{
+    const unsigned int offsets[4] = {
+        zeroeth_table_offset(paddr),
+        first_table_offset(paddr),
+        second_table_offset(paddr),
+        third_table_offset(paddr)
+    };
+    lpae_t pte, *map;
+    unsigned int root_table;
+
+    ASSERT(spin_is_locked(&p2m->lock));
+    BUILD_BUG_ON(THIRD_MASK != PAGE_MASK);
+
+    if ( P2M_ROOT_PAGES > 1 )
+    {
+        /*
+         * Concatenated root-level tables. The table number will be
+         * the offset at the previous level. It is not possible to
+         * concatenate a level-0 root.
+         */
+        ASSERT(P2M_ROOT_LEVEL > 0);
+        root_table = offsets[P2M_ROOT_LEVEL - 1];
+        if ( root_table >= P2M_ROOT_PAGES )
+            goto err;
+    }
+    else
+        root_table = 0;
+
+    map = __map_domain_page(p2m->root + root_table);
+
+    ASSERT(P2M_ROOT_LEVEL < 4);
+
+    /* Find the p2m level of the wanted paddr */
+    for ( *level = P2M_ROOT_LEVEL ; *level < 4 ; (*level)++ )
+    {
+        pte = map[offsets[*level]];
+
+        if ( *level == 3 || !p2m_table(pte) )
+            /* Done */
+            break;
+
+        ASSERT(*level < 3);
+
+        /* Map for next level */
+        unmap_domain_page(map);
+        map = map_domain_page(_mfn(pte.p2m.base));
+    }
+
+    unmap_domain_page(map);
+
+    if ( !p2m_valid(pte) )
+        goto err;
+
+    /* Provide mattr information of the paddr */
+    *mattr = pte.p2m.mattr;
+
+    return 0;
+
+err:
+    return -EINVAL;
+}
+
+static inline
+int p2m_set_altp2m_mem_access(struct domain *d, struct p2m_domain *hp2m,
+                              struct p2m_domain *ap2m, p2m_access_t a,
+                              gfn_t gfn)
+{
+    p2m_type_t p2mt;
+    xenmem_access_t xma_old;
+    paddr_t gpa = pfn_to_paddr(gfn_x(gfn));
+    paddr_t maddr, mask = 0;
+    unsigned int level;
+    unsigned long mattr;
+    int rc;
+
+    static const p2m_access_t memaccess[] = {
+#define ACCESS(ac) [XENMEM_access_##ac] = p2m_access_##ac
+        ACCESS(n),
+        ACCESS(r),
+        ACCESS(w),
+        ACCESS(rw),
+        ACCESS(x),
+        ACCESS(rx),
+        ACCESS(wx),
+        ACCESS(rwx),
+        ACCESS(rx2rw),
+        ACCESS(n2rwx),
+#undef ACCESS
+    };
+
+    /* Check if entry is part of the altp2m view. */
+    spin_lock(&ap2m->lock);
+    maddr = __p2m_lookup(ap2m, gpa, &p2mt);
+    spin_unlock(&ap2m->lock);
+
+    /* Check host p2m if no valid entry in ap2m. */
+    if ( maddr == INVALID_PADDR )
+    {
+        /* Check if entry is part of the host p2m view. */
+        spin_lock(&hp2m->lock);
+        maddr = __p2m_lookup(hp2m, gpa, &p2mt);
+        if ( maddr == INVALID_PADDR || p2mt != p2m_ram_rw )
+            goto out;
+
+        rc = __p2m_get_mem_access(hp2m, gfn, &xma_old);
+        if ( rc )
+            goto out;
+
+        rc = p2m_get_gfn_level_and_attr(hp2m, gpa, &level, &mattr);
+        if ( rc )
+            goto out;
+        spin_unlock(&hp2m->lock);
+
+        mask = level_masks[level];
+
+        /* If this is a superpage, copy that first. */
+        if ( level != 3 )
+        {
+            rc = apply_p2m_changes(d, ap2m, INSERT,
+                                   gpa & mask,
+                                   (gpa + level_sizes[level]) & mask,
+                                   maddr & mask, mattr, 0, p2mt,
+                                   memaccess[xma_old]);
+            if ( rc < 0 )
+                goto out;
+        }
+    }
+    else
+    {
+        spin_lock(&ap2m->lock);
+        rc = p2m_get_gfn_level_and_attr(ap2m, gpa, &level, &mattr);
+        spin_unlock(&ap2m->lock);
+        if ( rc )
+            goto out;
+    }
+
+    /* Set mem access attributes - currently supporting only one (4K) page. */
+    mask = level_masks[3];
+    return apply_p2m_changes(d, ap2m, INSERT,
+                             gpa & mask,
+                             (gpa + level_sizes[level]) & mask,
+                             maddr & mask, mattr, 0, p2mt, a);
+
+out:
+    if ( spin_is_locked(&hp2m->lock) )
+        spin_unlock(&hp2m->lock);
+
+    return -ESRCH;
+}
+
 /*
  * Set access type for a region of pfns.
  * If gfn == INVALID_GFN, sets the default access type.
@@ -2093,7 +2246,7 @@ long p2m_set_mem_access(struct domain *d, gfn_t gfn, uint32_t nr,
                         uint32_t start, uint32_t mask, xenmem_access_t access,
                         unsigned int altp2m_idx)
 {
-    struct p2m_domain *p2m = p2m_get_hostp2m(d);
+    struct p2m_domain *hp2m = p2m_get_hostp2m(d), *ap2m = NULL;
     p2m_access_t a;
     long rc = 0;
 
@@ -2112,35 +2265,63 @@ long p2m_set_mem_access(struct domain *d, gfn_t gfn, uint32_t nr,
 #undef ACCESS
     };
 
+    /* altp2m view 0 is treated as the hostp2m */
+    if ( altp2m_idx )
+    {
+        if ( altp2m_idx >= MAX_ALTP2M ||
+             d->arch.altp2m_vttbr[altp2m_idx] == INVALID_MFN )
+            return -EINVAL;
+
+        ap2m = d->arch.altp2m_p2m[altp2m_idx];
+    }
+
     switch ( access )
     {
     case 0 ... ARRAY_SIZE(memaccess) - 1:
         a = memaccess[access];
         break;
     case XENMEM_access_default:
-        a = p2m->default_access;
+        a = hp2m->default_access;
         break;
     default:
         return -EINVAL;
     }
 
-    /*
-     * Flip mem_access_enabled to true when a permission is set, as to prevent
-     * allocating or inserting super-pages.
-     */
-    p2m->mem_access_enabled = true;
-
     /* If request to set default access. */
     if ( gfn_x(gfn) == INVALID_GFN )
     {
-        p2m->default_access = a;
+        hp2m->default_access = a;
         return 0;
     }
 
-    rc = apply_p2m_changes(d, p2m, MEMACCESS,
-                           pfn_to_paddr(gfn_x(gfn) + start),
-                           pfn_to_paddr(gfn_x(gfn) + nr),
-                           0, MATTR_MEM, mask, 0, a);
+
+    if ( ap2m )
+    {
+        /*
+         * Flip mem_access_enabled to true when a permission is set, as to prevent
+         * allocating or inserting super-pages.
+         */
+        ap2m->mem_access_enabled = true;
+
+        /*
+         * ARM altp2m currently supports only setting of memory access rights
+         * of only one (4K) page at a time.
+         */
+        rc = p2m_set_altp2m_mem_access(d, hp2m, ap2m, a, gfn);
+    }
+    else
+    {
+        /*
+         * Flip mem_access_enabled to true when a permission is set, as to prevent
+         * allocating or inserting super-pages.
+         */
+        hp2m->mem_access_enabled = true;
+
+        rc = apply_p2m_changes(d, hp2m, MEMACCESS,
+                               pfn_to_paddr(gfn_x(gfn) + start),
+                               pfn_to_paddr(gfn_x(gfn) + nr),
+                               0, MATTR_MEM, mask, 0, a);
+    }
     if ( rc < 0 )
         return rc;
     else if ( rc > 0 )
-- 
2.8.3



* [PATCH 15/18] arm/altp2m: Add altp2m paging mechanism.
  2016-07-04 11:45 [PATCH 00/18] arm/altp2m: Introducing altp2m to ARM Sergej Proskurin
                   ` (13 preceding siblings ...)
  2016-07-04 11:45 ` [PATCH 14/18] arm/altp2m: Add HVMOP_altp2m_set_mem_access Sergej Proskurin
@ 2016-07-04 11:45 ` Sergej Proskurin
  2016-07-04 20:53   ` Julien Grall
  2016-07-04 11:45 ` [PATCH 16/18] arm/altp2m: Extended libxl to activate altp2m on ARM Sergej Proskurin
                   ` (21 subsequent siblings)
  36 siblings, 1 reply; 126+ messages in thread
From: Sergej Proskurin @ 2016-07-04 11:45 UTC (permalink / raw)
  To: xen-devel; +Cc: Sergej Proskurin, Julien Grall, Stefano Stabellini

This commit adds the function p2m_altp2m_lazy_copy, which implements the
altp2m paging mechanism: on 2nd stage instruction or data access
violations, it lazily copies the hostp2m's mapping into the currently
active altp2m view. Every altp2m violation generates a vm_event.

Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
---
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Julien Grall <julien.grall@arm.com>
---
 xen/arch/arm/p2m.c           | 130 ++++++++++++++++++++++++++++++++++++++++++-
 xen/arch/arm/traps.c         | 102 +++++++++++++++++++++++++++------
 xen/include/asm-arm/altp2m.h |   4 +-
 xen/include/asm-arm/p2m.h    |  17 ++++--
 4 files changed, 224 insertions(+), 29 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 395ea0f..96892a5 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -15,6 +15,7 @@
 #include <asm/hardirq.h>
 #include <asm/page.h>
 
+#include <asm/vm_event.h>
 #include <asm/altp2m.h>
 
 #ifdef CONFIG_ARM_64
@@ -1955,6 +1956,12 @@ void __init setup_virt_paging(void)
     smp_call_function(setup_virt_paging_one, (void *)val, 1);
 }
 
+void p2m_altp2m_check(struct vcpu *v, uint16_t idx)
+{
+    if ( altp2m_active(v->domain) )
+        p2m_switch_vcpu_altp2m_by_id(v, idx);
+}
+
 bool_t p2m_mem_access_check(paddr_t gpa, vaddr_t gla, const struct npfec npfec)
 {
     int rc;
@@ -1962,13 +1969,14 @@ bool_t p2m_mem_access_check(paddr_t gpa, vaddr_t gla, const struct npfec npfec)
     xenmem_access_t xma;
     vm_event_request_t *req;
     struct vcpu *v = current;
-    struct p2m_domain *p2m = p2m_get_hostp2m(v->domain);
+    struct domain *d = v->domain;
+    struct p2m_domain *p2m = altp2m_active(d) ? p2m_get_altp2m(v) : p2m_get_hostp2m(d);
 
     /* Mem_access is not in use. */
     if ( !p2m->mem_access_enabled )
         return true;
 
-    rc = p2m_get_mem_access(v->domain, _gfn(paddr_to_pfn(gpa)), &xma);
+    rc = p2m_get_mem_access(d, _gfn(paddr_to_pfn(gpa)), &xma);
     if ( rc )
         return true;
 
@@ -2074,6 +2082,14 @@ bool_t p2m_mem_access_check(paddr_t gpa, vaddr_t gla, const struct npfec npfec)
         req->u.mem_access.flags |= npfec.insn_fetch     ? MEM_ACCESS_X : 0;
         req->vcpu_id = v->vcpu_id;
 
+        vm_event_fill_regs(req);
+
+        if ( altp2m_active(v->domain) )
+        {
+            req->flags |= VM_EVENT_FLAG_ALTERNATE_P2M;
+            req->altp2m_idx = vcpu_altp2m(v).p2midx;
+        }
+
         mem_access_send_req(v->domain, req);
         xfree(req);
     }
@@ -2356,6 +2372,116 @@ struct p2m_domain *p2m_get_altp2m(struct vcpu *v)
     return v->domain->arch.altp2m_p2m[index];
 }
 
+bool_t p2m_switch_vcpu_altp2m_by_id(struct vcpu *v, unsigned int idx)
+{
+    struct domain *d = v->domain;
+    bool_t rc = 0;
+
+    if ( idx >= MAX_ALTP2M )
+        return rc;
+
+    altp2m_lock(d);
+
+    if ( d->arch.altp2m_vttbr[idx] != INVALID_MFN )
+    {
+        if ( idx != vcpu_altp2m(v).p2midx )
+        {
+            atomic_dec(&p2m_get_altp2m(v)->active_vcpus);
+            vcpu_altp2m(v).p2midx = idx;
+            atomic_inc(&p2m_get_altp2m(v)->active_vcpus);
+        }
+        rc = 1;
+    }
+
+    altp2m_unlock(d);
+
+    return rc;
+}
+
+/*
+ * If the fault is for a not present entry:
+ *     if the entry in the host p2m has a valid mfn, copy it and retry
+ *     else indicate that outer handler should handle fault
+ *
+ * If the fault is for a present entry:
+ *     indicate that outer handler should handle fault
+ */
+bool_t p2m_altp2m_lazy_copy(struct vcpu *v, paddr_t gpa,
+                            unsigned long gva, struct npfec npfec,
+                            struct p2m_domain **ap2m)
+{
+    struct domain *d = v->domain;
+    struct p2m_domain *hp2m = p2m_get_hostp2m(v->domain);
+    p2m_type_t p2mt;
+    xenmem_access_t xma;
+    paddr_t maddr, mask = 0;
+    gfn_t gfn = _gfn(paddr_to_pfn(gpa));
+    unsigned int level;
+    unsigned long mattr;
+    int rc = 0;
+
+    static const p2m_access_t memaccess[] = {
+#define ACCESS(ac) [XENMEM_access_##ac] = p2m_access_##ac
+        ACCESS(n),
+        ACCESS(r),
+        ACCESS(w),
+        ACCESS(rw),
+        ACCESS(x),
+        ACCESS(rx),
+        ACCESS(wx),
+        ACCESS(rwx),
+        ACCESS(rx2rw),
+        ACCESS(n2rwx),
+#undef ACCESS
+    };
+
+    *ap2m = p2m_get_altp2m(v);
+    if ( *ap2m == NULL)
+        return 0;
+
+    /* Check if entry is part of the altp2m view */
+    spin_lock(&(*ap2m)->lock);
+    maddr = __p2m_lookup(*ap2m, gpa, NULL);
+    spin_unlock(&(*ap2m)->lock);
+    if ( maddr != INVALID_PADDR )
+        return 0;
+
+    /* Check if entry is part of the host p2m view */
+    spin_lock(&hp2m->lock);
+    maddr = __p2m_lookup(hp2m, gpa, &p2mt);
+    if ( maddr == INVALID_PADDR )
+        goto out;
+
+    rc = __p2m_get_mem_access(hp2m, gfn, &xma);
+    if ( rc )
+        goto out;
+
+    rc = p2m_get_gfn_level_and_attr(hp2m, gpa, &level, &mattr);
+    if ( rc )
+        goto out;
+    spin_unlock(&hp2m->lock);
+
+    mask = level_masks[level];
+
+    rc = apply_p2m_changes(d, *ap2m, INSERT,
+                           pfn_to_paddr(gfn_x(gfn)) & mask,
+                           (pfn_to_paddr(gfn_x(gfn)) + level_sizes[level]) & mask,
+                           maddr & mask, mattr, 0, p2mt,
+                           memaccess[xma]);
+    if ( rc )
+    {
+        gdprintk(XENLOG_ERR, "failed to set entry for %lx -> %lx p2m %lx\n",
+                (unsigned long)pfn_to_paddr(gfn_x(gfn)), (unsigned long)(maddr), (unsigned long)*ap2m);
+        domain_crash(hp2m->domain);
+    }
+
+    return 1;
+
+out:
+    spin_unlock(&hp2m->lock);
+    return 0;
+}
+
 static void p2m_init_altp2m_helper(struct domain *d, unsigned int i)
 {
     struct p2m_domain *p2m = d->arch.altp2m_p2m[i];
diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index 6995971..78db2cf 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -48,6 +48,8 @@
 #include <asm/gic.h>
 #include <asm/vgic.h>
 
+#include <asm/altp2m.h>
+
 /* The base of the stack must always be double-word aligned, which means
  * that both the kernel half of struct cpu_user_regs (which is pushed in
  * entry.S) and struct cpu_info (which lives at the bottom of a Xen
@@ -2383,35 +2385,64 @@ static void do_trap_instr_abort_guest(struct cpu_user_regs *regs,
 {
     int rc;
     register_t gva = READ_SYSREG(FAR_EL2);
+    struct vcpu *v = current;
+    struct domain *d = v->domain;
+    struct p2m_domain *p2m = NULL;
+    paddr_t gpa;
+
+    if ( hsr.iabt.s1ptw )
+        gpa = get_faulting_ipa();
+    else
+    {
+        /*
+         * Flush the TLB to make sure the DTLB is clear before
+         * doing GVA->IPA translation. If we got here because of
+         * an entry only present in the ITLB, this translation may
+         * still be inaccurate.
+         */
+        flush_tlb_local();
+
+        rc = gva_to_ipa(gva, &gpa, GV2M_READ);
+        if ( rc == -EFAULT )
+            goto bad_insn_abort;
+    }
 
     switch ( hsr.iabt.ifsc & 0x3f )
     {
+    case FSC_FLT_TRANS ... FSC_FLT_TRANS + 3:
+    {
+        if ( altp2m_active(d) )
+        {
+            const struct npfec npfec = {
+                .insn_fetch = 1,
+                .gla_valid = 1,
+                .kind = hsr.iabt.s1ptw ? npfec_kind_in_gpt : npfec_kind_with_gla
+            };
+
+            /*
+             * Copy the entire page of the failing instruction into the
+             * currently active altp2m view.
+             */
+            if ( p2m_altp2m_lazy_copy(v, gpa, gva, npfec, &p2m) )
+                return;
+
+            rc = p2m_mem_access_check(gpa, gva, npfec);
+
+            /* Trap was triggered by mem_access, work here is done */
+            if ( !rc )
+                return;
+        }
+
+        break;
+    }
     case FSC_FLT_PERM ... FSC_FLT_PERM + 3:
     {
-        paddr_t gpa;
         const struct npfec npfec = {
             .insn_fetch = 1,
             .gla_valid = 1,
             .kind = hsr.iabt.s1ptw ? npfec_kind_in_gpt : npfec_kind_with_gla
         };
 
-        if ( hsr.iabt.s1ptw )
-            gpa = get_faulting_ipa();
-        else
-        {
-            /*
-             * Flush the TLB to make sure the DTLB is clear before
-             * doing GVA->IPA translation. If we got here because of
-             * an entry only present in the ITLB, this translation may
-             * still be inaccurate.
-             */
-            flush_tlb_local();
-
-            rc = gva_to_ipa(gva, &gpa, GV2M_READ);
-            if ( rc == -EFAULT )
-                goto bad_insn_abort;
-        }
-
         rc = p2m_mem_access_check(gpa, gva, npfec);
 
         /* Trap was triggered by mem_access, work here is done */
@@ -2429,6 +2460,8 @@ static void do_trap_data_abort_guest(struct cpu_user_regs *regs,
                                      const union hsr hsr)
 {
     const struct hsr_dabt dabt = hsr.dabt;
+    struct vcpu *v = current;
+    struct p2m_domain *p2m = NULL;
     int rc;
     mmio_info_t info;
 
@@ -2449,6 +2482,12 @@ static void do_trap_data_abort_guest(struct cpu_user_regs *regs,
         info.gpa = get_faulting_ipa();
     else
     {
+        /*
+         * When using altp2m, this flush is required to get rid of old TLB
+         * entries and use the new, lazily copied, ap2m entries.
+         */
+        flush_tlb_local();
+
         rc = gva_to_ipa(info.gva, &info.gpa, GV2M_READ);
         if ( rc == -EFAULT )
             goto bad_data_abort;
@@ -2456,6 +2495,33 @@ static void do_trap_data_abort_guest(struct cpu_user_regs *regs,
 
     switch ( dabt.dfsc & 0x3f )
     {
+    case FSC_FLT_TRANS ... FSC_FLT_TRANS + 3:
+    {
+        if ( altp2m_active(current->domain) )
+        {
+            const struct npfec npfec = {
+                .read_access = !dabt.write,
+                .write_access = dabt.write,
+                .gla_valid = 1,
+                .kind = dabt.s1ptw ? npfec_kind_in_gpt : npfec_kind_with_gla
+            };
+
+            /*
+             * Copy the entire page of the failing data access into the
+             * currently active altp2m view.
+             */
+            if ( p2m_altp2m_lazy_copy(v, info.gpa, info.gva, npfec, &p2m) )
+                return;
+
+            rc = p2m_mem_access_check(info.gpa, info.gva, npfec);
+
+            /* Trap was triggered by mem_access, work here is done */
+            if ( !rc )
+                return;
+        }
+
+        break;
+    }
     case FSC_FLT_PERM ... FSC_FLT_PERM + 3:
     {
         const struct npfec npfec = {
diff --git a/xen/include/asm-arm/altp2m.h b/xen/include/asm-arm/altp2m.h
index ec4aa09..2a87d14 100644
--- a/xen/include/asm-arm/altp2m.h
+++ b/xen/include/asm-arm/altp2m.h
@@ -31,9 +31,7 @@ static inline bool_t altp2m_active(const struct domain *d)
 /* Alternate p2m VCPU */
 static inline uint16_t altp2m_vcpu_idx(const struct vcpu *v)
 {
-    /* Not implemented on ARM, should not be reached. */
-    BUG();
-    return 0;
+    return vcpu_altp2m(v).p2midx;
 }
 
 void altp2m_vcpu_initialise(struct vcpu *v);
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index 451b097..b82e4b9 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -115,12 +115,6 @@ void p2m_mem_access_emulate_check(struct vcpu *v,
     /* Not supported on ARM. */
 }
 
-static inline
-void p2m_altp2m_check(struct vcpu *v, uint16_t idx)
-{
-    /* Not supported on ARM. */
-}
-
 /*
  * Alternate p2m: shadow p2m tables used for alternate memory views.
  */
@@ -131,12 +125,23 @@ void p2m_altp2m_check(struct vcpu *v, uint16_t idx)
 /* Get current alternate p2m table */
 struct p2m_domain *p2m_get_altp2m(struct vcpu *v);
 
+/* Switch alternate p2m for a single vcpu */
+bool_t p2m_switch_vcpu_altp2m_by_id(struct vcpu *v, unsigned int idx);
+
+/* Check to see if vcpu should be switched to a different p2m. */
+void p2m_altp2m_check(struct vcpu *v, uint16_t idx);
+
 /* Flush all the alternate p2m's for a domain */
 void p2m_flush_altp2m(struct domain *d);
 
 /* Make a specific alternate p2m valid */
 int p2m_init_altp2m_by_id(struct domain *d, unsigned int idx);
 
+/* Alternate p2m paging */
+bool_t p2m_altp2m_lazy_copy(struct vcpu *v, paddr_t gpa,
+                            unsigned long gla, struct npfec npfec,
+                            struct p2m_domain **ap2m);
+
 /* Find an available alternate p2m and make it valid */
 int p2m_init_next_altp2m(struct domain *d, uint16_t *idx);
 
-- 
2.8.3


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply related	[flat|nested] 126+ messages in thread

* [PATCH 16/18] arm/altp2m: Extended libxl to activate altp2m on ARM.
  2016-07-04 11:45 [PATCH 00/18] arm/altp2m: Introducing altp2m to ARM Sergej Proskurin
                   ` (14 preceding siblings ...)
  2016-07-04 11:45 ` [PATCH 15/18] arm/altp2m: Add altp2m paging mechanism Sergej Proskurin
@ 2016-07-04 11:45 ` Sergej Proskurin
  2016-07-07 16:27   ` Wei Liu
  2016-07-04 11:45 ` [PATCH 17/18] arm/altp2m: Adjust debug information to altp2m Sergej Proskurin
                   ` (20 subsequent siblings)
  36 siblings, 1 reply; 126+ messages in thread
From: Sergej Proskurin @ 2016-07-04 11:45 UTC (permalink / raw)
  To: xen-devel; +Cc: Sergej Proskurin, Ian Jackson, Wei Liu

The current implementation allows setting the parameter HVM_PARAM_ALTP2M,
which enables further use of altp2m on ARM. For this, we define an
additional altp2m field for PV domains as part of the
libxl_domain_build_info struct. On ARM systems, this field can be set
through the "altp2m" switch in the domain's configuration file (i.e.,
altp2m=1).

Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
---
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Wei Liu <wei.liu2@citrix.com>
---
 tools/libxl/libxl_create.c  |  1 +
 tools/libxl/libxl_dom.c     | 14 ++++++++++++++
 tools/libxl/libxl_types.idl |  1 +
 tools/libxl/xl_cmdimpl.c    |  5 +++++
 4 files changed, 21 insertions(+)

diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
index 1b99472..40b5f61 100644
--- a/tools/libxl/libxl_create.c
+++ b/tools/libxl/libxl_create.c
@@ -400,6 +400,7 @@ int libxl__domain_build_info_setdefault(libxl__gc *gc,
             b_info->cmdline = b_info->u.pv.cmdline;
             b_info->u.pv.cmdline = NULL;
         }
+        libxl_defbool_setdefault(&b_info->u.pv.altp2m, false);
         break;
     default:
         LOG(ERROR, "invalid domain type %s in create info",
diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
index ec29060..ab023a2 100644
--- a/tools/libxl/libxl_dom.c
+++ b/tools/libxl/libxl_dom.c
@@ -277,6 +277,16 @@ err:
 }
 #endif
 
+#if defined(__arm__) || defined(__aarch64__)
+static void pv_set_conf_params(xc_interface *handle, uint32_t domid,
+                               libxl_domain_build_info *const info)
+{
+    if ( libxl_defbool_val(info->u.pv.altp2m) )
+        xc_hvm_param_set(handle, domid, HVM_PARAM_ALTP2M,
+                         libxl_defbool_val(info->u.pv.altp2m));
+}
+#endif
+
 static void hvm_set_conf_params(xc_interface *handle, uint32_t domid,
                                 libxl_domain_build_info *const info)
 {
@@ -433,6 +443,10 @@ int libxl__build_pre(libxl__gc *gc, uint32_t domid,
             return rc;
 #endif
     }
+#if defined(__arm__) || defined(__aarch64__)
+    else /* info->type == LIBXL_DOMAIN_TYPE_PV */
+        pv_set_conf_params(ctx->xch, domid, info);
+#endif
 
     rc = libxl__arch_domain_create(gc, d_config, domid);
 
diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
index ef614be..0a164f9 100644
--- a/tools/libxl/libxl_types.idl
+++ b/tools/libxl/libxl_types.idl
@@ -554,6 +554,7 @@ libxl_domain_build_info = Struct("domain_build_info",[
                                       ("features", string, {'const': True}),
                                       # Use host's E820 for PCI passthrough.
                                       ("e820_host", libxl_defbool),
+                                      ("altp2m", libxl_defbool),
                                       ])),
                  ("invalid", None),
                  ], keyvar_init_val = "LIBXL_DOMAIN_TYPE_INVALID")),
diff --git a/tools/libxl/xl_cmdimpl.c b/tools/libxl/xl_cmdimpl.c
index 6459eec..12c6e48 100644
--- a/tools/libxl/xl_cmdimpl.c
+++ b/tools/libxl/xl_cmdimpl.c
@@ -1718,6 +1718,11 @@ static void parse_config_data(const char *config_source,
             exit(1);
         }
 
+#if defined(__arm__) || defined(__aarch64__)
+        /* Enable altp2m for PV guests solely on ARM */
+        xlu_cfg_get_defbool(config, "altp2m", &b_info->u.pv.altp2m, 0);
+#endif
+
         break;
     }
     default:
-- 
2.8.3



^ permalink raw reply related	[flat|nested] 126+ messages in thread

* [PATCH 17/18] arm/altp2m: Adjust debug information to altp2m.
  2016-07-04 11:45 [PATCH 00/18] arm/altp2m: Introducing altp2m to ARM Sergej Proskurin
                   ` (15 preceding siblings ...)
  2016-07-04 11:45 ` [PATCH 16/18] arm/altp2m: Extended libxl to activate altp2m on ARM Sergej Proskurin
@ 2016-07-04 11:45 ` Sergej Proskurin
  2016-07-04 20:58   ` Julien Grall
  2016-07-04 11:45 ` [PATCH 18/18] arm/altp2m: Extend xen-access for altp2m on ARM Sergej Proskurin
                   ` (19 subsequent siblings)
  36 siblings, 1 reply; 126+ messages in thread
From: Sergej Proskurin @ 2016-07-04 11:45 UTC (permalink / raw)
  To: xen-devel; +Cc: Sergej Proskurin, Julien Grall, Stefano Stabellini

Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
---
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Julien Grall <julien.grall@arm.com>
---
 xen/arch/arm/p2m.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 96892a5..de97a12 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -51,7 +51,8 @@ static bool_t p2m_mapping(lpae_t pte)
 
 void p2m_dump_info(struct domain *d)
 {
-    struct p2m_domain *p2m = &d->arch.p2m;
+    struct vcpu *v = current;
+    struct p2m_domain *p2m = altp2m_active(d) ? p2m_get_altp2m(v) : p2m_get_hostp2m(d);
 
     spin_lock(&p2m->lock);
     printk("p2m mappings for domain %d (vmid %d):\n",
@@ -71,7 +72,8 @@ void memory_type_changed(struct domain *d)
 
 void dump_p2m_lookup(struct domain *d, paddr_t addr)
 {
-    struct p2m_domain *p2m = &d->arch.p2m;
+    struct vcpu *v = current;
+    struct p2m_domain *p2m = altp2m_active(d) ? p2m_get_altp2m(v) : p2m_get_hostp2m(d);
 
     printk("dom%d IPA 0x%"PRIpaddr"\n", d->domain_id, addr);
 
-- 
2.8.3



^ permalink raw reply related	[flat|nested] 126+ messages in thread

* [PATCH 18/18] arm/altp2m: Extend xen-access for altp2m on ARM.
  2016-07-04 11:45 [PATCH 00/18] arm/altp2m: Introducing altp2m to ARM Sergej Proskurin
                   ` (16 preceding siblings ...)
  2016-07-04 11:45 ` [PATCH 17/18] arm/altp2m: Adjust debug information to altp2m Sergej Proskurin
@ 2016-07-04 11:45 ` Sergej Proskurin
  2016-07-04 13:38   ` Razvan Cojocaru
  2016-07-04 11:45 ` [PATCH 01/18] arm/altp2m: Add cmd-line support " Sergej Proskurin
                   ` (18 subsequent siblings)
  36 siblings, 1 reply; 126+ messages in thread
From: Sergej Proskurin @ 2016-07-04 11:45 UTC (permalink / raw)
  To: xen-devel
  Cc: Sergej Proskurin, Tamas K Lengyel, Ian Jackson, Wei Liu, Razvan Cojocaru

Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
---
Cc: Razvan Cojocaru <rcojocaru@bitdefender.com>
Cc: Tamas K Lengyel <tamas@tklengyel.com>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Wei Liu <wei.liu2@citrix.com>
---
 tools/tests/xen-access/xen-access.c | 11 +++++++++--
 1 file changed, 9 insertions(+), 2 deletions(-)

diff --git a/tools/tests/xen-access/xen-access.c b/tools/tests/xen-access/xen-access.c
index f26e723..ef21d0d 100644
--- a/tools/tests/xen-access/xen-access.c
+++ b/tools/tests/xen-access/xen-access.c
@@ -333,8 +333,9 @@ void usage(char* progname)
 {
     fprintf(stderr, "Usage: %s [-m] <domain_id> write|exec", progname);
 #if defined(__i386__) || defined(__x86_64__)
-            fprintf(stderr, "|breakpoint|altp2m_write|altp2m_exec");
+            fprintf(stderr, "|breakpoint");
 #endif
+            fprintf(stderr, "|altp2m_write|altp2m_exec");
             fprintf(stderr,
             "\n"
             "Logs first page writes, execs, or breakpoint traps that occur on the domain.\n"
@@ -402,6 +403,7 @@ int main(int argc, char *argv[])
     {
         breakpoint = 1;
     }
+#endif
     else if ( !strcmp(argv[0], "altp2m_write") )
     {
         default_access = XENMEM_access_rx;
@@ -412,7 +414,6 @@ int main(int argc, char *argv[])
         default_access = XENMEM_access_rw;
         altp2m = 1;
     }
-#endif
     else
     {
         usage(argv[0]);
@@ -485,12 +486,14 @@ int main(int argc, char *argv[])
             goto exit;
         }
 
+#if defined(__i386__) || defined(__x86_64__)
         rc = xc_monitor_singlestep( xch, domain_id, 1 );
         if ( rc < 0 )
         {
             ERROR("Error %d failed to enable singlestep monitoring!\n", rc);
             goto exit;
         }
+#endif
     }
 
     if ( !altp2m )
@@ -540,7 +543,9 @@ int main(int argc, char *argv[])
                 rc = xc_altp2m_switch_to_view( xch, domain_id, 0 );
                 rc = xc_altp2m_destroy_view(xch, domain_id, altp2m_view_id);
                 rc = xc_altp2m_set_domain_state(xch, domain_id, 0);
+#if defined(__i386__) || defined(__x86_64__)
                 rc = xc_monitor_singlestep(xch, domain_id, 0);
+#endif
             } else {
                 rc = xc_set_mem_access(xch, domain_id, XENMEM_access_rwx, ~0ull, 0);
                 rc = xc_set_mem_access(xch, domain_id, XENMEM_access_rwx, START_PFN,
@@ -695,9 +700,11 @@ int main(int argc, char *argv[])
 exit:
     if ( altp2m )
     {
+#if defined(__i386__) || defined(__x86_64__)
         uint32_t vcpu_id;
         for ( vcpu_id = 0; vcpu_id<XEN_LEGACY_MAX_VCPUS; vcpu_id++)
             rc = control_singlestep(xch, domain_id, vcpu_id, 0);
+#endif
     }
 
     /* Tear down domain xenaccess */
-- 
2.8.3



^ permalink raw reply related	[flat|nested] 126+ messages in thread

* [PATCH 01/18] arm/altp2m: Add cmd-line support for altp2m on ARM.
  2016-07-04 11:45 [PATCH 00/18] arm/altp2m: Introducing altp2m to ARM Sergej Proskurin
                   ` (17 preceding siblings ...)
  2016-07-04 11:45 ` [PATCH 18/18] arm/altp2m: Extend xen-access for altp2m on ARM Sergej Proskurin
@ 2016-07-04 11:45 ` Sergej Proskurin
  2016-07-04 11:45 ` [PATCH 02/18] arm/altp2m: Add first altp2m HVMOP stubs Sergej Proskurin
                   ` (17 subsequent siblings)
  36 siblings, 0 replies; 126+ messages in thread
From: Sergej Proskurin @ 2016-07-04 11:45 UTC (permalink / raw)
  To: xen-devel; +Cc: Sergej Proskurin, Julien Grall, Stefano Stabellini

The Xen altp2m subsystem is currently supported only on x86-64 based
architectures. By utilizing ARM's virtualization extensions, we intend
to implement altp2m support for ARM architectures and thus further
extend current Virtual Machine Introspection (VMI) capabilities on ARM.

With this commit, Xen is now able to activate altp2m support on ARM by
means of the command-line argument 'altp2m' (bool).

Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
---
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Julien Grall <julien.grall@arm.com>
---
 xen/arch/arm/hvm.c            | 22 ++++++++++++++++++++
 xen/include/asm-arm/hvm/hvm.h | 47 +++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 69 insertions(+)
 create mode 100644 xen/include/asm-arm/hvm/hvm.h

diff --git a/xen/arch/arm/hvm.c b/xen/arch/arm/hvm.c
index d999bde..3615036 100644
--- a/xen/arch/arm/hvm.c
+++ b/xen/arch/arm/hvm.c
@@ -32,6 +32,28 @@
 
 #include <asm/hypercall.h>
 
+#include <asm/hvm/hvm.h>
+
+/* Xen command-line option enabling altp2m */
+static bool_t __initdata opt_altp2m_enabled = 0;
+boolean_param("altp2m", opt_altp2m_enabled);
+
+struct hvm_function_table hvm_funcs __read_mostly = {
+    .name = "ARM_HVM",
+};
+
+/* Initcall enabling hvm functionality. */
+static int __init hvm_enable(void)
+{
+    if ( opt_altp2m_enabled )
+        hvm_funcs.altp2m_supported = 1;
+    else
+        hvm_funcs.altp2m_supported = 0;
+
+    return 0;
+}
+presmp_initcall(hvm_enable);
+
 long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     long rc = 0;
diff --git a/xen/include/asm-arm/hvm/hvm.h b/xen/include/asm-arm/hvm/hvm.h
new file mode 100644
index 0000000..96c455c
--- /dev/null
+++ b/xen/include/asm-arm/hvm/hvm.h
@@ -0,0 +1,47 @@
+/*
+ * include/asm-arm/hvm/hvm.h
+ *
+ * Copyright (c) 2016, Sergej Proskurin <proskurin@sec.in.tum.de>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License, version 2,
+ * as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT ANY
+ * WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
+ * FOR A PARTICULAR PURPOSE.  See the GNU General Public License for more
+ * details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef __ASM_ARM_HVM_HVM_H__
+#define __ASM_ARM_HVM_HVM_H__
+
+struct hvm_function_table {
+    char *name;
+
+    /* Necessary hardware support for alternate p2m's. */
+    bool_t altp2m_supported;
+};
+
+extern struct hvm_function_table hvm_funcs;
+
+/* Returns true if hardware supports alternate p2m's */
+static inline bool_t hvm_altp2m_supported(void)
+{
+    return hvm_funcs.altp2m_supported;
+}
+
+#endif /* __ASM_ARM_HVM_HVM_H__ */
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
-- 
2.8.3



^ permalink raw reply related	[flat|nested] 126+ messages in thread

* [PATCH 02/18] arm/altp2m: Add first altp2m HVMOP stubs.
  2016-07-04 11:45 [PATCH 00/18] arm/altp2m: Introducing altp2m to ARM Sergej Proskurin
                   ` (18 preceding siblings ...)
  2016-07-04 11:45 ` [PATCH 01/18] arm/altp2m: Add cmd-line support " Sergej Proskurin
@ 2016-07-04 11:45 ` Sergej Proskurin
  2016-07-04 11:45 ` [PATCH 03/18] arm/altp2m: Add HVMOP_altp2m_get_domain_state Sergej Proskurin
                   ` (16 subsequent siblings)
  36 siblings, 0 replies; 126+ messages in thread
From: Sergej Proskurin @ 2016-07-04 11:45 UTC (permalink / raw)
  To: xen-devel; +Cc: Sergej Proskurin, Julien Grall, Stefano Stabellini

This commit moves the altp2m-related code from x86 to ARM. Functions
that are not yet supported notify the caller or print a BUG message
stating their absence.

Also, the struct arch_domain is extended with the altp2m_active
attribute, representing the domain's current altp2m activity
configuration.

Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
---
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Julien Grall <julien.grall@arm.com>
---
 xen/arch/arm/hvm.c           | 82 ++++++++++++++++++++++++++++++++++++++++++++
 xen/include/asm-arm/altp2m.h | 22 ++++++++++--
 xen/include/asm-arm/domain.h |  3 ++
 3 files changed, 105 insertions(+), 2 deletions(-)

diff --git a/xen/arch/arm/hvm.c b/xen/arch/arm/hvm.c
index 3615036..1118f22 100644
--- a/xen/arch/arm/hvm.c
+++ b/xen/arch/arm/hvm.c
@@ -32,6 +32,7 @@
 
 #include <asm/hypercall.h>
 
+#include <asm/altp2m.h>
 #include <asm/hvm/hvm.h>
 
 /* Xen command-line option enabling altp2m */
@@ -54,6 +55,83 @@ static int __init hvm_enable(void)
 }
 presmp_initcall(hvm_enable);
 
+static int do_altp2m_op(XEN_GUEST_HANDLE_PARAM(void) arg)
+{
+    struct xen_hvm_altp2m_op a;
+    struct domain *d = NULL;
+    int rc = 0;
+
+    if ( !hvm_altp2m_supported() )
+        return -EOPNOTSUPP;
+
+    if ( copy_from_guest(&a, arg, 1) )
+        return -EFAULT;
+
+    if ( a.pad1 || a.pad2 ||
+         (a.version != HVMOP_ALTP2M_INTERFACE_VERSION) ||
+         (a.cmd < HVMOP_altp2m_get_domain_state) ||
+         (a.cmd > HVMOP_altp2m_change_gfn) )
+        return -EINVAL;
+
+    d = (a.cmd != HVMOP_altp2m_vcpu_enable_notify) ?
+        rcu_lock_domain_by_any_id(a.domain) : rcu_lock_current_domain();
+
+    if ( d == NULL )
+        return -ESRCH;
+
+    if ( (a.cmd != HVMOP_altp2m_get_domain_state) &&
+         (a.cmd != HVMOP_altp2m_set_domain_state) &&
+         !d->arch.altp2m_active )
+    {
+        rc = -EOPNOTSUPP;
+        goto out;
+    }
+
+    if ( (rc = xsm_hvm_altp2mhvm_op(XSM_TARGET, d)) )
+        goto out;
+
+    switch ( a.cmd )
+    {
+    case HVMOP_altp2m_get_domain_state:
+        rc = -EOPNOTSUPP;
+        break;
+
+    case HVMOP_altp2m_set_domain_state:
+        rc = -EOPNOTSUPP;
+        break;
+
+    case HVMOP_altp2m_vcpu_enable_notify:
+        rc = -EOPNOTSUPP;
+        break;
+
+    case HVMOP_altp2m_create_p2m:
+        rc = -EOPNOTSUPP;
+        break;
+
+    case HVMOP_altp2m_destroy_p2m:
+        rc = -EOPNOTSUPP;
+        break;
+
+    case HVMOP_altp2m_switch_p2m:
+        rc = -EOPNOTSUPP;
+        break;
+
+    case HVMOP_altp2m_set_mem_access:
+        rc = -EOPNOTSUPP;
+        break;
+
+    case HVMOP_altp2m_change_gfn:
+        rc = -EOPNOTSUPP;
+        break;
+    }
+
+out:
+    rcu_unlock_domain(d);
+
+    return rc;
+}
+
+
 long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     long rc = 0;
@@ -102,6 +180,10 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
             rc = -EINVAL;
         break;
 
+    case HVMOP_altp2m:
+        rc = do_altp2m_op(arg);
+        break;
+
     default:
     {
         gdprintk(XENLOG_DEBUG, "HVMOP op=%lu: not implemented\n", op);
diff --git a/xen/include/asm-arm/altp2m.h b/xen/include/asm-arm/altp2m.h
index a87747a..16ae9d6 100644
--- a/xen/include/asm-arm/altp2m.h
+++ b/xen/include/asm-arm/altp2m.h
@@ -2,6 +2,7 @@
  * Alternate p2m
  *
  * Copyright (c) 2014, Intel Corporation.
+ * Copyright (c) 2016, Sergej Proskurin <proskurin@sec.in.tum.de>.
  *
  * This program is free software; you can redistribute it and/or modify it
  * under the terms and conditions of the GNU General Public License,
@@ -24,8 +25,7 @@
 /* Alternate p2m on/off per domain */
 static inline bool_t altp2m_active(const struct domain *d)
 {
-    /* Not implemented on ARM. */
-    return 0;
+    return d->arch.altp2m_active;
 }
 
 /* Alternate p2m VCPU */
@@ -36,4 +36,22 @@ static inline uint16_t altp2m_vcpu_idx(const struct vcpu *v)
     return 0;
 }
 
+static inline void altp2m_vcpu_initialise(struct vcpu *v)
+{
+    /* Not implemented on ARM, should not be reached. */
+    BUG();
+}
+
+static inline void altp2m_vcpu_destroy(struct vcpu *v)
+{
+    /* Not implemented on ARM, should not be reached. */
+    BUG();
+}
+
+static inline void altp2m_vcpu_reset(struct vcpu *v)
+{
+    /* Not implemented on ARM, should not be reached. */
+    BUG();
+}
+
 #endif /* __ASM_ARM_ALTP2M_H */
diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
index 979f7de..2039f16 100644
--- a/xen/include/asm-arm/domain.h
+++ b/xen/include/asm-arm/domain.h
@@ -127,6 +127,9 @@ struct arch_domain
     paddr_t efi_acpi_gpa;
     paddr_t efi_acpi_len;
 #endif
+
+    /* altp2m: allow multiple copies of host p2m */
+    bool_t altp2m_active;
 }  __cacheline_aligned;
 
 struct arch_vcpu
-- 
2.8.3



^ permalink raw reply related	[flat|nested] 126+ messages in thread

* [PATCH 03/18] arm/altp2m: Add HVMOP_altp2m_get_domain_state.
  2016-07-04 11:45 [PATCH 00/18] arm/altp2m: Introducing altp2m to ARM Sergej Proskurin
                   ` (19 preceding siblings ...)
  2016-07-04 11:45 ` [PATCH 02/18] arm/altp2m: Add first altp2m HVMOP stubs Sergej Proskurin
@ 2016-07-04 11:45 ` Sergej Proskurin
  2016-07-04 11:45 ` [PATCH 04/18] arm/altp2m: Add altp2m init/teardown routines Sergej Proskurin
                   ` (15 subsequent siblings)
  36 siblings, 0 replies; 126+ messages in thread
From: Sergej Proskurin @ 2016-07-04 11:45 UTC (permalink / raw)
  To: xen-devel; +Cc: Sergej Proskurin, Julien Grall, Stefano Stabellini

This commit adopts the x86 HVMOP_altp2m_get_domain_state implementation.

Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
---
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Julien Grall <julien.grall@arm.com>
---
 xen/arch/arm/hvm.c | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/hvm.c b/xen/arch/arm/hvm.c
index 1118f22..8e8e0f7 100644
--- a/xen/arch/arm/hvm.c
+++ b/xen/arch/arm/hvm.c
@@ -93,7 +93,14 @@ static int do_altp2m_op(XEN_GUEST_HANDLE_PARAM(void) arg)
     switch ( a.cmd )
     {
     case HVMOP_altp2m_get_domain_state:
-        rc = -EOPNOTSUPP;
+        if ( !d->arch.hvm_domain.params[HVM_PARAM_ALTP2M] )
+        {
+            rc = -EINVAL;
+            break;
+        }
+
+        a.u.domain_state.state = altp2m_active(d);
+        rc = __copy_to_guest(arg, &a, 1) ? -EFAULT : 0;
         break;
 
     case HVMOP_altp2m_set_domain_state:
-- 
2.8.3



^ permalink raw reply related	[flat|nested] 126+ messages in thread

* [PATCH 04/18] arm/altp2m: Add altp2m init/teardown routines.
  2016-07-04 11:45 [PATCH 00/18] arm/altp2m: Introducing altp2m to ARM Sergej Proskurin
                   ` (20 preceding siblings ...)
  2016-07-04 11:45 ` [PATCH 03/18] arm/altp2m: Add HVMOP_altp2m_get_domain_state Sergej Proskurin
@ 2016-07-04 11:45 ` Sergej Proskurin
  2016-07-04 11:45 ` [PATCH 05/18] arm/altp2m: Add HVMOP_altp2m_set_domain_state Sergej Proskurin
                   ` (14 subsequent siblings)
  36 siblings, 0 replies; 126+ messages in thread
From: Sergej Proskurin @ 2016-07-04 11:45 UTC (permalink / raw)
  To: xen-devel; +Cc: Sergej Proskurin, Julien Grall, Stefano Stabellini

The p2m initialization now invokes initialization routines responsible
for the allocation and initialization of altp2m structures. The same
applies to teardown routines. The functionality has been adopted from
the x86 altp2m implementation.

Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
---
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Julien Grall <julien.grall@arm.com>
---
 xen/arch/arm/p2m.c            | 166 ++++++++++++++++++++++++++++++++++++++++--
 xen/include/asm-arm/domain.h  |   6 ++
 xen/include/asm-arm/hvm/hvm.h |  12 +++
 xen/include/asm-arm/p2m.h     |  20 +++++
 4 files changed, 198 insertions(+), 6 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index aa4e774..e72ca7a 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1400,19 +1400,103 @@ static void p2m_free_vmid(struct domain *d)
     spin_unlock(&vmid_alloc_lock);
 }
 
-void p2m_teardown(struct domain *d)
+static int p2m_initialise(struct domain *d, struct p2m_domain *p2m)
 {
-    struct p2m_domain *p2m = &d->arch.p2m;
+    int ret = 0;
+
+    spin_lock_init(&p2m->lock);
+    INIT_PAGE_LIST_HEAD(&p2m->pages);
+
+    spin_lock(&p2m->lock);
+
+    p2m->domain = d;
+    p2m->access_required = false;
+    p2m->mem_access_enabled = false;
+    p2m->default_access = p2m_access_rwx;
+    p2m->p2m_class = p2m_host;
+    p2m->root = NULL;
+
+    /* Adopt VMID of the associated domain */
+    p2m->vmid = d->arch.p2m.vmid;
+    p2m->vttbr.vttbr = 0;
+    p2m->vttbr.vttbr_vmid = p2m->vmid;
+
+    p2m->max_mapped_gfn = 0;
+    p2m->lowest_mapped_gfn = ULONG_MAX;
+    radix_tree_init(&p2m->mem_access_settings);
+
+    spin_unlock(&p2m->lock);
+
+    return ret;
+}
+
+static void p2m_free_one(struct p2m_domain *p2m)
+{
+    mfn_t mfn;
+    unsigned int i;
     struct page_info *pg;
 
     spin_lock(&p2m->lock);
 
     while ( (pg = page_list_remove_head(&p2m->pages)) )
-        free_domheap_page(pg);
+        if ( pg != p2m->root )
+            free_domheap_page(pg);
+
+    for ( i = 0; i < P2M_ROOT_PAGES; i++ )
+    {
+        mfn = _mfn(page_to_mfn(p2m->root) + i);
+        clear_domain_page(mfn);
+    }
+    free_domheap_pages(p2m->root, P2M_ROOT_ORDER);
+    p2m->root = NULL;
+
+    radix_tree_destroy(&p2m->mem_access_settings, NULL);
+
+    spin_unlock(&p2m->lock);
+
+    xfree(p2m);
+}
+
+static struct p2m_domain *p2m_init_one(struct domain *d)
+{
+    struct p2m_domain *p2m = xzalloc(struct p2m_domain);
+
+    if ( !p2m )
+        return NULL;
+
+    if ( p2m_initialise(d, p2m) )
+        goto free_p2m;
+
+    return p2m;
+
+free_p2m:
+    xfree(p2m);
+    return NULL;
+}
+
+static void p2m_teardown_hostp2m(struct domain *d)
+{
+    struct p2m_domain *p2m = p2m_get_hostp2m(d);
+    struct page_info *pg = NULL;
+    mfn_t mfn;
+    unsigned int i;
+
+    spin_lock(&p2m->lock);
 
-    if ( p2m->root )
-        free_domheap_pages(p2m->root, P2M_ROOT_ORDER);
+    while ( (pg = page_list_remove_head(&p2m->pages)) )
+        if ( pg != p2m->root )
+        {
+            mfn = _mfn(page_to_mfn(pg));
+            clear_domain_page(mfn);
+            free_domheap_page(pg);
+        }
 
+    for ( i = 0; i < P2M_ROOT_PAGES; i++ )
+    {
+        mfn = _mfn(page_to_mfn(p2m->root) + i);
+        clear_domain_page(mfn);
+    }
+    free_domheap_pages(p2m->root, P2M_ROOT_ORDER);
     p2m->root = NULL;
 
     p2m_free_vmid(d);
@@ -1422,7 +1506,7 @@ void p2m_teardown(struct domain *d)
     spin_unlock(&p2m->lock);
 }
 
-int p2m_init(struct domain *d)
+static int p2m_init_hostp2m(struct domain *d)
 {
     struct p2m_domain *p2m = &d->arch.p2m;
     int rc = 0;
@@ -1437,6 +1521,8 @@ int p2m_init(struct domain *d)
     if ( rc != 0 )
         goto err;
 
+    p2m->vttbr.vttbr_vmid = p2m->vmid;
+
     d->arch.vttbr = 0;
 
     p2m->root = NULL;
@@ -1454,6 +1540,74 @@ err:
     return rc;
 }
 
+static void p2m_teardown_altp2m(struct domain *d)
+{
+    unsigned int i;
+    struct p2m_domain *p2m;
+
+    for ( i = 0; i < MAX_ALTP2M; i++ )
+    {
+        if ( !d->arch.altp2m_p2m[i] )
+            continue;
+
+        p2m = d->arch.altp2m_p2m[i];
+        p2m_free_one(p2m);
+        d->arch.altp2m_vttbr[i] = INVALID_MFN;
+        d->arch.altp2m_p2m[i] = NULL;
+    }
+
+    d->arch.altp2m_active = false;
+}
+
+static int p2m_init_altp2m(struct domain *d)
+{
+    unsigned int i;
+    struct p2m_domain *p2m;
+
+    spin_lock_init(&d->arch.altp2m_lock);
+    for ( i = 0; i < MAX_ALTP2M; i++ )
+    {
+        d->arch.altp2m_vttbr[i] = INVALID_MFN;
+        d->arch.altp2m_p2m[i] = p2m = p2m_init_one(d);
+        if ( p2m == NULL )
+        {
+            p2m_teardown_altp2m(d);
+            return -ENOMEM;
+        }
+        p2m->p2m_class = p2m_alternate;
+        p2m->access_required = 1;
+        _atomic_set(&p2m->active_vcpus, 0);
+    }
+
+    return 0;
+}
+
+void p2m_teardown(struct domain *d)
+{
+    /*
+     * We must teardown altp2m unconditionally because
+     * we initialise it unconditionally.
+     */
+    p2m_teardown_altp2m(d);
+
+    p2m_teardown_hostp2m(d);
+}
+
+int p2m_init(struct domain *d)
+{
+    int rc = 0;
+
+    rc = p2m_init_hostp2m(d);
+    if ( rc )
+        return rc;
+
+    rc = p2m_init_altp2m(d);
+    if ( rc )
+        p2m_teardown_hostp2m(d);
+
+    return rc;
+}
+
 int relinquish_p2m_mapping(struct domain *d)
 {
     struct p2m_domain *p2m = &d->arch.p2m;
diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
index 2039f16..6b9770f 100644
--- a/xen/include/asm-arm/domain.h
+++ b/xen/include/asm-arm/domain.h
@@ -29,6 +29,9 @@ enum domain_type {
 #define is_64bit_domain(d) (0)
 #endif
 
+#define MAX_ALTP2M      10 /* arbitrary */
+#define INVALID_ALTP2M  0xffff
+
 extern int dom0_11_mapping;
 #define is_domain_direct_mapped(d) ((d) == hardware_domain && dom0_11_mapping)
 
@@ -130,6 +133,9 @@ struct arch_domain
 
     /* altp2m: allow multiple copies of host p2m */
     bool_t altp2m_active;
+    struct p2m_domain *altp2m_p2m[MAX_ALTP2M];
+    spinlock_t altp2m_lock;
+    uint64_t altp2m_vttbr[MAX_ALTP2M];
 }  __cacheline_aligned;
 
 struct arch_vcpu
diff --git a/xen/include/asm-arm/hvm/hvm.h b/xen/include/asm-arm/hvm/hvm.h
index 96c455c..28d5298 100644
--- a/xen/include/asm-arm/hvm/hvm.h
+++ b/xen/include/asm-arm/hvm/hvm.h
@@ -19,6 +19,18 @@
 #ifndef __ASM_ARM_HVM_HVM_H__
 #define __ASM_ARM_HVM_HVM_H__
 
+struct vttbr_data {
+    union {
+        struct {
+            u64 vttbr_baddr :40, /* variable res0: from 0-(x-1) bit */
+                res1        :8,
+                vttbr_vmid  :8,
+                res2        :8;
+        };
+        u64 vttbr;
+    };
+};
+
 struct hvm_function_table {
     char *name;
 
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index 0d1e61e..a78d547 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -8,6 +8,9 @@
 #include <xen/p2m-common.h>
 #include <public/memory.h>
 
+#include <asm/atomic.h>
+#include <asm/hvm/hvm.h>
+
 #define paddr_bits PADDR_BITS
 
 /* Holds the bit size of IPAs in p2m tables.  */
@@ -17,6 +20,11 @@ struct domain;
 
 extern void memory_type_changed(struct domain *);
 
+typedef enum {
+    p2m_host,
+    p2m_alternate,
+} p2m_class_t;
+
 /* Per-p2m-table state */
 struct p2m_domain {
     /* Lock that protects updates to the p2m */
@@ -66,6 +74,18 @@ struct p2m_domain {
     /* Radix tree to store the p2m_access_t settings as the pte's don't have
      * enough available bits to store this information. */
     struct radix_tree_root mem_access_settings;
+
+    /* Alternate p2m: count of vcpu's currently using this p2m. */
+    atomic_t active_vcpus;
+
+    /* Choose between: host/alternate */
+    p2m_class_t p2m_class;
+
+    /* Back pointer to domain */
+    struct domain *domain;
+
+    /* VTTBR information */
+    struct vttbr_data vttbr;
 };
 
 /* List of possible type for each page in the p2m entry.
-- 
2.8.3


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


* [PATCH 05/18] arm/altp2m: Add HVMOP_altp2m_set_domain_state.
  2016-07-04 11:45 [PATCH 00/18] arm/altp2m: Introducing altp2m to ARM Sergej Proskurin
                   ` (21 preceding siblings ...)
  2016-07-04 11:45 ` [PATCH 04/18] arm/altp2m: Add altp2m init/teardown routines Sergej Proskurin
@ 2016-07-04 11:45 ` Sergej Proskurin
  2016-07-04 11:45 ` [PATCH 06/18] arm/altp2m: Add a(p2m) table flushing routines Sergej Proskurin
                   ` (13 subsequent siblings)
  36 siblings, 0 replies; 126+ messages in thread
From: Sergej Proskurin @ 2016-07-04 11:45 UTC (permalink / raw)
  To: xen-devel; +Cc: Sergej Proskurin, Julien Grall, Stefano Stabellini

HVMOP_altp2m_set_domain_state allows activating altp2m on a specific
domain. This commit adapts the x86 HVMOP_altp2m_set_domain_state
implementation. The function p2m_flush_altp2m is currently implemented
as a stub.

Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
---
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Julien Grall <julien.grall@arm.com>
---
 xen/arch/arm/Makefile        |  1 +
 xen/arch/arm/altp2m.c        | 68 ++++++++++++++++++++++++++++++++++++++++++++
 xen/arch/arm/hvm.c           | 30 ++++++++++++++++++-
 xen/arch/arm/p2m.c           | 46 ++++++++++++++++++++++++++++++
 xen/include/asm-arm/altp2m.h | 20 ++-----------
 xen/include/asm-arm/domain.h |  9 ++++++
 xen/include/asm-arm/p2m.h    | 19 +++++++++++++
 7 files changed, 175 insertions(+), 18 deletions(-)
 create mode 100644 xen/arch/arm/altp2m.c

diff --git a/xen/arch/arm/Makefile b/xen/arch/arm/Makefile
index 9e38da3..abd6f1a 100644
--- a/xen/arch/arm/Makefile
+++ b/xen/arch/arm/Makefile
@@ -41,6 +41,7 @@ obj-y += decode.o
 obj-y += processor.o
 obj-y += smc.o
 obj-$(CONFIG_LIVEPATCH) += livepatch.o
+obj-y += altp2m.o
 
 #obj-bin-y += ....o
 
diff --git a/xen/arch/arm/altp2m.c b/xen/arch/arm/altp2m.c
new file mode 100644
index 0000000..1d2505f
--- /dev/null
+++ b/xen/arch/arm/altp2m.c
@@ -0,0 +1,68 @@
+/*
+ * arch/arm/altp2m.c
+ *
+ * Alternate p2m
+ * Copyright (c) 2016 Sergej Proskurin <proskurin@sec.in.tum.de>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License, version 2,
+ * as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT ANY
+ * WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
+ * FOR A PARTICULAR PURPOSE.  See the GNU General Public License for more
+ * details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <asm/p2m.h>
+#include <asm/altp2m.h>
+#include <asm/hvm/hvm.h>
+
+void altp2m_vcpu_reset(struct vcpu *v)
+{
+    struct altp2mvcpu *av = &vcpu_altp2m(v);
+
+    av->p2midx = INVALID_ALTP2M;
+}
+
+void altp2m_vcpu_initialise(struct vcpu *v)
+{
+    if ( v != current )
+        vcpu_pause(v);
+
+    altp2m_vcpu_reset(v);
+    vcpu_altp2m(v).p2midx = 0;
+    atomic_inc(&p2m_get_altp2m(v)->active_vcpus);
+
+    if ( v != current )
+        vcpu_unpause(v);
+}
+
+void altp2m_vcpu_destroy(struct vcpu *v)
+{
+    struct p2m_domain *p2m;
+
+    if ( v != current )
+        vcpu_pause(v);
+
+    if ( (p2m = p2m_get_altp2m(v)) )
+        atomic_dec(&p2m->active_vcpus);
+
+    altp2m_vcpu_reset(v);
+
+    if ( v != current )
+        vcpu_unpause(v);
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/arch/arm/hvm.c b/xen/arch/arm/hvm.c
index 8e8e0f7..cb90a55 100644
--- a/xen/arch/arm/hvm.c
+++ b/xen/arch/arm/hvm.c
@@ -104,8 +104,36 @@ static int do_altp2m_op(XEN_GUEST_HANDLE_PARAM(void) arg)
         break;
 
     case HVMOP_altp2m_set_domain_state:
-        rc = -EOPNOTSUPP;
+    {
+        struct vcpu *v;
+        bool_t ostate;
+
+        if ( !d->arch.hvm_domain.params[HVM_PARAM_ALTP2M] )
+        {
+            rc = -EINVAL;
+            break;
+        }
+
+        ostate = d->arch.altp2m_active;
+        d->arch.altp2m_active = !!a.u.domain_state.state;
+
+        /* If the alternate p2m state has changed, handle appropriately */
+        if ( d->arch.altp2m_active != ostate &&
+             (ostate || !(rc = p2m_init_altp2m_by_id(d, 0))) )
+        {
+            for_each_vcpu( d, v )
+            {
+                if ( !ostate )
+                    altp2m_vcpu_initialise(v);
+                else
+                    altp2m_vcpu_destroy(v);
+            }
+
+            if ( ostate )
+                p2m_flush_altp2m(d);
+        }
         break;
+    }
 
     case HVMOP_altp2m_vcpu_enable_notify:
         rc = -EOPNOTSUPP;
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index e72ca7a..4a745fd 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -2064,6 +2064,52 @@ int p2m_get_mem_access(struct domain *d, gfn_t gfn,
     return ret;
 }
 
+struct p2m_domain *p2m_get_altp2m(struct vcpu *v)
+{
+    unsigned int index = vcpu_altp2m(v).p2midx;
+
+    if ( index == INVALID_ALTP2M )
+        return NULL;
+
+    BUG_ON(index >= MAX_ALTP2M);
+
+    return v->domain->arch.altp2m_p2m[index];
+}
+
+static void p2m_init_altp2m_helper(struct domain *d, unsigned int i)
+{
+    struct p2m_domain *p2m = d->arch.altp2m_p2m[i];
+    struct vttbr_data *vttbr = &p2m->vttbr;
+
+    p2m->lowest_mapped_gfn = INVALID_GFN;
+    p2m->max_mapped_gfn = 0;
+
+    vttbr->vttbr_baddr = page_to_maddr(p2m->root);
+    vttbr->vttbr_vmid = p2m->vmid;
+
+    d->arch.altp2m_vttbr[i] = vttbr->vttbr;
+}
+
+int p2m_init_altp2m_by_id(struct domain *d, unsigned int idx)
+{
+    int rc = -EINVAL;
+
+    if ( idx >= MAX_ALTP2M )
+        return rc;
+
+    altp2m_lock(d);
+
+    if ( d->arch.altp2m_vttbr[idx] == INVALID_MFN )
+    {
+        p2m_init_altp2m_helper(d, idx);
+        rc = 0;
+    }
+
+    altp2m_unlock(d);
+
+    return rc;
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/include/asm-arm/altp2m.h b/xen/include/asm-arm/altp2m.h
index 16ae9d6..ec4aa09 100644
--- a/xen/include/asm-arm/altp2m.h
+++ b/xen/include/asm-arm/altp2m.h
@@ -36,22 +36,8 @@ static inline uint16_t altp2m_vcpu_idx(const struct vcpu *v)
     return 0;
 }
 
-static inline void altp2m_vcpu_initialise(struct vcpu *v)
-{
-    /* Not implemented on ARM, should not be reached. */
-    BUG();
-}
-
-static inline void altp2m_vcpu_destroy(struct vcpu *v)
-{
-    /* Not implemented on ARM, should not be reached. */
-    BUG();
-}
-
-static inline void altp2m_vcpu_reset(struct vcpu *v)
-{
-    /* Not implemented on ARM, should not be reached. */
-    BUG();
-}
+void altp2m_vcpu_initialise(struct vcpu *v);
+void altp2m_vcpu_destroy(struct vcpu *v);
+void altp2m_vcpu_reset(struct vcpu *v);
 
 #endif /* __ASM_ARM_ALTP2M_H */
diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
index 6b9770f..8bcd618 100644
--- a/xen/include/asm-arm/domain.h
+++ b/xen/include/asm-arm/domain.h
@@ -138,6 +138,12 @@ struct arch_domain
     uint64_t altp2m_vttbr[MAX_ALTP2M];
 }  __cacheline_aligned;
 
+struct altp2mvcpu {
+    uint16_t p2midx; /* alternate p2m index */
+};
+
+#define vcpu_altp2m(v) ((v)->arch.avcpu)
+
 struct arch_vcpu
 {
     struct {
@@ -267,6 +273,9 @@ struct arch_vcpu
     struct vtimer phys_timer;
     struct vtimer virt_timer;
     bool_t vtimer_initialized;
+
+    /* Alternate p2m context */
+    struct altp2mvcpu avcpu;
 }  __cacheline_aligned;
 
 void vcpu_show_execution_state(struct vcpu *);
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index a78d547..8ee78e0 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -121,6 +121,25 @@ void p2m_altp2m_check(struct vcpu *v, uint16_t idx)
     /* Not supported on ARM. */
 }
 
+/*
+ * Alternate p2m: shadow p2m tables used for alternate memory views.
+ */
+
+#define altp2m_lock(d)      spin_lock(&(d)->arch.altp2m_lock)
+#define altp2m_unlock(d)    spin_unlock(&(d)->arch.altp2m_lock)
+
+/* Get current alternate p2m table */
+struct p2m_domain *p2m_get_altp2m(struct vcpu *v);
+
+/* Flush all the alternate p2m's for a domain */
+static inline void p2m_flush_altp2m(struct domain *d)
+{
+    /* Not supported on ARM. */
+}
+
+/* Make a specific alternate p2m valid */
+int p2m_init_altp2m_by_id(struct domain *d, unsigned int idx);
+
 #define p2m_is_foreign(_t)  ((_t) == p2m_map_foreign)
 #define p2m_is_ram(_t)      ((_t) == p2m_ram_rw || (_t) == p2m_ram_ro)
 
-- 
2.8.3



* [PATCH 06/18] arm/altp2m: Add a(p2m) table flushing routines.
  2016-07-04 11:45 [PATCH 00/18] arm/altp2m: Introducing altp2m to ARM Sergej Proskurin
                   ` (22 preceding siblings ...)
  2016-07-04 11:45 ` [PATCH 05/18] arm/altp2m: Add HVMOP_altp2m_set_domain_state Sergej Proskurin
@ 2016-07-04 11:45 ` Sergej Proskurin
  2016-07-04 11:45 ` [PATCH 07/18] arm/altp2m: Add HVMOP_altp2m_create_p2m Sergej Proskurin
                   ` (12 subsequent siblings)
  36 siblings, 0 replies; 126+ messages in thread
From: Sergej Proskurin @ 2016-07-04 11:45 UTC (permalink / raw)
  To: xen-devel; +Cc: Sergej Proskurin, Julien Grall, Stefano Stabellini

The current implementation differentiates between flushing and
destroying altp2m views. This commit adds the functions
p2m_flush_altp2m and p2m_flush_table, which allow flushing all or
individual altp2m views without destroying the entire table. In this
way, altp2m views can be reused at a later point in time.

In addition, the implementation clears all altp2m entries during the
flushing process. The same applies to hostp2m entries when the hostp2m
is destroyed. In this way, future domain and p2m allocations will not
unintentionally reuse old p2m mappings.

Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
---
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Julien Grall <julien.grall@arm.com>
---
 xen/arch/arm/p2m.c        | 67 +++++++++++++++++++++++++++++++++++++++++++++++
 xen/include/asm-arm/p2m.h | 15 ++++++++---
 2 files changed, 78 insertions(+), 4 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 4a745fd..ae789e6 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -2110,6 +2110,73 @@ int p2m_init_altp2m_by_id(struct domain *d, unsigned int idx)
     return rc;
 }
 
+/* Reset this p2m table to be empty */
+static void p2m_flush_table(struct p2m_domain *p2m)
+{
+    struct page_info *top, *pg;
+    mfn_t mfn;
+    unsigned int i;
+
+    /* Check whether the p2m table has already been flushed before. */
+    if ( p2m->root == NULL )
+        return;
+
+    spin_lock(&p2m->lock);
+
+    /*
+     * "Host" p2m tables can have shared entries &c that need a bit more care
+     * when discarding them
+     */
+    ASSERT(!p2m_is_hostp2m(p2m));
+
+    /* Zap the top level of the trie */
+    top = p2m->root;
+
+    /* Clear all concatenated first level pages */
+    for ( i = 0; i < P2M_ROOT_PAGES; i++ )
+    {
+        mfn = _mfn(page_to_mfn(top + i));
+        clear_domain_page(mfn);
+    }
+
+    /* Free the rest of the trie pages back to the paging pool */
+    while ( (pg = page_list_remove_head(&p2m->pages)) )
+        if ( pg != top  )
+        {
+            /*
+             * Before freeing the individual pages, we clear them to prevent
+             * reusing old table entries in future p2m allocations.
+             */
+            mfn = _mfn(page_to_mfn(pg));
+            clear_domain_page(mfn);
+            free_domheap_page(pg);
+        }
+
+    page_list_add(top, &p2m->pages);
+
+    /* Invalidate VTTBR */
+    p2m->vttbr.vttbr = 0;
+    p2m->vttbr.vttbr_baddr = INVALID_MFN;
+
+    spin_unlock(&p2m->lock);
+}
+
+void p2m_flush_altp2m(struct domain *d)
+{
+    unsigned int i;
+
+    altp2m_lock(d);
+
+    for ( i = 0; i < MAX_ALTP2M; i++ )
+    {
+        p2m_flush_table(d->arch.altp2m_p2m[i]);
+        flush_tlb();
+        d->arch.altp2m_vttbr[i] = INVALID_MFN;
+    }
+
+    altp2m_unlock(d);
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index 8ee78e0..51d784f 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -132,10 +132,7 @@ void p2m_altp2m_check(struct vcpu *v, uint16_t idx)
 struct p2m_domain *p2m_get_altp2m(struct vcpu *v);
 
 /* Flush all the alternate p2m's for a domain */
-static inline void p2m_flush_altp2m(struct domain *d)
-{
-    /* Not supported on ARM. */
-}
+void p2m_flush_altp2m(struct domain *d);
 
 /* Make a specific alternate p2m valid */
 int p2m_init_altp2m_by_id(struct domain *d, unsigned int idx);
@@ -289,6 +286,16 @@ static inline int get_page_and_type(struct page_info *page,
 /* get host p2m table */
 #define p2m_get_hostp2m(d) (&(d)->arch.p2m)
 
+static inline bool_t p2m_is_hostp2m(const struct p2m_domain *p2m)
+{
+    return p2m->p2m_class == p2m_host;
+}
+
+static inline bool_t p2m_is_altp2m(const struct p2m_domain *p2m)
+{
+    return p2m->p2m_class == p2m_alternate;
+}
+
 /* vm_event and mem_access are supported on any ARM guest */
 static inline bool_t p2m_mem_access_sanity_check(struct domain *d)
 {
-- 
2.8.3



* [PATCH 07/18] arm/altp2m: Add HVMOP_altp2m_create_p2m.
  2016-07-04 11:45 [PATCH 00/18] arm/altp2m: Introducing altp2m to ARM Sergej Proskurin
                   ` (23 preceding siblings ...)
  2016-07-04 11:45 ` [PATCH 06/18] arm/altp2m: Add a(p2m) table flushing routines Sergej Proskurin
@ 2016-07-04 11:45 ` Sergej Proskurin
  2016-07-04 11:45 ` [PATCH 08/18] arm/altp2m: Add HVMOP_altp2m_destroy_p2m Sergej Proskurin
                   ` (11 subsequent siblings)
  36 siblings, 0 replies; 126+ messages in thread
From: Sergej Proskurin @ 2016-07-04 11:45 UTC (permalink / raw)
  To: xen-devel; +Cc: Sergej Proskurin, Julien Grall, Stefano Stabellini

Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
---
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Julien Grall <julien.grall@arm.com>
---
 xen/arch/arm/hvm.c        |  3 ++-
 xen/arch/arm/p2m.c        | 23 +++++++++++++++++++++++
 xen/include/asm-arm/p2m.h |  3 +++
 3 files changed, 28 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/hvm.c b/xen/arch/arm/hvm.c
index cb90a55..005d7c6 100644
--- a/xen/arch/arm/hvm.c
+++ b/xen/arch/arm/hvm.c
@@ -140,7 +140,8 @@ static int do_altp2m_op(XEN_GUEST_HANDLE_PARAM(void) arg)
         break;
 
     case HVMOP_altp2m_create_p2m:
-        rc = -EOPNOTSUPP;
+        if ( !(rc = p2m_init_next_altp2m(d, &a.u.view.view)) )
+            rc = __copy_to_guest(arg, &a, 1) ? -EFAULT : 0;
         break;
 
     case HVMOP_altp2m_destroy_p2m:
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index ae789e6..6c41b98 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -2110,6 +2110,29 @@ int p2m_init_altp2m_by_id(struct domain *d, unsigned int idx)
     return rc;
 }
 
+int p2m_init_next_altp2m(struct domain *d, uint16_t *idx)
+{
+    int rc = -EINVAL;
+    unsigned int i;
+
+    altp2m_lock(d);
+
+    for ( i = 0; i < MAX_ALTP2M; i++ )
+    {
+        if ( d->arch.altp2m_vttbr[i] != INVALID_MFN )
+            continue;
+
+        p2m_init_altp2m_helper(d, i);
+        *idx = i;
+        rc = 0;
+
+        break;
+    }
+
+    altp2m_unlock(d);
+    return rc;
+}
+
 /* Reset this p2m table to be empty */
 static void p2m_flush_table(struct p2m_domain *p2m)
 {
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index 51d784f..c51532a 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -137,6 +137,9 @@ void p2m_flush_altp2m(struct domain *d);
 /* Make a specific alternate p2m valid */
 int p2m_init_altp2m_by_id(struct domain *d, unsigned int idx);
 
+/* Find an available alternate p2m and make it valid */
+int p2m_init_next_altp2m(struct domain *d, uint16_t *idx);
+
 #define p2m_is_foreign(_t)  ((_t) == p2m_map_foreign)
 #define p2m_is_ram(_t)      ((_t) == p2m_ram_rw || (_t) == p2m_ram_ro)
 
-- 
2.8.3



* [PATCH 08/18] arm/altp2m: Add HVMOP_altp2m_destroy_p2m.
  2016-07-04 11:45 [PATCH 00/18] arm/altp2m: Introducing altp2m to ARM Sergej Proskurin
                   ` (24 preceding siblings ...)
  2016-07-04 11:45 ` [PATCH 07/18] arm/altp2m: Add HVMOP_altp2m_create_p2m Sergej Proskurin
@ 2016-07-04 11:45 ` Sergej Proskurin
  2016-07-04 11:45 ` [PATCH 09/18] arm/altp2m: Add HVMOP_altp2m_switch_p2m Sergej Proskurin
                   ` (10 subsequent siblings)
  36 siblings, 0 replies; 126+ messages in thread
From: Sergej Proskurin @ 2016-07-04 11:45 UTC (permalink / raw)
  To: xen-devel; +Cc: Sergej Proskurin, Julien Grall, Stefano Stabellini

Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
---
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Julien Grall <julien.grall@arm.com>
---
 xen/arch/arm/hvm.c        |  2 +-
 xen/arch/arm/p2m.c        | 32 ++++++++++++++++++++++++++++++++
 xen/include/asm-arm/p2m.h |  3 +++
 3 files changed, 36 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/hvm.c b/xen/arch/arm/hvm.c
index 005d7c6..f4ec5cf 100644
--- a/xen/arch/arm/hvm.c
+++ b/xen/arch/arm/hvm.c
@@ -145,7 +145,7 @@ static int do_altp2m_op(XEN_GUEST_HANDLE_PARAM(void) arg)
         break;
 
     case HVMOP_altp2m_destroy_p2m:
-        rc = -EOPNOTSUPP;
+        rc = p2m_destroy_altp2m_by_id(d, a.u.view.view);
         break;
 
     case HVMOP_altp2m_switch_p2m:
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 6c41b98..f82f1ea 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -2200,6 +2200,38 @@ void p2m_flush_altp2m(struct domain *d)
     altp2m_unlock(d);
 }
 
+int p2m_destroy_altp2m_by_id(struct domain *d, unsigned int idx)
+{
+    struct p2m_domain *p2m;
+    int rc = -EBUSY;
+
+    if ( !idx || idx >= MAX_ALTP2M )
+        return rc;
+
+    domain_pause_except_self(d);
+
+    altp2m_lock(d);
+
+    if ( d->arch.altp2m_vttbr[idx] != INVALID_MFN )
+    {
+        p2m = d->arch.altp2m_p2m[idx];
+
+        if ( !_atomic_read(p2m->active_vcpus) )
+        {
+            p2m_flush_table(p2m);
+            flush_tlb();
+            d->arch.altp2m_vttbr[idx] = INVALID_MFN;
+            rc = 0;
+        }
+    }
+
+    altp2m_unlock(d);
+
+    domain_unpause_except_self(d);
+
+    return rc;
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index c51532a..255a282 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -140,6 +140,9 @@ int p2m_init_altp2m_by_id(struct domain *d, unsigned int idx);
 /* Find an available alternate p2m and make it valid */
 int p2m_init_next_altp2m(struct domain *d, uint16_t *idx);
 
+/* Make a specific alternate p2m invalid */
+int p2m_destroy_altp2m_by_id(struct domain *d, unsigned int idx);
+
 #define p2m_is_foreign(_t)  ((_t) == p2m_map_foreign)
 #define p2m_is_ram(_t)      ((_t) == p2m_ram_rw || (_t) == p2m_ram_ro)
 
-- 
2.8.3



* [PATCH 09/18] arm/altp2m: Add HVMOP_altp2m_switch_p2m.
  2016-07-04 11:45 [PATCH 00/18] arm/altp2m: Introducing altp2m to ARM Sergej Proskurin
                   ` (25 preceding siblings ...)
  2016-07-04 11:45 ` [PATCH 08/18] arm/altp2m: Add HVMOP_altp2m_destroy_p2m Sergej Proskurin
@ 2016-07-04 11:45 ` Sergej Proskurin
  2016-07-04 11:45 ` [PATCH 10/18] arm/altp2m: Renamed and extended p2m_alloc_table Sergej Proskurin
                   ` (9 subsequent siblings)
  36 siblings, 0 replies; 126+ messages in thread
From: Sergej Proskurin @ 2016-07-04 11:45 UTC (permalink / raw)
  To: xen-devel; +Cc: Sergej Proskurin, Julien Grall, Stefano Stabellini

Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
---
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Julien Grall <julien.grall@arm.com>
---
 xen/arch/arm/hvm.c        |  2 +-
 xen/arch/arm/p2m.c        | 32 ++++++++++++++++++++++++++++++++
 xen/include/asm-arm/p2m.h |  3 +++
 3 files changed, 36 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/hvm.c b/xen/arch/arm/hvm.c
index f4ec5cf..9a536b2 100644
--- a/xen/arch/arm/hvm.c
+++ b/xen/arch/arm/hvm.c
@@ -149,7 +149,7 @@ static int do_altp2m_op(XEN_GUEST_HANDLE_PARAM(void) arg)
         break;
 
     case HVMOP_altp2m_switch_p2m:
-        rc = -EOPNOTSUPP;
+        rc = p2m_switch_domain_altp2m_by_id(d, a.u.view.view);
         break;
 
     case HVMOP_altp2m_set_mem_access:
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index f82f1ea..8bf23ee 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -2232,6 +2232,38 @@ int p2m_destroy_altp2m_by_id(struct domain *d, unsigned int idx)
     return rc;
 }
 
+int p2m_switch_domain_altp2m_by_id(struct domain *d, unsigned int idx)
+{
+    struct vcpu *v;
+    int rc = -EINVAL;
+
+    if ( idx >= MAX_ALTP2M )
+        return rc;
+
+    domain_pause_except_self(d);
+
+    altp2m_lock(d);
+
+    if ( d->arch.altp2m_vttbr[idx] != INVALID_MFN )
+    {
+        for_each_vcpu( d, v )
+            if ( idx != vcpu_altp2m(v).p2midx )
+            {
+                atomic_dec(&p2m_get_altp2m(v)->active_vcpus);
+                vcpu_altp2m(v).p2midx = idx;
+                atomic_inc(&p2m_get_altp2m(v)->active_vcpus);
+            }
+
+        rc = 0;
+    }
+
+    altp2m_unlock(d);
+
+    domain_unpause_except_self(d);
+
+    return rc;
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index 255a282..783db5c 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -143,6 +143,9 @@ int p2m_init_next_altp2m(struct domain *d, uint16_t *idx);
 /* Make a specific alternate p2m invalid */
 int p2m_destroy_altp2m_by_id(struct domain *d, unsigned int idx);
 
+/* Switch alternate p2m for entire domain */
+int p2m_switch_domain_altp2m_by_id(struct domain *d, unsigned int idx);
+
 #define p2m_is_foreign(_t)  ((_t) == p2m_map_foreign)
 #define p2m_is_ram(_t)      ((_t) == p2m_ram_rw || (_t) == p2m_ram_ro)
 
-- 
2.8.3



* [PATCH 10/18] arm/altp2m: Renamed and extended p2m_alloc_table.
  2016-07-04 11:45 [PATCH 00/18] arm/altp2m: Introducing altp2m to ARM Sergej Proskurin
                   ` (26 preceding siblings ...)
  2016-07-04 11:45 ` [PATCH 09/18] arm/altp2m: Add HVMOP_altp2m_switch_p2m Sergej Proskurin
@ 2016-07-04 11:45 ` Sergej Proskurin
  2016-07-04 11:45 ` [PATCH 11/18] arm/altp2m: Make flush_tlb_domain ready for altp2m Sergej Proskurin
                   ` (8 subsequent siblings)
  36 siblings, 0 replies; 126+ messages in thread
From: Sergej Proskurin @ 2016-07-04 11:45 UTC (permalink / raw)
  To: xen-devel; +Cc: Sergej Proskurin, Julien Grall, Stefano Stabellini

The function formerly named "p2m_alloc_table" allocated only the pages
required for the p2m. The new implementation keeps the p2m allocation
related parts inside this function (which is made static) and provides a
wrapper function "p2m_table_init" that can be called externally to
initialize p2m tables in general. It thereby distinguishes between the
domain's hostp2m and altp2m mappings, which are allocated in a similar
way.

NOTE: The function "p2m_alloc_table" no longer takes the p2m lock, and
the TLBs are flushed outside of it. Instead, the associated locking and
TLB flushing are performed as part of p2m_table_init. This provides a
uniform interface for p2m-related table allocation, which can be used
for altp2m (and potentially for nested p2m tables in a future
implementation), as is done in the x86 implementation.

Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
---
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Julien Grall <julien.grall@arm.com>
---
 xen/arch/arm/domain.c     |  2 +-
 xen/arch/arm/p2m.c        | 53 +++++++++++++++++++++++++++++++++++++----------
 xen/include/asm-arm/p2m.h |  2 +-
 3 files changed, 44 insertions(+), 13 deletions(-)

diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 6ce4645..6102ed0 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -573,7 +573,7 @@ int arch_domain_create(struct domain *d, unsigned int domcr_flags,
     if ( (rc = domain_io_init(d)) != 0 )
         goto fail;
 
-    if ( (rc = p2m_alloc_table(d)) != 0 )
+    if ( (rc = p2m_table_init(d)) != 0 )
         goto fail;
 
     switch ( config->gic_version )
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 8bf23ee..7e721f9 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1315,35 +1315,66 @@ void guest_physmap_remove_page(struct domain *d,
                       d->arch.p2m.default_access);
 }
 
-int p2m_alloc_table(struct domain *d)
+static int p2m_alloc_table(struct p2m_domain *p2m)
 {
-    struct p2m_domain *p2m = &d->arch.p2m;
-    struct page_info *page;
+    struct page_info *page = NULL;
     unsigned int i;
 
     page = alloc_domheap_pages(NULL, P2M_ROOT_ORDER, 0);
     if ( page == NULL )
         return -ENOMEM;
 
-    spin_lock(&p2m->lock);
-
-    /* Clear both first level pages */
+    /* Clear all first level pages */
     for ( i = 0; i < P2M_ROOT_PAGES; i++ )
         clear_and_clean_page(page + i);
 
     p2m->root = page;
 
-    d->arch.vttbr = page_to_maddr(p2m->root)
-        | ((uint64_t)p2m->vmid&0xff)<<48;
+    p2m->vttbr.vttbr = 0;
+    p2m->vttbr.vttbr_vmid = p2m->vmid & 0xff;
+    p2m->vttbr.vttbr_baddr = page_to_maddr(p2m->root);
 
-    /* Make sure that all TLBs corresponding to the new VMID are flushed
-     * before using it
+    return 0;
+}
+
+int p2m_table_init(struct domain *d)
+{
+    int i = 0;
+    int rc = -ENOMEM;
+    struct p2m_domain *p2m = p2m_get_hostp2m(d);
+
+    spin_lock(&p2m->lock);
+
+    rc = p2m_alloc_table(p2m);
+    if ( rc != 0 )
+        goto out;
+
+    d->arch.vttbr = d->arch.p2m.vttbr.vttbr;
+
+    /*
+     * Make sure that all TLBs corresponding to the new VMID are flushed
+     * before using it.
      */
     flush_tlb_domain(d);
 
     spin_unlock(&p2m->lock);
 
-    return 0;
+    if ( hvm_altp2m_supported() )
+    {
+        /* Init alternate p2m data */
+        for ( i = 0; i < MAX_ALTP2M; i++ )
+        {
+            d->arch.altp2m_vttbr[i] = INVALID_MFN;
+            rc = p2m_alloc_table(d->arch.altp2m_p2m[i]);
+            if ( rc != 0 )
+                goto out;
+        }
+
+        d->arch.altp2m_active = 0;
+    }
+
+out:
+    return rc;
 }
 
 #define MAX_VMID 256
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index 783db5c..451b097 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -171,7 +171,7 @@ int relinquish_p2m_mapping(struct domain *d);
  *
  * Returns 0 for success or -errno.
  */
-int p2m_alloc_table(struct domain *d);
+int p2m_table_init(struct domain *d);
 
 /* Context switch */
 void p2m_save_state(struct vcpu *p);
-- 
2.8.3



* [PATCH 11/18] arm/altp2m: Make flush_tlb_domain ready for altp2m.
  2016-07-04 11:45 [PATCH 00/18] arm/altp2m: Introducing altp2m to ARM Sergej Proskurin
                   ` (27 preceding siblings ...)
  2016-07-04 11:45 ` [PATCH 10/18] arm/altp2m: Renamed and extended p2m_alloc_table Sergej Proskurin
@ 2016-07-04 11:45 ` Sergej Proskurin
  2016-07-04 11:45 ` [PATCH 12/18] arm/altp2m: Cosmetic fixes - function prototypes Sergej Proskurin
                   ` (7 subsequent siblings)
  36 siblings, 0 replies; 126+ messages in thread
From: Sergej Proskurin @ 2016-07-04 11:45 UTC (permalink / raw)
  To: xen-devel; +Cc: Sergej Proskurin, Julien Grall, Stefano Stabellini

This commit ensures that flushing a domain's TLBs also covers all of the
associated altp2m views. If the TLBs of a domain other than the
currently active one are to be flushed, the implementation loops over
the VTTBRs of the different altp2m mappings of every vCPU and flushes
the TLBs for each of them. This way, a change to any of the altp2m
mappings is taken into account. Note that at this point the domain whose
TLBs are to be flushed is not locked.

Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
---
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Julien Grall <julien.grall@arm.com>
---
 xen/arch/arm/p2m.c | 71 ++++++++++++++++++++++++++++++++++++++++++++++++------
 1 file changed, 63 insertions(+), 8 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 7e721f9..019f10e 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -15,6 +15,8 @@
 #include <asm/hardirq.h>
 #include <asm/page.h>
 
+#include <asm/altp2m.h>
+
 #ifdef CONFIG_ARM_64
 static unsigned int __read_mostly p2m_root_order;
 static unsigned int __read_mostly p2m_root_level;
@@ -79,12 +81,41 @@ void dump_p2m_lookup(struct domain *d, paddr_t addr)
                  P2M_ROOT_LEVEL, P2M_ROOT_PAGES);
 }
 
+static uint64_t p2m_get_altp2m_vttbr(struct vcpu *v)
+{
+    struct domain *d = v->domain;
+    uint16_t index = vcpu_altp2m(v).p2midx;
+
+    if ( index == INVALID_ALTP2M )
+        return INVALID_MFN;
+
+    BUG_ON(index >= MAX_ALTP2M);
+
+    return d->arch.altp2m_vttbr[index];
+}
+
+static void p2m_load_altp2m_VTTBR(struct vcpu *v)
+{
+    struct domain *d = v->domain;
+    uint64_t vttbr = p2m_get_altp2m_vttbr(v);
+
+    if ( is_idle_domain(d) )
+        return;
+
+    BUG_ON(vttbr == INVALID_MFN);
+    WRITE_SYSREG64(vttbr, VTTBR_EL2);
+
+    isb(); /* Ensure update is visible */
+}
+
 static void p2m_load_VTTBR(struct domain *d)
 {
     if ( is_idle_domain(d) )
         return;
+
     BUG_ON(!d->arch.vttbr);
     WRITE_SYSREG64(d->arch.vttbr, VTTBR_EL2);
+
     isb(); /* Ensure update is visible */
 }
 
@@ -101,7 +132,11 @@ void p2m_restore_state(struct vcpu *n)
     WRITE_SYSREG(hcr & ~HCR_VM, HCR_EL2);
     isb();
 
-    p2m_load_VTTBR(n->domain);
+    if ( altp2m_active(n->domain) )
+        p2m_load_altp2m_VTTBR(n);
+    else
+        p2m_load_VTTBR(n->domain);
+
     isb();
 
     if ( is_32bit_domain(n->domain) )
@@ -119,22 +154,42 @@ void p2m_restore_state(struct vcpu *n)
 void flush_tlb_domain(struct domain *d)
 {
     unsigned long flags = 0;
+    struct vcpu *v = NULL;
 
-    /* Update the VTTBR if necessary with the domain d. In this case,
-     * it's only necessary to flush TLBs on every CPUs with the current VMID
-     * (our domain).
+    /*
+     * Update the VTTBR if necessary with the domain d. In this case, it is
+     * only necessary to flush TLBs on every CPU with the current VMID (our
+     * domain).
      */
     if ( d != current->domain )
     {
         local_irq_save(flags);
-        p2m_load_VTTBR(d);
-    }
 
-    flush_tlb();
+        /* If altp2m is active, update VTTBR and flush TLBs of every VCPU */
+        if ( altp2m_active(d) )
+        {
+            for_each_vcpu( d, v )
+            {
+                p2m_load_altp2m_VTTBR(v);
+                flush_tlb();
+            }
+        }
+        else
+        {
+            p2m_load_VTTBR(d);
+            flush_tlb();
+        }
+    }
+    else
+        flush_tlb();
 
     if ( d != current->domain )
     {
-        p2m_load_VTTBR(current->domain);
+        /* Make sure altp2m mapping is valid. */
+        if ( altp2m_active(current->domain) )
+            p2m_load_altp2m_VTTBR(current);
+        else
+            p2m_load_VTTBR(current->domain);
         local_irq_restore(flags);
     }
 }
-- 
2.8.3



* [PATCH 12/18] arm/altp2m: Cosmetic fixes - function prototypes.
  2016-07-04 11:45 [PATCH 00/18] arm/altp2m: Introducing altp2m to ARM Sergej Proskurin
                   ` (28 preceding siblings ...)
  2016-07-04 11:45 ` [PATCH 11/18] arm/altp2m: Make flush_tlb_domain ready for altp2m Sergej Proskurin
@ 2016-07-04 11:45 ` Sergej Proskurin
  2016-07-04 11:46 ` [PATCH 13/18] arm/altp2m: Make get_page_from_gva ready for altp2m Sergej Proskurin
                   ` (6 subsequent siblings)
  36 siblings, 0 replies; 126+ messages in thread
From: Sergej Proskurin @ 2016-07-04 11:45 UTC (permalink / raw)
  To: xen-devel; +Cc: Sergej Proskurin, Julien Grall, Stefano Stabellini

This commit changes the prototype of the following functions:
- apply_p2m_changes
- apply_one_level
- p2m_shatter_page
- p2m_create_table
- __p2m_lookup
- __p2m_get_mem_access

These changes are required as our implementation reuses most of the
existing ARM p2m implementation to set page table attributes of the
individual altp2m views. Therefore, the existing function prototypes
have been extended by another argument (of type struct p2m_domain *).
This allows callers to specify the p2m/altp2m domain that should be
processed by the individual function, instead of always accessing the
host's default p2m domain.

Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
---
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Julien Grall <julien.grall@arm.com>
---
 xen/arch/arm/p2m.c | 80 +++++++++++++++++++++++++++++-------------------------
 1 file changed, 43 insertions(+), 37 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 019f10e..9c8fefd 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -200,9 +200,8 @@ void flush_tlb_domain(struct domain *d)
  * There are no processor functions to do a stage 2 only lookup therefore we
  * do a a software walk.
  */
-static paddr_t __p2m_lookup(struct domain *d, paddr_t paddr, p2m_type_t *t)
+static paddr_t __p2m_lookup(struct p2m_domain *p2m, paddr_t paddr, p2m_type_t *t)
 {
-    struct p2m_domain *p2m = &d->arch.p2m;
     const unsigned int offsets[4] = {
         zeroeth_table_offset(paddr),
         first_table_offset(paddr),
@@ -282,10 +281,11 @@ err:
 paddr_t p2m_lookup(struct domain *d, paddr_t paddr, p2m_type_t *t)
 {
     paddr_t ret;
-    struct p2m_domain *p2m = &d->arch.p2m;
+    struct vcpu *v = current;
+    struct p2m_domain *p2m = altp2m_active(d) ? p2m_get_altp2m(v) : p2m_get_hostp2m(d);
 
     spin_lock(&p2m->lock);
-    ret = __p2m_lookup(d, paddr, t);
+    ret = __p2m_lookup(p2m, paddr, t);
     spin_unlock(&p2m->lock);
 
     return ret;
@@ -441,10 +441,12 @@ static inline void p2m_remove_pte(lpae_t *p, bool_t flush_cache)
  *
  * level_shift is the number of bits at the level we want to create.
  */
-static int p2m_create_table(struct domain *d, lpae_t *entry,
-                            int level_shift, bool_t flush_cache)
+static int p2m_create_table(struct domain *d,
+                            struct p2m_domain *p2m,
+                            lpae_t *entry,
+                            int level_shift,
+                            bool_t flush_cache)
 {
-    struct p2m_domain *p2m = &d->arch.p2m;
     struct page_info *page;
     lpae_t *p;
     lpae_t pte;
@@ -502,10 +504,9 @@ static int p2m_create_table(struct domain *d, lpae_t *entry,
     return 0;
 }
 
-static int __p2m_get_mem_access(struct domain *d, gfn_t gfn,
+static int __p2m_get_mem_access(struct p2m_domain *p2m, gfn_t gfn,
                                 xenmem_access_t *access)
 {
-    struct p2m_domain *p2m = p2m_get_hostp2m(d);
     void *i;
     unsigned int index;
 
@@ -548,7 +549,7 @@ static int __p2m_get_mem_access(struct domain *d, gfn_t gfn,
          * No setting was found in the Radix tree. Check if the
          * entry exists in the page-tables.
          */
-        paddr_t maddr = __p2m_lookup(d, gfn_x(gfn) << PAGE_SHIFT, NULL);
+        paddr_t maddr = __p2m_lookup(p2m, gfn_x(gfn) << PAGE_SHIFT, NULL);
         if ( INVALID_PADDR == maddr )
             return -ESRCH;
 
@@ -677,17 +678,17 @@ static const paddr_t level_shifts[] =
     { ZEROETH_SHIFT, FIRST_SHIFT, SECOND_SHIFT, THIRD_SHIFT };
 
 static int p2m_shatter_page(struct domain *d,
+                            struct p2m_domain *p2m,
                             lpae_t *entry,
                             unsigned int level,
                             bool_t flush_cache)
 {
     const paddr_t level_shift = level_shifts[level];
-    int rc = p2m_create_table(d, entry,
+    int rc = p2m_create_table(d, p2m, entry,
                               level_shift - PAGE_SHIFT, flush_cache);
 
     if ( !rc )
     {
-        struct p2m_domain *p2m = &d->arch.p2m;
         p2m->stats.shattered[level]++;
         p2m->stats.mappings[level]--;
         p2m->stats.mappings[level+1] += LPAE_ENTRIES;
@@ -704,6 +705,7 @@ static int p2m_shatter_page(struct domain *d,
  * -ve == (-Exxx) error.
  */
 static int apply_one_level(struct domain *d,
+                           struct p2m_domain *p2m,
                            lpae_t *entry,
                            unsigned int level,
                            bool_t flush_cache,
@@ -721,7 +723,6 @@ static int apply_one_level(struct domain *d,
     const paddr_t level_mask = level_masks[level];
     const paddr_t level_shift = level_shifts[level];
 
-    struct p2m_domain *p2m = &d->arch.p2m;
     lpae_t pte;
     const lpae_t orig_pte = *entry;
     int rc;
@@ -776,7 +777,7 @@ static int apply_one_level(struct domain *d,
          * L3) or mem_access is in use. Create a page table and
          * continue to descend so we try smaller allocations.
          */
-        rc = p2m_create_table(d, entry, 0, flush_cache);
+        rc = p2m_create_table(d, p2m, entry, 0, flush_cache);
         if ( rc < 0 )
             return rc;
 
@@ -834,7 +835,7 @@ static int apply_one_level(struct domain *d,
             /* Not present -> create table entry and descend */
             if ( !p2m_valid(orig_pte) )
             {
-                rc = p2m_create_table(d, entry, 0, flush_cache);
+                rc = p2m_create_table(d, p2m, entry, 0, flush_cache);
                 if ( rc < 0 )
                     return rc;
                 return P2M_ONE_DESCEND;
@@ -844,7 +845,7 @@ static int apply_one_level(struct domain *d,
             if ( p2m_mapping(orig_pte) )
             {
                 *flush = true;
-                rc = p2m_shatter_page(d, entry, level, flush_cache);
+                rc = p2m_shatter_page(d, p2m, entry, level, flush_cache);
                 if ( rc < 0 )
                     return rc;
             } /* else: an existing table mapping -> descend */
@@ -881,7 +882,7 @@ static int apply_one_level(struct domain *d,
                  * and descend.
                  */
                 *flush = true;
-                rc = p2m_shatter_page(d, entry, level, flush_cache);
+                rc = p2m_shatter_page(d, p2m, entry, level, flush_cache);
                 if ( rc < 0 )
                     return rc;
 
@@ -966,7 +967,7 @@ static int apply_one_level(struct domain *d,
             /* Shatter large pages as we descend */
             if ( p2m_mapping(orig_pte) )
             {
-                rc = p2m_shatter_page(d, entry, level, flush_cache);
+                rc = p2m_shatter_page(d, p2m, entry, level, flush_cache);
                 if ( rc < 0 )
                     return rc;
             } /* else: an existing table mapping -> descend */
@@ -1011,6 +1012,7 @@ static void update_reference_mapping(struct page_info *page,
 }
 
 static int apply_p2m_changes(struct domain *d,
+                     struct p2m_domain *p2m,
                      enum p2m_operation op,
                      paddr_t start_gpaddr,
                      paddr_t end_gpaddr,
@@ -1021,7 +1023,6 @@ static int apply_p2m_changes(struct domain *d,
                      p2m_access_t a)
 {
     int rc, ret;
-    struct p2m_domain *p2m = &d->arch.p2m;
     lpae_t *mappings[4] = { NULL, NULL, NULL, NULL };
     struct page_info *pages[4] = { NULL, NULL, NULL, NULL };
     paddr_t addr, orig_maddr = maddr;
@@ -1148,7 +1149,7 @@ static int apply_p2m_changes(struct domain *d,
             lpae_t *entry = &mappings[level][offset];
             lpae_t old_entry = *entry;
 
-            ret = apply_one_level(d, entry,
+            ret = apply_one_level(d, p2m, entry,
                                   level, flush_pt, op,
                                   start_gpaddr, end_gpaddr,
                                   &addr, &maddr, &flush,
@@ -1257,7 +1258,7 @@ out:
          * addr keeps the address of the end of the last successfully-inserted
          * mapping.
          */
-        apply_p2m_changes(d, REMOVE, start_gpaddr, addr, orig_maddr,
+        apply_p2m_changes(d, p2m, REMOVE, start_gpaddr, addr, orig_maddr,
                           mattr, 0, p2m_invalid, d->arch.p2m.default_access);
     }
 
@@ -1268,7 +1269,8 @@ int p2m_populate_ram(struct domain *d,
                      paddr_t start,
                      paddr_t end)
 {
-    return apply_p2m_changes(d, ALLOCATE, start, end,
+    return apply_p2m_changes(d, p2m_get_hostp2m(d), ALLOCATE,
+                             start, end,
                              0, MATTR_MEM, 0, p2m_ram_rw,
                              d->arch.p2m.default_access);
 }
@@ -1278,7 +1280,7 @@ int map_regions_rw_cache(struct domain *d,
                          unsigned long nr,
                          unsigned long mfn)
 {
-    return apply_p2m_changes(d, INSERT,
+    return apply_p2m_changes(d, p2m_get_hostp2m(d), INSERT,
                              pfn_to_paddr(start_gfn),
                              pfn_to_paddr(start_gfn + nr),
                              pfn_to_paddr(mfn),
@@ -1291,7 +1293,7 @@ int unmap_regions_rw_cache(struct domain *d,
                            unsigned long nr,
                            unsigned long mfn)
 {
-    return apply_p2m_changes(d, REMOVE,
+    return apply_p2m_changes(d, p2m_get_hostp2m(d), REMOVE,
                              pfn_to_paddr(start_gfn),
                              pfn_to_paddr(start_gfn + nr),
                              pfn_to_paddr(mfn),
@@ -1304,7 +1306,7 @@ int map_mmio_regions(struct domain *d,
                      unsigned long nr,
                      unsigned long mfn)
 {
-    return apply_p2m_changes(d, INSERT,
+    return apply_p2m_changes(d, p2m_get_hostp2m(d), INSERT,
                              pfn_to_paddr(start_gfn),
                              pfn_to_paddr(start_gfn + nr),
                              pfn_to_paddr(mfn),
@@ -1317,7 +1319,7 @@ int unmap_mmio_regions(struct domain *d,
                        unsigned long nr,
                        unsigned long mfn)
 {
-    return apply_p2m_changes(d, REMOVE,
+    return apply_p2m_changes(d, p2m_get_hostp2m(d), REMOVE,
                              pfn_to_paddr(start_gfn),
                              pfn_to_paddr(start_gfn + nr),
                              pfn_to_paddr(mfn),
@@ -1352,7 +1354,7 @@ int guest_physmap_add_entry(struct domain *d,
                             unsigned long page_order,
                             p2m_type_t t)
 {
-    return apply_p2m_changes(d, INSERT,
+    return apply_p2m_changes(d, p2m_get_hostp2m(d), INSERT,
                              pfn_to_paddr(gfn_x(gfn)),
                              pfn_to_paddr(gfn_x(gfn) + (1 << page_order)),
                              pfn_to_paddr(mfn_x(mfn)), MATTR_MEM, 0, t,
@@ -1363,7 +1365,7 @@ void guest_physmap_remove_page(struct domain *d,
                                gfn_t gfn,
                                mfn_t mfn, unsigned int page_order)
 {
-    apply_p2m_changes(d, REMOVE,
+    apply_p2m_changes(d, p2m_get_hostp2m(d), REMOVE,
                       pfn_to_paddr(gfn_x(gfn)),
                       pfn_to_paddr(gfn_x(gfn) + (1<<page_order)),
                       pfn_to_paddr(mfn_x(mfn)), MATTR_MEM, 0, p2m_invalid,
@@ -1696,9 +1698,9 @@ int p2m_init(struct domain *d)
 
 int relinquish_p2m_mapping(struct domain *d)
 {
-    struct p2m_domain *p2m = &d->arch.p2m;
+    struct p2m_domain *p2m = p2m_get_hostp2m(d);
 
-    return apply_p2m_changes(d, RELINQUISH,
+    return apply_p2m_changes(d, p2m, RELINQUISH,
                               pfn_to_paddr(p2m->lowest_mapped_gfn),
                               pfn_to_paddr(p2m->max_mapped_gfn),
                               pfn_to_paddr(INVALID_MFN),
@@ -1708,12 +1710,12 @@ int relinquish_p2m_mapping(struct domain *d)
 
 int p2m_cache_flush(struct domain *d, xen_pfn_t start_mfn, xen_pfn_t end_mfn)
 {
-    struct p2m_domain *p2m = &d->arch.p2m;
+    struct p2m_domain *p2m = p2m_get_hostp2m(d);
 
     start_mfn = MAX(start_mfn, p2m->lowest_mapped_gfn);
     end_mfn = MIN(end_mfn, p2m->max_mapped_gfn);
 
-    return apply_p2m_changes(d, CACHEFLUSH,
+    return apply_p2m_changes(d, p2m, CACHEFLUSH,
                              pfn_to_paddr(start_mfn),
                              pfn_to_paddr(end_mfn),
                              pfn_to_paddr(INVALID_MFN),
@@ -1743,6 +1745,9 @@ p2m_mem_access_check_and_get_page(vaddr_t gva, unsigned long flag)
     xenmem_access_t xma;
     p2m_type_t t;
     struct page_info *page = NULL;
+    struct vcpu *v = current;
+    struct domain *d = v->domain;
+    struct p2m_domain *p2m = altp2m_active(d) ? p2m_get_altp2m(v) : p2m_get_hostp2m(d);
 
     rc = gva_to_ipa(gva, &ipa, flag);
     if ( rc < 0 )
@@ -1752,7 +1757,7 @@ p2m_mem_access_check_and_get_page(vaddr_t gva, unsigned long flag)
      * We do this first as this is faster in the default case when no
      * permission is set on the page.
      */
-    rc = __p2m_get_mem_access(current->domain, _gfn(paddr_to_pfn(ipa)), &xma);
+    rc = __p2m_get_mem_access(p2m, _gfn(paddr_to_pfn(ipa)), &xma);
     if ( rc < 0 )
         goto err;
 
@@ -1801,7 +1806,7 @@ p2m_mem_access_check_and_get_page(vaddr_t gva, unsigned long flag)
      * We had a mem_access permission limiting the access, but the page type
      * could also be limiting, so we need to check that as well.
      */
-    maddr = __p2m_lookup(current->domain, ipa, &t);
+    maddr = __p2m_lookup(p2m, ipa, &t);
     if ( maddr == INVALID_PADDR )
         goto err;
 
@@ -2125,7 +2130,7 @@ long p2m_set_mem_access(struct domain *d, gfn_t gfn, uint32_t nr,
         return 0;
     }
 
-    rc = apply_p2m_changes(d, MEMACCESS,
+    rc = apply_p2m_changes(d, p2m, MEMACCESS,
                            pfn_to_paddr(gfn_x(gfn) + start),
                            pfn_to_paddr(gfn_x(gfn) + nr),
                            0, MATTR_MEM, mask, 0, a);
@@ -2141,10 +2146,11 @@ int p2m_get_mem_access(struct domain *d, gfn_t gfn,
                        xenmem_access_t *access)
 {
     int ret;
-    struct p2m_domain *p2m = p2m_get_hostp2m(d);
+    struct vcpu *v = current;
+    struct p2m_domain *p2m = altp2m_active(d) ? p2m_get_altp2m(v) : p2m_get_hostp2m(d);
 
     spin_lock(&p2m->lock);
-    ret = __p2m_get_mem_access(d, gfn, access);
+    ret = __p2m_get_mem_access(p2m, gfn, access);
     spin_unlock(&p2m->lock);
 
     return ret;
-- 
2.8.3



* [PATCH 13/18] arm/altp2m: Make get_page_from_gva ready for altp2m.
  2016-07-04 11:45 [PATCH 00/18] arm/altp2m: Introducing altp2m to ARM Sergej Proskurin
                   ` (29 preceding siblings ...)
  2016-07-04 11:45 ` [PATCH 12/18] arm/altp2m: Cosmetic fixes - function prototypes Sergej Proskurin
@ 2016-07-04 11:46 ` Sergej Proskurin
  2016-07-04 11:46 ` [PATCH 14/18] arm/altp2m: Add HVMOP_altp2m_set_mem_access Sergej Proskurin
                   ` (5 subsequent siblings)
  36 siblings, 0 replies; 126+ messages in thread
From: Sergej Proskurin @ 2016-07-04 11:46 UTC (permalink / raw)
  To: xen-devel; +Cc: Sergej Proskurin, Julien Grall, Stefano Stabellini

This commit adapts get_page_from_gva to consider the currently mapped
altp2m view during address translation. We also change the function's
prototype to take a "struct vcpu *" instead of a "struct domain *".
This change is required because the function indirectly calls
gva_to_ma_par, which requires the MMU to use the current p2m mapping.
Hence, if the caller is interested in a page that must be claimed on
behalf of a vCPU other than current, it must temporarily load the
altp2m view used by the vCPU in question. Therefore, the particular
vCPU must be provided to this function.

Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
---
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Julien Grall <julien.grall@arm.com>
---
 xen/arch/arm/guestcopy.c |  6 +++---
 xen/arch/arm/p2m.c       | 19 +++++++++++++------
 xen/arch/arm/traps.c     |  2 +-
 xen/include/asm-arm/mm.h |  2 +-
 4 files changed, 18 insertions(+), 11 deletions(-)

diff --git a/xen/arch/arm/guestcopy.c b/xen/arch/arm/guestcopy.c
index ce1c3c3..413125f 100644
--- a/xen/arch/arm/guestcopy.c
+++ b/xen/arch/arm/guestcopy.c
@@ -17,7 +17,7 @@ static unsigned long raw_copy_to_guest_helper(void *to, const void *from,
         unsigned size = min(len, (unsigned)PAGE_SIZE - offset);
         struct page_info *page;
 
-        page = get_page_from_gva(current->domain, (vaddr_t) to, GV2M_WRITE);
+        page = get_page_from_gva(current, (vaddr_t) to, GV2M_WRITE);
         if ( page == NULL )
             return len;
 
@@ -64,7 +64,7 @@ unsigned long raw_clear_guest(void *to, unsigned len)
         unsigned size = min(len, (unsigned)PAGE_SIZE - offset);
         struct page_info *page;
 
-        page = get_page_from_gva(current->domain, (vaddr_t) to, GV2M_WRITE);
+        page = get_page_from_gva(current, (vaddr_t) to, GV2M_WRITE);
         if ( page == NULL )
             return len;
 
@@ -96,7 +96,7 @@ unsigned long raw_copy_from_guest(void *to, const void __user *from, unsigned le
         unsigned size = min(len, (unsigned)(PAGE_SIZE - offset));
         struct page_info *page;
 
-        page = get_page_from_gva(current->domain, (vaddr_t) from, GV2M_READ);
+        page = get_page_from_gva(current, (vaddr_t) from, GV2M_READ);
         if ( page == NULL )
             return len;
 
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 9c8fefd..23b482f 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1829,10 +1829,11 @@ err:
     return page;
 }
 
-struct page_info *get_page_from_gva(struct domain *d, vaddr_t va,
+struct page_info *get_page_from_gva(struct vcpu *v, vaddr_t va,
                                     unsigned long flags)
 {
-    struct p2m_domain *p2m = &d->arch.p2m;
+    struct domain *d = v->domain;
+    struct p2m_domain *p2m = altp2m_active(d) ? p2m_get_altp2m(v) : p2m_get_hostp2m(d);
     struct page_info *page = NULL;
     paddr_t maddr = 0;
     int rc;
@@ -1844,17 +1845,23 @@ struct page_info *get_page_from_gva(struct domain *d, vaddr_t va,
         unsigned long irq_flags;
 
         local_irq_save(irq_flags);
-        p2m_load_VTTBR(d);
+
+        if ( altp2m_active(d) )
+            p2m_load_altp2m_VTTBR(v);
+        else
+            p2m_load_VTTBR(d);
 
         rc = gvirt_to_maddr(va, &maddr, flags);
 
-        p2m_load_VTTBR(current->domain);
+        if ( altp2m_active(current->domain) )
+            p2m_load_altp2m_VTTBR(current);
+        else
+            p2m_load_VTTBR(current->domain);
+
         local_irq_restore(irq_flags);
     }
     else
-    {
         rc = gvirt_to_maddr(va, &maddr, flags);
-    }
 
     if ( rc )
         goto err;
diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index 44926ca..6995971 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -957,7 +957,7 @@ static void show_guest_stack(struct vcpu *v, struct cpu_user_regs *regs)
         return;
     }
 
-    page = get_page_from_gva(v->domain, sp, GV2M_READ);
+    page = get_page_from_gva(v, sp, GV2M_READ);
     if ( page == NULL )
     {
         printk("Failed to convert stack to physical address\n");
diff --git a/xen/include/asm-arm/mm.h b/xen/include/asm-arm/mm.h
index 68cf203..19eadd2 100644
--- a/xen/include/asm-arm/mm.h
+++ b/xen/include/asm-arm/mm.h
@@ -281,7 +281,7 @@ static inline void *page_to_virt(const struct page_info *pg)
     return mfn_to_virt(page_to_mfn(pg));
 }
 
-struct page_info *get_page_from_gva(struct domain *d, vaddr_t va,
+struct page_info *get_page_from_gva(struct vcpu *v, vaddr_t va,
                                     unsigned long flags);
 
 /*
-- 
2.8.3



* [PATCH 14/18] arm/altp2m: Add HVMOP_altp2m_set_mem_access.
  2016-07-04 11:45 [PATCH 00/18] arm/altp2m: Introducing altp2m to ARM Sergej Proskurin
                   ` (30 preceding siblings ...)
  2016-07-04 11:46 ` [PATCH 13/18] arm/altp2m: Make get_page_from_gva ready for altp2m Sergej Proskurin
@ 2016-07-04 11:46 ` Sergej Proskurin
  2016-07-04 11:46 ` [PATCH 15/18] arm/altp2m: Add altp2m paging mechanism Sergej Proskurin
                   ` (4 subsequent siblings)
  36 siblings, 0 replies; 126+ messages in thread
From: Sergej Proskurin @ 2016-07-04 11:46 UTC (permalink / raw)
  To: xen-devel; +Cc: Sergej Proskurin, Julien Grall, Stefano Stabellini

The HVMOP HVMOP_altp2m_set_mem_access allows setting the gfn
permissions of a specific altp2m view (currently one 4K page at a
time). In case the view does not hold the requested gfn entry, the
entry is first copied from the hostp2m table and then modified as
requested.

Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
---
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Julien Grall <julien.grall@arm.com>
---
 xen/arch/arm/hvm.c |   7 +-
 xen/arch/arm/p2m.c | 207 +++++++++++++++++++++++++++++++++++++++++++++++++----
 2 files changed, 200 insertions(+), 14 deletions(-)

diff --git a/xen/arch/arm/hvm.c b/xen/arch/arm/hvm.c
index 9a536b2..8218737 100644
--- a/xen/arch/arm/hvm.c
+++ b/xen/arch/arm/hvm.c
@@ -153,7 +153,12 @@ static int do_altp2m_op(XEN_GUEST_HANDLE_PARAM(void) arg)
         break;
 
     case HVMOP_altp2m_set_mem_access:
-        rc = -EOPNOTSUPP;
+        if ( a.u.set_mem_access.pad )
+            rc = -EINVAL;
+        else
+            rc = p2m_set_mem_access(d, _gfn(a.u.set_mem_access.gfn), 1, 0, 0,
+                                    a.u.set_mem_access.hvmmem_access,
+                                    a.u.set_mem_access.view);
         break;
 
     case HVMOP_altp2m_change_gfn:
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 23b482f..395ea0f 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -2085,6 +2085,159 @@ bool_t p2m_mem_access_check(paddr_t gpa, vaddr_t gla, const struct npfec npfec)
     return false;
 }
 
+static int p2m_get_gfn_level_and_attr(struct p2m_domain *p2m,
+                                      paddr_t paddr, unsigned int *level,
+                                      unsigned long *mattr)
+{
+    const unsigned int offsets[4] = {
+        zeroeth_table_offset(paddr),
+        first_table_offset(paddr),
+        second_table_offset(paddr),
+        third_table_offset(paddr)
+    };
+    lpae_t pte, *map;
+    unsigned int root_table;
+
+    ASSERT(spin_is_locked(&p2m->lock));
+    BUILD_BUG_ON(THIRD_MASK != PAGE_MASK);
+
+    if ( P2M_ROOT_PAGES > 1 )
+    {
+        /*
+         * Concatenated root-level tables. The table number will be
+         * the offset at the previous level. It is not possible to
+         * concatenate a level-0 root.
+         */
+        ASSERT(P2M_ROOT_LEVEL > 0);
+        root_table = offsets[P2M_ROOT_LEVEL - 1];
+        if ( root_table >= P2M_ROOT_PAGES )
+            goto err;
+    }
+    else
+        root_table = 0;
+
+    map = __map_domain_page(p2m->root + root_table);
+
+    ASSERT(P2M_ROOT_LEVEL < 4);
+
+    /* Find the p2m level of the wanted paddr */
+    for ( *level = P2M_ROOT_LEVEL ; *level < 4 ; (*level)++ )
+    {
+        pte = map[offsets[*level]];
+
+        if ( *level == 3 || !p2m_table(pte) )
+            /* Done */
+            break;
+
+        ASSERT(*level < 3);
+
+        /* Map for next level */
+        unmap_domain_page(map);
+        map = map_domain_page(_mfn(pte.p2m.base));
+    }
+
+    unmap_domain_page(map);
+
+    if ( !p2m_valid(pte) )
+        goto err;
+
+    /* Provide mattr information of the paddr */
+    *mattr = pte.p2m.mattr;
+
+    return 0;
+
+err:
+    return -EINVAL;
+}
+
+static inline
+int p2m_set_altp2m_mem_access(struct domain *d, struct p2m_domain *hp2m,
+                              struct p2m_domain *ap2m, p2m_access_t a,
+                              gfn_t gfn)
+{
+    p2m_type_t p2mt;
+    xenmem_access_t xma_old;
+    paddr_t gpa = pfn_to_paddr(gfn_x(gfn));
+    paddr_t maddr, mask = 0;
+    unsigned int level;
+    unsigned long mattr;
+    int rc;
+
+    static const p2m_access_t memaccess[] = {
+#define ACCESS(ac) [XENMEM_access_##ac] = p2m_access_##ac
+        ACCESS(n),
+        ACCESS(r),
+        ACCESS(w),
+        ACCESS(rw),
+        ACCESS(x),
+        ACCESS(rx),
+        ACCESS(wx),
+        ACCESS(rwx),
+        ACCESS(rx2rw),
+        ACCESS(n2rwx),
+#undef ACCESS
+    };
+
+    /* Check if entry is part of the altp2m view. */
+    spin_lock(&ap2m->lock);
+    maddr = __p2m_lookup(ap2m, gpa, &p2mt);
+    spin_unlock(&ap2m->lock);
+
+    /* Check host p2m if no valid entry in ap2m. */
+    if ( maddr == INVALID_PADDR )
+    {
+        /* Check if entry is part of the host p2m view. */
+        spin_lock(&hp2m->lock);
+        maddr = __p2m_lookup(hp2m, gpa, &p2mt);
+        if ( maddr == INVALID_PADDR || p2mt != p2m_ram_rw )
+            goto out;
+
+        rc = __p2m_get_mem_access(hp2m, gfn, &xma_old);
+        if ( rc )
+            goto out;
+
+        rc = p2m_get_gfn_level_and_attr(hp2m, gpa, &level, &mattr);
+        if ( rc )
+            goto out;
+        spin_unlock(&hp2m->lock);
+
+        mask = level_masks[level];
+
+        /* If this is a superpage, copy that first. */
+        if ( level != 3 )
+        {
+            rc = apply_p2m_changes(d, ap2m, INSERT,
+                                   gpa & mask,
+                                   (gpa + level_sizes[level]) & mask,
+                                   maddr & mask, mattr, 0, p2mt,
+                                   memaccess[xma_old]);
+            if ( rc < 0 )
+                goto out;
+        }
+    }
+    else
+    {
+        spin_lock(&ap2m->lock);
+        rc = p2m_get_gfn_level_and_attr(ap2m, gpa, &level, &mattr);
+        spin_unlock(&ap2m->lock);
+        if ( rc )
+            goto out;
+    }
+
+    /* Set mem access attributes - currently supporting only one (4K) page. */
+    mask = level_masks[3];
+    return apply_p2m_changes(d, ap2m, INSERT,
+                             gpa & mask,
+                             (gpa + level_sizes[level]) & mask,
+                             maddr & mask, mattr, 0, p2mt, a);
+
+out:
+    if ( spin_is_locked(&hp2m->lock) )
+        spin_unlock(&hp2m->lock);
+
+    return -ESRCH;
+}
+
 /*
  * Set access type for a region of pfns.
  * If gfn == INVALID_GFN, sets the default access type.
@@ -2093,7 +2246,7 @@ long p2m_set_mem_access(struct domain *d, gfn_t gfn, uint32_t nr,
                         uint32_t start, uint32_t mask, xenmem_access_t access,
                         unsigned int altp2m_idx)
 {
-    struct p2m_domain *p2m = p2m_get_hostp2m(d);
+    struct p2m_domain *hp2m = p2m_get_hostp2m(d), *ap2m = NULL;
     p2m_access_t a;
     long rc = 0;
 
@@ -2112,35 +2265,63 @@ long p2m_set_mem_access(struct domain *d, gfn_t gfn, uint32_t nr,
 #undef ACCESS
     };
 
+    /* altp2m view 0 is treated as the hostp2m */
+    if ( altp2m_idx )
+    {
+        if ( altp2m_idx >= MAX_ALTP2M ||
+             d->arch.altp2m_vttbr[altp2m_idx] == INVALID_MFN )
+            return -EINVAL;
+
+        ap2m = d->arch.altp2m_p2m[altp2m_idx];
+    }
+
     switch ( access )
     {
     case 0 ... ARRAY_SIZE(memaccess) - 1:
         a = memaccess[access];
         break;
     case XENMEM_access_default:
-        a = p2m->default_access;
+        a = hp2m->default_access;
         break;
     default:
         return -EINVAL;
     }
 
-    /*
-     * Flip mem_access_enabled to true when a permission is set, as to prevent
-     * allocating or inserting super-pages.
-     */
-    p2m->mem_access_enabled = true;
-
     /* If request to set default access. */
     if ( gfn_x(gfn) == INVALID_GFN )
     {
-        p2m->default_access = a;
+        hp2m->default_access = a;
         return 0;
     }
 
-    rc = apply_p2m_changes(d, p2m, MEMACCESS,
-                           pfn_to_paddr(gfn_x(gfn) + start),
-                           pfn_to_paddr(gfn_x(gfn) + nr),
-                           0, MATTR_MEM, mask, 0, a);
+
+    if ( ap2m )
+    {
+        /*
+         * Flip mem_access_enabled to true when a permission is set, so as to
+         * prevent allocating or inserting super-pages.
+         */
+        ap2m->mem_access_enabled = true;
+
+        /*
+         * ARM altp2m currently supports setting the memory access rights of
+         * only one (4K) page at a time.
+         */
+        rc = p2m_set_altp2m_mem_access(d, hp2m, ap2m, a, gfn);
+    }
+    else
+    {
+        /*
+         * Flip mem_access_enabled to true when a permission is set, so as to
+         * prevent allocating or inserting super-pages.
+         */
+        hp2m->mem_access_enabled = true;
+
+        rc = apply_p2m_changes(d, hp2m, MEMACCESS,
+                               pfn_to_paddr(gfn_x(gfn) + start),
+                               pfn_to_paddr(gfn_x(gfn) + nr),
+                               0, MATTR_MEM, mask, 0, a);
+    }
     if ( rc < 0 )
         return rc;
     else if ( rc > 0 )
-- 
2.8.3


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply related	[flat|nested] 126+ messages in thread

* [PATCH 15/18] arm/altp2m: Add altp2m paging mechanism.
  2016-07-04 11:45 [PATCH 00/18] arm/altp2m: Introducing altp2m to ARM Sergej Proskurin
                   ` (31 preceding siblings ...)
  2016-07-04 11:46 ` [PATCH 14/18] arm/altp2m: Add HVMOP_altp2m_set_mem_access Sergej Proskurin
@ 2016-07-04 11:46 ` Sergej Proskurin
  2016-07-04 11:46 ` [PATCH 16/18] arm/altp2m: Extended libxl to activate altp2m on ARM Sergej Proskurin
                   ` (3 subsequent siblings)
  36 siblings, 0 replies; 126+ messages in thread
From: Sergej Proskurin @ 2016-07-04 11:46 UTC (permalink / raw)
  To: xen-devel; +Cc: Sergej Proskurin, Julien Grall, Stefano Stabellini

This commit adds the function p2m_altp2m_lazy_copy, which implements the
altp2m paging mechanism: on 2nd stage instruction or data access
violations, it lazily copies the hostp2m's mapping into the currently
active altp2m view. Every altp2m violation generates a vm_event.

Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
---
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Julien Grall <julien.grall@arm.com>
---
 xen/arch/arm/p2m.c           | 130 ++++++++++++++++++++++++++++++++++++++++++-
 xen/arch/arm/traps.c         | 102 +++++++++++++++++++++++++++------
 xen/include/asm-arm/altp2m.h |   4 +-
 xen/include/asm-arm/p2m.h    |  17 ++++--
 4 files changed, 224 insertions(+), 29 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 395ea0f..96892a5 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -15,6 +15,7 @@
 #include <asm/hardirq.h>
 #include <asm/page.h>
 
+#include <asm/vm_event.h>
 #include <asm/altp2m.h>
 
 #ifdef CONFIG_ARM_64
@@ -1955,6 +1956,12 @@ void __init setup_virt_paging(void)
     smp_call_function(setup_virt_paging_one, (void *)val, 1);
 }
 
+void p2m_altp2m_check(struct vcpu *v, uint16_t idx)
+{
+    if ( altp2m_active(v->domain) )
+        p2m_switch_vcpu_altp2m_by_id(v, idx);
+}
+
 bool_t p2m_mem_access_check(paddr_t gpa, vaddr_t gla, const struct npfec npfec)
 {
     int rc;
@@ -1962,13 +1969,14 @@ bool_t p2m_mem_access_check(paddr_t gpa, vaddr_t gla, const struct npfec npfec)
     xenmem_access_t xma;
     vm_event_request_t *req;
     struct vcpu *v = current;
-    struct p2m_domain *p2m = p2m_get_hostp2m(v->domain);
+    struct domain *d = v->domain;
+    struct p2m_domain *p2m = altp2m_active(d) ? p2m_get_altp2m(v) : p2m_get_hostp2m(d);
 
     /* Mem_access is not in use. */
     if ( !p2m->mem_access_enabled )
         return true;
 
-    rc = p2m_get_mem_access(v->domain, _gfn(paddr_to_pfn(gpa)), &xma);
+    rc = p2m_get_mem_access(d, _gfn(paddr_to_pfn(gpa)), &xma);
     if ( rc )
         return true;
 
@@ -2074,6 +2082,14 @@ bool_t p2m_mem_access_check(paddr_t gpa, vaddr_t gla, const struct npfec npfec)
         req->u.mem_access.flags |= npfec.insn_fetch     ? MEM_ACCESS_X : 0;
         req->vcpu_id = v->vcpu_id;
 
+        vm_event_fill_regs(req);
+
+        if ( altp2m_active(v->domain) )
+        {
+            req->flags |= VM_EVENT_FLAG_ALTERNATE_P2M;
+            req->altp2m_idx = vcpu_altp2m(v).p2midx;
+        }
+
         mem_access_send_req(v->domain, req);
         xfree(req);
     }
@@ -2356,6 +2372,116 @@ struct p2m_domain *p2m_get_altp2m(struct vcpu *v)
     return v->domain->arch.altp2m_p2m[index];
 }
 
+bool_t p2m_switch_vcpu_altp2m_by_id(struct vcpu *v, unsigned int idx)
+{
+    struct domain *d = v->domain;
+    bool_t rc = 0;
+
+    if ( idx >= MAX_ALTP2M )
+        return rc;
+
+    altp2m_lock(d);
+
+    if ( d->arch.altp2m_vttbr[idx] != INVALID_MFN )
+    {
+        if ( idx != vcpu_altp2m(v).p2midx )
+        {
+            atomic_dec(&p2m_get_altp2m(v)->active_vcpus);
+            vcpu_altp2m(v).p2midx = idx;
+            atomic_inc(&p2m_get_altp2m(v)->active_vcpus);
+        }
+        rc = 1;
+    }
+
+    altp2m_unlock(d);
+
+    return rc;
+}
+
+/*
+ * If the fault is for a not present entry:
+ *     if the entry in the host p2m has a valid mfn, copy it and retry
+ *     else indicate that outer handler should handle fault
+ *
+ * If the fault is for a present entry:
+ *     indicate that outer handler should handle fault
+ */
+bool_t p2m_altp2m_lazy_copy(struct vcpu *v, paddr_t gpa,
+                            unsigned long gva, struct npfec npfec,
+                            struct p2m_domain **ap2m)
+{
+    struct domain *d = v->domain;
+    struct p2m_domain *hp2m = p2m_get_hostp2m(v->domain);
+    p2m_type_t p2mt;
+    xenmem_access_t xma;
+    paddr_t maddr, mask = 0;
+    gfn_t gfn = _gfn(paddr_to_pfn(gpa));
+    unsigned int level;
+    unsigned long mattr;
+    int rc = 0;
+
+    static const p2m_access_t memaccess[] = {
+#define ACCESS(ac) [XENMEM_access_##ac] = p2m_access_##ac
+        ACCESS(n),
+        ACCESS(r),
+        ACCESS(w),
+        ACCESS(rw),
+        ACCESS(x),
+        ACCESS(rx),
+        ACCESS(wx),
+        ACCESS(rwx),
+        ACCESS(rx2rw),
+        ACCESS(n2rwx),
+#undef ACCESS
+    };
+
+    *ap2m = p2m_get_altp2m(v);
+    if ( *ap2m == NULL )
+        return 0;
+
+    /* Check if entry is part of the altp2m view */
+    spin_lock(&(*ap2m)->lock);
+    maddr = __p2m_lookup(*ap2m, gpa, NULL);
+    spin_unlock(&(*ap2m)->lock);
+    if ( maddr != INVALID_PADDR )
+        return 0;
+
+    /* Check if entry is part of the host p2m view */
+    spin_lock(&hp2m->lock);
+    maddr = __p2m_lookup(hp2m, gpa, &p2mt);
+    if ( maddr == INVALID_PADDR )
+        goto out;
+
+    rc = __p2m_get_mem_access(hp2m, gfn, &xma);
+    if ( rc )
+        goto out;
+
+    rc = p2m_get_gfn_level_and_attr(hp2m, gpa, &level, &mattr);
+    if ( rc )
+        goto out;
+    spin_unlock(&hp2m->lock);
+
+    mask = level_masks[level];
+
+    rc = apply_p2m_changes(d, *ap2m, INSERT,
+                           pfn_to_paddr(gfn_x(gfn)) & mask,
+                           (pfn_to_paddr(gfn_x(gfn)) + level_sizes[level]) & mask,
+                           maddr & mask, mattr, 0, p2mt,
+                           memaccess[xma]);
+    if ( rc )
+    {
+        gdprintk(XENLOG_ERR, "failed to set entry for %lx -> %lx p2m %lx\n",
+                 (unsigned long)pfn_to_paddr(gfn_x(gfn)),
+                 (unsigned long)maddr, (unsigned long)*ap2m);
+        domain_crash(hp2m->domain);
+    }
+
+    return 1;
+
+out:
+    spin_unlock(&hp2m->lock);
+    return 0;
+}
+
 static void p2m_init_altp2m_helper(struct domain *d, unsigned int i)
 {
     struct p2m_domain *p2m = d->arch.altp2m_p2m[i];
diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index 6995971..78db2cf 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -48,6 +48,8 @@
 #include <asm/gic.h>
 #include <asm/vgic.h>
 
+#include <asm/altp2m.h>
+
 /* The base of the stack must always be double-word aligned, which means
  * that both the kernel half of struct cpu_user_regs (which is pushed in
  * entry.S) and struct cpu_info (which lives at the bottom of a Xen
@@ -2383,35 +2385,64 @@ static void do_trap_instr_abort_guest(struct cpu_user_regs *regs,
 {
     int rc;
     register_t gva = READ_SYSREG(FAR_EL2);
+    struct vcpu *v = current;
+    struct domain *d = v->domain;
+    struct p2m_domain *p2m = NULL;
+    paddr_t gpa;
+
+    if ( hsr.iabt.s1ptw )
+        gpa = get_faulting_ipa();
+    else
+    {
+        /*
+         * Flush the TLB to make sure the DTLB is clear before
+         * doing GVA->IPA translation. If we got here because of
+         * an entry only present in the ITLB, this translation may
+         * still be inaccurate.
+         */
+        flush_tlb_local();
+
+        rc = gva_to_ipa(gva, &gpa, GV2M_READ);
+        if ( rc == -EFAULT )
+            goto bad_insn_abort;
+    }
 
     switch ( hsr.iabt.ifsc & 0x3f )
     {
+    case FSC_FLT_TRANS ... FSC_FLT_TRANS + 3:
+    {
+        if ( altp2m_active(d) )
+        {
+            const struct npfec npfec = {
+                .insn_fetch = 1,
+                .gla_valid = 1,
+                .kind = hsr.iabt.s1ptw ? npfec_kind_in_gpt : npfec_kind_with_gla
+            };
+
+            /*
+             * Copy the entire page of the failing instruction into the
+             * currently active altp2m view.
+             */
+            if ( p2m_altp2m_lazy_copy(v, gpa, gva, npfec, &p2m) )
+                return;
+
+            rc = p2m_mem_access_check(gpa, gva, npfec);
+
+            /* Trap was triggered by mem_access, work here is done */
+            if ( !rc )
+                return;
+        }
+
+        break;
+    }
     case FSC_FLT_PERM ... FSC_FLT_PERM + 3:
     {
-        paddr_t gpa;
         const struct npfec npfec = {
             .insn_fetch = 1,
             .gla_valid = 1,
             .kind = hsr.iabt.s1ptw ? npfec_kind_in_gpt : npfec_kind_with_gla
         };
 
-        if ( hsr.iabt.s1ptw )
-            gpa = get_faulting_ipa();
-        else
-        {
-            /*
-             * Flush the TLB to make sure the DTLB is clear before
-             * doing GVA->IPA translation. If we got here because of
-             * an entry only present in the ITLB, this translation may
-             * still be inaccurate.
-             */
-            flush_tlb_local();
-
-            rc = gva_to_ipa(gva, &gpa, GV2M_READ);
-            if ( rc == -EFAULT )
-                goto bad_insn_abort;
-        }
-
         rc = p2m_mem_access_check(gpa, gva, npfec);
 
         /* Trap was triggered by mem_access, work here is done */
@@ -2429,6 +2460,8 @@ static void do_trap_data_abort_guest(struct cpu_user_regs *regs,
                                      const union hsr hsr)
 {
     const struct hsr_dabt dabt = hsr.dabt;
+    struct vcpu *v = current;
+    struct p2m_domain *p2m = NULL;
     int rc;
     mmio_info_t info;
 
@@ -2449,6 +2482,12 @@ static void do_trap_data_abort_guest(struct cpu_user_regs *regs,
         info.gpa = get_faulting_ipa();
     else
     {
+        /*
+         * When using altp2m, this flush is required to get rid of old TLB
+         * entries and use the new, lazily copied, ap2m entries.
+         */
+        flush_tlb_local();
+
         rc = gva_to_ipa(info.gva, &info.gpa, GV2M_READ);
         if ( rc == -EFAULT )
             goto bad_data_abort;
@@ -2456,6 +2495,33 @@ static void do_trap_data_abort_guest(struct cpu_user_regs *regs,
 
     switch ( dabt.dfsc & 0x3f )
     {
+    case FSC_FLT_TRANS ... FSC_FLT_TRANS + 3:
+    {
+        if ( altp2m_active(current->domain) )
+        {
+            const struct npfec npfec = {
+                .read_access = !dabt.write,
+                .write_access = dabt.write,
+                .gla_valid = 1,
+                .kind = dabt.s1ptw ? npfec_kind_in_gpt : npfec_kind_with_gla
+            };
+
+            /*
+             * Copy the entire page of the failing data access into the
+             * currently active altp2m view.
+             */
+            if ( p2m_altp2m_lazy_copy(v, info.gpa, info.gva, npfec, &p2m) )
+                return;
+
+            rc = p2m_mem_access_check(info.gpa, info.gva, npfec);
+
+            /* Trap was triggered by mem_access, work here is done */
+            if ( !rc )
+                return;
+        }
+
+        break;
+    }
     case FSC_FLT_PERM ... FSC_FLT_PERM + 3:
     {
         const struct npfec npfec = {
diff --git a/xen/include/asm-arm/altp2m.h b/xen/include/asm-arm/altp2m.h
index ec4aa09..2a87d14 100644
--- a/xen/include/asm-arm/altp2m.h
+++ b/xen/include/asm-arm/altp2m.h
@@ -31,9 +31,7 @@ static inline bool_t altp2m_active(const struct domain *d)
 /* Alternate p2m VCPU */
 static inline uint16_t altp2m_vcpu_idx(const struct vcpu *v)
 {
-    /* Not implemented on ARM, should not be reached. */
-    BUG();
-    return 0;
+    return vcpu_altp2m(v).p2midx;
 }
 
 void altp2m_vcpu_initialise(struct vcpu *v);
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index 451b097..b82e4b9 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -115,12 +115,6 @@ void p2m_mem_access_emulate_check(struct vcpu *v,
     /* Not supported on ARM. */
 }
 
-static inline
-void p2m_altp2m_check(struct vcpu *v, uint16_t idx)
-{
-    /* Not supported on ARM. */
-}
-
 /*
  * Alternate p2m: shadow p2m tables used for alternate memory views.
  */
@@ -131,12 +125,23 @@ void p2m_altp2m_check(struct vcpu *v, uint16_t idx)
 /* Get current alternate p2m table */
 struct p2m_domain *p2m_get_altp2m(struct vcpu *v);
 
+/* Switch alternate p2m for a single vcpu */
+bool_t p2m_switch_vcpu_altp2m_by_id(struct vcpu *v, unsigned int idx);
+
+/* Check to see if vcpu should be switched to a different p2m. */
+void p2m_altp2m_check(struct vcpu *v, uint16_t idx);
+
 /* Flush all the alternate p2m's for a domain */
 void p2m_flush_altp2m(struct domain *d);
 
 /* Make a specific alternate p2m valid */
 int p2m_init_altp2m_by_id(struct domain *d, unsigned int idx);
 
+/* Alternate p2m paging */
+bool_t p2m_altp2m_lazy_copy(struct vcpu *v, paddr_t gpa,
+                            unsigned long gla, struct npfec npfec,
+                            struct p2m_domain **ap2m);
+
 /* Find an available alternate p2m and make it valid */
 int p2m_init_next_altp2m(struct domain *d, uint16_t *idx);
 
-- 
2.8.3



* [PATCH 16/18] arm/altp2m: Extended libxl to activate altp2m on ARM.
  2016-07-04 11:45 [PATCH 00/18] arm/altp2m: Introducing altp2m to ARM Sergej Proskurin
                   ` (32 preceding siblings ...)
  2016-07-04 11:46 ` [PATCH 15/18] arm/altp2m: Add altp2m paging mechanism Sergej Proskurin
@ 2016-07-04 11:46 ` Sergej Proskurin
  2016-07-04 11:46 ` [PATCH 17/18] arm/altp2m: Adjust debug information to altp2m Sergej Proskurin
                   ` (2 subsequent siblings)
  36 siblings, 0 replies; 126+ messages in thread
From: Sergej Proskurin @ 2016-07-04 11:46 UTC (permalink / raw)
  To: xen-devel; +Cc: Sergej Proskurin, Ian Jackson, Wei Liu

The current implementation allows setting the parameter HVM_PARAM_ALTP2M,
which enables the use of altp2m on ARM. For this, we define an additional
altp2m field for PV domains as part of the libxl_domain_build_info
struct. On ARM systems, this field can be set through the "altp2m" switch
in the domain's configuration file (i.e. set altp2m=1).

Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
---
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Wei Liu <wei.liu2@citrix.com>
---
 tools/libxl/libxl_create.c  |  1 +
 tools/libxl/libxl_dom.c     | 14 ++++++++++++++
 tools/libxl/libxl_types.idl |  1 +
 tools/libxl/xl_cmdimpl.c    |  5 +++++
 4 files changed, 21 insertions(+)

diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
index 1b99472..40b5f61 100644
--- a/tools/libxl/libxl_create.c
+++ b/tools/libxl/libxl_create.c
@@ -400,6 +400,7 @@ int libxl__domain_build_info_setdefault(libxl__gc *gc,
             b_info->cmdline = b_info->u.pv.cmdline;
             b_info->u.pv.cmdline = NULL;
         }
+        libxl_defbool_setdefault(&b_info->u.pv.altp2m, false);
         break;
     default:
         LOG(ERROR, "invalid domain type %s in create info",
diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
index ec29060..ab023a2 100644
--- a/tools/libxl/libxl_dom.c
+++ b/tools/libxl/libxl_dom.c
@@ -277,6 +277,16 @@ err:
 }
 #endif
 
+#if defined(__arm__) || defined(__aarch64__)
+static void pv_set_conf_params(xc_interface *handle, uint32_t domid,
+                               libxl_domain_build_info *const info)
+{
+    if ( libxl_defbool_val(info->u.pv.altp2m) )
+        xc_hvm_param_set(handle, domid, HVM_PARAM_ALTP2M,
+                         libxl_defbool_val(info->u.pv.altp2m));
+}
+#endif
+
 static void hvm_set_conf_params(xc_interface *handle, uint32_t domid,
                                 libxl_domain_build_info *const info)
 {
@@ -433,6 +443,10 @@ int libxl__build_pre(libxl__gc *gc, uint32_t domid,
             return rc;
 #endif
     }
+#if defined(__arm__) || defined(__aarch64__)
+    else /* info->type == LIBXL_DOMAIN_TYPE_PV */
+        pv_set_conf_params(ctx->xch, domid, info);
+#endif
 
     rc = libxl__arch_domain_create(gc, d_config, domid);
 
diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
index ef614be..0a164f9 100644
--- a/tools/libxl/libxl_types.idl
+++ b/tools/libxl/libxl_types.idl
@@ -554,6 +554,7 @@ libxl_domain_build_info = Struct("domain_build_info",[
                                       ("features", string, {'const': True}),
                                       # Use host's E820 for PCI passthrough.
                                       ("e820_host", libxl_defbool),
+                                      ("altp2m", libxl_defbool),
                                       ])),
                  ("invalid", None),
                  ], keyvar_init_val = "LIBXL_DOMAIN_TYPE_INVALID")),
diff --git a/tools/libxl/xl_cmdimpl.c b/tools/libxl/xl_cmdimpl.c
index 6459eec..12c6e48 100644
--- a/tools/libxl/xl_cmdimpl.c
+++ b/tools/libxl/xl_cmdimpl.c
@@ -1718,6 +1718,11 @@ static void parse_config_data(const char *config_source,
             exit(1);
         }
 
+#if defined(__arm__) || defined(__aarch64__)
+        /* Enable altp2m for PV guests solely on ARM */
+        xlu_cfg_get_defbool(config, "altp2m", &b_info->u.pv.altp2m, 0);
+#endif
+
         break;
     }
     default:
-- 
2.8.3
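For reference, a minimal xl guest configuration fragment exercising the new switch is shown below. Only the `altp2m` line relates to this patch; the name, kernel path and sizing values are placeholders.

```
# Hypothetical PV guest configuration on an ARM host.
name    = "arm-guest"
kernel  = "/path/to/guest-kernel"
memory  = 512
vcpus   = 2
altp2m  = 1          # enables HVM_PARAM_ALTP2M via libxl (this series)
```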



* [PATCH 17/18] arm/altp2m: Adjust debug information to altp2m.
  2016-07-04 11:45 [PATCH 00/18] arm/altp2m: Introducing altp2m to ARM Sergej Proskurin
                   ` (33 preceding siblings ...)
  2016-07-04 11:46 ` [PATCH 16/18] arm/altp2m: Extended libxl to activate altp2m on ARM Sergej Proskurin
@ 2016-07-04 11:46 ` Sergej Proskurin
  2016-07-04 11:46 ` [PATCH 18/18] arm/altp2m: Extend xen-access for altp2m on ARM Sergej Proskurin
  2016-07-04 12:52 ` [PATCH 00/18] arm/altp2m: Introducing altp2m to ARM Andrew Cooper
  36 siblings, 0 replies; 126+ messages in thread
From: Sergej Proskurin @ 2016-07-04 11:46 UTC (permalink / raw)
  To: xen-devel; +Cc: Sergej Proskurin, Julien Grall, Stefano Stabellini

Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
---
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Julien Grall <julien.grall@arm.com>
---
 xen/arch/arm/p2m.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 96892a5..de97a12 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -51,7 +51,8 @@ static bool_t p2m_mapping(lpae_t pte)
 
 void p2m_dump_info(struct domain *d)
 {
-    struct p2m_domain *p2m = &d->arch.p2m;
+    struct vcpu *v = current;
+    struct p2m_domain *p2m = altp2m_active(d) ? p2m_get_altp2m(v) : p2m_get_hostp2m(d);
 
     spin_lock(&p2m->lock);
     printk("p2m mappings for domain %d (vmid %d):\n",
@@ -71,7 +72,8 @@ void memory_type_changed(struct domain *d)
 
 void dump_p2m_lookup(struct domain *d, paddr_t addr)
 {
-    struct p2m_domain *p2m = &d->arch.p2m;
+    struct vcpu *v = current;
+    struct p2m_domain *p2m = altp2m_active(d) ? p2m_get_altp2m(v) : p2m_get_hostp2m(d);
 
     printk("dom%d IPA 0x%"PRIpaddr"\n", d->domain_id, addr);
 
-- 
2.8.3



* [PATCH 18/18] arm/altp2m: Extend xen-access for altp2m on ARM.
  2016-07-04 11:45 [PATCH 00/18] arm/altp2m: Introducing altp2m to ARM Sergej Proskurin
                   ` (34 preceding siblings ...)
  2016-07-04 11:46 ` [PATCH 17/18] arm/altp2m: Adjust debug information to altp2m Sergej Proskurin
@ 2016-07-04 11:46 ` Sergej Proskurin
  2016-07-04 12:52 ` [PATCH 00/18] arm/altp2m: Introducing altp2m to ARM Andrew Cooper
  36 siblings, 0 replies; 126+ messages in thread
From: Sergej Proskurin @ 2016-07-04 11:46 UTC (permalink / raw)
  To: xen-devel
  Cc: Sergej Proskurin, Tamas K Lengyel, Ian Jackson, Wei Liu, Razvan Cojocaru

Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
---
Cc: Razvan Cojocaru <rcojocaru@bitdefender.com>
Cc: Tamas K Lengyel <tamas@tklengyel.com>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Wei Liu <wei.liu2@citrix.com>
---
 tools/tests/xen-access/xen-access.c | 11 +++++++++--
 1 file changed, 9 insertions(+), 2 deletions(-)

diff --git a/tools/tests/xen-access/xen-access.c b/tools/tests/xen-access/xen-access.c
index f26e723..ef21d0d 100644
--- a/tools/tests/xen-access/xen-access.c
+++ b/tools/tests/xen-access/xen-access.c
@@ -333,8 +333,9 @@ void usage(char* progname)
 {
     fprintf(stderr, "Usage: %s [-m] <domain_id> write|exec", progname);
 #if defined(__i386__) || defined(__x86_64__)
-            fprintf(stderr, "|breakpoint|altp2m_write|altp2m_exec");
+            fprintf(stderr, "|breakpoint");
 #endif
+            fprintf(stderr, "|altp2m_write|altp2m_exec");
             fprintf(stderr,
             "\n"
             "Logs first page writes, execs, or breakpoint traps that occur on the domain.\n"
@@ -402,6 +403,7 @@ int main(int argc, char *argv[])
     {
         breakpoint = 1;
     }
+#endif
     else if ( !strcmp(argv[0], "altp2m_write") )
     {
         default_access = XENMEM_access_rx;
@@ -412,7 +414,6 @@ int main(int argc, char *argv[])
         default_access = XENMEM_access_rw;
         altp2m = 1;
     }
-#endif
     else
     {
         usage(argv[0]);
@@ -485,12 +486,14 @@ int main(int argc, char *argv[])
             goto exit;
         }
 
+#if defined(__i386__) || defined(__x86_64__)
         rc = xc_monitor_singlestep( xch, domain_id, 1 );
         if ( rc < 0 )
         {
             ERROR("Error %d failed to enable singlestep monitoring!\n", rc);
             goto exit;
         }
+#endif
     }
 
     if ( !altp2m )
@@ -540,7 +543,9 @@ int main(int argc, char *argv[])
                 rc = xc_altp2m_switch_to_view( xch, domain_id, 0 );
                 rc = xc_altp2m_destroy_view(xch, domain_id, altp2m_view_id);
                 rc = xc_altp2m_set_domain_state(xch, domain_id, 0);
+#if defined(__i386__) || defined(__x86_64__)
                 rc = xc_monitor_singlestep(xch, domain_id, 0);
+#endif
             } else {
                 rc = xc_set_mem_access(xch, domain_id, XENMEM_access_rwx, ~0ull, 0);
                 rc = xc_set_mem_access(xch, domain_id, XENMEM_access_rwx, START_PFN,
@@ -695,9 +700,11 @@ int main(int argc, char *argv[])
 exit:
     if ( altp2m )
     {
+#if defined(__i386__) || defined(__x86_64__)
         uint32_t vcpu_id;
         for ( vcpu_id = 0; vcpu_id<XEN_LEGACY_MAX_VCPUS; vcpu_id++)
             rc = control_singlestep(xch, domain_id, vcpu_id, 0);
+#endif
     }
 
     /* Tear down domain xenaccess */
-- 
2.8.3



* Re: [PATCH 06/18] arm/altp2m: Add a(p2m) table flushing routines.
  2016-07-04 11:45 ` [PATCH 06/18] arm/altp2m: Add a(p2m) table flushing routines Sergej Proskurin
@ 2016-07-04 12:12   ` Sergej Proskurin
  2016-07-04 15:42     ` Julien Grall
  2016-07-04 15:55   ` Julien Grall
  2016-07-04 16:20   ` Julien Grall
  2 siblings, 1 reply; 126+ messages in thread
From: Sergej Proskurin @ 2016-07-04 12:12 UTC (permalink / raw)
  To: xen-devel; +Cc: Julien Grall, Stefano Stabellini

ARM allows the use of concatenated root (first-level) page tables:
P2M_ROOT_PAGES consecutive pages are used for the root-level page table.
We need to prevent freeing any of these concatenated pages while flushing
in p2m_flush_table, simply because new pages might be re-inserted into
the page table at a later point in time.


On 07/04/2016 01:45 PM, Sergej Proskurin wrote:
> The current implementation differentiates between flushing and
> destroying altp2m views. This commit adds the functions
> p2m_flush_altp2m, and p2m_flush_table, which allow to flush all or
> individual altp2m views without destroying the entire table. In this
> way, altp2m views can be reused at a later point in time.
> 
> In addition, the implementation clears all altp2m entries during the
> process of flushing. The same applies to hostp2m entries, when it is
> destroyed. In this way, further domain and p2m allocations will not
> unintentionally reuse old p2m mappings.
> 
> Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
> ---
> Cc: Stefano Stabellini <sstabellini@kernel.org>
> Cc: Julien Grall <julien.grall@arm.com>
> ---
>  xen/arch/arm/p2m.c        | 67 +++++++++++++++++++++++++++++++++++++++++++++++
>  xen/include/asm-arm/p2m.h | 15 ++++++++---
>  2 files changed, 78 insertions(+), 4 deletions(-)
> 
> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> index 4a745fd..ae789e6 100644
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -2110,6 +2110,73 @@ int p2m_init_altp2m_by_id(struct domain *d, unsigned int idx)
>      return rc;
>  }
>  
> +/* Reset this p2m table to be empty */
> +static void p2m_flush_table(struct p2m_domain *p2m)
> +{
> +    struct page_info *top, *pg;
> +    mfn_t mfn;
> +    unsigned int i;
> +
> +    /* Check whether the p2m table has already been flushed before. */
> +    if ( p2m->root == NULL)
> +        return;
> +
> +    spin_lock(&p2m->lock);
> +
> +    /*
> +     * "Host" p2m tables can have shared entries &c that need a bit more care
> +     * when discarding them
> +     */
> +    ASSERT(!p2m_is_hostp2m(p2m));
> +
> +    /* Zap the top level of the trie */
> +    top = p2m->root;
> +
> +    /* Clear all concatenated first level pages */
> +    for ( i = 0; i < P2M_ROOT_PAGES; i++ )
> +    {
> +        mfn = _mfn(page_to_mfn(top + i));
> +        clear_domain_page(mfn);
> +    }
> +
> +    /* Free the rest of the trie pages back to the paging pool */
> +    while ( (pg = page_list_remove_head(&p2m->pages)) )
> +        if ( pg != top  )
> +        {
> +            /*
> +             * Before freeing the individual pages, we clear them to prevent
> +             * reusing old table entries in future p2m allocations.
> +             */
> +            mfn = _mfn(page_to_mfn(pg));
> +            clear_domain_page(mfn);
> +            free_domheap_page(pg);
> +        }

At this point, we prevent only the first root-level page from being
freed. If there are multiple consecutive first-level pages, the remaining
ones will be freed in the loop above (and may crash the guest if the
table is reused at a later point in time). However, testing for every
concatenated page in the if clause of the while loop would further
decrease the flushing performance. My question is therefore whether there
is a good way to solve this issue.

> +
> +    page_list_add(top, &p2m->pages);
> +
> +    /* Invalidate VTTBR */
> +    p2m->vttbr.vttbr = 0;
> +    p2m->vttbr.vttbr_baddr = INVALID_MFN;
> +
> +    spin_unlock(&p2m->lock);
> +}
> +
> +void p2m_flush_altp2m(struct domain *d)
> +{
> +    unsigned int i;
> +
> +    altp2m_lock(d);
> +
> +    for ( i = 0; i < MAX_ALTP2M; i++ )
> +    {
> +        p2m_flush_table(d->arch.altp2m_p2m[i]);
> +        flush_tlb();
> +        d->arch.altp2m_vttbr[i] = INVALID_MFN;
> +    }
> +
> +    altp2m_unlock(d);
> +}
> +
>  /*
>   * Local variables:
>   * mode: C
> diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
> index 8ee78e0..51d784f 100644
> --- a/xen/include/asm-arm/p2m.h
> +++ b/xen/include/asm-arm/p2m.h
> @@ -132,10 +132,7 @@ void p2m_altp2m_check(struct vcpu *v, uint16_t idx)
>  struct p2m_domain *p2m_get_altp2m(struct vcpu *v);
>  
>  /* Flush all the alternate p2m's for a domain */
> -static inline void p2m_flush_altp2m(struct domain *d)
> -{
> -    /* Not supported on ARM. */
> -}
> +void p2m_flush_altp2m(struct domain *d);
>  
>  /* Make a specific alternate p2m valid */
>  int p2m_init_altp2m_by_id(struct domain *d, unsigned int idx);
> @@ -289,6 +286,16 @@ static inline int get_page_and_type(struct page_info *page,
>  /* get host p2m table */
>  #define p2m_get_hostp2m(d) (&(d)->arch.p2m)
>  
> +static inline bool_t p2m_is_hostp2m(const struct p2m_domain *p2m)
> +{
> +    return p2m->p2m_class == p2m_host;
> +}
> +
> +static inline bool_t p2m_is_altp2m(const struct p2m_domain *p2m)
> +{
> +    return p2m->p2m_class == p2m_alternate;
> +}
> +
>  /* vm_event and mem_access are supported on any ARM guest */
>  static inline bool_t p2m_mem_access_sanity_check(struct domain *d)
>  {
> 

Cheers,
Sergej



* Re: [PATCH 01/18] arm/altp2m: Add cmd-line support for altp2m on ARM.
  2016-07-04 11:45 ` [PATCH 01/18] arm/altp2m: Add cmd-line support for altp2m on ARM Sergej Proskurin
@ 2016-07-04 12:15   ` Andrew Cooper
  2016-07-04 13:02     ` Sergej Proskurin
  2016-07-04 13:25   ` Julien Grall
  2016-07-04 17:42   ` Julien Grall
  2 siblings, 1 reply; 126+ messages in thread
From: Andrew Cooper @ 2016-07-04 12:15 UTC (permalink / raw)
  To: Sergej Proskurin, xen-devel; +Cc: Julien Grall, Stefano Stabellini

On 04/07/16 12:45, Sergej Proskurin wrote:
> The Xen altp2m subsystem is currently supported only on x86-64 based
> architectures. By utilizing ARM's virtualization extensions, we intend
> to implement altp2m support for ARM architectures and thus further
> extend current Virtual Machine Introspection (VMI) capabilities on ARM.
>
> With this commit, Xen is now able to activate altp2m support on ARM by
> means of the command-line argument 'altp2m' (bool).
>
> Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>

In addition, please patch docs/misc/xen-command-line.markdown to
indicate that the altp2m option is available for x86 Intel and ARM.

~Andrew


* Re: [PATCH 11/18] arm/altp2m: Make flush_tlb_domain ready for altp2m.
  2016-07-04 11:45 ` [PATCH 11/18] arm/altp2m: Make flush_tlb_domain ready for altp2m Sergej Proskurin
@ 2016-07-04 12:30   ` Sergej Proskurin
  2016-07-04 20:32   ` Julien Grall
  1 sibling, 0 replies; 126+ messages in thread
From: Sergej Proskurin @ 2016-07-04 12:30 UTC (permalink / raw)
  To: xen-devel; +Cc: Julien Grall, Stefano Stabellini



On 07/04/2016 01:45 PM, Sergej Proskurin wrote:
> This commit makes sure that the TLB of a domain considers flushing all
> of the associated altp2m views. Therefore, in case a different domain
> (not the currently active domain) shall flush its TLBs, the current
> implementation loops over all VTTBRs of the different altp2m mappings
> per vCPU and flushes the TLBs. This way, a change of one of the altp2m
> mapping is considered.  At this point, it must be considered that the
> domain --whose TLBs are to be flushed-- is not locked.
> 
> Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
> ---
> Cc: Stefano Stabellini <sstabellini@kernel.org>
> Cc: Julien Grall <julien.grall@arm.com>
> ---
>  xen/arch/arm/p2m.c | 71 ++++++++++++++++++++++++++++++++++++++++++++++++------
>  1 file changed, 63 insertions(+), 8 deletions(-)
> 
> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> index 7e721f9..019f10e 100644
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -15,6 +15,8 @@
>  #include <asm/hardirq.h>
>  #include <asm/page.h>
>  
> +#include <asm/altp2m.h>
> +
>  #ifdef CONFIG_ARM_64
>  static unsigned int __read_mostly p2m_root_order;
>  static unsigned int __read_mostly p2m_root_level;
> @@ -79,12 +81,41 @@ void dump_p2m_lookup(struct domain *d, paddr_t addr)
>                   P2M_ROOT_LEVEL, P2M_ROOT_PAGES);
>  }
>  
> +static uint64_t p2m_get_altp2m_vttbr(struct vcpu *v)
> +{
> +    struct domain *d = v->domain;
> +    uint16_t index = vcpu_altp2m(v).p2midx;
> +
> +    if ( index == INVALID_ALTP2M )
> +        return INVALID_MFN;
> +
> +    BUG_ON(index >= MAX_ALTP2M);
> +
> +    return d->arch.altp2m_vttbr[index];
> +}
> +
> +static void p2m_load_altp2m_VTTBR(struct vcpu *v)
> +{
> +    struct domain *d = v->domain;
> +    uint64_t vttbr = p2m_get_altp2m_vttbr(v);
> +
> +    if ( is_idle_domain(d) )
> +        return;
> +
> +    BUG_ON(vttbr == INVALID_MFN);
> +    WRITE_SYSREG64(vttbr, VTTBR_EL2);
> +
> +    isb(); /* Ensure update is visible */
> +}
> +
>  static void p2m_load_VTTBR(struct domain *d)
>  {
>      if ( is_idle_domain(d) )
>          return;
> +
>      BUG_ON(!d->arch.vttbr);
>      WRITE_SYSREG64(d->arch.vttbr, VTTBR_EL2);
> +
>      isb(); /* Ensure update is visible */
>  }
>  
> @@ -101,7 +132,11 @@ void p2m_restore_state(struct vcpu *n)
>      WRITE_SYSREG(hcr & ~HCR_VM, HCR_EL2);
>      isb();
>  
> -    p2m_load_VTTBR(n->domain);
> +    if ( altp2m_active(n->domain) )
> +        p2m_load_altp2m_VTTBR(n);
> +    else
> +        p2m_load_VTTBR(n->domain);
> +
>      isb();
>  
>      if ( is_32bit_domain(n->domain) )
> @@ -119,22 +154,42 @@ void p2m_restore_state(struct vcpu *n)
>  void flush_tlb_domain(struct domain *d)
>  {
>      unsigned long flags = 0;
> +    struct vcpu *v = NULL;
>  
> -    /* Update the VTTBR if necessary with the domain d. In this case,
> -     * it's only necessary to flush TLBs on every CPUs with the current VMID
> -     * (our domain).
> +    /*
> +     * Update the VTTBR if necessary with the domain d. In this case, it is only
> +     * necessary to flush TLBs on every CPUs with the current VMID (our
> +     * domain).
>       */
>      if ( d != current->domain )
>      {
>          local_irq_save(flags);
> -        p2m_load_VTTBR(d);
> -    }
>  
> -    flush_tlb();
> +        /* If altp2m is active, update VTTBR and flush TLBs of every VCPU */
> +        if ( altp2m_active(d) )
> +        {
> +            for_each_vcpu( d, v )
> +            {
> +                p2m_load_altp2m_VTTBR(v);
> +                flush_tlb();
> +            }
> +        }
> +        else
> +        {
> +            p2m_load_VTTBR(d);
> +            flush_tlb();
> +        }
> +    }
> +    else
> +        flush_tlb();

Flushing the TLBs of all vCPUs must also be considered for the current
domain. Does it make sense to flush the current domain's TLBs at this
point as well? If the current domain's TLBs were modified and need
flushing, this will happen anyway when the VTTBR is loaded on the next
VMENTRY.

>  
>      if ( d != current->domain )
>      {
> -        p2m_load_VTTBR(current->domain);
> +        /* Make sure altp2m mapping is valid. */
> +        if ( altp2m_active(current->domain) )
> +            p2m_load_altp2m_VTTBR(current);
> +        else
> +            p2m_load_VTTBR(current->domain);
>          local_irq_restore(flags);
>      }
>  }
> 
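To illustrate the question raised in the comment above, here is a minimal,
self-contained model of the control flow of the patched flush_tlb_domain().
All types and helpers (struct domain, the flush counter, the VTTBR variable)
are stand-ins invented for illustration; this is a sketch, not the real
hypervisor code:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define MAX_VCPUS 4

struct domain {
    bool altp2m_active;
    int nr_vcpus;
    uint64_t vttbr[MAX_VCPUS];   /* per-vCPU altp2m VTTBRs (stand-in) */
};

static uint64_t current_vttbr;   /* models VTTBR_EL2 */
static int flush_count;          /* counts flush_tlb() calls */

static void flush_tlb(void) { flush_count++; }

static void flush_tlb_domain(struct domain *d, struct domain *curr)
{
    if ( d != curr )
    {
        if ( d->altp2m_active )
        {
            /* One VTTBR load plus flush per altp2m view (per vCPU). */
            for ( int i = 0; i < d->nr_vcpus; i++ )
            {
                current_vttbr = d->vttbr[i];  /* p2m_load_altp2m_VTTBR */
                flush_tlb();
            }
        }
        else
        {
            current_vttbr = d->vttbr[0];      /* p2m_load_VTTBR */
            flush_tlb();
        }
        /* Restore the current domain's mapping before returning. */
        current_vttbr = curr->vttbr[0];
    }
    else
        /* Current domain: one flush; its VTTBR is reloaded on VMENTRY. */
        flush_tlb();
}
```

Under these assumptions, flushing a foreign domain with altp2m active costs
one flush per vCPU, while the current domain takes the single-flush path,
which is exactly the asymmetry the question above is about.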

-- 
M.Sc. Sergej Proskurin
Wissenschaftlicher Mitarbeiter

Technische Universität München
Fakultät für Informatik
Lehrstuhl für Sicherheit in der Informatik

Boltzmannstraße 3
85748 Garching (bei München)

Tel. +49 (0)89 289-18592
Fax +49 (0)89 289-18579

proskurin@sec.in.tum.de
www.sec.in.tum.de


* Re: [PATCH 00/18] arm/altp2m: Introducing altp2m to ARM.
  2016-07-04 11:45 [PATCH 00/18] arm/altp2m: Introducing altp2m to ARM Sergej Proskurin
                   ` (35 preceding siblings ...)
  2016-07-04 11:46 ` [PATCH 18/18] arm/altp2m: Extend xen-access for altp2m on ARM Sergej Proskurin
@ 2016-07-04 12:52 ` Andrew Cooper
  2016-07-04 13:05   ` Sergej Proskurin
  36 siblings, 1 reply; 126+ messages in thread
From: Andrew Cooper @ 2016-07-04 12:52 UTC (permalink / raw)
  To: Sergej Proskurin, xen-devel

On 04/07/16 12:45, Sergej Proskurin wrote:
> Hello together,
>
> Since this is my first contribution to the Xen development mailing list, I
> would like to shortly introduce myself. My name is Sergej Proskurin. I am a PhD
> Student at the Technical University of Munich. My research areas focus on
> Virtual Machine Introspection, Hypervisor/OS Security, and Reverse Engineering.
>
> The following patch series can be found on Github[0] and is part of my
> contribution to this year's Google Summer of Code (GSoC)[1]. My project is
> managed by the organization The Honeynet Project. As part of GSoC, I am being
> supervised by the Xen developer Tamas K. Lengyel <tamas@tklengyel.com>, George
> D. Webster, and Steven Maresca.
>
> In this patch series, we provide an implementation of the altp2m subsystem for
> ARM. Our implementation is based on the altp2m subsystem for x86, providing
> additional --alternate-- views on the guest's physical memory by means of the
> ARM 2nd stage translation mechanism. The patches introduce new HVMOPs and
> extend the p2m subsystem. We extend libxl to support altp2m on ARM and modify
> xen-access to test the suggested functionality.
>
> [0] https://github.com/sergej-proskurin/xen (Branch arm-altp2m-patch)
> [1] https://summerofcode.withgoogle.com/projects/#4970052843470848
>
> Sergej Proskurin (18):
>   arm/altp2m: Add cmd-line support for altp2m on ARM.
>   arm/altp2m: Add first altp2m HVMOP stubs.
>   arm/altp2m: Add HVMOP_altp2m_get_domain_state.
>   arm/altp2m: Add altp2m init/teardown routines.
>   arm/altp2m: Add HVMOP_altp2m_set_domain_state.
>   arm/altp2m: Add a(p2m) table flushing routines.
>   arm/altp2m: Add HVMOP_altp2m_create_p2m.
>   arm/altp2m: Add HVMOP_altp2m_destroy_p2m.
>   arm/altp2m: Add HVMOP_altp2m_switch_p2m.
>   arm/altp2m: Renamed and extended p2m_alloc_table.
>   arm/altp2m: Make flush_tlb_domain ready for altp2m.
>   arm/altp2m: Cosmetic fixes - function prototypes.
>   arm/altp2m: Make get_page_from_gva ready for altp2m.
>   arm/altp2m: Add HVMOP_altp2m_set_mem_access.
>   arm/altp2m: Add altp2m paging mechanism.
>   arm/altp2m: Extended libxl to activate altp2m on ARM.
>   arm/altp2m: Adjust debug information to altp2m.
>   arm/altp2m: Extend xen-access for altp2m on ARM.
>
>  tools/libxl/libxl_create.c          |   1 +
>  tools/libxl/libxl_dom.c             |  14 +
>  tools/libxl/libxl_types.idl         |   1 +
>  tools/libxl/xl_cmdimpl.c            |   5 +
>  tools/tests/xen-access/xen-access.c |  11 +-
>  xen/arch/arm/Makefile               |   1 +
>  xen/arch/arm/altp2m.c               |  68 +++
>  xen/arch/arm/domain.c               |   2 +-
>  xen/arch/arm/guestcopy.c            |   6 +-
>  xen/arch/arm/hvm.c                  | 145 ++++++
>  xen/arch/arm/p2m.c                  | 930 ++++++++++++++++++++++++++++++++----
>  xen/arch/arm/traps.c                | 104 +++-
>  xen/include/asm-arm/altp2m.h        |  12 +-
>  xen/include/asm-arm/domain.h        |  18 +
>  xen/include/asm-arm/hvm/hvm.h       |  59 +++
>  xen/include/asm-arm/mm.h            |   2 +-
>  xen/include/asm-arm/p2m.h           |  72 ++-
>  17 files changed, 1330 insertions(+), 121 deletions(-)
>  create mode 100644 xen/arch/arm/altp2m.c
>  create mode 100644 xen/include/asm-arm/hvm/hvm.h
>

I have taken a quick look over the series, and things seem to be in order.

However, I wonder whether it would be better to have do_altp2m_op()
common between x86 and ARM, to avoid the risk of divergence in the API.

~Andrew
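
The arrangement suggested above could keep the HVMOP command decoding in
common code and dispatch only the arch-specific leaves through a per-arch
ops table. The following is a hedged sketch of that pattern only; all names
here (altp2m_arch_ops, the hooks, the cmd enum) are invented for
illustration and are not actual Xen interfaces:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

enum altp2m_cmd { ALTP2M_GET_DOMAIN_STATE, ALTP2M_SET_DOMAIN_STATE };

struct altp2m_arch_ops {
    int (*set_domain_state)(bool enable);
    bool (*get_domain_state)(void);
};

/* Stand-in for a per-arch implementation (e.g. ARM). */
static bool altp2m_state;
static int arch_set_state(bool enable) { altp2m_state = enable; return 0; }
static bool arch_get_state(void) { return altp2m_state; }

static const struct altp2m_arch_ops arch_ops = {
    .set_domain_state = arch_set_state,
    .get_domain_state = arch_get_state,
};

/*
 * Common entry point: the command switch (and, in real code, the guest
 * argument copying and validation) would live here exactly once.
 */
static int do_altp2m_op(const struct altp2m_arch_ops *ops,
                        enum altp2m_cmd cmd, bool *arg)
{
    if ( ops == NULL || arg == NULL )
        return -1;

    switch ( cmd )
    {
    case ALTP2M_SET_DOMAIN_STATE:
        return ops->set_domain_state(*arg);
    case ALTP2M_GET_DOMAIN_STATE:
        *arg = ops->get_domain_state();
        return 0;
    }
    return -1;
}
```

With such a split, x86 and ARM would each supply an ops table, and the
HVMOP ABI could only diverge where the hooks themselves differ.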


* Re: [PATCH 01/18] arm/altp2m: Add cmd-line support for altp2m on ARM.
  2016-07-04 12:15   ` Andrew Cooper
@ 2016-07-04 13:02     ` Sergej Proskurin
  0 siblings, 0 replies; 126+ messages in thread
From: Sergej Proskurin @ 2016-07-04 13:02 UTC (permalink / raw)
  To: Andrew Cooper, xen-devel; +Cc: Julien Grall, Stefano Stabellini



On 07/04/2016 02:15 PM, Andrew Cooper wrote:
> On 04/07/16 12:45, Sergej Proskurin wrote:
>> The Xen altp2m subsystem is currently supported only on x86-64 based
>> architectures. By utilizing ARM's virtualization extensions, we intend
>> to implement altp2m support for ARM architectures and thus further
>> extend current Virtual Machine Introspection (VMI) capabilities on ARM.
>>
>> With this commit, Xen is now able to activate altp2m support on ARM by
>> means of the command-line argument 'altp2m' (bool).
>>
>> Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
> 
> In addition, please patch docs/misc/xen-command-line.markdown to
> indicate that the altp2m option is available for x86 Intel and ARM.
> 
> ~Andrew
> 

Will do, thank you.

Cheers,
Sergej


* Re: [PATCH 00/18] arm/altp2m: Introducing altp2m to ARM.
  2016-07-04 12:52 ` [PATCH 00/18] arm/altp2m: Introducing altp2m to ARM Andrew Cooper
@ 2016-07-04 13:05   ` Sergej Proskurin
  0 siblings, 0 replies; 126+ messages in thread
From: Sergej Proskurin @ 2016-07-04 13:05 UTC (permalink / raw)
  To: Andrew Cooper, xen-devel



On 07/04/2016 02:52 PM, Andrew Cooper wrote:
> On 04/07/16 12:45, Sergej Proskurin wrote:
>> Hello together,
>>
>> Since this is my first contribution to the Xen development mailing list, I
>> would like to shortly introduce myself. My name is Sergej Proskurin. I am a PhD
>> Student at the Technical University of Munich. My research areas focus on
>> Virtual Machine Introspection, Hypervisor/OS Security, and Reverse Engineering.
>>
>> The following patch series can be found on Github[0] and is part of my
>> contribution to this year's Google Summer of Code (GSoC)[1]. My project is
>> managed by the organization The Honeynet Project. As part of GSoC, I am being
>> supervised by the Xen developer Tamas K. Lengyel <tamas@tklengyel.com>, George
>> D. Webster, and Steven Maresca.
>>
>> In this patch series, we provide an implementation of the altp2m subsystem for
>> ARM. Our implementation is based on the altp2m subsystem for x86, providing
>> additional --alternate-- views on the guest's physical memory by means of the
>> ARM 2nd stage translation mechanism. The patches introduce new HVMOPs and
>> extend the p2m subsystem. We extend libxl to support altp2m on ARM and modify
>> xen-access to test the suggested functionality.
>>
>> [0] https://github.com/sergej-proskurin/xen (Branch arm-altp2m-patch)
>> [1] https://summerofcode.withgoogle.com/projects/#4970052843470848
>>
>> Sergej Proskurin (18):
>>   arm/altp2m: Add cmd-line support for altp2m on ARM.
>>   arm/altp2m: Add first altp2m HVMOP stubs.
>>   arm/altp2m: Add HVMOP_altp2m_get_domain_state.
>>   arm/altp2m: Add altp2m init/teardown routines.
>>   arm/altp2m: Add HVMOP_altp2m_set_domain_state.
>>   arm/altp2m: Add a(p2m) table flushing routines.
>>   arm/altp2m: Add HVMOP_altp2m_create_p2m.
>>   arm/altp2m: Add HVMOP_altp2m_destroy_p2m.
>>   arm/altp2m: Add HVMOP_altp2m_switch_p2m.
>>   arm/altp2m: Renamed and extended p2m_alloc_table.
>>   arm/altp2m: Make flush_tlb_domain ready for altp2m.
>>   arm/altp2m: Cosmetic fixes - function prototypes.
>>   arm/altp2m: Make get_page_from_gva ready for altp2m.
>>   arm/altp2m: Add HVMOP_altp2m_set_mem_access.
>>   arm/altp2m: Add altp2m paging mechanism.
>>   arm/altp2m: Extended libxl to activate altp2m on ARM.
>>   arm/altp2m: Adjust debug information to altp2m.
>>   arm/altp2m: Extend xen-access for altp2m on ARM.
>>
>>  tools/libxl/libxl_create.c          |   1 +
>>  tools/libxl/libxl_dom.c             |  14 +
>>  tools/libxl/libxl_types.idl         |   1 +
>>  tools/libxl/xl_cmdimpl.c            |   5 +
>>  tools/tests/xen-access/xen-access.c |  11 +-
>>  xen/arch/arm/Makefile               |   1 +
>>  xen/arch/arm/altp2m.c               |  68 +++
>>  xen/arch/arm/domain.c               |   2 +-
>>  xen/arch/arm/guestcopy.c            |   6 +-
>>  xen/arch/arm/hvm.c                  | 145 ++++++
>>  xen/arch/arm/p2m.c                  | 930 ++++++++++++++++++++++++++++++++----
>>  xen/arch/arm/traps.c                | 104 +++-
>>  xen/include/asm-arm/altp2m.h        |  12 +-
>>  xen/include/asm-arm/domain.h        |  18 +
>>  xen/include/asm-arm/hvm/hvm.h       |  59 +++
>>  xen/include/asm-arm/mm.h            |   2 +-
>>  xen/include/asm-arm/p2m.h           |  72 ++-
>>  17 files changed, 1330 insertions(+), 121 deletions(-)
>>  create mode 100644 xen/arch/arm/altp2m.c
>>  create mode 100644 xen/include/asm-arm/hvm/hvm.h
>>
> 
> I have taken a quick look over the series, and things seem to be in order.
> 
> However, I wonder whether it would be better to have do_altp2m_op()
> common between x86 and ARM, to avoid the risk of divergence in the API.
> 
> ~Andrew
> 

It definitely makes sense to combine, or rather pull out, the
arch-independent parts into a common place. We are still discussing what
exactly needs to be moved. Thus, we are open to suggestions, thank you.

Cheers,
Sergej


* Re: [PATCH 01/18] arm/altp2m: Add cmd-line support for altp2m on ARM.
  2016-07-04 11:45 ` [PATCH 01/18] arm/altp2m: Add cmd-line support for altp2m on ARM Sergej Proskurin
  2016-07-04 12:15   ` Andrew Cooper
@ 2016-07-04 13:25   ` Julien Grall
  2016-07-04 13:43     ` Sergej Proskurin
  2016-07-04 17:42   ` Julien Grall
  2 siblings, 1 reply; 126+ messages in thread
From: Julien Grall @ 2016-07-04 13:25 UTC (permalink / raw)
  To: Sergej Proskurin, xen-devel; +Cc: Stefano Stabellini

Hello Sergej,

On 04/07/16 12:45, Sergej Proskurin wrote:
> The Xen altp2m subsystem is currently supported only on x86-64 based
> architectures. By utilizing ARM's virtualization extensions, we intend
> to implement altp2m support for ARM architectures and thus further
> extend current Virtual Machine Introspection (VMI) capabilities on ARM.
>
> With this commit, Xen is now able to activate altp2m support on ARM by
> means of the command-line argument 'altp2m' (bool).
>
> Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
> ---
> Cc: Stefano Stabellini <sstabellini@kernel.org>
> Cc: Julien Grall <julien.grall@arm.com>
> ---
>   xen/arch/arm/hvm.c            | 22 ++++++++++++++++++++
>   xen/include/asm-arm/hvm/hvm.h | 47 +++++++++++++++++++++++++++++++++++++++++++
>   2 files changed, 69 insertions(+)
>   create mode 100644 xen/include/asm-arm/hvm/hvm.h
>
> diff --git a/xen/arch/arm/hvm.c b/xen/arch/arm/hvm.c
> index d999bde..3615036 100644
> --- a/xen/arch/arm/hvm.c
> +++ b/xen/arch/arm/hvm.c
> @@ -32,6 +32,28 @@
>
>   #include <asm/hypercall.h>
>
> +#include <asm/hvm/hvm.h>
> +
> +/* Xen command-line option enabling altp2m */
> +static bool_t __initdata opt_altp2m_enabled = 0;
> +boolean_param("altp2m", opt_altp2m_enabled);
> +
> +struct hvm_function_table hvm_funcs __read_mostly = {
> +    .name = "ARM_HVM",
> +};

I don't see any reason to introduce hvm_function_table on ARM. This 
structure is used to know whether the hardware will support altp2m. 
However, based on your implementation, this feature will not depend on
the hardware for ARM.

> +
> +/* Initcall enabling hvm functionality. */
> +static int __init hvm_enable(void)
> +{
> +    if ( opt_altp2m_enabled )
> +        hvm_funcs.altp2m_supported = 1;
> +    else
> +        hvm_funcs.altp2m_supported = 0;
> +
> +    return 0;
> +}
> +presmp_initcall(hvm_enable);
> +
>   long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
>   {
>       long rc = 0;
> diff --git a/xen/include/asm-arm/hvm/hvm.h b/xen/include/asm-arm/hvm/hvm.h
> new file mode 100644
> index 0000000..96c455c
> --- /dev/null
> +++ b/xen/include/asm-arm/hvm/hvm.h
> @@ -0,0 +1,47 @@
> +/*
> + * include/asm-arm/hvm/hvm.h
> + *
> + * Copyright (c) 2016, Sergej Proskurin <proskurin@sec.in.tum.de>
> + *
> + * This program is free software; you can redistribute it and/or modify it
> + * under the terms and conditions of the GNU General Public License, version 2,
> + * as published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope it will be useful, but WITHOUT ANY
> + * WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
> + * FOR A PARTICULAR PURPOSE.  See the GNU General Public License for more
> + * details.
> + *
> + * You should have received a copy of the GNU General Public License along with
> + * this program; If not, see <http://www.gnu.org/licenses/>.
> + */
> +
> +#ifndef __ASM_ARM_HVM_HVM_H__
> +#define __ASM_ARM_HVM_HVM_H__
> +
> +struct hvm_function_table {
> +    char *name;
> +
> +    /* Necessary hardware support for alternate p2m's. */
> +    bool_t altp2m_supported;
> +};
> +
> +extern struct hvm_function_table hvm_funcs;
> +
> +/* Returns true if hardware supports alternate p2m's */

This comment is not true. The feature does not depend on the hardware 
for ARM.

> +static inline bool_t hvm_altp2m_supported(void)
> +{
> +    return hvm_funcs.altp2m_supported;

You could directly use opt_altp2m_enabled here.

> +}
> +
> +#endif /* __ASM_ARM_HVM_HVM_H__ */
> +
> +/*
> + * Local variables:
> + * mode: C
> + * c-file-style: "BSD"
> + * c-basic-offset: 4
> + * tab-width: 4
> + * indent-tabs-mode: nil
> + * End:
> + */
>

Regards,

-- 
Julien Grall
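
The simplification suggested above could look like the following hedged
sketch: hvm_altp2m_supported() answers from the command-line flag alone,
making hvm_function_table unnecessary on ARM. Note the flag could then no
longer be __initdata, since it would be read after boot. boolean_param()
is mocked here by a plain setter; this is an illustration, not the actual
patch:

```c
#include <assert.h>
#include <stdbool.h>

/* Set from the 'altp2m' command-line option (boolean_param() in Xen). */
static bool opt_altp2m_enabled;

static void parse_altp2m_param(bool value)
{
    opt_altp2m_enabled = value;
}

/* No hardware dependency on ARM: the boot option alone decides. */
static inline bool hvm_altp2m_supported(void)
{
    return opt_altp2m_enabled;
}
```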


* Re: [PATCH 02/18] arm/altp2m: Add first altp2m HVMOP stubs.
  2016-07-04 11:45 ` [PATCH 02/18] arm/altp2m: Add first altp2m HVMOP stubs Sergej Proskurin
@ 2016-07-04 13:36   ` Julien Grall
  2016-07-04 13:51     ` Sergej Proskurin
  2016-07-05 10:19   ` Julien Grall
  1 sibling, 1 reply; 126+ messages in thread
From: Julien Grall @ 2016-07-04 13:36 UTC (permalink / raw)
  To: Sergej Proskurin, xen-devel; +Cc: Stefano Stabellini

Hello Sergej,

On 04/07/16 12:45, Sergej Proskurin wrote:
> This commit moves the altp2m-related code from x86 to ARM.

Looking at the code in the follow-up patches, I have the impression that 
the code is very similar (if not identical) to the x86 code. If so, we 
should move the HVMOP for altp2m into common code rather than duplicating 
it.

> Functions
> that are no yet supported notify the caller or print a BUG message
> stating their absence.
>
> Also, the struct arch_domain is extended with the altp2m_active
> attribute, represeting the current altp2m activity configuration of the

s/represeting/representing/

> domain.
>
> Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
> ---
> Cc: Stefano Stabellini <sstabellini@kernel.org>
> Cc: Julien Grall <julien.grall@arm.com>
> ---
>   xen/arch/arm/hvm.c           | 82 ++++++++++++++++++++++++++++++++++++++++++++
>   xen/include/asm-arm/altp2m.h | 22 ++++++++++--
>   xen/include/asm-arm/domain.h |  3 ++
>   3 files changed, 105 insertions(+), 2 deletions(-)

[..]

> diff --git a/xen/include/asm-arm/altp2m.h b/xen/include/asm-arm/altp2m.h
> index a87747a..16ae9d6 100644
> --- a/xen/include/asm-arm/altp2m.h
> +++ b/xen/include/asm-arm/altp2m.h
> @@ -2,6 +2,7 @@
>    * Alternate p2m
>    *
>    * Copyright (c) 2014, Intel Corporation.
> + * Copyright (c) 2016, Sergej Proskurin <proskurin@sec.in.tum.de>.
>    *
>    * This program is free software; you can redistribute it and/or modify it
>    * under the terms and conditions of the GNU General Public License,
> @@ -24,8 +25,7 @@
>   /* Alternate p2m on/off per domain */
>   static inline bool_t altp2m_active(const struct domain *d)
>   {
> -    /* Not implemented on ARM. */
> -    return 0;
> +    return d->arch.altp2m_active;
>   }
>
>   /* Alternate p2m VCPU */
> @@ -36,4 +36,22 @@ static inline uint16_t altp2m_vcpu_idx(const struct vcpu *v)
>       return 0;
>   }
>
> +static inline void altp2m_vcpu_initialise(struct vcpu *v)
> +{
> +    /* Not implemented on ARM, should not be reached. */
> +    BUG();
> +}
> +
> +static inline void altp2m_vcpu_destroy(struct vcpu *v)
> +{
> +    /* Not implemented on ARM, should not be reached. */
> +    BUG();
> +}
> +
> +static inline void altp2m_vcpu_reset(struct vcpu *v)
> +{
> +    /* Not implemented on ARM, should not be reached. */
> +    BUG();
> +}

Those 3 helpers are not used anywhere in the code so far, and you 
replace them with another implementation in patch #5.

So I would prefer if you introduced the helpers and implementation only 
when they are actually used.

>   #endif /* __ASM_ARM_ALTP2M_H */
> diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
> index 979f7de..2039f16 100644
> --- a/xen/include/asm-arm/domain.h
> +++ b/xen/include/asm-arm/domain.h
> @@ -127,6 +127,9 @@ struct arch_domain
>       paddr_t efi_acpi_gpa;
>       paddr_t efi_acpi_len;
>   #endif
> +
> +    /* altp2m: allow multiple copies of host p2m */
> +    bool_t altp2m_active;
>   }  __cacheline_aligned;
>
>   struct arch_vcpu
>

Regards,

-- 
Julien Grall


* Re: [PATCH 18/18] arm/altp2m: Extend xen-access for altp2m on ARM.
  2016-07-04 11:45 ` [PATCH 18/18] arm/altp2m: Extend xen-access for altp2m on ARM Sergej Proskurin
@ 2016-07-04 13:38   ` Razvan Cojocaru
  2016-07-06  8:44     ` Sergej Proskurin
  0 siblings, 1 reply; 126+ messages in thread
From: Razvan Cojocaru @ 2016-07-04 13:38 UTC (permalink / raw)
  To: Sergej Proskurin, xen-devel; +Cc: Wei Liu, Tamas K Lengyel, Ian Jackson

On 07/04/16 14:45, Sergej Proskurin wrote:
> Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
> ---
> Cc: Razvan Cojocaru <rcojocaru@bitdefender.com>
> Cc: Tamas K Lengyel <tamas@tklengyel.com>
> Cc: Ian Jackson <ian.jackson@eu.citrix.com>
> Cc: Wei Liu <wei.liu2@citrix.com>
> ---
>  tools/tests/xen-access/xen-access.c | 11 +++++++++--
>  1 file changed, 9 insertions(+), 2 deletions(-)

Fair enough, looks like a trivial change.

Acked-by: Razvan Cojocaru <rcojocaru@bitdefender.com>


Thanks,
Razvan


* Re: [PATCH 01/18] arm/altp2m: Add cmd-line support for altp2m on ARM.
  2016-07-04 13:25   ` Julien Grall
@ 2016-07-04 13:43     ` Sergej Proskurin
  0 siblings, 0 replies; 126+ messages in thread
From: Sergej Proskurin @ 2016-07-04 13:43 UTC (permalink / raw)
  To: Julien Grall, xen-devel; +Cc: Stefano Stabellini

Hello Julien,

On 07/04/2016 03:25 PM, Julien Grall wrote:
> Hello Sergej,
> 
> On 04/07/16 12:45, Sergej Proskurin wrote:
>> The Xen altp2m subsystem is currently supported only on x86-64 based
>> architectures. By utilizing ARM's virtualization extensions, we intend
>> to implement altp2m support for ARM architectures and thus further
>> extend current Virtual Machine Introspection (VMI) capabilities on ARM.
>>
>> With this commit, Xen is now able to activate altp2m support on ARM by
>> means of the command-line argument 'altp2m' (bool).
>>
>> Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
>> ---
>> Cc: Stefano Stabellini <sstabellini@kernel.org>
>> Cc: Julien Grall <julien.grall@arm.com>
>> ---
>>   xen/arch/arm/hvm.c            | 22 ++++++++++++++++++++
>>   xen/include/asm-arm/hvm/hvm.h | 47
>> +++++++++++++++++++++++++++++++++++++++++++
>>   2 files changed, 69 insertions(+)
>>   create mode 100644 xen/include/asm-arm/hvm/hvm.h
>>
>> diff --git a/xen/arch/arm/hvm.c b/xen/arch/arm/hvm.c
>> index d999bde..3615036 100644
>> --- a/xen/arch/arm/hvm.c
>> +++ b/xen/arch/arm/hvm.c
>> @@ -32,6 +32,28 @@
>>
>>   #include <asm/hypercall.h>
>>
>> +#include <asm/hvm/hvm.h>
>> +
>> +/* Xen command-line option enabling altp2m */
>> +static bool_t __initdata opt_altp2m_enabled = 0;
>> +boolean_param("altp2m", opt_altp2m_enabled);
>> +
>> +struct hvm_function_table hvm_funcs __read_mostly = {
>> +    .name = "ARM_HVM",
>> +};
> 
> I don't see any reason to introduce hvm_function_table on ARM. This
> structure is used to know whether the hardware will support altp2m.
> However, based on your implementation, this feature will not
> depend on the hardware for ARM.
> 

This is true: hvm_function_table is not of crucial importance. During
the implementation, we decided to pull the arch-independent parts out of
the x86 and ARM implementations (this still needs to be done) and hence
reuse as much code as possible. However, this struct can be left out.

>> +
>> +/* Initcall enabling hvm functionality. */
>> +static int __init hvm_enable(void)
>> +{
>> +    if ( opt_altp2m_enabled )
>> +        hvm_funcs.altp2m_supported = 1;
>> +    else
>> +        hvm_funcs.altp2m_supported = 0;
>> +
>> +    return 0;
>> +}
>> +presmp_initcall(hvm_enable);
>> +
>>   long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
>>   {
>>       long rc = 0;
>> diff --git a/xen/include/asm-arm/hvm/hvm.h
>> b/xen/include/asm-arm/hvm/hvm.h
>> new file mode 100644
>> index 0000000..96c455c
>> --- /dev/null
>> +++ b/xen/include/asm-arm/hvm/hvm.h
>> @@ -0,0 +1,47 @@
>> +/*
>> + * include/asm-arm/hvm/hvm.h
>> + *
>> + * Copyright (c) 2016, Sergej Proskurin <proskurin@sec.in.tum.de>
>> + *
>> + * This program is free software; you can redistribute it and/or
>> modify it
>> + * under the terms and conditions of the GNU General Public License,
>> version 2,
>> + * as published by the Free Software Foundation.
>> + *
>> + * This program is distributed in the hope it will be useful, but
>> WITHOUT ANY
>> + * WARRANTY; without even the implied warranty of MERCHANTABILITY or
>> FITNESS
>> + * FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
>> more
>> + * details.
>> + *
>> + * You should have received a copy of the GNU General Public License
>> along with
>> + * this program; If not, see <http://www.gnu.org/licenses/>.
>> + */
>> +
>> +#ifndef __ASM_ARM_HVM_HVM_H__
>> +#define __ASM_ARM_HVM_HVM_H__
>> +
>> +struct hvm_function_table {
>> +    char *name;
>> +
>> +    /* Necessary hardware support for alternate p2m's. */
>> +    bool_t altp2m_supported;
>> +};
>> +
>> +extern struct hvm_function_table hvm_funcs;
>> +
>> +/* Returns true if hardware supports alternate p2m's */
> 
> This comment is not true. The feature does not depend on the hardware
> for ARM.
> 

True. I will change that.

>> +static inline bool_t hvm_altp2m_supported(void)
>> +{
>> +    return hvm_funcs.altp2m_supported;
> 
> You could directly use opt_altp2m_enabled here.
> 

Ok, thank you.

>> +}
>> +
>> +#endif /* __ASM_ARM_HVM_HVM_H__ */
>> +
>> +/*
>> + * Local variables:
>> + * mode: C
>> + * c-file-style: "BSD"
>> + * c-basic-offset: 4
>> + * tab-width: 4
>> + * indent-tabs-mode: nil
>> + * End:
>> + */
>>
> 
> Regards,
> 

Thank you.

Cheers, Sergej


* Re: [PATCH 02/18] arm/altp2m: Add first altp2m HVMOP stubs.
  2016-07-04 13:36   ` Julien Grall
@ 2016-07-04 13:51     ` Sergej Proskurin
  0 siblings, 0 replies; 126+ messages in thread
From: Sergej Proskurin @ 2016-07-04 13:51 UTC (permalink / raw)
  To: Julien Grall, xen-devel; +Cc: Stefano Stabellini

Hello Julien,

On 07/04/2016 03:36 PM, Julien Grall wrote:
> Hello Sergej,
> 
> On 04/07/16 12:45, Sergej Proskurin wrote:
>> This commit moves the altp2m-related code from x86 to ARM.
> 
> Looking at the code in the follow-up patches, I have the impression that
> the code is very similar (if not exactly) to the x86 code. If so, we
> should move the HVMOP for altp2m in common code rather than duplicating
> the code.
> 

You are correct: a big part of the code is very similar to the x86
implementation. We have already started thinking about which parts need
to be pulled out into a common place. Thank you.

>> Functions
>> that are no yet supported notify the caller or print a BUG message
>> stating their absence.
>>
>> Also, the struct arch_domain is extended with the altp2m_active
>> attribute, represeting the current altp2m activity configuration of the
> 
> s/represeting/representing/
> 

Ok.

>> domain.
>>
>> Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
>> ---
>> Cc: Stefano Stabellini <sstabellini@kernel.org>
>> Cc: Julien Grall <julien.grall@arm.com>
>> ---
>>   xen/arch/arm/hvm.c           | 82
>> ++++++++++++++++++++++++++++++++++++++++++++
>>   xen/include/asm-arm/altp2m.h | 22 ++++++++++--
>>   xen/include/asm-arm/domain.h |  3 ++
>>   3 files changed, 105 insertions(+), 2 deletions(-)
> 
> [..]
> 
>> diff --git a/xen/include/asm-arm/altp2m.h b/xen/include/asm-arm/altp2m.h
>> index a87747a..16ae9d6 100644
>> --- a/xen/include/asm-arm/altp2m.h
>> +++ b/xen/include/asm-arm/altp2m.h
>> @@ -2,6 +2,7 @@
>>    * Alternate p2m
>>    *
>>    * Copyright (c) 2014, Intel Corporation.
>> + * Copyright (c) 2016, Sergej Proskurin <proskurin@sec.in.tum.de>.
>>    *
>>    * This program is free software; you can redistribute it and/or
>> modify it
>>    * under the terms and conditions of the GNU General Public License,
>> @@ -24,8 +25,7 @@
>>   /* Alternate p2m on/off per domain */
>>   static inline bool_t altp2m_active(const struct domain *d)
>>   {
>> -    /* Not implemented on ARM. */
>> -    return 0;
>> +    return d->arch.altp2m_active;
>>   }
>>
>>   /* Alternate p2m VCPU */
>> @@ -36,4 +36,22 @@ static inline uint16_t altp2m_vcpu_idx(const struct
>> vcpu *v)
>>       return 0;
>>   }
>>
>> +static inline void altp2m_vcpu_initialise(struct vcpu *v)
>> +{
>> +    /* Not implemented on ARM, should not be reached. */
>> +    BUG();
>> +}
>> +
>> +static inline void altp2m_vcpu_destroy(struct vcpu *v)
>> +{
>> +    /* Not implemented on ARM, should not be reached. */
>> +    BUG();
>> +}
>> +
>> +static inline void altp2m_vcpu_reset(struct vcpu *v)
>> +{
>> +    /* Not implemented on ARM, should not be reached. */
>> +    BUG();
>> +}
> 
> Those 3 helpers are not used by anyone in the code so far and you
> replace them by another implementation in patch #5.
> 
> So I would prefer if you introduce the helpers and implementation only
> when they will be used.
> 

Will do. Thank you.

>>   #endif /* __ASM_ARM_ALTP2M_H */
>> diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
>> index 979f7de..2039f16 100644
>> --- a/xen/include/asm-arm/domain.h
>> +++ b/xen/include/asm-arm/domain.h
>> @@ -127,6 +127,9 @@ struct arch_domain
>>       paddr_t efi_acpi_gpa;
>>       paddr_t efi_acpi_len;
>>   #endif
>> +
>> +    /* altp2m: allow multiple copies of host p2m */
>> +    bool_t altp2m_active;
>>   }  __cacheline_aligned;
>>
>>   struct arch_vcpu
>>
> 
> Regards,
> 

Thank you.

Best regards,
Sergej

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 126+ messages in thread

* Re: [PATCH 04/18] arm/altp2m: Add altp2m init/teardown routines.
  2016-07-04 11:45 ` [PATCH 04/18] arm/altp2m: Add altp2m init/teardown routines Sergej Proskurin
@ 2016-07-04 15:17   ` Julien Grall
  2016-07-04 16:40     ` Sergej Proskurin
  2016-07-04 16:15   ` Julien Grall
  1 sibling, 1 reply; 126+ messages in thread
From: Julien Grall @ 2016-07-04 15:17 UTC (permalink / raw)
  To: Sergej Proskurin, xen-devel; +Cc: Stefano Stabellini

Hello Sergej,

On 04/07/16 12:45, Sergej Proskurin wrote:
> The p2m intialization now invokes intialization routines responsible for

s/intialization/initialization/

> the allocation and intitialization of altp2m structures. The same

Ditto

> applies to teardown routines. The functionality has been adopted from
> the x86 altp2m implementation.

This patch would benefit to be split in 2:
    1) Moving p2m init/teardown in a separate function
    2) Introducing altp2m init/teardown

It will ease the review.

> Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
> ---
> Cc: Stefano Stabellini <sstabellini@kernel.org>
> Cc: Julien Grall <julien.grall@arm.com>
> ---
>   xen/arch/arm/p2m.c            | 166 ++++++++++++++++++++++++++++++++++++++++--
>   xen/include/asm-arm/domain.h  |   6 ++
>   xen/include/asm-arm/hvm/hvm.h |  12 +++
>   xen/include/asm-arm/p2m.h     |  20 +++++
>   4 files changed, 198 insertions(+), 6 deletions(-)
>
> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> index aa4e774..e72ca7a 100644
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -1400,19 +1400,103 @@ static void p2m_free_vmid(struct domain *d)
>       spin_unlock(&vmid_alloc_lock);
>   }
>
> -void p2m_teardown(struct domain *d)
> +static int p2m_initialise(struct domain *d, struct p2m_domain *p2m)

AFAICT, this function is only used by p2m_init_one, so I would prefer
if you folded the code into the latter.

>   {
> -    struct p2m_domain *p2m = &d->arch.p2m;
> +    int ret = 0;
> +
> +    spin_lock_init(&p2m->lock);
> +    INIT_PAGE_LIST_HEAD(&p2m->pages);
> +
> +    spin_lock(&p2m->lock);
> +
> +    p2m->domain = d;
> +    p2m->access_required = false;
> +    p2m->mem_access_enabled = false;
> +    p2m->default_access = p2m_access_rwx;
> +    p2m->p2m_class = p2m_host;
> +    p2m->root = NULL;
> +
> +    /* Adopt VMID of the associated domain */
> +    p2m->vmid = d->arch.p2m.vmid;

It looks like to me that re-using the same VMID will require more TLB 
flush (such as when a VCPU is migrated to another physical CPU). So 
could you explain why you decided to re-use the same VMID?

> +    p2m->vttbr.vttbr = 0;
> +    p2m->vttbr.vttbr_vmid = p2m->vmid;
> +
> +    p2m->max_mapped_gfn = 0;
> +    p2m->lowest_mapped_gfn = ULONG_MAX;
> +    radix_tree_init(&p2m->mem_access_settings);
> +
> +    spin_unlock(&p2m->lock);
> +
> +    return ret;
> +}
> +
> +static void p2m_free_one(struct p2m_domain *p2m)
> +{
> +    mfn_t mfn;
> +    unsigned int i;
>       struct page_info *pg;
>
>       spin_lock(&p2m->lock);
>
>       while ( (pg = page_list_remove_head(&p2m->pages)) )
> -        free_domheap_page(pg);
> +        if ( pg != p2m->root )
> +            free_domheap_page(pg);
> +
> +    for ( i = 0; i < P2M_ROOT_PAGES; i++ )
> +    {
> +        mfn = _mfn(page_to_mfn(p2m->root) + i);
> +        clear_domain_page(mfn);
> +    }
> +    free_domheap_pages(p2m->root, P2M_ROOT_ORDER);
> +    p2m->root = NULL;
> +
> +    radix_tree_destroy(&p2m->mem_access_settings, NULL);
> +
> +    spin_unlock(&p2m->lock);
> +
> +    xfree(p2m);
> +}
> +
> +static struct p2m_domain *p2m_init_one(struct domain *d)
> +{
> +    struct p2m_domain *p2m = xzalloc(struct p2m_domain);
> +
> +    if ( !p2m )
> +        return NULL;
> +
> +    if ( p2m_initialise(d, p2m) )
> +        goto free_p2m;
> +
> +    return p2m;
> +
> +free_p2m:
> +    xfree(p2m);
> +    return NULL;
> +}
> +
> +static void p2m_teardown_hostp2m(struct domain *d)

Why does p2m_teardown_hostp2m not use p2m_free_one to tear down the
p2m? Assuming xfree(p2m) is moved out of the function.

> +{
> +    struct p2m_domain *p2m = p2m_get_hostp2m(d);
> +    struct page_info *pg = NULL;
> +    mfn_t mfn;
> +    unsigned int i;
> +
> +    spin_lock(&p2m->lock);
>
> -    if ( p2m->root )

Why did you remove this check? The p2m->root could be NULL if an
error occurred before creating the root page table.

> -        free_domheap_pages(p2m->root, P2M_ROOT_ORDER);
> +    while ( (pg = page_list_remove_head(&p2m->pages)) )
> +        if ( pg != p2m->root )

Why this check? p2m->root will never be part of p2m->pages.

> +        {
> +            mfn = _mfn(page_to_mfn(pg));
> +            clear_domain_page(mfn);
> +            free_domheap_page(pg);
> +        }
>
> +    for ( i = 0; i < P2M_ROOT_PAGES; i++ )
> +    {
> +        mfn = _mfn(page_to_mfn(p2m->root) + i);
> +        clear_domain_page(mfn);
> +    }
> +    free_domheap_pages(p2m->root, P2M_ROOT_ORDER);
>       p2m->root = NULL;
>
>       p2m_free_vmid(d);
> @@ -1422,7 +1506,7 @@ void p2m_teardown(struct domain *d)
>       spin_unlock(&p2m->lock);
>   }
>
> -int p2m_init(struct domain *d)
> +static int p2m_init_hostp2m(struct domain *d)

Why does p2m_init_hostp2m not use p2m_init_one to initialize the p2m?

>   {
>       struct p2m_domain *p2m = &d->arch.p2m;
>       int rc = 0;
> @@ -1437,6 +1521,8 @@ int p2m_init(struct domain *d)
>       if ( rc != 0 )
>           goto err;
>
> +    p2m->vttbr.vttbr_vmid = p2m->vmid;
> +
>       d->arch.vttbr = 0;
>
>       p2m->root = NULL;
> @@ -1454,6 +1540,74 @@ err:
>       return rc;
>   }
>
> +static void p2m_teardown_altp2m(struct domain *d)
> +{
> +    unsigned int i;
> +    struct p2m_domain *p2m;
> +
> +    for ( i = 0; i < MAX_ALTP2M; i++ )
> +    {
> +        if ( !d->arch.altp2m_p2m[i] )
> +            continue;
> +
> +        p2m = d->arch.altp2m_p2m[i];
> +        p2m_free_one(p2m);
> +        d->arch.altp2m_vttbr[i] = INVALID_MFN;
> +        d->arch.altp2m_p2m[i] = NULL;

These 2 lines are not necessary because the domain is being destroyed and
the associated memory will be freed very soon.

> +    }
> +
> +    d->arch.altp2m_active = false;
> +}
> +
> +static int p2m_init_altp2m(struct domain *d)
> +{
> +    unsigned int i;
> +    struct p2m_domain *p2m;
> +
> +    spin_lock_init(&d->arch.altp2m_lock);
> +    for ( i = 0; i < MAX_ALTP2M; i++ )
> +    {
> +        d->arch.altp2m_vttbr[i] = INVALID_MFN;

The VTTBR will be stored in altp2m_p2m[i].vttbr. So why do you need to 
store it in a different place as well?

Also, please don't mix values that have different meanings. INVALID_MFN 
indicates that an MFN is invalid, not a VTTBR.
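
For instance (hypothetical names, not part of the patch), a dedicated
sentinel keeps the two concepts apart:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical: a sentinel specific to VTTBR values, instead of reusing
 * INVALID_MFN (which describes machine frame numbers, not VTTBRs). */
#define INVALID_VTTBR   (~UINT64_C(0))

static int vttbr_is_valid(uint64_t vttbr)
{
    return vttbr != INVALID_VTTBR;
}
```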

> +        d->arch.altp2m_p2m[i] = p2m = p2m_init_one(d);
> +        if ( p2m == NULL )
> +        {
> +            p2m_teardown_altp2m(d);

This call is not necessary. p2m_teardown_altp2m will be called by 
p2m_teardown as part of arch_domain_destroy.

> +            return -ENOMEM;
> +        }
> +        p2m->p2m_class = p2m_alternate;
> +        p2m->access_required = 1;
> +        _atomic_set(&p2m->active_vcpus, 0);
> +    }
> +
> +    return 0;
> +}
> +
> +void p2m_teardown(struct domain *d)
> +{
> +    /*
> +     * We must teardown altp2m unconditionally because
> +     * we initialise it unconditionally.

Why do we need to initialize altp2m unconditionally? When altp2m is not 
used we will allocate memory that is never used.

I would prefer to see the allocation of the memory only if the domain 
will make use of altp2m.

> +     */
> +    p2m_teardown_altp2m(d);
> +
> +    p2m_teardown_hostp2m(d);
> +}
> +
> +int p2m_init(struct domain *d)
> +{
> +    int rc = 0;
> +
> +    rc = p2m_init_hostp2m(d);
> +    if ( rc )
> +        return rc;
> +
> +    rc = p2m_init_altp2m(d);
> +    if ( rc )
> +        p2m_teardown_hostp2m(d);

This call is not necessary.

> +
> +    return rc;
> +}
> +
>   int relinquish_p2m_mapping(struct domain *d)
>   {
>       struct p2m_domain *p2m = &d->arch.p2m;
> diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
> index 2039f16..6b9770f 100644
> --- a/xen/include/asm-arm/domain.h
> +++ b/xen/include/asm-arm/domain.h
> @@ -29,6 +29,9 @@ enum domain_type {
>   #define is_64bit_domain(d) (0)
>   #endif
>
> +#define MAX_ALTP2M      10 /* arbitrary */
> +#define INVALID_ALTP2M  0xffff

IMHO, this should either be part of p2m.h or altp2m.h

> +
>   extern int dom0_11_mapping;
>   #define is_domain_direct_mapped(d) ((d) == hardware_domain && dom0_11_mapping)
>
> @@ -130,6 +133,9 @@ struct arch_domain
>
>       /* altp2m: allow multiple copies of host p2m */
>       bool_t altp2m_active;
> +    struct p2m_domain *altp2m_p2m[MAX_ALTP2M];
> +    spinlock_t altp2m_lock;

Please document what the lock is protecting.

> +    uint64_t altp2m_vttbr[MAX_ALTP2M];
>   }  __cacheline_aligned;
>
>   struct arch_vcpu
> diff --git a/xen/include/asm-arm/hvm/hvm.h b/xen/include/asm-arm/hvm/hvm.h
> index 96c455c..28d5298 100644
> --- a/xen/include/asm-arm/hvm/hvm.h
> +++ b/xen/include/asm-arm/hvm/hvm.h
> @@ -19,6 +19,18 @@
>   #ifndef __ASM_ARM_HVM_HVM_H__
>   #define __ASM_ARM_HVM_HVM_H__
>
> +struct vttbr_data {

This structure should not be part of hvm.h but processor.h. Also, I 
would rename it to simply vttbr.

> +    union {
> +        struct {
> +            u64 vttbr_baddr :40, /* variable res0: from 0-(x-1) bit */

Please drop vttbr_. Also, this field is 48 bits for ARMv8 (see ARM 
D7.2.102 in DDI 0487A.j).
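
As a self-contained illustration (field widths assumed from the ARMv8
reference above: BADDR in bits [47:0], an 8-bit VMID in bits [55:48];
names here are hypothetical, not the patch's), masks and shifts also
sidestep the bit-field layout question entirely:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical mask-based sketch of the VTTBR_EL2 layout:
 * BADDR in bits [47:0] (48 bits on ARMv8), VMID in bits [55:48]. */
#define VTTBR_BADDR_MASK  ((UINT64_C(1) << 48) - 1)
#define VTTBR_VMID_SHIFT  48
#define VTTBR_VMID_MASK   UINT64_C(0xff)

static uint64_t vttbr_make(uint64_t baddr, uint8_t vmid)
{
    return (baddr & VTTBR_BADDR_MASK) |
           ((uint64_t)vmid << VTTBR_VMID_SHIFT);
}

static uint8_t vttbr_vmid(uint64_t vttbr)
{
    return (vttbr >> VTTBR_VMID_SHIFT) & VTTBR_VMID_MASK;
}
```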

> +                res1        :8,
> +                vttbr_vmid  :8,

Please drop vttbr_.

> +                res2        :8;
> +        };
> +        u64 vttbr;
> +    };
> +};
> +
>   struct hvm_function_table {
>       char *name;
>
> diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
> index 0d1e61e..a78d547 100644
> --- a/xen/include/asm-arm/p2m.h
> +++ b/xen/include/asm-arm/p2m.h
> @@ -8,6 +8,9 @@
>   #include <xen/p2m-common.h>
>   #include <public/memory.h>
>
> +#include <asm/atomic.h>
> +#include <asm/hvm/hvm.h>

ARM has no concept of HVM nor PV. So I would prefer to see a very 
limited usage of hvm.h.

> +
>   #define paddr_bits PADDR_BITS
>
>   /* Holds the bit size of IPAs in p2m tables.  */
> @@ -17,6 +20,11 @@ struct domain;
>
>   extern void memory_type_changed(struct domain *);
>
> +typedef enum {
> +    p2m_host,
> +    p2m_alternate,
> +} p2m_class_t;
> +
>   /* Per-p2m-table state */
>   struct p2m_domain {
>       /* Lock that protects updates to the p2m */
> @@ -66,6 +74,18 @@ struct p2m_domain {
>       /* Radix tree to store the p2m_access_t settings as the pte's don't have
>        * enough available bits to store this information. */
>       struct radix_tree_root mem_access_settings;
> +
> +    /* Alternate p2m: count of vcpu's currently using this p2m. */
> +    atomic_t active_vcpus;
> +
> +    /* Choose between: host/alternate */
> +    p2m_class_t p2m_class;

Is there any reason to have this field? It is set but never used.

> +
> +    /* Back pointer to domain */
> +    struct domain *domain;

AFAICT, the only usage of this field is in p2m_altp2m_lazy_copy where 
you have direct access to the domain. So this could be dropped.

> +
> +    /* VTTBR information */
> +    struct vttbr_data vttbr;
>   };
>
>   /* List of possible type for each page in the p2m entry.
>

Regards,

-- 
Julien Grall


* Re: [PATCH 05/18] arm/altp2m: Add HVMOP_altp2m_set_domain_state.
  2016-07-04 11:45 ` [PATCH 05/18] arm/altp2m: Add HVMOP_altp2m_set_domain_state Sergej Proskurin
@ 2016-07-04 15:39   ` Julien Grall
  2016-07-05  8:45     ` Sergej Proskurin
  0 siblings, 1 reply; 126+ messages in thread
From: Julien Grall @ 2016-07-04 15:39 UTC (permalink / raw)
  To: Sergej Proskurin, xen-devel; +Cc: Stefano Stabellini

Hello Sergej,

On 04/07/16 12:45, Sergej Proskurin wrote:
> diff --git a/xen/arch/arm/hvm.c b/xen/arch/arm/hvm.c
> index 8e8e0f7..cb90a55 100644
> --- a/xen/arch/arm/hvm.c
> +++ b/xen/arch/arm/hvm.c
> @@ -104,8 +104,36 @@ static int do_altp2m_op(XEN_GUEST_HANDLE_PARAM(void) arg)
>           break;
>
>       case HVMOP_altp2m_set_domain_state:
> -        rc = -EOPNOTSUPP;
> +    {

I cannot find anything in the code which prevents this sub-op from being 
called concurrently. Did I miss something?
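
For illustration only (plain C, not Xen code): the check-then-act on
altp2m_active is the racy part, so the read of the old state and the
conditional init would need to sit in one critical section, e.g.:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical sketch of the state-toggle logic in the sub-op above:
 * without serialization around the read-modify-write of altp2m_active,
 * two concurrent callers could both observe ostate == false and both
 * run the init path.  The comments mark where a lock would go. */
static bool altp2m_active;
static int init_calls;

static int set_domain_state(bool new_state)
{
    /* --- begin critical section (take a lock here in real code) --- */
    bool ostate = altp2m_active;

    altp2m_active = new_state;

    if ( altp2m_active != ostate && !ostate )
        init_calls++;   /* stands in for p2m_init_altp2m_by_id(d, 0) */
    /* --- end critical section --- */

    return 0;
}
```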

> +        struct vcpu *v;
> +        bool_t ostate;
> +
> +        if ( !d->arch.hvm_domain.params[HVM_PARAM_ALTP2M] )
> +        {
> +            rc = -EINVAL;
> +            break;
> +        }
> +
> +        ostate = d->arch.altp2m_active;
> +        d->arch.altp2m_active = !!a.u.domain_state.state;
> +
> +        /* If the alternate p2m state has changed, handle appropriately */
> +        if ( d->arch.altp2m_active != ostate &&
> +             (ostate || !(rc = p2m_init_altp2m_by_id(d, 0))) )
> +        {
> +            for_each_vcpu( d, v )
> +            {
> +                if ( !ostate )
> +                    altp2m_vcpu_initialise(v);
> +                else
> +                    altp2m_vcpu_destroy(v);
> +            }
> +
> +            if ( ostate )
> +                p2m_flush_altp2m(d);
> +        }
>           break;
> +    }
>
>       case HVMOP_altp2m_vcpu_enable_notify:
>           rc = -EOPNOTSUPP;
> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> index e72ca7a..4a745fd 100644
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -2064,6 +2064,52 @@ int p2m_get_mem_access(struct domain *d, gfn_t gfn,
>       return ret;
>   }
>

The 3 helpers below are altp2m specific so I would move them in altp2m.c

> +struct p2m_domain *p2m_get_altp2m(struct vcpu *v)
> +{
> +    unsigned int index = vcpu_altp2m(v).p2midx;
> +
> +    if ( index == INVALID_ALTP2M )
> +        return NULL;
> +
> +    BUG_ON(index >= MAX_ALTP2M);
> +
> +    return v->domain->arch.altp2m_p2m[index];
> +}
> +
> +static void p2m_init_altp2m_helper(struct domain *d, unsigned int i)
> +{
> +    struct p2m_domain *p2m = d->arch.altp2m_p2m[i];
> +    struct vttbr_data *vttbr = &p2m->vttbr;
> +
> +    p2m->lowest_mapped_gfn = INVALID_GFN;
> +    p2m->max_mapped_gfn = 0;

Wouldn't it be easier to reallocate the p2m from scratch every time you 
enable it?

> +
> +    vttbr->vttbr_baddr = page_to_maddr(p2m->root);
> +    vttbr->vttbr_vmid = p2m->vmid;
> +
> +    d->arch.altp2m_vttbr[i] = vttbr->vttbr;
> +}
> +
> +int p2m_init_altp2m_by_id(struct domain *d, unsigned int idx)
> +{
> +    int rc = -EINVAL;
> +
> +    if ( idx >= MAX_ALTP2M )
> +        return rc;
> +
> +    altp2m_lock(d);
> +
> +    if ( d->arch.altp2m_vttbr[idx] == INVALID_MFN )
> +    {
> +        p2m_init_altp2m_helper(d, idx);
> +        rc = 0;
> +    }
> +
> +    altp2m_unlock(d);
> +
> +    return rc;
> +}
> +
>   /*
>    * Local variables:
>    * mode: C

[...]

> diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
> index 6b9770f..8bcd618 100644
> --- a/xen/include/asm-arm/domain.h
> +++ b/xen/include/asm-arm/domain.h
> @@ -138,6 +138,12 @@ struct arch_domain
>       uint64_t altp2m_vttbr[MAX_ALTP2M];
>   }  __cacheline_aligned;
>
> +struct altp2mvcpu {
> +    uint16_t p2midx; /* alternate p2m index */
> +};
> +
> +#define vcpu_altp2m(v) ((v)->arch.avcpu)
> +
>   struct arch_vcpu
>   {
>       struct {
> @@ -267,6 +273,9 @@ struct arch_vcpu
>       struct vtimer phys_timer;
>       struct vtimer virt_timer;
>       bool_t vtimer_initialized;
> +
> +    /* Alternate p2m context */
> +    struct altp2mvcpu avcpu;
>   }  __cacheline_aligned;
>
>   void vcpu_show_execution_state(struct vcpu *);
> diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
> index a78d547..8ee78e0 100644
> --- a/xen/include/asm-arm/p2m.h
> +++ b/xen/include/asm-arm/p2m.h
> @@ -121,6 +121,25 @@ void p2m_altp2m_check(struct vcpu *v, uint16_t idx)
>       /* Not supported on ARM. */
>   }
>
> +/*
> + * Alternate p2m: shadow p2m tables used for alternate memory views.
> + */
> +
> +#define altp2m_lock(d)      spin_lock(&(d)->arch.altp2m_lock)
> +#define altp2m_unlock(d)    spin_unlock(&(d)->arch.altp2m_lock)
> +
> +/* Get current alternate p2m table */
> +struct p2m_domain *p2m_get_altp2m(struct vcpu *v);
> +
> +/* Flush all the alternate p2m's for a domain */
> +static inline void p2m_flush_altp2m(struct domain *d)
> +{
> +    /* Not supported on ARM. */
> +}
> +
> +/* Make a specific alternate p2m valid */
> +int p2m_init_altp2m_by_id(struct domain *d, unsigned int idx);
> +

Please move anything related to altp2m in altp2m.h

>   #define p2m_is_foreign(_t)  ((_t) == p2m_map_foreign)
>   #define p2m_is_ram(_t)      ((_t) == p2m_ram_rw || (_t) == p2m_ram_ro)
>
>

Regards,

-- 
Julien Grall


* Re: [PATCH 06/18] arm/altp2m: Add a(p2m) table flushing routines.
  2016-07-04 12:12   ` Sergej Proskurin
@ 2016-07-04 15:42     ` Julien Grall
  2016-07-05  8:52       ` Sergej Proskurin
  0 siblings, 1 reply; 126+ messages in thread
From: Julien Grall @ 2016-07-04 15:42 UTC (permalink / raw)
  To: Sergej Proskurin, xen-devel; +Cc: Stefano Stabellini

Hello Sergej,

On 04/07/16 13:12, Sergej Proskurin wrote:
>> +/* Reset this p2m table to be empty */
>> +static void p2m_flush_table(struct p2m_domain *p2m)
>> +{
>> +    struct page_info *top, *pg;
>> +    mfn_t mfn;
>> +    unsigned int i;
>> +
>> +    /* Check whether the p2m table has already been flushed before. */
>> +    if ( p2m->root == NULL)
>> +        return;
>> +
>> +    spin_lock(&p2m->lock);
>> +
>> +    /*
>> +     * "Host" p2m tables can have shared entries &c that need a bit more care
>> +     * when discarding them
>> +     */
>> +    ASSERT(!p2m_is_hostp2m(p2m));
>> +
>> +    /* Zap the top level of the trie */
>> +    top = p2m->root;
>> +
>> +    /* Clear all concatenated first level pages */
>> +    for ( i = 0; i < P2M_ROOT_PAGES; i++ )
>> +    {
>> +        mfn = _mfn(page_to_mfn(top + i));
>> +        clear_domain_page(mfn);
>> +    }
>> +
>> +    /* Free the rest of the trie pages back to the paging pool */
>> +    while ( (pg = page_list_remove_head(&p2m->pages)) )
>> +        if ( pg != top  )
>> +        {
>> +            /*
>> +             * Before freeing the individual pages, we clear them to prevent
>> +             * reusing old table entries in future p2m allocations.
>> +             */
>> +            mfn = _mfn(page_to_mfn(pg));
>> +            clear_domain_page(mfn);
>> +            free_domheap_page(pg);
>> +        }
>
> At this point, we prevent only the first root level page from being
> freed. In case there are multiple consecutive first level pages, one of
> them will be freed in the upper loop (and potentially crash the guest if
> the table is reused at a later point in time). However, testing for
> every concatenated page in the if clause of the while loop would further
> decrease the flushing performance. Thus, my question is whether there
> is a good way to solve this issue.

The root pages are not part of p2m->pages, so there is no issue.
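
A toy model of that invariant (plain C with hypothetical names, not Xen
code): since the root is never linked into the pages list, freeing every
list entry cannot touch it:

```c
#include <assert.h>
#include <stdlib.h>

/* Toy model of the invariant noted above: the root table lives outside
 * the p2m->pages list, so a "free everything on the list" loop can
 * never free the root. */
struct page { struct page *next; };

struct toy_p2m {
    struct page *root;   /* allocated separately, never on the list */
    struct page *pages;  /* head of the intermediate-table list */
};

static int toy_free_list(struct toy_p2m *p2m)
{
    int freed = 0;
    struct page *pg;

    while ( (pg = p2m->pages) != NULL )
    {
        p2m->pages = pg->next;
        free(pg);
        freed++;
    }
    return freed;   /* p2m->root is untouched by construction */
}
```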

Regards,

-- 
Julien Grall


* Re: [PATCH 06/18] arm/altp2m: Add a(p2m) table flushing routines.
  2016-07-04 11:45 ` [PATCH 06/18] arm/altp2m: Add a(p2m) table flushing routines Sergej Proskurin
  2016-07-04 12:12   ` Sergej Proskurin
@ 2016-07-04 15:55   ` Julien Grall
  2016-07-05  9:51     ` Sergej Proskurin
  2016-07-04 16:20   ` Julien Grall
  2 siblings, 1 reply; 126+ messages in thread
From: Julien Grall @ 2016-07-04 15:55 UTC (permalink / raw)
  To: Sergej Proskurin, xen-devel; +Cc: Stefano Stabellini

Hello Sergej,

On 04/07/16 12:45, Sergej Proskurin wrote:
> The current implementation differentiates between flushing and
> destroying altp2m views. This commit adds the functions
> p2m_flush_altp2m, and p2m_flush_table, which allow to flush all or
> individual altp2m views without destroying the entire table. In this
> way, altp2m views can be reused at a later point in time.
>
> In addition, the implementation clears all altp2m entries during the
> process of flushing. The same applies to hostp2m entries, when it is
> destroyed. In this way, further domain and p2m allocations will not
> unintentionally reuse old p2m mappings.
>
> Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
> ---
> Cc: Stefano Stabellini <sstabellini@kernel.org>
> Cc: Julien Grall <julien.grall@arm.com>
> ---
>   xen/arch/arm/p2m.c        | 67 +++++++++++++++++++++++++++++++++++++++++++++++
>   xen/include/asm-arm/p2m.h | 15 ++++++++---
>   2 files changed, 78 insertions(+), 4 deletions(-)
>
> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> index 4a745fd..ae789e6 100644
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -2110,6 +2110,73 @@ int p2m_init_altp2m_by_id(struct domain *d, unsigned int idx)
>       return rc;
>   }
>
> +/* Reset this p2m table to be empty */
> +static void p2m_flush_table(struct p2m_domain *p2m)
> +{
> +    struct page_info *top, *pg;
> +    mfn_t mfn;
> +    unsigned int i;
> +
> +    /* Check whether the p2m table has already been flushed before. */
> +    if ( p2m->root == NULL)

This check looks invalid. p2m->root is never reset to NULL by 
p2m_flush_table, so you will always flush.

> +        return;
> +
> +    spin_lock(&p2m->lock);
> +
> +    /*
> +     * "Host" p2m tables can have shared entries &c that need a bit more care
> +     * when discarding them

I don't understand this comment. Can you explain it?

> +     */
> +    ASSERT(!p2m_is_hostp2m(p2m));
> +
> +    /* Zap the top level of the trie */
> +    top = p2m->root;
> +
> +    /* Clear all concatenated first level pages */
> +    for ( i = 0; i < P2M_ROOT_PAGES; i++ )
> +    {
> +        mfn = _mfn(page_to_mfn(top + i));
> +        clear_domain_page(mfn);
> +    }
> +
> +    /* Free the rest of the trie pages back to the paging pool */
> +    while ( (pg = page_list_remove_head(&p2m->pages)) )
> +        if ( pg != top  )
> +        {
> +            /*
> +             * Before freeing the individual pages, we clear them to prevent
> +             * reusing old table entries in future p2m allocations.
> +             */
> +            mfn = _mfn(page_to_mfn(pg));
> +            clear_domain_page(mfn);
> +            free_domheap_page(pg);
> +        }
> +
> +    page_list_add(top, &p2m->pages);

This code is very similar to p2m_free_one. Can we share some code?

> +
> +    /* Invalidate VTTBR */
> +    p2m->vttbr.vttbr = 0;
> +    p2m->vttbr.vttbr_baddr = INVALID_MFN;
> +
> +    spin_unlock(&p2m->lock);
> +}

Regards,

-- 
Julien Grall


* Re: [PATCH 04/18] arm/altp2m: Add altp2m init/teardown routines.
  2016-07-04 11:45 ` [PATCH 04/18] arm/altp2m: Add altp2m init/teardown routines Sergej Proskurin
  2016-07-04 15:17   ` Julien Grall
@ 2016-07-04 16:15   ` Julien Grall
  2016-07-04 16:51     ` Sergej Proskurin
  1 sibling, 1 reply; 126+ messages in thread
From: Julien Grall @ 2016-07-04 16:15 UTC (permalink / raw)
  To: Sergej Proskurin, xen-devel; +Cc: Stefano Stabellini



On 04/07/16 12:45, Sergej Proskurin wrote:
> +static void p2m_teardown_hostp2m(struct domain *d)
> +{
> +    struct p2m_domain *p2m = p2m_get_hostp2m(d);
> +    struct page_info *pg = NULL;
> +    mfn_t mfn;
> +    unsigned int i;
> +
> +    spin_lock(&p2m->lock);
>
> -    if ( p2m->root )
> -        free_domheap_pages(p2m->root, P2M_ROOT_ORDER);
> +    while ( (pg = page_list_remove_head(&p2m->pages)) )
> +        if ( pg != p2m->root )
> +        {
> +            mfn = _mfn(page_to_mfn(pg));
> +            clear_domain_page(mfn);

Can you explain why you are clearing the page here? It was not part of 
p2m_teardown before this series.

> +            free_domheap_page(pg);
> +        }
>
> +    for ( i = 0; i < P2M_ROOT_PAGES; i++ )
> +    {
> +        mfn = _mfn(page_to_mfn(p2m->root) + i);
> +        clear_domain_page(mfn);
> +    }
> +    free_domheap_pages(p2m->root, P2M_ROOT_ORDER);
>       p2m->root = NULL;
>
>       p2m_free_vmid(d);
> @@ -1422,7 +1506,7 @@ void p2m_teardown(struct domain *d)
>       spin_unlock(&p2m->lock);
>   }

Regards,

-- 
Julien Grall


* Re: [PATCH 06/18] arm/altp2m: Add a(p2m) table flushing routines.
  2016-07-04 11:45 ` [PATCH 06/18] arm/altp2m: Add a(p2m) table flushing routines Sergej Proskurin
  2016-07-04 12:12   ` Sergej Proskurin
  2016-07-04 15:55   ` Julien Grall
@ 2016-07-04 16:20   ` Julien Grall
  2016-07-05  9:57     ` Sergej Proskurin
  2 siblings, 1 reply; 126+ messages in thread
From: Julien Grall @ 2016-07-04 16:20 UTC (permalink / raw)
  To: Sergej Proskurin, xen-devel; +Cc: Stefano Stabellini

On 04/07/16 12:45, Sergej Proskurin wrote:
> +void p2m_flush_altp2m(struct domain *d)
> +{
> +    unsigned int i;
> +
> +    altp2m_lock(d);
> +
> +    for ( i = 0; i < MAX_ALTP2M; i++ )
> +    {
> +        p2m_flush_table(d->arch.altp2m_p2m[i]);
> +        flush_tlb();

I forgot to comment on this line.

Can you explain this call? flush_tlb only flushes TLBs for the current 
VMID. However, d may not be equal to current when called from an HVM op.

> +        d->arch.altp2m_vttbr[i] = INVALID_MFN;
> +    }

Regards,

-- 
Julien Grall


* Re: [PATCH 08/18] arm/altp2m: Add HVMOP_altp2m_destroy_p2m.
  2016-07-04 11:45 ` [PATCH 08/18] arm/altp2m: Add HVMOP_altp2m_destroy_p2m Sergej Proskurin
@ 2016-07-04 16:32   ` Julien Grall
  2016-07-05 11:37     ` Sergej Proskurin
  0 siblings, 1 reply; 126+ messages in thread
From: Julien Grall @ 2016-07-04 16:32 UTC (permalink / raw)
  To: Sergej Proskurin, xen-devel; +Cc: Stefano Stabellini

Hello Sergej,

On 04/07/16 12:45, Sergej Proskurin wrote:
> Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
> ---
> Cc: Stefano Stabellini <sstabellini@kernel.org>
> Cc: Julien Grall <julien.grall@arm.com>
> ---
>   xen/arch/arm/hvm.c        |  2 +-
>   xen/arch/arm/p2m.c        | 32 ++++++++++++++++++++++++++++++++
>   xen/include/asm-arm/p2m.h |  3 +++
>   3 files changed, 36 insertions(+), 1 deletion(-)
>
> diff --git a/xen/arch/arm/hvm.c b/xen/arch/arm/hvm.c
> index 005d7c6..f4ec5cf 100644
> --- a/xen/arch/arm/hvm.c
> +++ b/xen/arch/arm/hvm.c
> @@ -145,7 +145,7 @@ static int do_altp2m_op(XEN_GUEST_HANDLE_PARAM(void) arg)
>           break;
>
>       case HVMOP_altp2m_destroy_p2m:
> -        rc = -EOPNOTSUPP;
> +        rc = p2m_destroy_altp2m_by_id(d, a.u.view.view);
>           break;
>
>       case HVMOP_altp2m_switch_p2m:
> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> index 6c41b98..f82f1ea 100644
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -2200,6 +2200,38 @@ void p2m_flush_altp2m(struct domain *d)
>       altp2m_unlock(d);
>   }
>
> +int p2m_destroy_altp2m_by_id(struct domain *d, unsigned int idx)
> +{
> +    struct p2m_domain *p2m;
> +    int rc = -EBUSY;
> +
> +    if ( !idx || idx >= MAX_ALTP2M )

Can you please add a comment to explain why the altp2m at index 0 cannot 
be destroyed.

> +        return rc;
> +
> +    domain_pause_except_self(d);
> +
> +    altp2m_lock(d);
> +
> +    if ( d->arch.altp2m_vttbr[idx] != INVALID_MFN )
> +    {
> +        p2m = d->arch.altp2m_p2m[idx];
> +
> +        if ( !_atomic_read(p2m->active_vcpus) )
> +        {
> +            p2m_flush_table(p2m);
> +            flush_tlb();

Can you explain this call? flush_tlb only flushes TLBs for the current 
VMID. However, d may not be equal to current when called from an HVM op.

> +            d->arch.altp2m_vttbr[idx] = INVALID_MFN;
> +            rc = 0;
> +        }
> +    }
> +
> +    altp2m_unlock(d);
> +
> +    domain_unpause_except_self(d);
> +
> +    return rc;
> +}
> +
>   /*
>    * Local variables:
>    * mode: C
> diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
> index c51532a..255a282 100644
> --- a/xen/include/asm-arm/p2m.h
> +++ b/xen/include/asm-arm/p2m.h
> @@ -140,6 +140,9 @@ int p2m_init_altp2m_by_id(struct domain *d, unsigned int idx);
>   /* Find an available alternate p2m and make it valid */
>   int p2m_init_next_altp2m(struct domain *d, uint16_t *idx);
>
> +/* Make a specific alternate p2m invalid */
> +int p2m_destroy_altp2m_by_id(struct domain *d, unsigned int idx);
> +
>   #define p2m_is_foreign(_t)  ((_t) == p2m_map_foreign)
>   #define p2m_is_ram(_t)      ((_t) == p2m_ram_rw || (_t) == p2m_ram_ro)
>
>

Regards,

-- 
Julien Grall


* Re: [PATCH 04/18] arm/altp2m: Add altp2m init/teardown routines.
  2016-07-04 15:17   ` Julien Grall
@ 2016-07-04 16:40     ` Sergej Proskurin
  2016-07-04 16:43       ` Andrew Cooper
  2016-07-04 18:30       ` Julien Grall
  0 siblings, 2 replies; 126+ messages in thread
From: Sergej Proskurin @ 2016-07-04 16:40 UTC (permalink / raw)
  To: Julien Grall, xen-devel; +Cc: Stefano Stabellini

Hello Julien,

On 07/04/2016 05:17 PM, Julien Grall wrote:
> Hello Sergej,
> 
> On 04/07/16 12:45, Sergej Proskurin wrote:
>> The p2m intialization now invokes intialization routines responsible for
> 
> s/intialization/initialization/
> 
>> the allocation and intitialization of altp2m structures. The same
> 
> Ditto
> 

Thanks.

>> applies to teardown routines. The functionality has been adopted from
>> the x86 altp2m implementation.
> 
> This patch would benefit to be split in 2:
>    1) Moving p2m init/teardown in a separate function
>    2) Introducing altp2m init/teardown
> 
> It will ease the review.
> 

I will split this patch up in two parts, thank you.

>> Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
>> ---
>> Cc: Stefano Stabellini <sstabellini@kernel.org>
>> Cc: Julien Grall <julien.grall@arm.com>
>> ---
>>   xen/arch/arm/p2m.c            | 166 ++++++++++++++++++++++++++++++++++++++++--
>>   xen/include/asm-arm/domain.h  |   6 ++
>>   xen/include/asm-arm/hvm/hvm.h |  12 +++
>>   xen/include/asm-arm/p2m.h     |  20 +++++
>>   4 files changed, 198 insertions(+), 6 deletions(-)
>>
>> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
>> index aa4e774..e72ca7a 100644
>> --- a/xen/arch/arm/p2m.c
>> +++ b/xen/arch/arm/p2m.c
>> @@ -1400,19 +1400,103 @@ static void p2m_free_vmid(struct domain *d)
>>       spin_unlock(&vmid_alloc_lock);
>>   }
>>
>> -void p2m_teardown(struct domain *d)
>> +static int p2m_initialise(struct domain *d, struct p2m_domain *p2m)
> 
> AFAICT, this function is only used by p2m_init_one, so I would prefer
> if you folded the code into the latter.
> 

I will do that, thanks.

>>   {
>> -    struct p2m_domain *p2m = &d->arch.p2m;
>> +    int ret = 0;
>> +
>> +    spin_lock_init(&p2m->lock);
>> +    INIT_PAGE_LIST_HEAD(&p2m->pages);
>> +
>> +    spin_lock(&p2m->lock);
>> +
>> +    p2m->domain = d;
>> +    p2m->access_required = false;
>> +    p2m->mem_access_enabled = false;
>> +    p2m->default_access = p2m_access_rwx;
>> +    p2m->p2m_class = p2m_host;
>> +    p2m->root = NULL;
>> +
>> +    /* Adopt VMID of the associated domain */
>> +    p2m->vmid = d->arch.p2m.vmid;
> 
> It looks like to me that re-using the same VMID will require more TLB
> flush (such as when a VCPU is migrated to another physical CPU). So
> could you explain why you decided to re-use the same VMID?
>

Please correct me if I am wrong, but I associate a VMID with an entire
domain. Since the altp2m view still belongs to the same domain
(p2m_init_one is called only from p2m_init_altp2m), the code re-uses the
old VMID.

>> +    p2m->vttbr.vttbr = 0;
>> +    p2m->vttbr.vttbr_vmid = p2m->vmid;
>> +
>> +    p2m->max_mapped_gfn = 0;
>> +    p2m->lowest_mapped_gfn = ULONG_MAX;
>> +    radix_tree_init(&p2m->mem_access_settings);
>> +
>> +    spin_unlock(&p2m->lock);
>> +
>> +    return ret;
>> +}
>> +
>> +static void p2m_free_one(struct p2m_domain *p2m)
>> +{
>> +    mfn_t mfn;
>> +    unsigned int i;
>>       struct page_info *pg;
>>
>>       spin_lock(&p2m->lock);
>>
>>       while ( (pg = page_list_remove_head(&p2m->pages)) )
>> -        free_domheap_page(pg);
>> +        if ( pg != p2m->root )
>> +            free_domheap_page(pg);
>> +
>> +    for ( i = 0; i < P2M_ROOT_PAGES; i++ )
>> +    {
>> +        mfn = _mfn(page_to_mfn(p2m->root) + i);
>> +        clear_domain_page(mfn);
>> +    }
>> +    free_domheap_pages(p2m->root, P2M_ROOT_ORDER);
>> +    p2m->root = NULL;
>> +
>> +    radix_tree_destroy(&p2m->mem_access_settings, NULL);
>> +
>> +    spin_unlock(&p2m->lock);
>> +
>> +    xfree(p2m);
>> +}
>> +
>> +static struct p2m_domain *p2m_init_one(struct domain *d)
>> +{
>> +    struct p2m_domain *p2m = xzalloc(struct p2m_domain);
>> +
>> +    if ( !p2m )
>> +        return NULL;
>> +
>> +    if ( p2m_initialise(d, p2m) )
>> +        goto free_p2m;
>> +
>> +    return p2m;
>> +
>> +free_p2m:
>> +    xfree(p2m);
>> +    return NULL;
>> +}
>> +
>> +static void p2m_teardown_hostp2m(struct domain *d)
> 
> Why does p2m_teardown_hostp2m not use p2m_teardown_one to teardown the
> p2m? Assuming xfree(p2m) is move out of the function.
> 

I believe you mean p2m_free_one: p2m_teardown_hostp2m could use the
same function, but that would also require p2m_free_vmid(d) to be called
outside of p2m_free_one, and thus another acquisition of the p2m->lock.
Just to be safe, I did not want to split the teardown process into two
atomic sections. If you believe that it is safe to do, I will gladly
change the code and re-use p2m_free_one in p2m_teardown_hostp2m.

>> +{
>> +    struct p2m_domain *p2m = p2m_get_hostp2m(d);
>> +    struct page_info *pg = NULL;
>> +    mfn_t mfn;
>> +    unsigned int i;
>> +
>> +    spin_lock(&p2m->lock);
>>
>> -    if ( p2m->root )
> 
> Why did you remove this check? The p2m->root could be NULL if the an
> error occurred before create the root page table.
> 

That was a mistake. Thank you very much.

>> -        free_domheap_pages(p2m->root, P2M_ROOT_ORDER);
>> +    while ( (pg = page_list_remove_head(&p2m->pages)) )
>> +        if ( pg != p2m->root )
> 
> Why this check? p2m->root will never be part of p2m->pages.
> 

I was not sure whether p2m->root could ever be part of p2m->pages. This
also answers my question in patch #06. Thank you very much for this comment.

>> +        {
>> +            mfn = _mfn(page_to_mfn(pg));
>> +            clear_domain_page(mfn);
>> +            free_domheap_page(pg);
>> +        }
>>
>> +    for ( i = 0; i < P2M_ROOT_PAGES; i++ )
>> +    {
>> +        mfn = _mfn(page_to_mfn(p2m->root) + i);
>> +        clear_domain_page(mfn);
>> +    }
>> +    free_domheap_pages(p2m->root, P2M_ROOT_ORDER);
>>       p2m->root = NULL;
>>
>>       p2m_free_vmid(d);
>> @@ -1422,7 +1506,7 @@ void p2m_teardown(struct domain *d)
>>       spin_unlock(&p2m->lock);
>>   }
>>
>> -int p2m_init(struct domain *d)
>> +static int p2m_init_hostp2m(struct domain *d)
> 
> Why does p2m_init_hostp2m not use p2m_init_one to initialize the p2m?
> 

We dynamically allocate altp2ms. Also, the initialization of the
altp2ms and the hostp2m slightly differs (see VMID allocation). I could
rewrite the initialization function to be used for both the hostp2m and
altp2m structs, especially if you say that we should not associate
domains with VMIDs (see your question above).

>>   {
>>       struct p2m_domain *p2m = &d->arch.p2m;
>>       int rc = 0;
>> @@ -1437,6 +1521,8 @@ int p2m_init(struct domain *d)
>>       if ( rc != 0 )
>>           goto err;
>>
>> +    p2m->vttbr.vttbr_vmid = p2m->vmid;
>> +
>>       d->arch.vttbr = 0;
>>
>>       p2m->root = NULL;
>> @@ -1454,6 +1540,74 @@ err:
>>       return rc;
>>   }
>>
>> +static void p2m_teardown_altp2m(struct domain *d)
>> +{
>> +    unsigned int i;
>> +    struct p2m_domain *p2m;
>> +
>> +    for ( i = 0; i < MAX_ALTP2M; i++ )
>> +    {
>> +        if ( !d->arch.altp2m_p2m[i] )
>> +            continue;
>> +
>> +        p2m = d->arch.altp2m_p2m[i];
>> +        p2m_free_one(p2m);
>> +        d->arch.altp2m_vttbr[i] = INVALID_MFN;
>> +        d->arch.altp2m_p2m[i] = NULL;
> 
> These 2 lines are not necessary because the domain is destroyed and the
> memory associated will be free very soon.
> 

I will remove them.

>> +    }
>> +
>> +    d->arch.altp2m_active = false;
>> +}
>> +
>> +static int p2m_init_altp2m(struct domain *d)
>> +{
>> +    unsigned int i;
>> +    struct p2m_domain *p2m;
>> +
>> +    spin_lock_init(&d->arch.altp2m_lock);
>> +    for ( i = 0; i < MAX_ALTP2M; i++ )
>> +    {
>> +        d->arch.altp2m_vttbr[i] = INVALID_MFN;
> 
> The VTTBR will be stored in altp2m_p2m[i].vttbr. So why do you need to
> store in a different way?

You are definitely right. I was already thinking of that. Thank you.

> 
> Also, please don't mix value that have different meaning. MFN_INVALID
> indicates that a MFN is invalid not the VTTBR.
> 

Ok. I will include a new type for invalid VTTBRs.

>> +        d->arch.altp2m_p2m[i] = p2m = p2m_init_one(d);
>> +        if ( p2m == NULL )
>> +        {
>> +            p2m_teardown_altp2m(d);
> 
> This call is not necessary. p2m_teardown_altp2m will be called by
> p2m_teardown as part of arch_domain_destroy.
> 

Thanks for the hint.

>> +            return -ENOMEM;
>> +        }
>> +        p2m->p2m_class = p2m_alternate;
>> +        p2m->access_required = 1;
>> +        _atomic_set(&p2m->active_vcpus, 0);
>> +    }
>> +
>> +    return 0;
>> +}
>> +
>> +void p2m_teardown(struct domain *d)
>> +{
>> +    /*
>> +     * We must teardown altp2m unconditionally because
>> +     * we initialise it unconditionally.
> 
> Why do we need to initialize altp2m unconditionally? When altp2m is not
> used we will allocate memory that is never used.
> 
> I would prefer to see the allocation of the memory only if the domain
> will make use of altp2m.
> 

This is true. At this point, I could check whether opt_altp2m_enabled is
set and initialize altp2m accordingly. Thanks.

>> +     */
>> +    p2m_teardown_altp2m(d);
>> +
>> +    p2m_teardown_hostp2m(d);
>> +}
>> +
>> +int p2m_init(struct domain *d)
>> +{
>> +    int rc = 0;
>> +
>> +    rc = p2m_init_hostp2m(d);
>> +    if ( rc )
>> +        return rc;
>> +
>> +    rc = p2m_init_altp2m(d);
>> +    if ( rc )
>> +        p2m_teardown_hostp2m(d);
> 
> This call is not necessary.
> 

Thank you.

>> +
>> +    return rc;
>> +}
>> +
>>   int relinquish_p2m_mapping(struct domain *d)
>>   {
>>       struct p2m_domain *p2m = &d->arch.p2m;
>> diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
>> index 2039f16..6b9770f 100644
>> --- a/xen/include/asm-arm/domain.h
>> +++ b/xen/include/asm-arm/domain.h
>> @@ -29,6 +29,9 @@ enum domain_type {
>>   #define is_64bit_domain(d) (0)
>>   #endif
>>
>> +#define MAX_ALTP2M      10 /* arbitrary */
>> +#define INVALID_ALTP2M  0xffff
> 
> IMHO, this should either be part of p2m.h or altp2m.h
> 

I will place the defines in one of the header files, thank you.

>> +
>>   extern int dom0_11_mapping;
>>   #define is_domain_direct_mapped(d) ((d) == hardware_domain &&
>> dom0_11_mapping)
>>
>> @@ -130,6 +133,9 @@ struct arch_domain
>>
>>       /* altp2m: allow multiple copies of host p2m */
>>       bool_t altp2m_active;
>> +    struct p2m_domain *altp2m_p2m[MAX_ALTP2M];
>> +    spinlock_t altp2m_lock;
> 
> Please document was the lock is protecting.
> 

Ok.

>> +    uint64_t altp2m_vttbr[MAX_ALTP2M];
>>   }  __cacheline_aligned;
>>
>>   struct arch_vcpu
>> diff --git a/xen/include/asm-arm/hvm/hvm.h
>> b/xen/include/asm-arm/hvm/hvm.h
>> index 96c455c..28d5298 100644
>> --- a/xen/include/asm-arm/hvm/hvm.h
>> +++ b/xen/include/asm-arm/hvm/hvm.h
>> @@ -19,6 +19,18 @@
>>   #ifndef __ASM_ARM_HVM_HVM_H__
>>   #define __ASM_ARM_HVM_HVM_H__
>>
>> +struct vttbr_data {
> 
> This structure should not be part of hvm.h but processor.h. Also, I
> would rename it to simply vttbr.
> 

Ok, I will move it. The struct was named this way to be the counterpart
to struct ept_data. Do you still think we should introduce naming
differences for basically the same register at this point?

>> +    union {
>> +        struct {
>> +            u64 vttbr_baddr :40, /* variable res0: from 0-(x-1) bit */
> 
> Please drop vttbr_. Also, this field is 48 bits for ARMv8 (see ARM
> D7.2.102 in DDI 0487A.j).
> 
>> +                res1        :8,
>> +                vttbr_vmid  :8,
> 
> Please drop vttbr_.
> 

Ok, thank you.

>> +                res2        :8;
>> +        };
>> +        u64 vttbr;
>> +    };
>> +};
>> +
>>   struct hvm_function_table {
>>       char *name;
>>
>> diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
>> index 0d1e61e..a78d547 100644
>> --- a/xen/include/asm-arm/p2m.h
>> +++ b/xen/include/asm-arm/p2m.h
>> @@ -8,6 +8,9 @@
>>   #include <xen/p2m-common.h>
>>   #include <public/memory.h>
>>
>> +#include <asm/atomic.h>
>> +#include <asm/hvm/hvm.h>
> 
> ARM has not concept of HVM nor PV. So I would prefer to see a very
> limited usage of hvm.h
> 

Makes sense. I will rename the header file.

>> +
>>   #define paddr_bits PADDR_BITS
>>
>>   /* Holds the bit size of IPAs in p2m tables.  */
>> @@ -17,6 +20,11 @@ struct domain;
>>
>>   extern void memory_type_changed(struct domain *);
>>
>> +typedef enum {
>> +    p2m_host,
>> +    p2m_alternate,
>> +} p2m_class_t;
>> +
>>   /* Per-p2m-table state */
>>   struct p2m_domain {
>>       /* Lock that protects updates to the p2m */
>> @@ -66,6 +74,18 @@ struct p2m_domain {
>>       /* Radix tree to store the p2m_access_t settings as the pte's
>> don't have
>>        * enough available bits to store this information. */
>>       struct radix_tree_root mem_access_settings;
>> +
>> +    /* Alternate p2m: count of vcpu's currently using this p2m. */
>> +    atomic_t active_vcpus;
>> +
>> +    /* Choose between: host/alternate */
>> +    p2m_class_t p2m_class;
> 
> Is there any reason to have this field? It is set but never used.
> 

Actually it is used by p2m_is_altp2m and p2m_is_hostp2m (e.g. see assert
in p2m_flush_table).

>> +
>> +    /* Back pointer to domain */
>> +    struct domain *domain;
> 
> AFAICT, the only usage of this field is in p2m_altp2m_lazy_copy where
> you have direct access to the domain. So this could be dropped.
> 

Ok. Thank you

>> +
>> +    /* VTTBR information */
>> +    struct vttbr_data vttbr;
>>   };
>>
>>   /* List of possible type for each page in the p2m entry.
>>
> 
> Regards,
> 

Thank you very much for your comments.

Best regards,
Sergej

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


* Re: [PATCH 04/18] arm/altp2m: Add altp2m init/teardown routines.
  2016-07-04 16:40     ` Sergej Proskurin
@ 2016-07-04 16:43       ` Andrew Cooper
  2016-07-04 16:56         ` Sergej Proskurin
  2016-07-04 18:18         ` Julien Grall
  2016-07-04 18:30       ` Julien Grall
  1 sibling, 2 replies; 126+ messages in thread
From: Andrew Cooper @ 2016-07-04 16:43 UTC (permalink / raw)
  To: Sergej Proskurin, Julien Grall, xen-devel; +Cc: Stefano Stabellini

On 04/07/16 17:40, Sergej Proskurin wrote:
>
>>>   {
>>> -    struct p2m_domain *p2m = &d->arch.p2m;
>>> +    int ret = 0;
>>> +
>>> +    spin_lock_init(&p2m->lock);
>>> +    INIT_PAGE_LIST_HEAD(&p2m->pages);
>>> +
>>> +    spin_lock(&p2m->lock);
>>> +
>>> +    p2m->domain = d;
>>> +    p2m->access_required = false;
>>> +    p2m->mem_access_enabled = false;
>>> +    p2m->default_access = p2m_access_rwx;
>>> +    p2m->p2m_class = p2m_host;
>>> +    p2m->root = NULL;
>>> +
>>> +    /* Adopt VMID of the associated domain */
>>> +    p2m->vmid = d->arch.p2m.vmid;
>> It looks like to me that re-using the same VMID will require more TLB
>> flush (such as when a VCPU is migrated to another physical CPU). So
>> could you explain why you decided to re-use the same VMID?
>>
> Please correct me if I am wrong, but I associate a VMID with an entire
> domain. Since the altp2m view still belongs to the same domain
> (p2m_init_one is called only from p2m_init_altp2m), the code re-uses the
> old VMID.

(I am not an ARM expert, but) from looking into VMIDs last time, they
are the TLB tag for the address space in use.

Does ARM have shared TLBs between multiple cores?  If so, you must use a
separate VMID, otherwise an ALTP2M used by one vcpu could cause a
separate vcpu with a different ALTP2M to reuse the wrong translation.
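
To make the hazard concrete, here is a tiny standalone model of a TLB
whose entries are tagged only by (VMID, gfn), as in ARM stage-2
translation. This is illustrative C, not Xen code; every name in it is
invented for the example.

```c
#include <assert.h>
#include <stdint.h>

/* Toy model of a shared stage-2 TLB: entries are tagged (vmid, gfn). */
struct tlb_entry { uint8_t vmid; uint64_t gfn; uint64_t mfn; int valid; };
static struct tlb_entry tlb[8];

static void tlb_fill(uint8_t vmid, uint64_t gfn, uint64_t mfn)
{
    static unsigned int next;
    tlb[next++ % 8] = (struct tlb_entry){ vmid, gfn, mfn, 1 };
}

static int tlb_lookup(uint8_t vmid, uint64_t gfn, uint64_t *mfn)
{
    for ( unsigned int i = 0; i < 8; i++ )
        if ( tlb[i].valid && tlb[i].vmid == vmid && tlb[i].gfn == gfn )
        {
            *mfn = tlb[i].mfn;
            return 1;
        }
    return 0;
}

/* Returns 1 iff the stale-hit hazard (and the separate-VMID fix)
 * behaves as described above. */
static int demo(void)
{
    uint64_t mfn;

    tlb_fill(1, 0x100, 0xaaa);            /* hostp2m entry under VMID 1 */

    /* An altp2m re-using VMID 1 but mapping gfn 0x100 elsewhere would
     * hit the hostp2m's entry: the tag cannot tell the p2ms apart. */
    if ( !tlb_lookup(1, 0x100, &mfn) || mfn != 0xaaa )
        return 0;

    /* With a distinct VMID the altp2m simply misses and fills its own
     * entry, so no flush is needed when switching between views. */
    if ( tlb_lookup(2, 0x100, &mfn) )
        return 0;
    tlb_fill(2, 0x100, 0xbbb);
    return tlb_lookup(2, 0x100, &mfn) && mfn == 0xbbb;
}
```

The same reasoning is why a per-p2m VMID also reduces flushing on vCPU
migration, as noted elsewhere in this thread.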

~Andrew


* Re: [PATCH 04/18] arm/altp2m: Add altp2m init/teardown routines.
  2016-07-04 16:15   ` Julien Grall
@ 2016-07-04 16:51     ` Sergej Proskurin
  2016-07-04 18:34       ` Julien Grall
  0 siblings, 1 reply; 126+ messages in thread
From: Sergej Proskurin @ 2016-07-04 16:51 UTC (permalink / raw)
  To: Julien Grall, xen-devel; +Cc: Stefano Stabellini

Hello Julien,

On 07/04/2016 06:15 PM, Julien Grall wrote:
> 
> 
> On 04/07/16 12:45, Sergej Proskurin wrote:
>> +static void p2m_teardown_hostp2m(struct domain *d)
>> +{
>> +    struct p2m_domain *p2m = p2m_get_hostp2m(d);
>> +    struct page_info *pg = NULL;
>> +    mfn_t mfn;
>> +    unsigned int i;
>> +
>> +    spin_lock(&p2m->lock);
>>
>> -    if ( p2m->root )
>> -        free_domheap_pages(p2m->root, P2M_ROOT_ORDER);
>> +    while ( (pg = page_list_remove_head(&p2m->pages)) )
>> +        if ( pg != p2m->root )
>> +        {
>> +            mfn = _mfn(page_to_mfn(pg));
>> +            clear_domain_page(mfn);
> 
> Can you explain why you are cleaning the page here? It was not part of
> p2m_teardown before this series.
> 

With the x86-based altp2m implementation, we experienced the problem
that altp2m-teardowns did not clean the pages. As a result, later
re-initialization reused the pages, which subsequently led to faults or
crashes due to reused mappings. We additionally clean the altp2m pages
and for the sake of completeness we clean the hostp2m tables as well.

>> +            free_domheap_page(pg);
>> +        }
>>
>> +    for ( i = 0; i < P2M_ROOT_PAGES; i++ )
>> +    {
>> +        mfn = _mfn(page_to_mfn(p2m->root) + i);
>> +        clear_domain_page(mfn);
>> +    }
>> +    free_domheap_pages(p2m->root, P2M_ROOT_ORDER);
>>       p2m->root = NULL;
>>
>>       p2m_free_vmid(d);
>> @@ -1422,7 +1506,7 @@ void p2m_teardown(struct domain *d)
>>       spin_unlock(&p2m->lock);
>>   }
> 
> Regards,
> 

Thank you very much.

Best regards,
Sergej


* Re: [PATCH 04/18] arm/altp2m: Add altp2m init/teardown routines.
  2016-07-04 16:43       ` Andrew Cooper
@ 2016-07-04 16:56         ` Sergej Proskurin
  2016-07-04 17:44           ` Julien Grall
  2016-07-04 18:18         ` Julien Grall
  1 sibling, 1 reply; 126+ messages in thread
From: Sergej Proskurin @ 2016-07-04 16:56 UTC (permalink / raw)
  To: Andrew Cooper, Julien Grall, xen-devel; +Cc: Stefano Stabellini

Hi Andrew,

On 07/04/2016 06:43 PM, Andrew Cooper wrote:
> On 04/07/16 17:40, Sergej Proskurin wrote:
>>
>>>>   {
>>>> -    struct p2m_domain *p2m = &d->arch.p2m;
>>>> +    int ret = 0;
>>>> +
>>>> +    spin_lock_init(&p2m->lock);
>>>> +    INIT_PAGE_LIST_HEAD(&p2m->pages);
>>>> +
>>>> +    spin_lock(&p2m->lock);
>>>> +
>>>> +    p2m->domain = d;
>>>> +    p2m->access_required = false;
>>>> +    p2m->mem_access_enabled = false;
>>>> +    p2m->default_access = p2m_access_rwx;
>>>> +    p2m->p2m_class = p2m_host;
>>>> +    p2m->root = NULL;
>>>> +
>>>> +    /* Adopt VMID of the associated domain */
>>>> +    p2m->vmid = d->arch.p2m.vmid;
>>> It looks like to me that re-using the same VMID will require more TLB
>>> flush (such as when a VCPU is migrated to another physical CPU). So
>>> could you explain why you decided to re-use the same VMID?
>>>
>> Please correct me if I am wrong, but I associate a VMID with an entire
>> domain. Since the altp2m view still belongs to the same domain
>> (p2m_init_one is called only from p2m_init_altp2m), the code re-uses the
>> old VMID.
> 
> (I am not an ARM expert, but) from looking into VMIDs last time, they
> are the TLB tag for the address space in use.
> 
> Does ARM have shared TLBs between multiple cores?  If so, you must use a
> separate VMID, otherwise an ALTP2M used by one vcpu could cause a
> separate vcpu with a different ALTP2M to reuse the wrong translation.
> 
> ~Andrew
> 

You're absolutely correct. However, on every VMENTRY Xen explicitly
flushes the TLBs of the currently active domain (and with it, of the
currently active (a)p2m table) and hence it should not result in an issue.

Thank you very much.

Best regards,
Sergej


* Re: [PATCH 01/18] arm/altp2m: Add cmd-line support for altp2m on ARM.
  2016-07-04 11:45 ` [PATCH 01/18] arm/altp2m: Add cmd-line support for altp2m on ARM Sergej Proskurin
  2016-07-04 12:15   ` Andrew Cooper
  2016-07-04 13:25   ` Julien Grall
@ 2016-07-04 17:42   ` Julien Grall
  2016-07-04 17:56     ` Tamas K Lengyel
  2 siblings, 1 reply; 126+ messages in thread
From: Julien Grall @ 2016-07-04 17:42 UTC (permalink / raw)
  To: Sergej Proskurin, xen-devel; +Cc: Stefano Stabellini



On 04/07/16 12:45, Sergej Proskurin wrote:
> The Xen altp2m subsystem is currently supported only on x86-64 based
> architectures. By utilizing ARM's virtualization extensions, we intend
> to implement altp2m support for ARM architectures and thus further
> extend current Virtual Machine Introspection (VMI) capabilities on ARM.
>
> With this commit, Xen is now able to activate altp2m support on ARM by
> means of the command-line argument 'altp2m' (bool).

Thinking a bit more about this patch. Why do we need an option to enable 
altp2m? All the code could be gated by the value of HVM_PARAM_ALTP2M.
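
Concretely, gating on the HVM param instead of a boot option might look
something like the sketch below. This is hypothetical: the macro and
field names are modelled loosely on the x86 side and may not match what
the ARM patch would eventually use.

```c
/* Hypothetical sketch only -- not part of the actual patch. */
#define altp2m_enabled(d) \
    ((d)->arch.hvm_domain.params[HVM_PARAM_ALTP2M] != 0)

static int p2m_init_altp2m(struct domain *d)
{
    if ( !altp2m_enabled(d) )
        return 0;   /* no per-domain altp2m memory is allocated */
    /* ... allocate and initialise the MAX_ALTP2M views ... */
    return 0;
}
```

This would turn altp2m into a per-domain decision taken by the
toolstack rather than a host-wide boot parameter.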

Regards,

-- 
Julien Grall


* Re: [PATCH 04/18] arm/altp2m: Add altp2m init/teardown routines.
  2016-07-04 16:56         ` Sergej Proskurin
@ 2016-07-04 17:44           ` Julien Grall
  2016-07-04 21:19             ` Sergej Proskurin
  0 siblings, 1 reply; 126+ messages in thread
From: Julien Grall @ 2016-07-04 17:44 UTC (permalink / raw)
  To: Sergej Proskurin, Andrew Cooper, xen-devel; +Cc: Stefano Stabellini



On 04/07/16 17:56, Sergej Proskurin wrote:
> Hi Andrew,
>
> On 07/04/2016 06:43 PM, Andrew Cooper wrote:
>> On 04/07/16 17:40, Sergej Proskurin wrote:
>>>
>>>>>    {
>>>>> -    struct p2m_domain *p2m = &d->arch.p2m;
>>>>> +    int ret = 0;
>>>>> +
>>>>> +    spin_lock_init(&p2m->lock);
>>>>> +    INIT_PAGE_LIST_HEAD(&p2m->pages);
>>>>> +
>>>>> +    spin_lock(&p2m->lock);
>>>>> +
>>>>> +    p2m->domain = d;
>>>>> +    p2m->access_required = false;
>>>>> +    p2m->mem_access_enabled = false;
>>>>> +    p2m->default_access = p2m_access_rwx;
>>>>> +    p2m->p2m_class = p2m_host;
>>>>> +    p2m->root = NULL;
>>>>> +
>>>>> +    /* Adopt VMID of the associated domain */
>>>>> +    p2m->vmid = d->arch.p2m.vmid;
>>>> It looks like to me that re-using the same VMID will require more TLB
>>>> flush (such as when a VCPU is migrated to another physical CPU). So
>>>> could you explain why you decided to re-use the same VMID?
>>>>
>>> Please correct me if I am wrong, but I associate a VMID with an entire
>>> domain. Since the altp2m view still belongs to the same domain
>>> (p2m_init_one is called only from p2m_init_altp2m), the code re-uses the
>>> old VMID.
>>
>> (I am not an ARM expert, but) from looking into VMIDs last time, they
>> are the TLB tag for the address space in use.
>>
>> Does ARM have shared TLBs between multiple cores?  If so, you must use a
>> separate VMID, otherwise an ALTP2M used by one vcpu could cause a
>> separate vcpu with a different ALTP2M to reuse the wrong translation.
>>
>> ~Andrew
>>
>
> You're absolutely correct. However, on every VMENTRY Xen explicitly
> flushes the TLBs of the currently active domain (and with it, of the
> currently active (a)p2m table) and hence it should not result in an issue.

VMENTRY is x86 not ARM. So are you sure you looked at the correct code?

Regards,

-- 
Julien Grall


* Re: [PATCH 01/18] arm/altp2m: Add cmd-line support for altp2m on ARM.
  2016-07-04 17:42   ` Julien Grall
@ 2016-07-04 17:56     ` Tamas K Lengyel
  2016-07-04 21:08       ` Sergej Proskurin
  0 siblings, 1 reply; 126+ messages in thread
From: Tamas K Lengyel @ 2016-07-04 17:56 UTC (permalink / raw)
  To: Julien Grall; +Cc: Sergej Proskurin, Stefano Stabellini, Xen-devel

On Mon, Jul 4, 2016 at 11:42 AM, Julien Grall <julien.grall@arm.com> wrote:
>
>
> On 04/07/16 12:45, Sergej Proskurin wrote:
>>
>> The Xen altp2m subsystem is currently supported only on x86-64 based
>> architectures. By utilizing ARM's virtualization extensions, we intend
>> to implement altp2m support for ARM architectures and thus further
>> extend current Virtual Machine Introspection (VMI) capabilities on ARM.
>>
>> With this commit, Xen is now able to activate altp2m support on ARM by
>> means of the command-line argument 'altp2m' (bool).
>
>
> Thinking a bit more about this patch. Why do we need an option to enable
> altp2m? All the code could be gated by the value of HVM_PARAM_ALTP2M.
>

I think the reasoning for having the altp2m boot parameter when it was
introduced for x86 was that altp2m is considered experimental. IMHO it
is stable enough so that we can forgo the boot param altogether.

Tamas


* Re: [PATCH 04/18] arm/altp2m: Add altp2m init/teardown routines.
  2016-07-04 16:43       ` Andrew Cooper
  2016-07-04 16:56         ` Sergej Proskurin
@ 2016-07-04 18:18         ` Julien Grall
  2016-07-04 21:37           ` Sergej Proskurin
  1 sibling, 1 reply; 126+ messages in thread
From: Julien Grall @ 2016-07-04 18:18 UTC (permalink / raw)
  To: Andrew Cooper, Sergej Proskurin, xen-devel
  Cc: Andre Przywara, Stefano Stabellini, Steve Capper



On 04/07/16 17:43, Andrew Cooper wrote:
> On 04/07/16 17:40, Sergej Proskurin wrote:
>>
>>>>    {
>>>> -    struct p2m_domain *p2m = &d->arch.p2m;
>>>> +    int ret = 0;
>>>> +
>>>> +    spin_lock_init(&p2m->lock);
>>>> +    INIT_PAGE_LIST_HEAD(&p2m->pages);
>>>> +
>>>> +    spin_lock(&p2m->lock);
>>>> +
>>>> +    p2m->domain = d;
>>>> +    p2m->access_required = false;
>>>> +    p2m->mem_access_enabled = false;
>>>> +    p2m->default_access = p2m_access_rwx;
>>>> +    p2m->p2m_class = p2m_host;
>>>> +    p2m->root = NULL;
>>>> +
>>>> +    /* Adopt VMID of the associated domain */
>>>> +    p2m->vmid = d->arch.p2m.vmid;
>>> It looks like to me that re-using the same VMID will require more TLB
>>> flush (such as when a VCPU is migrated to another physical CPU). So
>>> could you explain why you decided to re-use the same VMID?
>>>
>> Please correct me if I am wrong, but I associate a VMID with an entire
>> domain. Since the altp2m view still belongs to the same domain
>> (p2m_init_one is called only from p2m_init_altp2m), the code re-uses the
>> old VMID.
>
> (I am not an ARM expert, but) from looking into VMIDs last time, they
> are the TLB tag for the address space in use.

Correct.

>
> Does ARM have shared TLBs between multiple cores?  If so, you must use a
> separate VMID, otherwise an ALTP2M used by one vcpu could cause a
> separate vcpu with a different ALTP2M to reuse the wrong translation.

From the ARM ARM, I cannot rule out that TLBs may be shared between
multiple cores (CC'ing a couple of ARM folks to confirm).

Nevertheless, using a different VMID per p2m would avoid having to care
about flushing when moving the vCPU around.

Regards,

-- 
Julien Grall


* Re: [PATCH 04/18] arm/altp2m: Add altp2m init/teardown routines.
  2016-07-04 16:40     ` Sergej Proskurin
  2016-07-04 16:43       ` Andrew Cooper
@ 2016-07-04 18:30       ` Julien Grall
  2016-07-04 21:56         ` Sergej Proskurin
  1 sibling, 1 reply; 126+ messages in thread
From: Julien Grall @ 2016-07-04 18:30 UTC (permalink / raw)
  To: Sergej Proskurin, xen-devel; +Cc: Stefano Stabellini



On 04/07/16 17:40, Sergej Proskurin wrote:
> On 07/04/2016 05:17 PM, Julien Grall wrote:
>> On 04/07/16 12:45, Sergej Proskurin wrote:

[...]

>>> +static struct p2m_domain *p2m_init_one(struct domain *d)
>>> +{
>>> +    struct p2m_domain *p2m = xzalloc(struct p2m_domain);
>>> +
>>> +    if ( !p2m )
>>> +        return NULL;
>>> +
>>> +    if ( p2m_initialise(d, p2m) )
>>> +        goto free_p2m;
>>> +
>>> +    return p2m;
>>> +
>>> +free_p2m:
>>> +    xfree(p2m);
>>> +    return NULL;
>>> +}
>>> +
>>> +static void p2m_teardown_hostp2m(struct domain *d)
>>
>> Why does p2m_teardown_hostp2m not use p2m_teardown_one to teardown the
>> p2m? Assuming xfree(p2m) is move out of the function.
>>
>
> I believe you mean p2m_free_one: p2m_teardown_hostp2m could use the
> same function, but that would also require p2m_free_vmid(d) to be called
> outside of p2m_free_one, and thus another acquisition of the p2m->lock.
> Just to be safe, I did not want to split the teardown process into two
> atomic sections. If you believe that it is safe to do, I will gladly
> change the code and re-use p2m_free_one in p2m_teardown_hostp2m.

Looking at the caller of p2m_teardown, I don't think the lock is 
necessary because nobody can use the P2M anymore when this function is 
called.
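
Taken together with the earlier suggestion that xfree(p2m) be moved out
of p2m_free_one, the teardown Julien has in mind might look roughly like
this. It is a hypothetical sketch, not the actual patch, and it assumes
p2m_free_one no longer takes p2m->lock itself:

```c
/* Hypothetical sketch -- teardown runs only once the domain is being
 * destroyed, so no vCPU can still be using the p2m and p2m->lock need
 * not be taken. */
static void p2m_teardown_hostp2m(struct domain *d)
{
    struct p2m_domain *p2m = p2m_get_hostp2m(d);

    p2m_free_one(p2m);   /* frees the page list, root pages, radix tree */
    p2m_free_vmid(d);    /* the host p2m owns the domain's VMID */
    d->arch.vttbr = 0;
}
```

The altp2m teardown could then call p2m_free_one on each view without
duplicating the freeing logic.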

[...]

>>> +        {
>>> +            mfn = _mfn(page_to_mfn(pg));
>>> +            clear_domain_page(mfn);
>>> +            free_domheap_page(pg);
>>> +        }
>>>
>>> +    for ( i = 0; i < P2M_ROOT_PAGES; i++ )
>>> +    {
>>> +        mfn = _mfn(page_to_mfn(p2m->root) + i);
>>> +        clear_domain_page(mfn);
>>> +    }
>>> +    free_domheap_pages(p2m->root, P2M_ROOT_ORDER);
>>>        p2m->root = NULL;
>>>
>>>        p2m_free_vmid(d);
>>> @@ -1422,7 +1506,7 @@ void p2m_teardown(struct domain *d)
>>>        spin_unlock(&p2m->lock);
>>>    }
>>>
>>> -int p2m_init(struct domain *d)
>>> +static int p2m_init_hostp2m(struct domain *d)
>>
>> Why does p2m_init_hostp2m not use p2m_init_one to initialize the p2m?
>>
>
> We dynamically allocate altp2ms. Also, the initialization of the
> altp2ms and the hostp2m slightly differs (see VMID allocation). I could
> rewrite the initialization function to be used for both the hostp2m and
> altp2m structs, especially if you say that we should not associate
> domains with VMIDs (see your question above).

I always prefer to see the code rewritten rather than duplicated. The
latter makes it harder to fix bugs when the code is spread across
multiple places.

[...]

>>> +    uint64_t altp2m_vttbr[MAX_ALTP2M];
>>>    }  __cacheline_aligned;
>>>
>>>    struct arch_vcpu
>>> diff --git a/xen/include/asm-arm/hvm/hvm.h
>>> b/xen/include/asm-arm/hvm/hvm.h
>>> index 96c455c..28d5298 100644
>>> --- a/xen/include/asm-arm/hvm/hvm.h
>>> +++ b/xen/include/asm-arm/hvm/hvm.h
>>> @@ -19,6 +19,18 @@
>>>    #ifndef __ASM_ARM_HVM_HVM_H__
>>>    #define __ASM_ARM_HVM_HVM_H__
>>>
>>> +struct vttbr_data {
>>
>> This structure should not be part of hvm.h but processor.h. Also, I
>> would rename it to simply vttbr.
>>
>
> Ok, I will move it. The struct was named this way to be the counterpart
> to struct ept_data. Do you still think we should introduce naming
> differences for basically the same register at this point?

Yes, we are talking about two distinct architectures. If you look at 
ept_data, it stores more than the page table register. Hence the name.

[...]

>>> +
>>>    #define paddr_bits PADDR_BITS
>>>
>>>    /* Holds the bit size of IPAs in p2m tables.  */
>>> @@ -17,6 +20,11 @@ struct domain;
>>>
>>>    extern void memory_type_changed(struct domain *);
>>>
>>> +typedef enum {
>>> +    p2m_host,
>>> +    p2m_alternate,
>>> +} p2m_class_t;
>>> +
>>>    /* Per-p2m-table state */
>>>    struct p2m_domain {
>>>        /* Lock that protects updates to the p2m */
>>> @@ -66,6 +74,18 @@ struct p2m_domain {
>>>        /* Radix tree to store the p2m_access_t settings as the pte's
>>> don't have
>>>         * enough available bits to store this information. */
>>>        struct radix_tree_root mem_access_settings;
>>> +
>>> +    /* Alternate p2m: count of vcpu's currently using this p2m. */
>>> +    atomic_t active_vcpus;
>>> +
>>> +    /* Choose between: host/alternate */
>>> +    p2m_class_t p2m_class;
>>
>> Is there any reason to have this field? It is set but never used.
>>
>
> Actually it is used by p2m_is_altp2m and p2m_is_hostp2m (e.g. see assert
> in p2m_flush_table).

Right. Sorry, I didn't spot this call.

Regards,

-- 
Julien Grall


* Re: [PATCH 04/18] arm/altp2m: Add altp2m init/teardown routines.
  2016-07-04 16:51     ` Sergej Proskurin
@ 2016-07-04 18:34       ` Julien Grall
  2016-07-05  7:45         ` Sergej Proskurin
  0 siblings, 1 reply; 126+ messages in thread
From: Julien Grall @ 2016-07-04 18:34 UTC (permalink / raw)
  To: Sergej Proskurin, xen-devel; +Cc: Stefano Stabellini

On 04/07/16 17:51, Sergej Proskurin wrote:
> On 07/04/2016 06:15 PM, Julien Grall wrote:
>>
>>
>> On 04/07/16 12:45, Sergej Proskurin wrote:
>>> +static void p2m_teardown_hostp2m(struct domain *d)
>>> +{
>>> +    struct p2m_domain *p2m = p2m_get_hostp2m(d);
>>> +    struct page_info *pg = NULL;
>>> +    mfn_t mfn;
>>> +    unsigned int i;
>>> +
>>> +    spin_lock(&p2m->lock);
>>>
>>> -    if ( p2m->root )
>>> -        free_domheap_pages(p2m->root, P2M_ROOT_ORDER);
>>> +    while ( (pg = page_list_remove_head(&p2m->pages)) )
>>> +        if ( pg != p2m->root )
>>> +        {
>>> +            mfn = _mfn(page_to_mfn(pg));
>>> +            clear_domain_page(mfn);
>>
>> Can you explain why you are cleaning the page here? It was not part of
>> p2m_teardown before this series.
>>
>
> With the x86-based altp2m implementation, we experienced the problem
> that altp2m teardowns did not clean the pages. As a result, later
> re-initialization reused the pages, which subsequently led to faults or
> crashes due to stale mappings. We additionally clean the altp2m pages
> and, for the sake of completeness, we clean the hostp2m tables as well.

All the pages allocated for the p2m are cleared before any use (see 
p2m_create_table and p2m_allocate_table), so there is no point in 
zeroing the page here.

Also, unlike x86, we don't have a pool of pages; we allocate new pages 
directly with alloc_domheap_page. So clearing at teardown would not 
guarantee a zeroed page on the next allocation anyway.

In any case, such a change should be in a separate patch and not hidden 
among big changes.

Regards,

-- 
Julien Grall


* Re: [PATCH 10/18] arm/altp2m: Renamed and extended p2m_alloc_table.
  2016-07-04 11:45 ` [PATCH 10/18] arm/altp2m: Renamed and extended p2m_alloc_table Sergej Proskurin
@ 2016-07-04 18:43   ` Julien Grall
  2016-07-05 13:56     ` Sergej Proskurin
  0 siblings, 1 reply; 126+ messages in thread
From: Julien Grall @ 2016-07-04 18:43 UTC (permalink / raw)
  To: Sergej Proskurin, xen-devel; +Cc: Stefano Stabellini

Hello Sergej,

On 04/07/16 12:45, Sergej Proskurin wrote:
> +int p2m_table_init(struct domain *d)
> +{
> +    int i = 0;
> +    int rc = -ENOMEM;
> +    struct p2m_domain *p2m = p2m_get_hostp2m(d);
> +
> +    spin_lock(&p2m->lock);
> +
> +    rc = p2m_alloc_table(p2m);
> +    if ( rc != 0 )
> +        goto out;
> +
> +    d->arch.vttbr = d->arch.p2m.vttbr.vttbr;
> +
> +    /*
> +     * Make sure that all TLBs corresponding to the new VMID are flushed
> +     * before using it.
>        */
>       flush_tlb_domain(d);
>
>       spin_unlock(&p2m->lock);
>
> -    return 0;
> +    if ( hvm_altp2m_supported() )
> +    {
> +        /* Init alternate p2m data */
> +        for ( i = 0; i < MAX_ALTP2M; i++ )
> +        {
> +            d->arch.altp2m_vttbr[i] = INVALID_MFN;
> +            rc = p2m_alloc_table(d->arch.altp2m_p2m[i]);

Why do we need to allocate all the altp2m root page tables at domain 
creation? This wastes up to 80KB per domain (two root pages for each of 
the 10 altp2m views), even if the domain never uses altp2m at all.
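[Editorial note: the lazy scheme implied by this question can be sketched as below. All names (`altp2m_get_root`, the 4 KiB "root table") are simplified stand-ins, not the series' actual API; the point is only that a view's root table is allocated on first use, so an unused altp2m costs nothing.]

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

#define MAX_ALTP2M 10

/* Hypothetical sketch: root tables start out NULL and are allocated
 * the first time a view is actually used. The "root table" is modelled
 * as an opaque 4 KiB allocation. */
struct altp2m_domain {
    void *altp2m_root[MAX_ALTP2M];
};

static void *altp2m_get_root(struct altp2m_domain *d, unsigned int idx)
{
    if ( idx >= MAX_ALTP2M )
        return NULL;

    if ( !d->altp2m_root[idx] )
        d->altp2m_root[idx] = calloc(1, 4096); /* allocate on first use */

    return d->altp2m_root[idx];
}
```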

> +            if ( rc != 0 )
> +                goto out;
> +        }
> +
> +        d->arch.altp2m_active = 0;
> +    }
> +
> +out:
> +    return rc;
>   }
>
>   #define MAX_VMID 256
> diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
> index 783db5c..451b097 100644
> --- a/xen/include/asm-arm/p2m.h
> +++ b/xen/include/asm-arm/p2m.h
> @@ -171,7 +171,7 @@ int relinquish_p2m_mapping(struct domain *d);
>    *
>    * Returns 0 for success or -errno.
>    */
> -int p2m_alloc_table(struct domain *d);
> +int p2m_table_init(struct domain *d);
>
>   /* Context switch */
>   void p2m_save_state(struct vcpu *p);
>

Regards,

-- 
Julien Grall


* Re: [PATCH 11/18] arm/altp2m: Make flush_tlb_domain ready for altp2m.
  2016-07-04 11:45 ` [PATCH 11/18] arm/altp2m: Make flush_tlb_domain ready for altp2m Sergej Proskurin
  2016-07-04 12:30   ` Sergej Proskurin
@ 2016-07-04 20:32   ` Julien Grall
  2016-07-05 14:48     ` Sergej Proskurin
  1 sibling, 1 reply; 126+ messages in thread
From: Julien Grall @ 2016-07-04 20:32 UTC (permalink / raw)
  To: Sergej Proskurin, xen-devel; +Cc: Stefano Stabellini

Hello Sergej,

On 04/07/2016 12:45, Sergej Proskurin wrote:
> This commit makes sure that a TLB flush for a domain covers all of the
> associated altp2m views. If a domain other than the currently active
> domain shall have its TLBs flushed, the implementation loops over the
> VTTBRs of the domain's per-vCPU altp2m mappings and flushes the TLBs for
> each of them. This way, a change to any of the altp2m mappings is taken
> into account. Note that the domain whose TLBs are to be flushed is not
> locked at this point.
>
> Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
> ---
> Cc: Stefano Stabellini <sstabellini@kernel.org>
> Cc: Julien Grall <julien.grall@arm.com>
> ---
>  xen/arch/arm/p2m.c | 71 ++++++++++++++++++++++++++++++++++++++++++++++++------
>  1 file changed, 63 insertions(+), 8 deletions(-)
>
> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> index 7e721f9..019f10e 100644
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -15,6 +15,8 @@
>  #include <asm/hardirq.h>
>  #include <asm/page.h>
>
> +#include <asm/altp2m.h>
> +
>  #ifdef CONFIG_ARM_64
>  static unsigned int __read_mostly p2m_root_order;
>  static unsigned int __read_mostly p2m_root_level;
> @@ -79,12 +81,41 @@ void dump_p2m_lookup(struct domain *d, paddr_t addr)
>                   P2M_ROOT_LEVEL, P2M_ROOT_PAGES);
>  }
>
> +static uint64_t p2m_get_altp2m_vttbr(struct vcpu *v)
> +{
> +    struct domain *d = v->domain;
> +    uint16_t index = vcpu_altp2m(v).p2midx;
> +
> +    if ( index == INVALID_ALTP2M )
> +        return INVALID_MFN;
> +
> +    BUG_ON(index >= MAX_ALTP2M);
> +
> +    return d->arch.altp2m_vttbr[index];
> +}
> +
> +static void p2m_load_altp2m_VTTBR(struct vcpu *v)

Please try to share code where possible. For instance, a big part of 
this helper is similar to p2m_load_VTTBR; the two could be merged if the 
helper took the p2m rather than the VTTBR directly.
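[Editorial note: the code sharing suggested here can be sketched as a single loader that takes a p2m, host or altp2m alike. The sysreg write is modelled by a plain variable; in Xen this would be a WRITE_SYSREG64(..., VTTBR_EL2) followed by an isb(). All names here are illustrative stand-ins.]

```c
#include <assert.h>
#include <stdint.h>

/* Stand-in for the VTTBR_EL2 system register. */
static uint64_t fake_vttbr_el2;

struct p2m {
    uint64_t vttbr;
};

/* One loader for both p2m classes: the caller picks the p2m, the
 * helper performs the (modelled) register write. */
static void p2m_load_vttbr(const struct p2m *p2m)
{
    fake_vttbr_el2 = p2m->vttbr; /* WRITE_SYSREG64(...) + isb() in Xen */
}
```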

> +{
> +    struct domain *d = v->domain;
> +    uint64_t vttbr = p2m_get_altp2m_vttbr(v);
> +
> +    if ( is_idle_domain(d) )
> +        return;
> +
> +    BUG_ON(vttbr == INVALID_MFN);
> +    WRITE_SYSREG64(vttbr, VTTBR_EL2);
> +
> +    isb(); /* Ensure update is visible */
> +}
> +
>  static void p2m_load_VTTBR(struct domain *d)
>  {
>      if ( is_idle_domain(d) )
>          return;
> +
>      BUG_ON(!d->arch.vttbr);
>      WRITE_SYSREG64(d->arch.vttbr, VTTBR_EL2);
> +

Spurious changes.

>      isb(); /* Ensure update is visible */
>  }
>
> @@ -101,7 +132,11 @@ void p2m_restore_state(struct vcpu *n)
>      WRITE_SYSREG(hcr & ~HCR_VM, HCR_EL2);
>      isb();
>
> -    p2m_load_VTTBR(n->domain);
> +    if ( altp2m_active(n->domain) )

This would benefit from an unlikely() (maybe within altp2m_active).

> +        p2m_load_altp2m_VTTBR(n);
> +    else
> +        p2m_load_VTTBR(n->domain);
> +
>      isb();
>
>      if ( is_32bit_domain(n->domain) )
> @@ -119,22 +154,42 @@ void p2m_restore_state(struct vcpu *n)
>  void flush_tlb_domain(struct domain *d)
>  {
>      unsigned long flags = 0;
> +    struct vcpu *v = NULL;
>
> -    /* Update the VTTBR if necessary with the domain d. In this case,
> -     * it's only necessary to flush TLBs on every CPUs with the current VMID
> -     * (our domain).
> +    /*
> +     * Update the VTTBR if necessary with the domain d. In this case, it is only
> +     * necessary to flush TLBs on every CPUs with the current VMID (our
> +     * domain).
>       */
>      if ( d != current->domain )
>      {
>          local_irq_save(flags);
> -        p2m_load_VTTBR(d);
> -    }
>
> -    flush_tlb();
> +        /* If altp2m is active, update VTTBR and flush TLBs of every VCPU */
> +        if ( altp2m_active(d) )
> +        {
> +            for_each_vcpu( d, v )
> +            {
> +                p2m_load_altp2m_VTTBR(v);
> +                flush_tlb();
> +            }
> +        }
> +        else
> +        {
> +            p2m_load_VTTBR(d);
> +            flush_tlb();
> +        }

Why do you need to do such a thing? If the VMID is the same, a single 
call to flush_tlb() will nuke all the entries for the given TLBs.

If the VMID is not shared, then I am not even sure why you would need to 
flush the TLBs for all the altp2ms.

I have looked at Xen with this series applied and noticed that when you 
remove a page from the hostp2m, the mapping in the altp2m is not 
removed. So the guest may use a page that has already been freed. Did I 
miss something?

Regards,

-- 
Julien Grall


* Re: [PATCH 13/18] arm/altp2m: Make get_page_from_gva ready for altp2m.
  2016-07-04 11:45 ` [PATCH 13/18] arm/altp2m: Make get_page_from_gva ready for altp2m Sergej Proskurin
@ 2016-07-04 20:34   ` Julien Grall
  2016-07-05 20:31     ` Sergej Proskurin
  0 siblings, 1 reply; 126+ messages in thread
From: Julien Grall @ 2016-07-04 20:34 UTC (permalink / raw)
  To: Sergej Proskurin, xen-devel; +Cc: Stefano Stabellini

Hello Sergej,

On 04/07/2016 12:45, Sergej Proskurin wrote:
> diff --git a/xen/arch/arm/guestcopy.c b/xen/arch/arm/guestcopy.c
> index ce1c3c3..413125f 100644
> --- a/xen/arch/arm/guestcopy.c
> +++ b/xen/arch/arm/guestcopy.c
> @@ -17,7 +17,7 @@ static unsigned long raw_copy_to_guest_helper(void *to, const void *from,
>          unsigned size = min(len, (unsigned)PAGE_SIZE - offset);
>          struct page_info *page;
>
> -        page = get_page_from_gva(current->domain, (vaddr_t) to, GV2M_WRITE);
> +        page = get_page_from_gva(current, (vaddr_t) to, GV2M_WRITE);
>          if ( page == NULL )
>              return len;
>
> @@ -64,7 +64,7 @@ unsigned long raw_clear_guest(void *to, unsigned len)
>          unsigned size = min(len, (unsigned)PAGE_SIZE - offset);
>          struct page_info *page;
>
> -        page = get_page_from_gva(current->domain, (vaddr_t) to, GV2M_WRITE);
> +        page = get_page_from_gva(current, (vaddr_t) to, GV2M_WRITE);
>          if ( page == NULL )
>              return len;
>
> @@ -96,7 +96,7 @@ unsigned long raw_copy_from_guest(void *to, const void __user *from, unsigned le
>          unsigned size = min(len, (unsigned)(PAGE_SIZE - offset));
>          struct page_info *page;
>
> -        page = get_page_from_gva(current->domain, (vaddr_t) from, GV2M_READ);
> +        page = get_page_from_gva(current, (vaddr_t) from, GV2M_READ);
>          if ( page == NULL )
>              return len;
>
> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> index 9c8fefd..23b482f 100644
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -1829,10 +1829,11 @@ err:
>      return page;
>  }
>
> -struct page_info *get_page_from_gva(struct domain *d, vaddr_t va,
> +struct page_info *get_page_from_gva(struct vcpu *v, vaddr_t va,
>                                      unsigned long flags)
>  {
> -    struct p2m_domain *p2m = &d->arch.p2m;
> +    struct domain *d = v->domain;
> +    struct p2m_domain *p2m = altp2m_active(d) ? p2m_get_altp2m(v) : p2m_get_hostp2m(d);
>      struct page_info *page = NULL;
>      paddr_t maddr = 0;
>      int rc;
> @@ -1844,17 +1845,23 @@ struct page_info *get_page_from_gva(struct domain *d, vaddr_t va,
>          unsigned long irq_flags;
>
>          local_irq_save(irq_flags);
> -        p2m_load_VTTBR(d);
> +
> +        if ( altp2m_active(d) )
> +            p2m_load_altp2m_VTTBR(v);
> +        else
> +            p2m_load_VTTBR(d);
>
>          rc = gvirt_to_maddr(va, &maddr, flags);
>
> -        p2m_load_VTTBR(current->domain);
> +        if ( altp2m_active(current->domain) )
> +            p2m_load_altp2m_VTTBR(current);
> +        else
> +            p2m_load_VTTBR(current->domain);
> +

This could be abstracted with a new helper to load the VTTBR for a given 
vCPU.
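[Editorial note: the per-vCPU helper asked for here can be sketched as a single dispatch point that picks the altp2m VTTBR when altp2m is active and the host VTTBR otherwise. All names and types below are simplified stand-ins, not the series' actual structures.]

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Minimal model of the state a vCPU-level helper would consult. */
struct vcpu_model {
    bool altp2m_active;
    uint64_t host_vttbr;
    uint64_t altp2m_vttbr;
};

/* One place that decides which VTTBR to load for a given vCPU, so
 * callers like get_page_from_gva need no open-coded branching. */
static uint64_t vttbr_for_vcpu(const struct vcpu_model *v)
{
    return v->altp2m_active ? v->altp2m_vttbr : v->host_vttbr;
}
```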

>          local_irq_restore(irq_flags);
>      }
>      else
> -    {
>          rc = gvirt_to_maddr(va, &maddr, flags);
> -    }
>
>      if ( rc )
>          goto err;

Regards,

-- 
Julien Grall


* Re: [PATCH 15/18] arm/altp2m: Add altp2m paging mechanism.
  2016-07-04 11:45 ` [PATCH 15/18] arm/altp2m: Add altp2m paging mechanism Sergej Proskurin
@ 2016-07-04 20:53   ` Julien Grall
  2016-07-06  8:33     ` Sergej Proskurin
  0 siblings, 1 reply; 126+ messages in thread
From: Julien Grall @ 2016-07-04 20:53 UTC (permalink / raw)
  To: Sergej Proskurin, xen-devel; +Cc: Stefano Stabellini, Tamas K Lengyel

(CC Tamas)

Hello Sergej,

On 04/07/2016 12:45, Sergej Proskurin wrote:
> This commit adds the function p2m_altp2m_lazy_copy, which implements the
> altp2m paging mechanism: on a 2nd-stage instruction or data access
> violation, the hostp2m's mapping is lazily copied into the currently
> active altp2m view. Every altp2m violation generates a vm_event.

I have been working on cleaning up the abort path (see [1]). Please 
rebase your code on top of it.

> Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
> ---
> Cc: Stefano Stabellini <sstabellini@kernel.org>
> Cc: Julien Grall <julien.grall@arm.com>
> ---

[...]

> +/*
> + * If the fault is for a not present entry:
> + *     if the entry in the host p2m has a valid mfn, copy it and retry
> + *     else indicate that outer handler should handle fault
> + *
> + * If the fault is for a present entry:
> + *     indicate that outer handler should handle fault
> + */
> +bool_t p2m_altp2m_lazy_copy(struct vcpu *v, paddr_t gpa,
> +                            unsigned long gva, struct npfec npfec,
> +                            struct p2m_domain **ap2m)
> +{
> +    struct domain *d = v->domain;
> +    struct p2m_domain *hp2m = p2m_get_hostp2m(v->domain);
> +    p2m_type_t p2mt;
> +    xenmem_access_t xma;
> +    paddr_t maddr, mask = 0;
> +    gfn_t gfn = _gfn(paddr_to_pfn(gpa));
> +    unsigned int level;
> +    unsigned long mattr;
> +    int rc = 0;
> +
> +    static const p2m_access_t memaccess[] = {
> +#define ACCESS(ac) [XENMEM_access_##ac] = p2m_access_##ac
> +        ACCESS(n),
> +        ACCESS(r),
> +        ACCESS(w),
> +        ACCESS(rw),
> +        ACCESS(x),
> +        ACCESS(rx),
> +        ACCESS(wx),
> +        ACCESS(rwx),
> +        ACCESS(rx2rw),
> +        ACCESS(n2rwx),
> +#undef ACCESS
> +    };
> +
> +    *ap2m = p2m_get_altp2m(v);
> +    if ( *ap2m == NULL)
> +        return 0;
> +
> +    /* Check if entry is part of the altp2m view */
> +    spin_lock(&(*ap2m)->lock);
> +    maddr = __p2m_lookup(*ap2m, gpa, NULL);
> +    spin_unlock(&(*ap2m)->lock);
> +    if ( maddr != INVALID_PADDR )
> +        return 0;
> +
> +    /* Check if entry is part of the host p2m view */
> +    spin_lock(&hp2m->lock);
> +    maddr = __p2m_lookup(hp2m, gpa, &p2mt);
> +    if ( maddr == INVALID_PADDR )
> +        goto out;
> +
> +    rc = __p2m_get_mem_access(hp2m, gfn, &xma);
> +    if ( rc )
> +        goto out;
> +
> +    rc = p2m_get_gfn_level_and_attr(hp2m, gpa, &level, &mattr);
> +    if ( rc )
> +        goto out;

Can we introduce a function which returns the xma, mfn, order, and 
attribute at once? It would avoid walking the p2m three times, which is 
really expensive on ARMv7 because the p2m is not mapped in Xen's virtual 
address space.
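[Editorial note: the single-walk interface suggested here could look roughly like the sketch below. The walk itself is modelled by a hard-coded one-entry table, and every name (`p2m_lookup_result_t`, `p2m_lookup_full`, the encodings of type/access) is hypothetical.]

```c
#include <assert.h>
#include <stdint.h>

/* Everything the fault path needs, filled by one p2m walk instead
 * of three separate lookups. */
typedef struct {
    uint64_t mfn;
    unsigned int type;   /* e.g. p2m_ram_rw */
    unsigned int access; /* e.g. p2m_access_rwx */
    unsigned int level;  /* translation level of the entry */
} p2m_lookup_result_t;

static int p2m_lookup_full(uint64_t gfn, p2m_lookup_result_t *res)
{
    /* Modelled walk: pretend gfn 1 maps to mfn 42 at level 3. */
    if ( gfn != 1 )
        return -1; /* not mapped */

    res->mfn = 42;
    res->type = 0;
    res->access = 7;
    res->level = 3;

    return 0;
}
```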

> +    spin_unlock(&hp2m->lock);
> +
> +    mask = level_masks[level];
> +
> +    rc = apply_p2m_changes(d, *ap2m, INSERT,
> +                           pfn_to_paddr(gfn_x(gfn)) & mask,
> +                           (pfn_to_paddr(gfn_x(gfn)) + level_sizes[level]) & mask,
> +                           maddr & mask, mattr, 0, p2mt,
> +                           memaccess[xma]);

The page associated with the MFN is not locked, so another thread could 
decide to remove the page from the domain, and then the altp2m would 
contain an entry pointing to something that no longer belongs to the 
domain. Note that x86 does the same, so I am not sure why it is 
considered safe there...

> +    if ( rc )
> +    {
> +        gdprintk(XENLOG_ERR, "failed to set entry for %lx -> %lx p2m %lx\n",
> +                (unsigned long)pfn_to_paddr(gfn_x(gfn)), (unsigned long)(maddr), (unsigned long)*ap2m);
> +        domain_crash(hp2m->domain);
> +    }
> +
> +    return 1;
> +
> +out:
> +    spin_unlock(&hp2m->lock);
> +    return 0;
> +}
> +
>  static void p2m_init_altp2m_helper(struct domain *d, unsigned int i)
>  {
>      struct p2m_domain *p2m = d->arch.altp2m_p2m[i];

[...]

> @@ -2429,6 +2460,8 @@ static void do_trap_data_abort_guest(struct cpu_user_regs *regs,
>                                       const union hsr hsr)
>  {
>      const struct hsr_dabt dabt = hsr.dabt;
> +    struct vcpu *v = current;
> +    struct p2m_domain *p2m = NULL;
>      int rc;
>      mmio_info_t info;
>
> @@ -2449,6 +2482,12 @@ static void do_trap_data_abort_guest(struct cpu_user_regs *regs,
>          info.gpa = get_faulting_ipa();
>      else
>      {
> +        /*
> +         * When using altp2m, this flush is required to get rid of old TLB
> +         * entries and use the new, lazily copied, ap2m entries.
> +         */
> +        flush_tlb_local();

Can you give more details on why this flush is required?

> +

Regards,

[1] https://lists.xen.org/archives/html/xen-devel/2016-06/msg02853.html

-- 
Julien Grall


* Re: [PATCH 17/18] arm/altp2m: Adjust debug information to altp2m.
  2016-07-04 11:45 ` [PATCH 17/18] arm/altp2m: Adjust debug information to altp2m Sergej Proskurin
@ 2016-07-04 20:58   ` Julien Grall
  2016-07-06  8:41     ` Sergej Proskurin
  0 siblings, 1 reply; 126+ messages in thread
From: Julien Grall @ 2016-07-04 20:58 UTC (permalink / raw)
  To: Sergej Proskurin, xen-devel; +Cc: Stefano Stabellini

Hello Sergej,

On 04/07/2016 12:45, Sergej Proskurin wrote:
> Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
> ---
> Cc: Stefano Stabellini <sstabellini@kernel.org>
> Cc: Julien Grall <julien.grall@arm.com>
> ---
>  xen/arch/arm/p2m.c | 6 ++++--
>  1 file changed, 4 insertions(+), 2 deletions(-)
>
> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> index 96892a5..de97a12 100644
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -51,7 +51,8 @@ static bool_t p2m_mapping(lpae_t pte)
>
>  void p2m_dump_info(struct domain *d)
>  {
> -    struct p2m_domain *p2m = &d->arch.p2m;
> +    struct vcpu *v = current;

This is wrong: p2m_dump_info can be called with d != current->domain. 
Please look at how the callers may use the function before making any 
modification.

In this case, I think you want to dump the info for the hostp2m and 
every altp2m.
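[Editorial note: "dump the hostp2m and every altp2m" amounts to the loop sketched below. The p2ms are modelled as opaque pointers and "dumping" just counts; `dump_all_p2ms` and the struct layout are illustrative, not the series' API.]

```c
#include <assert.h>
#include <stddef.h>

#define MAX_ALTP2M 10

/* Minimal model: a domain with one hostp2m and up to MAX_ALTP2M
 * optionally initialised altp2m views. */
struct dump_domain {
    void *hostp2m;
    void *altp2m[MAX_ALTP2M];
};

/* Walk the hostp2m first, then every initialised altp2m view, rather
 * than guessing a single p2m from current. Returns how many p2ms
 * were "dumped". */
static unsigned int dump_all_p2ms(const struct dump_domain *d)
{
    unsigned int dumped = 0, i;

    if ( d->hostp2m )
        dumped++; /* dump_p2m(d->hostp2m) in real code */

    for ( i = 0; i < MAX_ALTP2M; i++ )
        if ( d->altp2m[i] )
            dumped++; /* dump_p2m(d->altp2m[i]) in real code */

    return dumped;
}
```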

> +    struct p2m_domain *p2m = altp2m_active(d) ? p2m_get_altp2m(v) : p2m_get_hostp2m(d);
>
>      spin_lock(&p2m->lock);
>      printk("p2m mappings for domain %d (vmid %d):\n",
> @@ -71,7 +72,8 @@ void memory_type_changed(struct domain *d)
>
>  void dump_p2m_lookup(struct domain *d, paddr_t addr)
>  {
> -    struct p2m_domain *p2m = &d->arch.p2m;
> +    struct vcpu *v = current;

Ditto.

> +    struct p2m_domain *p2m = altp2m_active(d) ? p2m_get_altp2m(v) : p2m_get_hostp2m(d);
>
>      printk("dom%d IPA 0x%"PRIpaddr"\n", d->domain_id, addr);
>
>

Regards,

-- 
Julien Grall


* Re: [PATCH 01/18] arm/altp2m: Add cmd-line support for altp2m on ARM.
  2016-07-04 17:56     ` Tamas K Lengyel
@ 2016-07-04 21:08       ` Sergej Proskurin
  0 siblings, 0 replies; 126+ messages in thread
From: Sergej Proskurin @ 2016-07-04 21:08 UTC (permalink / raw)
  To: Tamas K Lengyel, Julien Grall; +Cc: Xen-devel, Stefano Stabellini


On 07/04/2016 07:56 PM, Tamas K Lengyel wrote:
> On Mon, Jul 4, 2016 at 11:42 AM, Julien Grall <julien.grall@arm.com> wrote:
>>
>>
>> On 04/07/16 12:45, Sergej Proskurin wrote:
>>>
>>> The Xen altp2m subsystem is currently supported only on x86-64 based
>>> architectures. By utilizing ARM's virtualization extensions, we intend
>>> to implement altp2m support for ARM architectures and thus further
>>> extend current Virtual Machine Introspection (VMI) capabilities on ARM.
>>>
>>> With this commit, Xen is now able to activate altp2m support on ARM by
>>> means of the command-line argument 'altp2m' (bool).
>>
>>
>> Thinking a bit more about this patch. Why do we need an option to enable
>> altp2m? All the code could be gated by the value of HVM_PARAM_ALTP2M.
>>
> 
> I think the reasoning for having the altp2m boot parameter when it was
> introduced for x86 was that altp2m was considered experimental. IMHO it
> is stable enough now that we can forgo the boot param altogether.
> 
> Tamas
> 

That would definitely remove the need for the altp2m initcall mechanism.


* Re: [PATCH 04/18] arm/altp2m: Add altp2m init/teardown routines.
  2016-07-04 17:44           ` Julien Grall
@ 2016-07-04 21:19             ` Sergej Proskurin
  2016-07-04 21:35               ` Julien Grall
  2016-07-04 21:46               ` Sergej Proskurin
  0 siblings, 2 replies; 126+ messages in thread
From: Sergej Proskurin @ 2016-07-04 21:19 UTC (permalink / raw)
  To: Julien Grall, Andrew Cooper, xen-devel; +Cc: Stefano Stabellini


On 07/04/2016 07:44 PM, Julien Grall wrote:
> 
> 
> On 04/07/16 17:56, Sergej Proskurin wrote:
>> Hi Andrew,
>>
>> On 07/04/2016 06:43 PM, Andrew Cooper wrote:
>>> On 04/07/16 17:40, Sergej Proskurin wrote:
>>>>
>>>>>>    {
>>>>>> -    struct p2m_domain *p2m = &d->arch.p2m;
>>>>>> +    int ret = 0;
>>>>>> +
>>>>>> +    spin_lock_init(&p2m->lock);
>>>>>> +    INIT_PAGE_LIST_HEAD(&p2m->pages);
>>>>>> +
>>>>>> +    spin_lock(&p2m->lock);
>>>>>> +
>>>>>> +    p2m->domain = d;
>>>>>> +    p2m->access_required = false;
>>>>>> +    p2m->mem_access_enabled = false;
>>>>>> +    p2m->default_access = p2m_access_rwx;
>>>>>> +    p2m->p2m_class = p2m_host;
>>>>>> +    p2m->root = NULL;
>>>>>> +
>>>>>> +    /* Adopt VMID of the associated domain */
>>>>>> +    p2m->vmid = d->arch.p2m.vmid;
>>>>> It looks to me like re-using the same VMID will require more TLB
>>>>> flushes (such as when a vCPU is migrated to another physical CPU). So
>>>>> could you explain why you decided to re-use the same VMID?
>>>>>
>>>> Please correct me if I am wrong, but I associate a VMID with an entire
>>>> domain. Since, the altp2m view still belongs to the same domain
>>>> (p2m_init_one is called only from p2m_init_altp2m), the code re-uses
>>>> the
>>>> old VMID.
>>>
>>> (I am not an ARM expert but) looking into VMIDs from the last time, they
>>> are the TLB tag for the address space in use.
>>>
>>> Does ARM have shared TLBs between multiple cores?  If so, you must use a
>>> separate VMID, otherwise an ALTP2M used by one vcpu could cause a
>>> separate vcpu with a different ALTP2M to reuse the wrong translation.
>>>
>>> ~Andrew
>>>
>>
>> You're absolutely correct. However, on every VMENTRY Xen explicitly
>> flushes the TLBs of the currently active domain (and with it, of the
>> currently active (a)p2m table) and hence it should not result in an
>> issue.
> 
> VMENTRY is x86 not ARM. So are you sure you looked at the correct code?
> 
> Regards,
> 

This is true. I just use the term VMENTER to describe transitions to
guests on both x86 and ARM. In ./xen/arch/arm/domain.c the function
ctxt_switch_to calls p2m_restore_state on every context switch, which in
turn loads the VTTBR associated with the domain and flushes the TLBs.

Best regards,
Sergej


* Re: [PATCH 04/18] arm/altp2m: Add altp2m init/teardown routines.
  2016-07-04 21:19             ` Sergej Proskurin
@ 2016-07-04 21:35               ` Julien Grall
  2016-07-04 21:46               ` Sergej Proskurin
  1 sibling, 0 replies; 126+ messages in thread
From: Julien Grall @ 2016-07-04 21:35 UTC (permalink / raw)
  To: Sergej Proskurin, Andrew Cooper, xen-devel; +Cc: Stefano Stabellini

Hello Sergej,

On 04/07/2016 22:19, Sergej Proskurin wrote:
> On 07/04/2016 07:44 PM, Julien Grall wrote:
>> On 04/07/16 17:56, Sergej Proskurin wrote:
>>> On 07/04/2016 06:43 PM, Andrew Cooper wrote:
>>>> On 04/07/16 17:40, Sergej Proskurin wrote:
>>>>>
>>>>>>>    {
>>>>>>> -    struct p2m_domain *p2m = &d->arch.p2m;
>>>>>>> +    int ret = 0;
>>>>>>> +
>>>>>>> +    spin_lock_init(&p2m->lock);
>>>>>>> +    INIT_PAGE_LIST_HEAD(&p2m->pages);
>>>>>>> +
>>>>>>> +    spin_lock(&p2m->lock);
>>>>>>> +
>>>>>>> +    p2m->domain = d;
>>>>>>> +    p2m->access_required = false;
>>>>>>> +    p2m->mem_access_enabled = false;
>>>>>>> +    p2m->default_access = p2m_access_rwx;
>>>>>>> +    p2m->p2m_class = p2m_host;
>>>>>>> +    p2m->root = NULL;
>>>>>>> +
>>>>>>> +    /* Adopt VMID of the associated domain */
>>>>>>> +    p2m->vmid = d->arch.p2m.vmid;
>>>>>> It looks to me like re-using the same VMID will require more TLB
>>>>>> flushes (such as when a vCPU is migrated to another physical CPU). So
>>>>>> could you explain why you decided to re-use the same VMID?
>>>>>>
>>>>> Please correct me if I am wrong, but I associate a VMID with an entire
>>>>> domain. Since, the altp2m view still belongs to the same domain
>>>>> (p2m_init_one is called only from p2m_init_altp2m), the code re-uses
>>>>> the
>>>>> old VMID.
>>>>
>>>> (I am not an ARM expert but) looking into VMIDs from the last time, they
>>>> are the TLB tag for the address space in use.
>>>>
>>>> Does ARM have shared TLBs between multiple cores?  If so, you must use a
>>>> separate VMID, otherwise an ALTP2M used by one vcpu could cause a
>>>> separate vcpu with a different ALTP2M to reuse the wrong translation.
>>>>
>>>> ~Andrew
>>>>
>>>
>>> You're absolutely correct. However, on every VMENTRY Xen explicitly
>>> flushes the TLBs of the currently active domain (and with it, of the
>>> currently active (a)p2m table) and hence it should not result in an
>>> issue.
>>
>> VMENTRY is x86 not ARM. So are you sure you looked at the correct code?
>>
>> Regards,
>>
>
> This is true. I just use the term VMENTER to describe transitions to
> guests on both x86 and ARM. In ./xen/arch/arm/domain.c the function
> ctxt_switch_to calls p2m_restore_state on every context switch, which in
> turn loads the VTTBR associated with the domain and flushes the TLBs.

Really? I have this patch series applied on top of staging and there is 
no TLB flush instruction in p2m_restore_state nor p2m_load*.

Regards,

-- 
Julien Grall


* Re: [PATCH 04/18] arm/altp2m: Add altp2m init/teardown routines.
  2016-07-04 18:18         ` Julien Grall
@ 2016-07-04 21:37           ` Sergej Proskurin
  0 siblings, 0 replies; 126+ messages in thread
From: Sergej Proskurin @ 2016-07-04 21:37 UTC (permalink / raw)
  To: Julien Grall, Andrew Cooper, xen-devel
  Cc: Andre Przywara, Stefano Stabellini, Steve Capper



On 07/04/2016 08:18 PM, Julien Grall wrote:
> 
> 
> On 04/07/16 17:43, Andrew Cooper wrote:
>> On 04/07/16 17:40, Sergej Proskurin wrote:
>>>
>>>>>    {
>>>>> -    struct p2m_domain *p2m = &d->arch.p2m;
>>>>> +    int ret = 0;
>>>>> +
>>>>> +    spin_lock_init(&p2m->lock);
>>>>> +    INIT_PAGE_LIST_HEAD(&p2m->pages);
>>>>> +
>>>>> +    spin_lock(&p2m->lock);
>>>>> +
>>>>> +    p2m->domain = d;
>>>>> +    p2m->access_required = false;
>>>>> +    p2m->mem_access_enabled = false;
>>>>> +    p2m->default_access = p2m_access_rwx;
>>>>> +    p2m->p2m_class = p2m_host;
>>>>> +    p2m->root = NULL;
>>>>> +
>>>>> +    /* Adopt VMID of the associated domain */
>>>>> +    p2m->vmid = d->arch.p2m.vmid;
>>>> It looks to me like re-using the same VMID will require more TLB
>>>> flushes (such as when a vCPU is migrated to another physical CPU). So
>>>> could you explain why you decided to re-use the same VMID?
>>>>
>>> Please correct me if I am wrong, but I associate a VMID with an entire
>>> domain. Since, the altp2m view still belongs to the same domain
>>> (p2m_init_one is called only from p2m_init_altp2m), the code re-uses the
>>> old VMID.
>>
>> (I am not an ARM expert but) looking into VMIDs from the last time, they
>> are the TLB tag for the address space in use.
> 
> Correct.
> 
>>
>> Does ARM have shared TLBs between multiple cores?  If so, you must use a
>> separate VMID, otherwise an ALTP2M used by one vcpu could cause a
>> separate vcpu with a different ALTP2M to reuse the wrong translation.
> 
> From the ARM ARM, I cannot rule out that TLBs can be shared between
> multiple cores (CC'ing a couple of ARM folks to confirm).
> 
> Nevertheless, using a different VMID per P2M would avoid to care about
> flushing when moving the vCPU around.
> 

I see what you mean. This would definitely improve context switch
performance. That is, instead of adopting the hostp2m's VMID during
altp2m table initialization, I will allocate a new VMID for every altp2m
table, as you said before. This would simply eliminate the need for
explicit TLB flushing. Thank you very much.
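[Editorial note: the per-p2m VMID allocation agreed on here can be sketched with a trivial bitmap allocator. Each p2m (host or altp2m) gets its own tag, so stale TLB entries of one view can never be hit through another and no extra flush is needed on switch. `vmid_alloc`/`vmid_free` are hypothetical stand-ins, and reserving VMID 0 is an assumption of this sketch.]

```c
#include <assert.h>
#include <stdint.h>

#define MAX_VMID 256

static uint8_t vmid_in_use[MAX_VMID];

/* Hand out the lowest free VMID; 0 is reserved for the idle/invalid
 * case in this sketch. Returns -1 when the space is exhausted. */
static int vmid_alloc(void)
{
    int vmid;

    for ( vmid = 1; vmid < MAX_VMID; vmid++ )
        if ( !vmid_in_use[vmid] )
        {
            vmid_in_use[vmid] = 1;
            return vmid;
        }

    return -1;
}

static void vmid_free(int vmid)
{
    if ( vmid > 0 && vmid < MAX_VMID )
        vmid_in_use[vmid] = 0;
}
```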

> Regards,
> 

Best regards,
Sergej


* Re: [PATCH 04/18] arm/altp2m: Add altp2m init/teardown routines.
  2016-07-04 21:19             ` Sergej Proskurin
  2016-07-04 21:35               ` Julien Grall
@ 2016-07-04 21:46               ` Sergej Proskurin
  1 sibling, 0 replies; 126+ messages in thread
From: Sergej Proskurin @ 2016-07-04 21:46 UTC (permalink / raw)
  To: xen-devel, Andrew Cooper, Julien Grall; +Cc: Stefano Stabellini



On 07/04/2016 11:19 PM, Sergej Proskurin wrote:
> 
> On 07/04/2016 07:44 PM, Julien Grall wrote:
>>
>>
>> On 04/07/16 17:56, Sergej Proskurin wrote:
>>> Hi Andrew,
>>>
>>> On 07/04/2016 06:43 PM, Andrew Cooper wrote:
>>>> On 04/07/16 17:40, Sergej Proskurin wrote:
>>>>>
>>>>>>>    {
>>>>>>> -    struct p2m_domain *p2m = &d->arch.p2m;
>>>>>>> +    int ret = 0;
>>>>>>> +
>>>>>>> +    spin_lock_init(&p2m->lock);
>>>>>>> +    INIT_PAGE_LIST_HEAD(&p2m->pages);
>>>>>>> +
>>>>>>> +    spin_lock(&p2m->lock);
>>>>>>> +
>>>>>>> +    p2m->domain = d;
>>>>>>> +    p2m->access_required = false;
>>>>>>> +    p2m->mem_access_enabled = false;
>>>>>>> +    p2m->default_access = p2m_access_rwx;
>>>>>>> +    p2m->p2m_class = p2m_host;
>>>>>>> +    p2m->root = NULL;
>>>>>>> +
>>>>>>> +    /* Adopt VMID of the associated domain */
>>>>>>> +    p2m->vmid = d->arch.p2m.vmid;
>>>>>> It looks to me like re-using the same VMID will require more TLB
>>>>>> flushes (such as when a vCPU is migrated to another physical CPU). So
>>>>>> could you explain why you decided to re-use the same VMID?
>>>>>>
>>>>> Please correct me if I am wrong, but I associate a VMID with an entire
>>>>> domain. Since, the altp2m view still belongs to the same domain
>>>>> (p2m_init_one is called only from p2m_init_altp2m), the code re-uses
>>>>> the
>>>>> old VMID.
>>>>
>>>> (I am not an ARM expert but) looking into VMIDs from the last time, they
>>>> are the TLB tag for the address space in use.
>>>>
>>>> Does ARM have shared TLBs between multiple cores?  If so, you must use a
>>>> separate VMID, otherwise an ALTP2M used by one vcpu could cause a
>>>> separate vcpu with a different ALTP2M to reuse the wrong translation.
>>>>
>>>> ~Andrew
>>>>
>>>
>>> You're absolutely correct. However, on every VMENTRY Xen explicitly
>>> flushes the TLBs of the currently active domain (and with it, of the
>>> currently active (a)p2m table) and hence it should not result in an
>>> issue.
>>
>> VMENTRY is x86 not ARM. So are you sure you looked at the correct code?
>>
>> Regards,
>>
> 
> This is true. I just use the term VMENTER for describing transitions to
> guests on both x86 and ARM. In ./xen/arch/arm/domain.c the function
> ctxt_switch_to calls p2m_restore_state on every context switch, which in
> turn loads the VTTBR associated to the domain and flushes the TLBs.
> 

My bad: after loading the VTTBR, the TLBs are not explicitly flushed, as
I claimed before. That is, using a dedicated VMID per altp2m view is
definitely required. I will fix this. Thank you very much.
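Once each altp2m view carries its own TLB tag, the allocation reduces to taking a fresh VMID per view instead of copying the domain's. The following standalone sketch illustrates the idea; it is not Xen's actual p2m_alloc_vmid, and the flat 8-bit in-use array (the VMID width on ARM without VMID16 support) is purely illustrative:

```c
#include <assert.h>
#include <stdint.h>

/* Simplified model: an 8-bit VMID space. */
#define MAX_VMID 256

static uint8_t vmid_in_use[MAX_VMID];

/* Hand out the lowest free VMID; VMID 0 stays reserved. */
static int vmid_alloc(void)
{
    int vmid;

    for ( vmid = 1; vmid < MAX_VMID; vmid++ )
    {
        if ( !vmid_in_use[vmid] )
        {
            vmid_in_use[vmid] = 1;
            return vmid;
        }
    }

    return -1; /* VMID space exhausted */
}

static void vmid_free(int vmid)
{
    assert(vmid > 0 && vmid < MAX_VMID);
    vmid_in_use[vmid] = 0;
}
```

With one VMID per view, switching between views no longer requires flushing the TLBs of the previously active view.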

Best regards,
Sergej

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 126+ messages in thread

* Re: [PATCH 04/18] arm/altp2m: Add altp2m init/teardown routines.
  2016-07-04 18:30       ` Julien Grall
@ 2016-07-04 21:56         ` Sergej Proskurin
  0 siblings, 0 replies; 126+ messages in thread
From: Sergej Proskurin @ 2016-07-04 21:56 UTC (permalink / raw)
  To: Julien Grall, xen-devel; +Cc: Stefano Stabellini



On 07/04/2016 08:30 PM, Julien Grall wrote:
> 
> 
> On 04/07/16 17:40, Sergej Proskurin wrote:
>> On 07/04/2016 05:17 PM, Julien Grall wrote:
>>> On 04/07/16 12:45, Sergej Proskurin wrote:
> 
> [...]
> 
>>>> +static struct p2m_domain *p2m_init_one(struct domain *d)
>>>> +{
>>>> +    struct p2m_domain *p2m = xzalloc(struct p2m_domain);
>>>> +
>>>> +    if ( !p2m )
>>>> +        return NULL;
>>>> +
>>>> +    if ( p2m_initialise(d, p2m) )
>>>> +        goto free_p2m;
>>>> +
>>>> +    return p2m;
>>>> +
>>>> +free_p2m:
>>>> +    xfree(p2m);
>>>> +    return NULL;
>>>> +}
>>>> +
>>>> +static void p2m_teardown_hostp2m(struct domain *d)
>>>
>>> Why does p2m_teardown_hostp2m not use p2m_teardown_one to teardown the
>>> p2m? Assuming xfree(p2m) is move out of the function.
>>>
>>
>> I believe you mean p2m_free_one: The p2m_teardown_hostp2m might use the
>> same function but would require p2m_free_vmid(d) to be called
>> outside of p2m_free_one as well. This would require another acquisition
>> of the p2m->lock. Just to be sure, I did not want to split the teardown
>> process into two atomic executions. If you believe that it is safe to
>> do, I will gladly change the code and re-use the code from p2m_free_one
>> in p2m_teardown_hostp2m.
> 
> Looking at the caller of p2m_teardown, I don't think the lock is
> necessary because nobody can use the P2M anymore when this function is
> called.
> 
> [...]
> 

Ok. I will adapt the code as discussed. Thank you.

>>>> +        {
>>>> +            mfn = _mfn(page_to_mfn(pg));
>>>> +            clear_domain_page(mfn);
>>>> +            free_domheap_page(pg);
>>>> +        }
>>>>
>>>> +    for ( i = 0; i < P2M_ROOT_PAGES; i++ )
>>>> +    {
>>>> +        mfn = _mfn(page_to_mfn(p2m->root) + i);
>>>> +        clear_domain_page(mfn);
>>>> +    }
>>>> +    free_domheap_pages(p2m->root, P2M_ROOT_ORDER);
>>>>        p2m->root = NULL;
>>>>
>>>>        p2m_free_vmid(d);
>>>> @@ -1422,7 +1506,7 @@ void p2m_teardown(struct domain *d)
>>>>        spin_unlock(&p2m->lock);
>>>>    }
>>>>
>>>> -int p2m_init(struct domain *d)
>>>> +static int p2m_init_hostp2m(struct domain *d)
>>>
>>> Why does p2m_init_hostp2m not use p2m_init_one to initialize the p2m?
>>>
>>
>> We dynamically allocate altp2ms. Also, the initialization of both the
>> altp2ms and hostp2m slightly differs (see VMID allocation). I could
>> rewrite the initialization function to be used for both the hostp2m and
>> altp2m structs. Especially, if you say that we do not associate domains
>> with VMIDs (see your upper question).
> 
> I always prefer to see the code rewritten rather than duplicating code.
> The latter makes it harder to fix bugs when the code is spread across
> multiple places.
> 
> [...]
> 

Makes sense, especially since we have now established that we really
need to allocate a new VMID per altp2m view. I will rewrite the
functionality and remove the duplicated code. Thank you.
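The de-duplicated init path discussed above could look roughly like this standalone sketch (names simplified, and the VMID counter is a stand-in for the real bitmap allocator): once every view owns its own VMID, the only difference left between host and alternate p2m initialization is the class tag.

```c
#include <assert.h>
#include <stdlib.h>

typedef enum { p2m_host, p2m_alternate } p2m_class_t;

struct p2m_domain {
    p2m_class_t p2m_class;
    int vmid;
};

/* Stand-in for the VMID allocator; Xen's real one uses a bitmap. */
static int next_vmid = 1;
static int vmid_alloc(void) { return next_vmid++; }

/*
 * One shared init path for both the host p2m and altp2m views: each
 * p2m instance gets its own VMID and is tagged with its class.
 */
static struct p2m_domain *p2m_init_one(p2m_class_t cls)
{
    struct p2m_domain *p2m = calloc(1, sizeof(*p2m));

    if ( !p2m )
        return NULL;

    p2m->p2m_class = cls;
    p2m->vmid = vmid_alloc();

    return p2m;
}
```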

>>>> +    uint64_t altp2m_vttbr[MAX_ALTP2M];
>>>>    }  __cacheline_aligned;
>>>>
>>>>    struct arch_vcpu
>>>> diff --git a/xen/include/asm-arm/hvm/hvm.h
>>>> b/xen/include/asm-arm/hvm/hvm.h
>>>> index 96c455c..28d5298 100644
>>>> --- a/xen/include/asm-arm/hvm/hvm.h
>>>> +++ b/xen/include/asm-arm/hvm/hvm.h
>>>> @@ -19,6 +19,18 @@
>>>>    #ifndef __ASM_ARM_HVM_HVM_H__
>>>>    #define __ASM_ARM_HVM_HVM_H__
>>>>
>>>> +struct vttbr_data {
>>>
>>> This structure should not be part of hvm.h but processor.h. Also, I
>>> would rename it to simply vttbr.
>>>
>>
>> Ok, I will move it. The struct was named this way to be the counterpart
>> to struct ept_data. Do you still think we should introduce naming
>> differences for basically the same register at this point?
> 
> Yes, we are talking about two distinct architectures. If you look at
> ept_data, it stores more than the page table register. Hence the name.
> 
> [...]
> 

Ok.

>>>> +
>>>>    #define paddr_bits PADDR_BITS
>>>>
>>>>    /* Holds the bit size of IPAs in p2m tables.  */
>>>> @@ -17,6 +20,11 @@ struct domain;
>>>>
>>>>    extern void memory_type_changed(struct domain *);
>>>>
>>>> +typedef enum {
>>>> +    p2m_host,
>>>> +    p2m_alternate,
>>>> +} p2m_class_t;
>>>> +
>>>>    /* Per-p2m-table state */
>>>>    struct p2m_domain {
>>>>        /* Lock that protects updates to the p2m */
>>>> @@ -66,6 +74,18 @@ struct p2m_domain {
>>>>        /* Radix tree to store the p2m_access_t settings as the pte's
>>>> don't have
>>>>         * enough available bits to store this information. */
>>>>        struct radix_tree_root mem_access_settings;
>>>> +
>>>> +    /* Alternate p2m: count of vcpu's currently using this p2m. */
>>>> +    atomic_t active_vcpus;
>>>> +
>>>> +    /* Choose between: host/alternate */
>>>> +    p2m_class_t p2m_class;
>>>
>>> Is there any reason to have this field? It is set but never used.
>>>
>>
>> Actually it is used by p2m_is_altp2m and p2m_is_hostp2m (e.g. see assert
>> in p2m_flush_table).
> 
> Right. Sorry, I didn't spot this call.
> 

It's all good: Thank you for the great review :)

> Regards,
> 

Best regards,
Sergej


* Re: [PATCH 04/18] arm/altp2m: Add altp2m init/teardown routines.
  2016-07-04 18:34       ` Julien Grall
@ 2016-07-05  7:45         ` Sergej Proskurin
  0 siblings, 0 replies; 126+ messages in thread
From: Sergej Proskurin @ 2016-07-05  7:45 UTC (permalink / raw)
  To: Julien Grall, xen-devel; +Cc: Stefano Stabellini

Hi Julien,


On 07/04/2016 08:34 PM, Julien Grall wrote:
> On 04/07/16 17:51, Sergej Proskurin wrote:
>> On 07/04/2016 06:15 PM, Julien Grall wrote:
>>>
>>>
>>> On 04/07/16 12:45, Sergej Proskurin wrote:
>>>> +static void p2m_teardown_hostp2m(struct domain *d)
>>>> +{
>>>> +    struct p2m_domain *p2m = p2m_get_hostp2m(d);
>>>> +    struct page_info *pg = NULL;
>>>> +    mfn_t mfn;
>>>> +    unsigned int i;
>>>> +
>>>> +    spin_lock(&p2m->lock);
>>>>
>>>> -    if ( p2m->root )
>>>> -        free_domheap_pages(p2m->root, P2M_ROOT_ORDER);
>>>> +    while ( (pg = page_list_remove_head(&p2m->pages)) )
>>>> +        if ( pg != p2m->root )
>>>> +        {
>>>> +            mfn = _mfn(page_to_mfn(pg));
>>>> +            clear_domain_page(mfn);
>>>
>>> Can you explain why you are cleaning the page here? It was not part of
>>> p2m_teardown before this series.
>>>
>>
>> With the x86-based altp2m implementation, we experienced the problem
>> that altp2m-teardowns did not clean the pages. As a result, later
>> re-initialization reused the pages, which subsequently led to faults or
>> crashes due to reused mappings. We additionally clean the altp2m pages
>> and for the sake of completeness we clean the hostp2m tables as well.
>
> All the pages allocated for the p2m are cleared before any usage (see
> p2m_create_table and p2m_allocate_table). So there is no point to zero
> the page here.
>
> Also, unlike x86 we don't have a pool of pages and directly use
> alloc_domheap_page to allocate a new page. So this would not prevent
> getting a non-zeroed page.
>
> In any case, such a change should be in a separate patch and not
> hidden among big changes.
>
> Regards,
>

Thank you Julien for clearing that up, I will remove the part that
clears the page to be freed.

Best regards,
Sergej



* Re: [PATCH 05/18] arm/altp2m: Add HVMOP_altp2m_set_domain_state.
  2016-07-04 15:39   ` Julien Grall
@ 2016-07-05  8:45     ` Sergej Proskurin
  2016-07-05 10:11       ` Julien Grall
  0 siblings, 1 reply; 126+ messages in thread
From: Sergej Proskurin @ 2016-07-05  8:45 UTC (permalink / raw)
  To: Julien Grall, xen-devel; +Cc: Stefano Stabellini

Hello Julien,


On 07/04/2016 05:39 PM, Julien Grall wrote:
> Hello Sergej,
>
> On 04/07/16 12:45, Sergej Proskurin wrote:
>> diff --git a/xen/arch/arm/hvm.c b/xen/arch/arm/hvm.c
>> index 8e8e0f7..cb90a55 100644
>> --- a/xen/arch/arm/hvm.c
>> +++ b/xen/arch/arm/hvm.c
>> @@ -104,8 +104,36 @@ static int
>> do_altp2m_op(XEN_GUEST_HANDLE_PARAM(void) arg)
>>           break;
>>
>>       case HVMOP_altp2m_set_domain_state:
>> -        rc = -EOPNOTSUPP;
>> +    {
>
> I cannot find anything in the code which prevents this sub-op to be
> called concurrently. Did I miss something?
>

Please correct me if I am wrong, but isn't the RCU lock, which is taken
in the same function above, already sufficient?

[...]

d = (a.cmd != HVMOP_altp2m_vcpu_enable_notify) ?
    rcu_lock_domain_by_any_id(a.domain) : rcu_lock_current_domain();

[...]

>> +        struct vcpu *v;
>> +        bool_t ostate;
>> +
>> +        if ( !d->arch.hvm_domain.params[HVM_PARAM_ALTP2M] )
>> +        {
>> +            rc = -EINVAL;
>> +            break;
>> +        }
>> +
>> +        ostate = d->arch.altp2m_active;
>> +        d->arch.altp2m_active = !!a.u.domain_state.state;
>> +
>> +        /* If the alternate p2m state has changed, handle
>> appropriately */
>> +        if ( d->arch.altp2m_active != ostate &&
>> +             (ostate || !(rc = p2m_init_altp2m_by_id(d, 0))) )
>> +        {
>> +            for_each_vcpu( d, v )
>> +            {
>> +                if ( !ostate )
>> +                    altp2m_vcpu_initialise(v);
>> +                else
>> +                    altp2m_vcpu_destroy(v);
>> +            }
>> +
>> +            if ( ostate )
>> +                p2m_flush_altp2m(d);
>> +        }
>>           break;
>> +    }
>>
>>       case HVMOP_altp2m_vcpu_enable_notify:
>>           rc = -EOPNOTSUPP;
>> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
>> index e72ca7a..4a745fd 100644
>> --- a/xen/arch/arm/p2m.c
>> +++ b/xen/arch/arm/p2m.c
>> @@ -2064,6 +2064,52 @@ int p2m_get_mem_access(struct domain *d, gfn_t
>> gfn,
>>       return ret;
>>   }
>>
>
> The 3 helpers below are altp2m specific so I would move them in altp2m.c
>

I will do that, thank you.

>> +struct p2m_domain *p2m_get_altp2m(struct vcpu *v)
>> +{
>> +    unsigned int index = vcpu_altp2m(v).p2midx;
>> +
>> +    if ( index == INVALID_ALTP2M )
>> +        return NULL;
>> +
>> +    BUG_ON(index >= MAX_ALTP2M);
>> +
>> +    return v->domain->arch.altp2m_p2m[index];
>> +}
>> +
>> +static void p2m_init_altp2m_helper(struct domain *d, unsigned int i)
>> +{
>> +    struct p2m_domain *p2m = d->arch.altp2m_p2m[i];
>> +    struct vttbr_data *vttbr = &p2m->vttbr;
>> +
>> +    p2m->lowest_mapped_gfn = INVALID_GFN;
>> +    p2m->max_mapped_gfn = 0;
>
> Wouldn't it be easier to reallocate the p2m from scratch every time you
> enable it?
>

Do you mean instead of dynamically allocating memory for all altp2m_p2m
entries at once in p2m_init_altp2m by means of p2m_init_one? If yes,
then I agree. Thank you.

>> +
>> +    vttbr->vttbr_baddr = page_to_maddr(p2m->root);
>> +    vttbr->vttbr_vmid = p2m->vmid;
>> +
>> +    d->arch.altp2m_vttbr[i] = vttbr->vttbr;
>> +}
>> +
>> +int p2m_init_altp2m_by_id(struct domain *d, unsigned int idx)
>> +{
>> +    int rc = -EINVAL;
>> +
>> +    if ( idx >= MAX_ALTP2M )
>> +        return rc;
>> +
>> +    altp2m_lock(d);
>> +
>> +    if ( d->arch.altp2m_vttbr[idx] == INVALID_MFN )
>> +    {
>> +        p2m_init_altp2m_helper(d, idx);
>> +        rc = 0;
>> +    }
>> +
>> +    altp2m_unlock(d);
>> +
>> +    return rc;
>> +}
>> +
>>   /*
>>    * Local variables:
>>    * mode: C
>
> [...]
>
>> diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
>> index 6b9770f..8bcd618 100644
>> --- a/xen/include/asm-arm/domain.h
>> +++ b/xen/include/asm-arm/domain.h
>> @@ -138,6 +138,12 @@ struct arch_domain
>>       uint64_t altp2m_vttbr[MAX_ALTP2M];
>>   }  __cacheline_aligned;
>>
>> +struct altp2mvcpu {
>> +    uint16_t p2midx; /* alternate p2m index */
>> +};
>> +
>> +#define vcpu_altp2m(v) ((v)->arch.avcpu)
>> +
>>   struct arch_vcpu
>>   {
>>       struct {
>> @@ -267,6 +273,9 @@ struct arch_vcpu
>>       struct vtimer phys_timer;
>>       struct vtimer virt_timer;
>>       bool_t vtimer_initialized;
>> +
>> +    /* Alternate p2m context */
>> +    struct altp2mvcpu avcpu;
>>   }  __cacheline_aligned;
>>
>>   void vcpu_show_execution_state(struct vcpu *);
>> diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
>> index a78d547..8ee78e0 100644
>> --- a/xen/include/asm-arm/p2m.h
>> +++ b/xen/include/asm-arm/p2m.h
>> @@ -121,6 +121,25 @@ void p2m_altp2m_check(struct vcpu *v, uint16_t idx)
>>       /* Not supported on ARM. */
>>   }
>>
>> +/*
>> + * Alternate p2m: shadow p2m tables used for alternate memory views.
>> + */
>> +
>> +#define altp2m_lock(d)      spin_lock(&(d)->arch.altp2m_lock)
>> +#define altp2m_unlock(d)    spin_unlock(&(d)->arch.altp2m_lock)
>> +
>> +/* Get current alternate p2m table */
>> +struct p2m_domain *p2m_get_altp2m(struct vcpu *v);
>> +
>> +/* Flush all the alternate p2m's for a domain */
>> +static inline void p2m_flush_altp2m(struct domain *d)
>> +{
>> +    /* Not supported on ARM. */
>> +}
>> +
>> +/* Make a specific alternate p2m valid */
>> +int p2m_init_altp2m_by_id(struct domain *d, unsigned int idx);
>> +
>
> Please move anything related to altp2m in altp2m.h
>

I will do that, thank you Julien.

>>   #define p2m_is_foreign(_t)  ((_t) == p2m_map_foreign)
>>   #define p2m_is_ram(_t)      ((_t) == p2m_ram_rw || (_t) == p2m_ram_ro)
>>
>>
>
> Regards,
>

Best regards,
Sergej



* Re: [PATCH 06/18] arm/altp2m: Add a(p2m) table flushing routines.
  2016-07-04 15:42     ` Julien Grall
@ 2016-07-05  8:52       ` Sergej Proskurin
  0 siblings, 0 replies; 126+ messages in thread
From: Sergej Proskurin @ 2016-07-05  8:52 UTC (permalink / raw)
  To: Julien Grall, xen-devel; +Cc: Stefano Stabellini

Hi Julien,


On 07/04/2016 05:42 PM, Julien Grall wrote:
> Hello Sergej,
>
> On 04/07/16 13:12, Sergej Proskurin wrote:
>>> +/* Reset this p2m table to be empty */
>>> +static void p2m_flush_table(struct p2m_domain *p2m)
>>> +{
>>> +    struct page_info *top, *pg;
>>> +    mfn_t mfn;
>>> +    unsigned int i;
>>> +
>>> +    /* Check whether the p2m table has already been flushed before. */
>>> +    if ( p2m->root == NULL)
>>> +        return;
>>> +
>>> +    spin_lock(&p2m->lock);
>>> +
>>> +    /*
>>> +     * "Host" p2m tables can have shared entries &c that need a bit
>>> more care
>>> +     * when discarding them
>>> +     */
>>> +    ASSERT(!p2m_is_hostp2m(p2m));
>>> +
>>> +    /* Zap the top level of the trie */
>>> +    top = p2m->root;
>>> +
>>> +    /* Clear all concatenated first level pages */
>>> +    for ( i = 0; i < P2M_ROOT_PAGES; i++ )
>>> +    {
>>> +        mfn = _mfn(page_to_mfn(top + i));
>>> +        clear_domain_page(mfn);
>>> +    }
>>> +
>>> +    /* Free the rest of the trie pages back to the paging pool */
>>> +    while ( (pg = page_list_remove_head(&p2m->pages)) )
>>> +        if ( pg != top  )
>>> +        {
>>> +            /*
>>> +             * Before freeing the individual pages, we clear them
>>> to prevent
>>> +             * reusing old table entries in future p2m allocations.
>>> +             */
>>> +            mfn = _mfn(page_to_mfn(pg));
>>> +            clear_domain_page(mfn);
>>> +            free_domheap_page(pg);
>>> +        }
>>
>> At this point, we prevent only the first root level page from being
>> freed. In case there are multiple consecutive first level pages, one of
>> them will be freed in the upper loop (and potentially crash the guest if
>> the table is reused at a later point in time). However, testing for
>> every concatenated page in the if clause of the while loop would further
>> decrease the flushing performance. Thus, my question is, whether there
>> is a good way to solve this issue?
>
> The root pages are not part of p2m->pages, so there is no issue.

Thanks for clearing that up. We have already discussed this in patch #04.

>
> Regards,
>

Thank you.

Best regards,
Sergej



* Re: [PATCH 06/18] arm/altp2m: Add a(p2m) table flushing routines.
  2016-07-04 15:55   ` Julien Grall
@ 2016-07-05  9:51     ` Sergej Proskurin
  0 siblings, 0 replies; 126+ messages in thread
From: Sergej Proskurin @ 2016-07-05  9:51 UTC (permalink / raw)
  To: Julien Grall, xen-devel; +Cc: Stefano Stabellini

Hello Julien,

On 07/04/2016 05:55 PM, Julien Grall wrote:
> Hello Sergej,
>
> On 04/07/16 12:45, Sergej Proskurin wrote:
>> The current implementation differentiates between flushing and
>> destroying altp2m views. This commit adds the functions
>> p2m_flush_altp2m, and p2m_flush_table, which allow to flush all or
>> individual altp2m views without destroying the entire table. In this
>> way, altp2m views can be reused at a later point in time.
>>
>> In addition, the implementation clears all altp2m entries during the
>> process of flushing. The same applies to hostp2m entries, when it is
>> destroyed. In this way, further domain and p2m allocations will not
>> unintentionally reuse old p2m mappings.
>>
>> Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
>> ---
>> Cc: Stefano Stabellini <sstabellini@kernel.org>
>> Cc: Julien Grall <julien.grall@arm.com>
>> ---
>>   xen/arch/arm/p2m.c        | 67
>> +++++++++++++++++++++++++++++++++++++++++++++++
>>   xen/include/asm-arm/p2m.h | 15 ++++++++---
>>   2 files changed, 78 insertions(+), 4 deletions(-)
>>
>> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
>> index 4a745fd..ae789e6 100644
>> --- a/xen/arch/arm/p2m.c
>> +++ b/xen/arch/arm/p2m.c
>> @@ -2110,6 +2110,73 @@ int p2m_init_altp2m_by_id(struct domain *d,
>> unsigned int idx)
>>       return rc;
>>   }
>>
>> +/* Reset this p2m table to be empty */
>> +static void p2m_flush_table(struct p2m_domain *p2m)
>> +{
>> +    struct page_info *top, *pg;
>> +    mfn_t mfn;
>> +    unsigned int i;
>> +
>> +    /* Check whether the p2m table has already been flushed before. */
>> +    if ( p2m->root == NULL)
>
> This check looks invalid. p2m->root is never reset to NULL by
> p2m_flush_table, so you will always flush.
>

All right. Here, I just wanted to be sure that we don't flush an
invalid page. I will remove this check.

>> +        return;
>> +
>> +    spin_lock(&p2m->lock);
>> +
>> +    /*
>> +     * "Host" p2m tables can have shared entries &c that need a bit
>> more care
>> +     * when discarding them
>
> I don't understand this comment. Can you explain it?
>

This is a lost comment. However, there are slight differences in freeing
a hostp2m as opposed to an altp2m (see the additional freeing of VMIDs
in hostp2ms). Since we agreed to use unique VMIDs for altp2m views as
well, I will try to merge the different flush/free functions. Thank you.

>> +     */
>> +    ASSERT(!p2m_is_hostp2m(p2m));
>> +
>> +    /* Zap the top level of the trie */
>> +    top = p2m->root;
>> +
>> +    /* Clear all concatenated first level pages */
>> +    for ( i = 0; i < P2M_ROOT_PAGES; i++ )
>> +    {
>> +        mfn = _mfn(page_to_mfn(top + i));
>> +        clear_domain_page(mfn);
>> +    }
>> +
>> +    /* Free the rest of the trie pages back to the paging pool */
>> +    while ( (pg = page_list_remove_head(&p2m->pages)) )
>> +        if ( pg != top  )
>> +        {
>> +            /*
>> +             * Before freeing the individual pages, we clear them to
>> prevent
>> +             * reusing old table entries in future p2m allocations.
>> +             */
>> +            mfn = _mfn(page_to_mfn(pg));
>> +            clear_domain_page(mfn);
>> +            free_domheap_page(pg);
>> +        }
>> +
>> +    page_list_add(top, &p2m->pages);
>
> This code is very similar to p2m_free_one. Can we share some code?
>

Yes, I will merge both functions and extract the parts that differ into
a separate function. The same applies to p2m_teardown_hostp2m. Thank you.
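The shared core of such a merged flush/free could be sketched as follows (a standalone, simplified model, not the actual Xen code; the singly linked page list stands in for Xen's page_list): flushing returns all intermediate table pages while keeping the root for reuse, whereas a full free would additionally release the root and the VMID afterwards.

```c
#include <assert.h>
#include <stdlib.h>

struct page {
    struct page *next;
};

struct p2m {
    struct page *pages; /* intermediate table pages */
    struct page *root;  /* root pages live outside the pages list */
};

/*
 * Shared teardown core: return every intermediate table page to the
 * allocator. The root page (tracked separately) is left untouched so
 * the caller decides whether to keep it (flush) or free it too.
 */
static unsigned int p2m_free_intermediate(struct p2m *p2m)
{
    unsigned int freed = 0;
    struct page *pg;

    while ( (pg = p2m->pages) != NULL )
    {
        p2m->pages = pg->next;
        free(pg);
        freed++;
    }

    return freed;
}
```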

>> +
>> +    /* Invalidate VTTBR */
>> +    p2m->vttbr.vttbr = 0;
>> +    p2m->vttbr.vttbr_baddr = INVALID_MFN;
>> +
>> +    spin_unlock(&p2m->lock);
>> +}
>
> Regards,
>

Thank you.

Cheers,
Sergej



* Re: [PATCH 06/18] arm/altp2m: Add a(p2m) table flushing routines.
  2016-07-04 16:20   ` Julien Grall
@ 2016-07-05  9:57     ` Sergej Proskurin
  0 siblings, 0 replies; 126+ messages in thread
From: Sergej Proskurin @ 2016-07-05  9:57 UTC (permalink / raw)
  To: Julien Grall, xen-devel; +Cc: Stefano Stabellini

Hi Julien,


On 07/04/2016 06:20 PM, Julien Grall wrote:
> On 04/07/16 12:45, Sergej Proskurin wrote:
>> +void p2m_flush_altp2m(struct domain *d)
>> +{
>> +    unsigned int i;
>> +
>> +    altp2m_lock(d);
>> +
>> +    for ( i = 0; i < MAX_ALTP2M; i++ )
>> +    {
>> +        p2m_flush_table(d->arch.altp2m_p2m[i]);
>> +        flush_tlb();
>
> I forgot to comment on this line.
>
> Can you explain this call? flush_tlb is flushing TLBs for the current
> VMID only. However, d may not be equal to current when calling from
> HVM op.
>

Since we are flushing one of the altp2m views, I wanted to explicitly
get rid of potentially stale entries in the TLBs. Until now, VMIDs
were associated with a specific domain --as opposed to a specific view.
That is, after flushing one of the altp2m views, the TLBs might be
holding invalid entries. Since we agreed to use unique VMIDs per altp2m
view, this flush can be left out. Thank you.

>> +        d->arch.altp2m_vttbr[i] = INVALID_MFN;
>> +    }
>
> Regards,
>

Thank you.

Cheers,
Sergej



* Re: [PATCH 05/18] arm/altp2m: Add HVMOP_altp2m_set_domain_state.
  2016-07-05  8:45     ` Sergej Proskurin
@ 2016-07-05 10:11       ` Julien Grall
  2016-07-05 12:05         ` Sergej Proskurin
  0 siblings, 1 reply; 126+ messages in thread
From: Julien Grall @ 2016-07-05 10:11 UTC (permalink / raw)
  To: Sergej Proskurin, xen-devel; +Cc: Stefano Stabellini

Hello Sergej,

On 05/07/16 09:45, Sergej Proskurin wrote:
>>> +struct p2m_domain *p2m_get_altp2m(struct vcpu *v)
>>> +{
>>> +    unsigned int index = vcpu_altp2m(v).p2midx;
>>> +
>>> +    if ( index == INVALID_ALTP2M )
>>> +        return NULL;
>>> +
>>> +    BUG_ON(index >= MAX_ALTP2M);
>>> +
>>> +    return v->domain->arch.altp2m_p2m[index];
>>> +}
>>> +
>>> +static void p2m_init_altp2m_helper(struct domain *d, unsigned int i)
>>> +{
>>> +    struct p2m_domain *p2m = d->arch.altp2m_p2m[i];
>>> +    struct vttbr_data *vttbr = &p2m->vttbr;
>>> +
>>> +    p2m->lowest_mapped_gfn = INVALID_GFN;
>>> +    p2m->max_mapped_gfn = 0;
>>
>> Wouldn't it be easier to reallocate the p2m from scratch every time you
>> enable it?
>>
>
> Do you mean instead of dynamically allocating memory for all altp2m_p2m
> entries at once in p2m_init_altp2m by means of p2m_init_one? If yes,
> then I agree. Thankyou.

I mean that the altp2m memory should only be allocated when you use it. 
I.e. in p2m_init_next_altp2m.
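The lazy allocation Julien suggests could be sketched like this (a standalone, simplified model; the names mirror the patch series but the bodies are illustrative): backing memory for a view is allocated only when p2m_init_next_altp2m first picks its slot.

```c
#include <assert.h>
#include <stdlib.h>

#define MAX_ALTP2M 10

struct p2m_domain { int in_use; };

static struct p2m_domain *altp2m_p2m[MAX_ALTP2M];

/*
 * Allocate backing memory for an altp2m view only when it is first
 * requested, and report the chosen slot through *idx.
 */
static int p2m_init_next_altp2m(unsigned int *idx)
{
    unsigned int i;

    for ( i = 0; i < MAX_ALTP2M; i++ )
    {
        if ( altp2m_p2m[i] != NULL )
            continue;

        altp2m_p2m[i] = calloc(1, sizeof(struct p2m_domain));
        if ( altp2m_p2m[i] == NULL )
            return -1; /* would be -ENOMEM in Xen proper */

        *idx = i;
        return 0;
    }

    return -2; /* no free view left */
}
```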

Regards,

-- 
Julien Grall


* Re: [PATCH 02/18] arm/altp2m: Add first altp2m HVMOP stubs.
  2016-07-04 11:45 ` [PATCH 02/18] arm/altp2m: Add first altp2m HVMOP stubs Sergej Proskurin
  2016-07-04 13:36   ` Julien Grall
@ 2016-07-05 10:19   ` Julien Grall
  2016-07-06  9:14     ` Sergej Proskurin
  1 sibling, 1 reply; 126+ messages in thread
From: Julien Grall @ 2016-07-05 10:19 UTC (permalink / raw)
  To: Sergej Proskurin, xen-devel; +Cc: Stefano Stabellini

Hello Sergej,

On 04/07/16 12:45, Sergej Proskurin wrote:
> +static int do_altp2m_op(XEN_GUEST_HANDLE_PARAM(void) arg)
> +{
> +    struct xen_hvm_altp2m_op a;
> +    struct domain *d = NULL;
> +    int rc = 0;
> +
> +    if ( !hvm_altp2m_supported() )
> +        return -EOPNOTSUPP;
> +
> +    if ( copy_from_guest(&a, arg, 1) )
> +        return -EFAULT;
> +
> +    if ( a.pad1 || a.pad2 ||
> +         (a.version != HVMOP_ALTP2M_INTERFACE_VERSION) ||
> +         (a.cmd < HVMOP_altp2m_get_domain_state) ||
> +         (a.cmd > HVMOP_altp2m_change_gfn) )
> +        return -EINVAL;
> +
> +    d = (a.cmd != HVMOP_altp2m_vcpu_enable_notify) ?
> +        rcu_lock_domain_by_any_id(a.domain) : rcu_lock_current_domain();
> +
> +    if ( d == NULL )
> +        return -ESRCH;
> +
> +    if ( (a.cmd != HVMOP_altp2m_get_domain_state) &&
> +         (a.cmd != HVMOP_altp2m_set_domain_state) &&
> +         !d->arch.altp2m_active )
> +    {
> +        rc = -EOPNOTSUPP;
> +        goto out;
> +    }
> +
> +    if ( (rc = xsm_hvm_altp2mhvm_op(XSM_TARGET, d)) )
> +        goto out;

I think this is the best place to ask a couple of questions related to 
who can access altp2m. Based on this call, a guest is allowed to manage 
its own altp2m. Can you explain why we would want a guest to do that?

Also, I have noticed that a guest is allowed to disable ALTP2M on ARM 
because it can set any param (x86 has some restriction on it). Similarly, 
the ALTP2M parameter can be set multiple times.

Regards,

-- 
Julien Grall


* Re: [PATCH 08/18] arm/altp2m: Add HVMOP_altp2m_destroy_p2m.
  2016-07-04 16:32   ` Julien Grall
@ 2016-07-05 11:37     ` Sergej Proskurin
  2016-07-05 11:48       ` Julien Grall
  0 siblings, 1 reply; 126+ messages in thread
From: Sergej Proskurin @ 2016-07-05 11:37 UTC (permalink / raw)
  To: Julien Grall, xen-devel; +Cc: Stefano Stabellini

Hi Julien,


On 07/04/2016 06:32 PM, Julien Grall wrote:
> Hello Sergej,
>
> On 04/07/16 12:45, Sergej Proskurin wrote:
>> Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
>> ---
>> Cc: Stefano Stabellini <sstabellini@kernel.org>
>> Cc: Julien Grall <julien.grall@arm.com>
>> ---
>>   xen/arch/arm/hvm.c        |  2 +-
>>   xen/arch/arm/p2m.c        | 32 ++++++++++++++++++++++++++++++++
>>   xen/include/asm-arm/p2m.h |  3 +++
>>   3 files changed, 36 insertions(+), 1 deletion(-)
>>
>> diff --git a/xen/arch/arm/hvm.c b/xen/arch/arm/hvm.c
>> index 005d7c6..f4ec5cf 100644
>> --- a/xen/arch/arm/hvm.c
>> +++ b/xen/arch/arm/hvm.c
>> @@ -145,7 +145,7 @@ static int
>> do_altp2m_op(XEN_GUEST_HANDLE_PARAM(void) arg)
>>           break;
>>
>>       case HVMOP_altp2m_destroy_p2m:
>> -        rc = -EOPNOTSUPP;
>> +        rc = p2m_destroy_altp2m_by_id(d, a.u.view.view);
>>           break;
>>
>>       case HVMOP_altp2m_switch_p2m:
>> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
>> index 6c41b98..f82f1ea 100644
>> --- a/xen/arch/arm/p2m.c
>> +++ b/xen/arch/arm/p2m.c
>> @@ -2200,6 +2200,38 @@ void p2m_flush_altp2m(struct domain *d)
>>       altp2m_unlock(d);
>>   }
>>
>> +int p2m_destroy_altp2m_by_id(struct domain *d, unsigned int idx)
>> +{
>> +    struct p2m_domain *p2m;
>> +    int rc = -EBUSY;
>> +
>> +    if ( !idx || idx >= MAX_ALTP2M )
>
> Can you please add a comment to explain why the altp2m at index 0
> cannot be destroyed.
>

This has been adopted from the x86 implementation. The altp2m[0] is
considered the hostp2m and is used as a safe harbor to which you can
switch as long as altp2m is active. After deactivating altp2m, the
system switches back to the original hostp2m view. That is, altp2m[0]
should only be destroyed/flushed/freed when altp2m is deactivated.
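The invariant could be captured in a small guard like the following standalone sketch (the helper name is illustrative; the patch embeds the check directly in p2m_destroy_altp2m_by_id):

```c
#include <assert.h>

#define MAX_ALTP2M 10

/*
 * View 0 is created when altp2m is enabled and serves as the safe
 * fallback while altp2m is active, so it may never be destroyed
 * individually; it only goes away when altp2m as a whole is torn down.
 */
static int altp2m_destroy_allowed(unsigned int idx)
{
    return idx != 0 && idx < MAX_ALTP2M;
}
```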

>> +        return rc;
>> +
>> +    domain_pause_except_self(d);
>> +
>> +    altp2m_lock(d);
>> +
>> +    if ( d->arch.altp2m_vttbr[idx] != INVALID_MFN )
>> +    {
>> +        p2m = d->arch.altp2m_p2m[idx];
>> +
>> +        if ( !_atomic_read(p2m->active_vcpus) )
>> +        {
>> +            p2m_flush_table(p2m);
>> +            flush_tlb();
>
> Can you explain this call? flush_tlb is flushing TLBs for the current
> VMID only. However, d may not be equal to current when calling from
> HVM op.
>

You are correct. Please correct me if I am wrong, but I believe we do
not even need flushing at this point any more (see my response in patch
#06). Thank you.

>> +            d->arch.altp2m_vttbr[idx] = INVALID_MFN;
>> +            rc = 0;
>> +        }
>> +    }
>> +
>> +    altp2m_unlock(d);
>> +
>> +    domain_unpause_except_self(d);
>> +
>> +    return rc;
>> +}
>> +
>>   /*
>>    * Local variables:
>>    * mode: C
>> diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
>> index c51532a..255a282 100644
>> --- a/xen/include/asm-arm/p2m.h
>> +++ b/xen/include/asm-arm/p2m.h
>> @@ -140,6 +140,9 @@ int p2m_init_altp2m_by_id(struct domain *d,
>> unsigned int idx);
>>   /* Find an available alternate p2m and make it valid */
>>   int p2m_init_next_altp2m(struct domain *d, uint16_t *idx);
>>
>> +/* Make a specific alternate p2m invalid */
>> +int p2m_destroy_altp2m_by_id(struct domain *d, unsigned int idx);
>> +
>>   #define p2m_is_foreign(_t)  ((_t) == p2m_map_foreign)
>>   #define p2m_is_ram(_t)      ((_t) == p2m_ram_rw || (_t) == p2m_ram_ro)
>>
>>
>
> Regards,
>

~Sergej



* Re: [PATCH 08/18] arm/altp2m: Add HVMOP_altp2m_destroy_p2m.
  2016-07-05 11:37     ` Sergej Proskurin
@ 2016-07-05 11:48       ` Julien Grall
  2016-07-05 12:18         ` Sergej Proskurin
  0 siblings, 1 reply; 126+ messages in thread
From: Julien Grall @ 2016-07-05 11:48 UTC (permalink / raw)
  To: Sergej Proskurin, xen-devel; +Cc: Stefano Stabellini



On 05/07/16 12:37, Sergej Proskurin wrote:
> On 07/04/2016 06:32 PM, Julien Grall wrote:
>> Hello Sergej,
>>
>> On 04/07/16 12:45, Sergej Proskurin wrote:
>>> Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
>>> ---
>>> Cc: Stefano Stabellini <sstabellini@kernel.org>
>>> Cc: Julien Grall <julien.grall@arm.com>
>>> ---
>>>    xen/arch/arm/hvm.c        |  2 +-
>>>    xen/arch/arm/p2m.c        | 32 ++++++++++++++++++++++++++++++++
>>>    xen/include/asm-arm/p2m.h |  3 +++
>>>    3 files changed, 36 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/xen/arch/arm/hvm.c b/xen/arch/arm/hvm.c
>>> index 005d7c6..f4ec5cf 100644
>>> --- a/xen/arch/arm/hvm.c
>>> +++ b/xen/arch/arm/hvm.c
>>> @@ -145,7 +145,7 @@ static int
>>> do_altp2m_op(XEN_GUEST_HANDLE_PARAM(void) arg)
>>>            break;
>>>
>>>        case HVMOP_altp2m_destroy_p2m:
>>> -        rc = -EOPNOTSUPP;
>>> +        rc = p2m_destroy_altp2m_by_id(d, a.u.view.view);
>>>            break;
>>>
>>>        case HVMOP_altp2m_switch_p2m:
>>> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
>>> index 6c41b98..f82f1ea 100644
>>> --- a/xen/arch/arm/p2m.c
>>> +++ b/xen/arch/arm/p2m.c
>>> @@ -2200,6 +2200,38 @@ void p2m_flush_altp2m(struct domain *d)
>>>        altp2m_unlock(d);
>>>    }
>>>
>>> +int p2m_destroy_altp2m_by_id(struct domain *d, unsigned int idx)
>>> +{
>>> +    struct p2m_domain *p2m;
>>> +    int rc = -EBUSY;
>>> +
>>> +    if ( !idx || idx >= MAX_ALTP2M )
>>
>> Can you please add a comment to explain why the altp2m at index 0
>> cannot be destroyed.
>>
>
> This has been adopted from the x86 implementation. The altp2m[0] is
> considered the hostp2m and is used as a safe harbor to which you can
> switch as long as altp2m is active. After deactivating altp2m, the
> system switches back to the original hostp2m view. That is, altp2m[0]
> should only be destroyed/flushed/freed when altp2m is deactivated.

Please add a comment in the code to explain it.

Regards,

-- 
Julien Grall


* Re: [PATCH 05/18] arm/altp2m: Add HVMOP_altp2m_set_domain_state.
  2016-07-05 10:11       ` Julien Grall
@ 2016-07-05 12:05         ` Sergej Proskurin
  0 siblings, 0 replies; 126+ messages in thread
From: Sergej Proskurin @ 2016-07-05 12:05 UTC (permalink / raw)
  To: Julien Grall, xen-devel; +Cc: Stefano Stabellini

Hi Julien,


On 07/05/2016 12:11 PM, Julien Grall wrote:
> Hello Sergej,
>
> On 05/07/16 09:45, Sergej Proskurin wrote:
>>>> +struct p2m_domain *p2m_get_altp2m(struct vcpu *v)
>>>> +{
>>>> +    unsigned int index = vcpu_altp2m(v).p2midx;
>>>> +
>>>> +    if ( index == INVALID_ALTP2M )
>>>> +        return NULL;
>>>> +
>>>> +    BUG_ON(index >= MAX_ALTP2M);
>>>> +
>>>> +    return v->domain->arch.altp2m_p2m[index];
>>>> +}
>>>> +
>>>> +static void p2m_init_altp2m_helper(struct domain *d, unsigned int i)
>>>> +{
>>>> +    struct p2m_domain *p2m = d->arch.altp2m_p2m[i];
>>>> +    struct vttbr_data *vttbr = &p2m->vttbr;
>>>> +
>>>> +    p2m->lowest_mapped_gfn = INVALID_GFN;
>>>> +    p2m->max_mapped_gfn = 0;
>>>
>>> Would not it be easier to reallocate p2m from scratch everytime you
>>> enable it?
>>>
>>
>> Do you mean instead of dynamically allocating memory for all altp2m_p2m
>> entries at once in p2m_init_altp2m by means of p2m_init_one? If yes,
>> then I agree. Thank you.
>
> I mean that the altp2m memory should only be allocated when you use
> it, i.e. in p2m_init_next_altp2m.
>
> Regards,
>
Yes, I agree: we can allocate the p2m only when needed. In this way, we
would consume less memory at run time. Thank you.
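The on-demand scheme agreed on here could be sketched as follows. This is only a model of the allocation policy, not the Xen code: `p2m_init_one` is replaced by a plain `calloc`, the array stands in for `d->arch.altp2m_p2m`, and `altp2m_init_next` is a hypothetical name.

```c
#include <stdlib.h>

#define MAX_ALTP2M 10

struct p2m_domain { int unused; };

/* Illustrative table of altp2m views; in Xen this is d->arch.altp2m_p2m. */
static struct p2m_domain *altp2m_p2m[MAX_ALTP2M];

/*
 * Allocate an altp2m view only when one is first requested, rather than
 * allocating all MAX_ALTP2M views at domain creation (the calloc stands
 * in for p2m_init_one mentioned in the thread).
 */
static int altp2m_init_next(unsigned int *idx)
{
    unsigned int i;

    for ( i = 0; i < MAX_ALTP2M; i++ )
    {
        if ( altp2m_p2m[i] )
            continue;

        altp2m_p2m[i] = calloc(1, sizeof(struct p2m_domain));
        if ( !altp2m_p2m[i] )
            return -1;

        *idx = i;
        return 0;
    }

    return -1; /* all views are already in use */
}
```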

Cheers,
~Sergej


* Re: [PATCH 08/18] arm/altp2m: Add HVMOP_altp2m_destroy_p2m.
  2016-07-05 11:48       ` Julien Grall
@ 2016-07-05 12:18         ` Sergej Proskurin
  0 siblings, 0 replies; 126+ messages in thread
From: Sergej Proskurin @ 2016-07-05 12:18 UTC (permalink / raw)
  To: Julien Grall, xen-devel; +Cc: Stefano Stabellini

Hi Julien,


On 07/05/2016 01:48 PM, Julien Grall wrote:
>
>
> On 05/07/16 12:37, Sergej Proskurin wrote:
>> On 07/04/2016 06:32 PM, Julien Grall wrote:
>>> Hello Sergej,
>>>
>>> On 04/07/16 12:45, Sergej Proskurin wrote:
>>>> Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
>>>> ---
>>>> Cc: Stefano Stabellini <sstabellini@kernel.org>
>>>> Cc: Julien Grall <julien.grall@arm.com>
>>>> ---
>>>>    xen/arch/arm/hvm.c        |  2 +-
>>>>    xen/arch/arm/p2m.c        | 32 ++++++++++++++++++++++++++++++++
>>>>    xen/include/asm-arm/p2m.h |  3 +++
>>>>    3 files changed, 36 insertions(+), 1 deletion(-)
>>>>
>>>> diff --git a/xen/arch/arm/hvm.c b/xen/arch/arm/hvm.c
>>>> index 005d7c6..f4ec5cf 100644
>>>> --- a/xen/arch/arm/hvm.c
>>>> +++ b/xen/arch/arm/hvm.c
>>>> @@ -145,7 +145,7 @@ static int
>>>> do_altp2m_op(XEN_GUEST_HANDLE_PARAM(void) arg)
>>>>            break;
>>>>
>>>>        case HVMOP_altp2m_destroy_p2m:
>>>> -        rc = -EOPNOTSUPP;
>>>> +        rc = p2m_destroy_altp2m_by_id(d, a.u.view.view);
>>>>            break;
>>>>
>>>>        case HVMOP_altp2m_switch_p2m:
>>>> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
>>>> index 6c41b98..f82f1ea 100644
>>>> --- a/xen/arch/arm/p2m.c
>>>> +++ b/xen/arch/arm/p2m.c
>>>> @@ -2200,6 +2200,38 @@ void p2m_flush_altp2m(struct domain *d)
>>>>        altp2m_unlock(d);
>>>>    }
>>>>
>>>> +int p2m_destroy_altp2m_by_id(struct domain *d, unsigned int idx)
>>>> +{
>>>> +    struct p2m_domain *p2m;
>>>> +    int rc = -EBUSY;
>>>> +
>>>> +    if ( !idx || idx >= MAX_ALTP2M )
>>>
>>> Can you please add a comment to explain why the altp2m at index 0
>>> cannot be destroyed.
>>>
>>
>> This has been adopted from the x86 implementation. The altp2m[0] is
>> considered the hostp2m and is used as a safe harbor to which you can
>> switch as long as altp2m is active. After deactivating altp2m, the
>> system switches back to the original hostp2m view. That is, altp2m[0]
>> should only be destroyed/flushed/freed when altp2m is deactivated.
>
> Please add a comment in the code to explain it.
>

Ok, I will add an appropriate comment, thank you.
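For reference, a minimal sketch of how that guard plus comment might read. The idx check and the `-EBUSY` value mirror the patch above; the standalone helper and the `MAX_ALTP2M` value here are illustrative stand-ins for the real Xen context, not the actual code.

```c
#include <errno.h>

#define MAX_ALTP2M 10  /* stand-in for the limit used in the series */

/*
 * Illustrative guard: altp2m[0] mirrors the hostp2m and serves as the
 * safe harbor a vCPU can always switch to while altp2m is active, so
 * it may only be torn down when altp2m itself is deactivated.
 */
static int check_destroy_altp2m_idx(unsigned int idx)
{
    if ( !idx || idx >= MAX_ALTP2M )
        return -EBUSY;

    return 0;
}
```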

Cheers,
~Sergej



* Re: [PATCH 14/18] arm/altp2m: Add HVMOP_altp2m_set_mem_access.
  2016-07-04 11:45 ` [PATCH 14/18] arm/altp2m: Add HVMOP_altp2m_set_mem_access Sergej Proskurin
@ 2016-07-05 12:49   ` Julien Grall
  2016-07-05 21:55     ` Sergej Proskurin
  2016-07-06 17:08   ` Julien Grall
  1 sibling, 1 reply; 126+ messages in thread
From: Julien Grall @ 2016-07-05 12:49 UTC (permalink / raw)
  To: Sergej Proskurin, xen-devel; +Cc: Stefano Stabellini

Hello Sergej,

On 04/07/16 12:45, Sergej Proskurin wrote:
> +static inline
> +int p2m_set_altp2m_mem_access(struct domain *d, struct p2m_domain *hp2m,
> +                              struct p2m_domain *ap2m, p2m_access_t a,
> +                              gfn_t gfn)
> +{

[...]

> +    /* Set mem access attributes - currently supporting only one (4K) page. */
> +    mask = level_masks[3];
> +    return apply_p2m_changes(d, ap2m, INSERT,
> +                             gpa & mask,
> +                             (gpa + level_sizes[level]) & mask,
> +                             maddr & mask, mattr, 0, p2mt, a);

The operation INSERT will remove the previous mapping and decrease page 
reference for foreign mapping (see p2m_put_l3_page). So if you set the 
memory access for this kind of page, the reference count will be wrong 
afterward.

-- 
Julien Grall


* Re: [PATCH 10/18] arm/altp2m: Renamed and extended p2m_alloc_table.
  2016-07-04 18:43   ` Julien Grall
@ 2016-07-05 13:56     ` Sergej Proskurin
  0 siblings, 0 replies; 126+ messages in thread
From: Sergej Proskurin @ 2016-07-05 13:56 UTC (permalink / raw)
  To: Julien Grall, xen-devel; +Cc: Stefano Stabellini

Hello Julien,


On 07/04/2016 08:43 PM, Julien Grall wrote:
> Hello Sergej,
>
> On 04/07/16 12:45, Sergej Proskurin wrote:
>> +int p2m_table_init(struct domain *d)
>> +{
>> +    int i = 0;
>> +    int rc = -ENOMEM;
>> +    struct p2m_domain *p2m = p2m_get_hostp2m(d);
>> +
>> +    spin_lock(&p2m->lock);
>> +
>> +    rc = p2m_alloc_table(p2m);
>> +    if ( rc != 0 )
>> +        goto out;
>> +
>> +    d->arch.vttbr = d->arch.p2m.vttbr.vttbr;
>> +
>> +    /*
>> +     * Make sure that all TLBs corresponding to the new VMID are
>> flushed
>> +     * before using it.
>>        */
>>       flush_tlb_domain(d);
>>
>>       spin_unlock(&p2m->lock);
>>
>> -    return 0;
>> +    if ( hvm_altp2m_supported() )
>> +    {
>> +        /* Init alternate p2m data */
>> +        for ( i = 0; i < MAX_ALTP2M; i++ )
>> +        {
>> +            d->arch.altp2m_vttbr[i] = INVALID_MFN;
>> +            rc = p2m_alloc_table(d->arch.altp2m_p2m[i]);
>
> Why do we need to allocate all the altp2m root page tables at the
> creation of the domain? This is wasting up to 80KB (2-root page for 10
> altp2m) per domain even if it may not be used at all by the domain.
>

As previously discussed in patch #05, I will change this behavior in a
way that altp2m views will be allocated at run time only when they are
needed. Thank you.

>> +            if ( rc != 0 )
>> +                goto out;
>> +        }
>> +
>> +        d->arch.altp2m_active = 0;
>> +    }
>> +
>> +out:
>> +    return rc;
>>   }
>>
>>   #define MAX_VMID 256
>> diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
>> index 783db5c..451b097 100644
>> --- a/xen/include/asm-arm/p2m.h
>> +++ b/xen/include/asm-arm/p2m.h
>> @@ -171,7 +171,7 @@ int relinquish_p2m_mapping(struct domain *d);
>>    *
>>    * Returns 0 for success or -errno.
>>    */
>> -int p2m_alloc_table(struct domain *d);
>> +int p2m_table_init(struct domain *d);
>>
>>   /* Context switch */
>>   void p2m_save_state(struct vcpu *p);
>>
>
> Regards,
>

Best,
~Sergej


* Re: [PATCH 11/18] arm/altp2m: Make flush_tlb_domain ready for altp2m.
  2016-07-04 20:32   ` Julien Grall
@ 2016-07-05 14:48     ` Sergej Proskurin
  2016-07-05 15:37       ` Julien Grall
  0 siblings, 1 reply; 126+ messages in thread
From: Sergej Proskurin @ 2016-07-05 14:48 UTC (permalink / raw)
  To: Julien Grall, xen-devel; +Cc: Stefano Stabellini

Hi Julien,


On 07/04/2016 10:32 PM, Julien Grall wrote:
> Hello Sergej,
>
> On 04/07/2016 12:45, Sergej Proskurin wrote:
>> This commit makes sure that the TLB of a domain considers flushing all
>> of the associated altp2m views. Therefore, in case a different domain
>> (not the currently active domain) shall flush its TLBs, the current
>> implementation loops over all VTTBRs of the different altp2m mappings
>> per vCPU and flushes the TLBs. This way, a change of one of the altp2m
>> mapping is considered.  At this point, it must be considered that the
>> domain --whose TLBs are to be flushed-- is not locked.
>>
>> Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
>> ---
>> Cc: Stefano Stabellini <sstabellini@kernel.org>
>> Cc: Julien Grall <julien.grall@arm.com>
>> ---
>>  xen/arch/arm/p2m.c | 71
>> ++++++++++++++++++++++++++++++++++++++++++++++++------
>>  1 file changed, 63 insertions(+), 8 deletions(-)
>>
>> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
>> index 7e721f9..019f10e 100644
>> --- a/xen/arch/arm/p2m.c
>> +++ b/xen/arch/arm/p2m.c
>> @@ -15,6 +15,8 @@
>>  #include <asm/hardirq.h>
>>  #include <asm/page.h>
>>
>> +#include <asm/altp2m.h>
>> +
>>  #ifdef CONFIG_ARM_64
>>  static unsigned int __read_mostly p2m_root_order;
>>  static unsigned int __read_mostly p2m_root_level;
>> @@ -79,12 +81,41 @@ void dump_p2m_lookup(struct domain *d, paddr_t addr)
>>                   P2M_ROOT_LEVEL, P2M_ROOT_PAGES);
>>  }
>>
>> +static uint64_t p2m_get_altp2m_vttbr(struct vcpu *v)
>> +{
>> +    struct domain *d = v->domain;
>> +    uint16_t index = vcpu_altp2m(v).p2midx;
>> +
>> +    if ( index == INVALID_ALTP2M )
>> +        return INVALID_MFN;
>> +
>> +    BUG_ON(index >= MAX_ALTP2M);
>> +
>> +    return d->arch.altp2m_vttbr[index];
>> +}
>> +
>> +static void p2m_load_altp2m_VTTBR(struct vcpu *v)
>
> Please try to share code when it is possible. For instance, a big part
> of this helper is similar to p2m_load_VTTBR. Assuming you get the p2m
> rather than the VTTBR directly.
>

I will combine these two functions, thank you.
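One shape the merged helper could take, as a sketch under the assumption that the caller resolves which VTTBR to use (`d->arch.vttbr` for the hostp2m, `altp2m_vttbr[idx]` for a view). `WRITE_SYSREG64` and `isb` are stubbed so the snippet is self-contained, and `p2m_load_vttbr` is a hypothetical name, not the actual Xen helper.

```c
#include <stdint.h>

/* Stand-ins for the Xen/ARM primitives so the sketch is self-contained:
 * the real WRITE_SYSREG64 writes VTTBR_EL2; here we just record it. */
static uint64_t current_vttbr;
#define WRITE_SYSREG64(val, reg) (current_vttbr = (val))
#define isb() ((void)0)

/*
 * A single loader for both cases: the caller picks the VTTBR value and
 * this helper only performs the write plus barrier.
 */
static void p2m_load_vttbr(uint64_t vttbr)
{
    WRITE_SYSREG64(vttbr, VTTBR_EL2);
    isb(); /* ensure the update is visible */
}
```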

>> +{
>> +    struct domain *d = v->domain;
>> +    uint64_t vttbr = p2m_get_altp2m_vttbr(v);
>> +
>> +    if ( is_idle_domain(d) )
>> +        return;
>> +
>> +    BUG_ON(vttbr == INVALID_MFN);
>> +    WRITE_SYSREG64(vttbr, VTTBR_EL2);
>> +
>> +    isb(); /* Ensure update is visible */
>> +}
>> +
>>  static void p2m_load_VTTBR(struct domain *d)
>>  {
>>      if ( is_idle_domain(d) )
>>          return;
>> +
>>      BUG_ON(!d->arch.vttbr);
>>      WRITE_SYSREG64(d->arch.vttbr, VTTBR_EL2);
>> +
>
> Spurious changes.
>

I will revert these, thank you.

>>      isb(); /* Ensure update is visible */
>>  }
>>
>> @@ -101,7 +132,11 @@ void p2m_restore_state(struct vcpu *n)
>>      WRITE_SYSREG(hcr & ~HCR_VM, HCR_EL2);
>>      isb();
>>
>> -    p2m_load_VTTBR(n->domain);
>> +    if ( altp2m_active(n->domain) )
>
> This would benefit of an unlikely (maybe within altp2m_active).
>

Fair enough. I will insert an unlikely macro into altp2m_active.

>> +        p2m_load_altp2m_VTTBR(n);
>> +    else
>> +        p2m_load_VTTBR(n->domain);
>> +
>>      isb();
>>
>>      if ( is_32bit_domain(n->domain) )
>> @@ -119,22 +154,42 @@ void p2m_restore_state(struct vcpu *n)
>>  void flush_tlb_domain(struct domain *d)
>>  {
>>      unsigned long flags = 0;
>> +    struct vcpu *v = NULL;
>>
>> -    /* Update the VTTBR if necessary with the domain d. In this case,
>> -     * it's only necessary to flush TLBs on every CPUs with the
>> current VMID
>> -     * (our domain).
>> +    /*
>> +     * Update the VTTBR if necessary with the domain d. In this
>> case, it is only
>> +     * necessary to flush TLBs on every CPUs with the current VMID (our
>> +     * domain).
>>       */
>>      if ( d != current->domain )
>>      {
>>          local_irq_save(flags);
>> -        p2m_load_VTTBR(d);
>> -    }
>>
>> -    flush_tlb();
>> +        /* If altp2m is active, update VTTBR and flush TLBs of every
>> VCPU */
>> +        if ( altp2m_active(d) )
>> +        {
>> +            for_each_vcpu( d, v )
>> +            {
>> +                p2m_load_altp2m_VTTBR(v);
>> +                flush_tlb();
>> +            }
>> +        }
>> +        else
>> +        {
>> +            p2m_load_VTTBR(d);
>> +            flush_tlb();
>> +        }
>
> Why do you need to do a such things? If the VMID is the same, a single
> call to flush_tlb() will nuke all the entries for the given TLBs.
>
> If the VMID is not shared, then I am not even sure why you would need
> to flush the TLBs for all the altp2ms.
>

If the VMID is shared between multiple views and we change one
entry in one specific view, we might reuse a TLB entry that is not part of
the currently active altp2m view of the domain. And even if we assign
unique VMIDs to the individual altp2m views (as discussed in patch #04),
as far as I understand, we will still need to flush the mappings of all
altp2ms at this point because (AFAIK) changes in individual altp2m views
will still need to be propagated to the TLBs. Please correct me, if I am
wrong at this point.

> I have looked to Xen with this series applied and noticed that when
> you remove a page from the hostp2m, the mapping in the altp2m is not
> removed. So the guest may use a page that would have been freed
> previously. Did I miss something?
>

When altp2m is active, the hostp2m is not used. All changes are applied
directly to the individual altp2m views. As far as I know, the only
places where a mapping is removed from a specific p2m view are the
functions unmap_regions_rw_cache, unmap_mmio_regions, and
guest_physmap_remove_page. Each of these functions, however, provides
the hostp2m to the function apply_p2m_changes, where the mapping is
eventually removed. So, we definitely remove mappings from the hostp2m.
However, these would not be removed from the associated altp2m views.

Can you state a scenario where and why pages might need to be removed at
run time from the hostp2m? In that case, maybe it would make sense to
extend the three functions (unmap_regions_rw_cache, unmap_mmio_regions,
and guest_physmap_remove_page) to additionally remove the mappings from
all available altp2m views?

Cheers,
~Sergej



* Re: [PATCH 11/18] arm/altp2m: Make flush_tlb_domain ready for altp2m.
  2016-07-05 14:48     ` Sergej Proskurin
@ 2016-07-05 15:37       ` Julien Grall
  2016-07-05 20:21         ` Sergej Proskurin
  0 siblings, 1 reply; 126+ messages in thread
From: Julien Grall @ 2016-07-05 15:37 UTC (permalink / raw)
  To: Sergej Proskurin, xen-devel; +Cc: Stefano Stabellini



On 05/07/16 15:48, Sergej Proskurin wrote:
> On 07/04/2016 10:32 PM, Julien Grall wrote:
>> On 04/07/2016 12:45, Sergej Proskurin wrote:
>>> +        p2m_load_altp2m_VTTBR(n);
>>> +    else
>>> +        p2m_load_VTTBR(n->domain);
>>> +
>>>       isb();
>>>
>>>       if ( is_32bit_domain(n->domain) )
>>> @@ -119,22 +154,42 @@ void p2m_restore_state(struct vcpu *n)
>>>   void flush_tlb_domain(struct domain *d)
>>>   {
>>>       unsigned long flags = 0;
>>> +    struct vcpu *v = NULL;
>>>
>>> -    /* Update the VTTBR if necessary with the domain d. In this case,
>>> -     * it's only necessary to flush TLBs on every CPUs with the
>>> current VMID
>>> -     * (our domain).
>>> +    /*
>>> +     * Update the VTTBR if necessary with the domain d. In this
>>> case, it is only
>>> +     * necessary to flush TLBs on every CPUs with the current VMID (our
>>> +     * domain).
>>>        */
>>>       if ( d != current->domain )
>>>       {
>>>           local_irq_save(flags);
>>> -        p2m_load_VTTBR(d);
>>> -    }
>>>
>>> -    flush_tlb();
>>> +        /* If altp2m is active, update VTTBR and flush TLBs of every
>>> VCPU */
>>> +        if ( altp2m_active(d) )
>>> +        {
>>> +            for_each_vcpu( d, v )
>>> +            {
>>> +                p2m_load_altp2m_VTTBR(v);
>>> +                flush_tlb();
>>> +            }
>>> +        }
>>> +        else
>>> +        {
>>> +            p2m_load_VTTBR(d);
>>> +            flush_tlb();
>>> +        }
>>
>> Why do you need to do a such things? If the VMID is the same, a single
>> call to flush_tlb() will nuke all the entries for the given TLBs.
>>
>> If the VMID is not shared, then I am not even sure why you would need
>> to flush the TLBs for all the altp2ms.
>>
>
> If the VMID is shared between multiple views and we would change one
> entry in one specific view, we might reuse an entry that is not part of
> the currently active altp2m view of the domain. And even if we assign
> unique VMIDs to the individual altp2m views (as discussed in patch #04),
> as far as I understand, we will still need to flush the mappings of all
> altp2ms at this point because (AFAIK) changes in individual altp2m views
> will still need to be propagated to the TLBs. Please correct me, if I am
> wrong at this point.

You seem to be really confused about how TLB flush instructions work on ARM.
The function flush_tlb() will flush the TLBs on all the processors for the
current VMID. If the VMID is shared, then calling the flush N times with
the same VMID will just be a waste of processor cycles.

Now, if the VMID is not shared (and based on your current series):
flush_tlb_domain is called in two places:
    - p2m_alloc_table
    - apply_p2m_changes

For the former, it allocates the root table for a specific p2m. For the
latter, the changes are only done for a specific p2m, and those changes
are currently not replicated in all the p2ms. So flushing the TLBs for
each altp2m view does not seem useful here either.

>
>> I have looked to Xen with this series applied and noticed that when
>> you remove a page from the hostp2m, the mapping in the altp2m is not
>> removed. So the guest may use a page that would have been freed
>> previously. Did I miss something?
>>
>
> When altp2m is active, the hostp2m is not used. All changes are applied
> directly to the individual altp2m views. As far as I know, the only
> situations, where a mapping is removed from a specific p2m view is in
> the functions unmap_regions_rw_cache, unmap_mmio_regions, and
> guest_physmap_remove_page. Each one of these functions provide, however
> the hostp2m to the function apply_p2m_changes, where the mapping is
> eventually removed. So, we definitely remove mappings from the hostp2m.
> However, these would not be removed from the associated altp2m views.
>
> Can you state a scenario, when and why pages might need to be removed at
> run time from the hostp2m? In that case, maybe it would make sense to
> extend the three functions (unmap_regions_rw_cache, unmap_mmio_regions,
> and guest_physmap_remove_page) to additionally remove the mappings from
> all available altp2m views?

All the functions you mentioned can be called after the domain has been
created. If you grep guest_physmap_remove_page in the source code, you
will find that it is used to unmap grant entries from memory (see
replace_grant_host_mapping) or when decreasing the memory reservation 
(see do_memory_op).

Note that, from my understanding, x86 has the same problem.

Regards,

-- 
Julien Grall


* Re: [PATCH 11/18] arm/altp2m: Make flush_tlb_domain ready for altp2m.
  2016-07-05 15:37       ` Julien Grall
@ 2016-07-05 20:21         ` Sergej Proskurin
  2016-07-06 14:28           ` Julien Grall
  2016-07-07 17:24           ` Julien Grall
  0 siblings, 2 replies; 126+ messages in thread
From: Sergej Proskurin @ 2016-07-05 20:21 UTC (permalink / raw)
  To: Julien Grall, xen-devel; +Cc: Stefano Stabellini

Hi Julien,

On 07/05/2016 05:37 PM, Julien Grall wrote:
> 
> 
> On 05/07/16 15:48, Sergej Proskurin wrote:
>> On 07/04/2016 10:32 PM, Julien Grall wrote:
>>> On 04/07/2016 12:45, Sergej Proskurin wrote:
>>>> +        p2m_load_altp2m_VTTBR(n);
>>>> +    else
>>>> +        p2m_load_VTTBR(n->domain);
>>>> +
>>>>       isb();
>>>>
>>>>       if ( is_32bit_domain(n->domain) )
>>>> @@ -119,22 +154,42 @@ void p2m_restore_state(struct vcpu *n)
>>>>   void flush_tlb_domain(struct domain *d)
>>>>   {
>>>>       unsigned long flags = 0;
>>>> +    struct vcpu *v = NULL;
>>>>
>>>> -    /* Update the VTTBR if necessary with the domain d. In this case,
>>>> -     * it's only necessary to flush TLBs on every CPUs with the
>>>> current VMID
>>>> -     * (our domain).
>>>> +    /*
>>>> +     * Update the VTTBR if necessary with the domain d. In this
>>>> case, it is only
>>>> +     * necessary to flush TLBs on every CPUs with the current VMID
>>>> (our
>>>> +     * domain).
>>>>        */
>>>>       if ( d != current->domain )
>>>>       {
>>>>           local_irq_save(flags);
>>>> -        p2m_load_VTTBR(d);
>>>> -    }
>>>>
>>>> -    flush_tlb();
>>>> +        /* If altp2m is active, update VTTBR and flush TLBs of every
>>>> VCPU */
>>>> +        if ( altp2m_active(d) )
>>>> +        {
>>>> +            for_each_vcpu( d, v )
>>>> +            {
>>>> +                p2m_load_altp2m_VTTBR(v);
>>>> +                flush_tlb();
>>>> +            }
>>>> +        }
>>>> +        else
>>>> +        {
>>>> +            p2m_load_VTTBR(d);
>>>> +            flush_tlb();
>>>> +        }
>>>
>>> Why do you need to do a such things? If the VMID is the same, a single
>>> call to flush_tlb() will nuke all the entries for the given TLBs.
>>>
>>> If the VMID is not shared, then I am not even sure why you would need
>>> to flush the TLBs for all the altp2ms.
>>>
>>
>> If the VMID is shared between multiple views and we would change one
>> entry in one specific view, we might reuse an entry that is not part of
>> the currently active altp2m view of the domain. And even if we assign
>> unique VMIDs to the individual altp2m views (as discussed in patch #04),
>> as far as I understand, we will still need to flush the mappings of all
>> altp2ms at this point because (AFAIK) changes in individual altp2m views
>> will still need to be propagated to the TLBs. Please correct me, if I am
>> wrong at this point.
> 
> You seem to be really confused about how TLB flush instructions work on
> ARM. The function flush_tlb() will flush the TLBs on all the processors
> for the current VMID. If the VMID is shared, then calling the flush N
> times with the same VMID will just be a waste of processor cycles.
> 

True, you are right. I was confusing things, sorry. Thank you for
clearing that up.

> Now, if the VMID is not shared (and based on your current series):
> flush_tlb_domain is called in two places:
>    - p2m_alloc_table
>    - apply_p2m_changes
> 
> For the former, it allocates the root table for a specific p2m. For the
> latter, the changes are only done for a specific p2m, and those changes
> are currently not replicated in all the p2ms. So flushing the TLBs for
> each altp2m view does not seem useful here either.
> 

I agree on this point as well. However, we should maybe think of another
name for flush_tlb_domain, especially if we do not flush the entire
domain (including all currently active altp2m views). What do you think?

Also, we would need another flushing routine that takes a specific p2m
as argument so that its VTTBR can be considered while flushing the TLBs.
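A possible shape for such a per-p2m flush routine, sketched with stubbed primitives. `flush_tlb_p2m` is a hypothetical name, and in real Xen interrupts would also have to be disabled around the VTTBR switch, as flush_tlb_domain already does.

```c
#include <stdint.h>

/* Stubs standing in for the Xen/ARM primitives so the sketch compiles:
 * current_vttbr models VTTBR_EL2, tlb_flushes counts flush_tlb() calls. */
static uint64_t current_vttbr;
static unsigned int tlb_flushes;
#define WRITE_SYSREG64(val, reg) (current_vttbr = (val))
static void flush_tlb(void) { tlb_flushes++; }
static void isb(void) { }

struct p2m_domain { uint64_t vttbr; };

/*
 * Flush only the TLB entries of one p2m: temporarily load its VTTBR so
 * the flush is keyed on that view's VMID, flush, then restore the old
 * VTTBR. The function name and struct layout are illustrative.
 */
static void flush_tlb_p2m(struct p2m_domain *p2m)
{
    uint64_t saved = current_vttbr;

    WRITE_SYSREG64(p2m->vttbr, VTTBR_EL2);
    isb();

    flush_tlb();

    WRITE_SYSREG64(saved, VTTBR_EL2);
    isb();
}
```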

>>
>>> I have looked to Xen with this series applied and noticed that when
>>> you remove a page from the hostp2m, the mapping in the altp2m is not
>>> removed. So the guest may use a page that would have been freed
>>> previously. Did I miss something?
>>>
>>
>> When altp2m is active, the hostp2m is not used. All changes are applied
>> directly to the individual altp2m views. As far as I know, the only
>> places where a mapping is removed from a specific p2m view are the
>> functions unmap_regions_rw_cache, unmap_mmio_regions, and
>> guest_physmap_remove_page. Each of these functions, however, provides
>> the hostp2m to the function apply_p2m_changes, where the mapping is
>> eventually removed. So, we definitely remove mappings from the hostp2m.
>> However, these would not be removed from the associated altp2m views.
>>
>> Can you state a scenario where and why pages might need to be removed at
>> run time from the hostp2m? In that case, maybe it would make sense to
>> extend the three functions (unmap_regions_rw_cache, unmap_mmio_regions,
>> and guest_physmap_remove_page) to additionally remove the mappings from
>> all available altp2m views?
> 
> All the functions you mentioned can be called after the domain has been
> created. If you grep guest_physmap_remove_page in the source code, you
> will find that it is used to unmap grant entries from memory (see
> replace_grant_host_mapping) or when decreasing the memory reservation
> (see do_memory_op).
> 
> Note that, from my understanding, x86 has the same problem.
> 

Yes, the x86 implementation works similarly to the one for ARM. I will
need to think about that. One solution could simply extend the
previously mentioned functions by unmapping the particular entry in all
currently available altp2m views (assuming we allocate the altp2m views
on demand, as discussed in patch #05).
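The idea can be modeled in a few lines. This is only a sketch of the propagation policy, not the Xen data structures: a real fix would hook the unmap paths such as guest_physmap_remove_page, and all names below are illustrative.

```c
#include <stddef.h>

#define MAX_ALTP2M  10
#define INVALID_GFN (~0UL)

/* A drastically reduced altp2m view: one mapped gfn per view. */
struct altp2m_view { unsigned long mapped_gfn; };

static struct altp2m_view *altp2m_views[MAX_ALTP2M];

/*
 * Model of the proposed fix: when a gfn is removed from the hostp2m,
 * walk all allocated altp2m views and drop the mapping there too, so
 * no stale mapping survives in any view.
 */
static void altp2m_propagate_remove(unsigned long gfn)
{
    unsigned int i;

    for ( i = 0; i < MAX_ALTP2M; i++ )
    {
        struct altp2m_view *v = altp2m_views[i];

        if ( v && v->mapped_gfn == gfn )
            v->mapped_gfn = INVALID_GFN;
    }
}
```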

Whatever the solution will be, it will need to be ported to the x86
implementation as well.

Thank you for pointing this out.

Cheers,
~Sergej


* Re: [PATCH 13/18] arm/altp2m: Make get_page_from_gva ready for altp2m.
  2016-07-04 20:34   ` Julien Grall
@ 2016-07-05 20:31     ` Sergej Proskurin
  0 siblings, 0 replies; 126+ messages in thread
From: Sergej Proskurin @ 2016-07-05 20:31 UTC (permalink / raw)
  To: Julien Grall, xen-devel; +Cc: Stefano Stabellini

Hi Julien,

On 07/04/2016 10:34 PM, Julien Grall wrote:
> Hello Sergej,
> 
> On 04/07/2016 12:45, Sergej Proskurin wrote:
>> diff --git a/xen/arch/arm/guestcopy.c b/xen/arch/arm/guestcopy.c
>> index ce1c3c3..413125f 100644
>> --- a/xen/arch/arm/guestcopy.c
>> +++ b/xen/arch/arm/guestcopy.c
>> @@ -17,7 +17,7 @@ static unsigned long raw_copy_to_guest_helper(void
>> *to, const void *from,
>>          unsigned size = min(len, (unsigned)PAGE_SIZE - offset);
>>          struct page_info *page;
>>
>> -        page = get_page_from_gva(current->domain, (vaddr_t) to,
>> GV2M_WRITE);
>> +        page = get_page_from_gva(current, (vaddr_t) to, GV2M_WRITE);
>>          if ( page == NULL )
>>              return len;
>>
>> @@ -64,7 +64,7 @@ unsigned long raw_clear_guest(void *to, unsigned len)
>>          unsigned size = min(len, (unsigned)PAGE_SIZE - offset);
>>          struct page_info *page;
>>
>> -        page = get_page_from_gva(current->domain, (vaddr_t) to,
>> GV2M_WRITE);
>> +        page = get_page_from_gva(current, (vaddr_t) to, GV2M_WRITE);
>>          if ( page == NULL )
>>              return len;
>>
>> @@ -96,7 +96,7 @@ unsigned long raw_copy_from_guest(void *to, const
>> void __user *from, unsigned le
>>          unsigned size = min(len, (unsigned)(PAGE_SIZE - offset));
>>          struct page_info *page;
>>
>> -        page = get_page_from_gva(current->domain, (vaddr_t) from,
>> GV2M_READ);
>> +        page = get_page_from_gva(current, (vaddr_t) from, GV2M_READ);
>>          if ( page == NULL )
>>              return len;
>>
>> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
>> index 9c8fefd..23b482f 100644
>> --- a/xen/arch/arm/p2m.c
>> +++ b/xen/arch/arm/p2m.c
>> @@ -1829,10 +1829,11 @@ err:
>>      return page;
>>  }
>>
>> -struct page_info *get_page_from_gva(struct domain *d, vaddr_t va,
>> +struct page_info *get_page_from_gva(struct vcpu *v, vaddr_t va,
>>                                      unsigned long flags)
>>  {
>> -    struct p2m_domain *p2m = &d->arch.p2m;
>> +    struct domain *d = v->domain;
>> +    struct p2m_domain *p2m = altp2m_active(d) ? p2m_get_altp2m(v) :
>> p2m_get_hostp2m(d);
>>      struct page_info *page = NULL;
>>      paddr_t maddr = 0;
>>      int rc;
>> @@ -1844,17 +1845,23 @@ struct page_info *get_page_from_gva(struct
>> domain *d, vaddr_t va,
>>          unsigned long irq_flags;
>>
>>          local_irq_save(irq_flags);
>> -        p2m_load_VTTBR(d);
>> +
>> +        if ( altp2m_active(d) )
>> +            p2m_load_altp2m_VTTBR(v);
>> +        else
>> +            p2m_load_VTTBR(d);
>>
>>          rc = gvirt_to_maddr(va, &maddr, flags);
>>
>> -        p2m_load_VTTBR(current->domain);
>> +        if ( altp2m_active(current->domain) )
>> +            p2m_load_altp2m_VTTBR(current);
>> +        else
>> +            p2m_load_VTTBR(current->domain);
>> +
> 
> This could be abstracted with a new helper to load the VTTBR for a given
> vCPU.
> 

For my next patch series, I will think about an abstraction of the
function p2m_load[_altp2m]_VTTBR or a merge of both functions into one,
as discussed in patch #11. Thank you.

>>          local_irq_restore(irq_flags);
>>      }
>>      else
>> -    {
>>          rc = gvirt_to_maddr(va, &maddr, flags);
>> -    }
>>
>>      if ( rc )
>>          goto err;
> 
> Regards,
> 

Cheers,
~Sergej


* Re: [PATCH 14/18] arm/altp2m: Add HVMOP_altp2m_set_mem_access.
  2016-07-05 12:49   ` Julien Grall
@ 2016-07-05 21:55     ` Sergej Proskurin
  2016-07-06 14:32       ` Julien Grall
  0 siblings, 1 reply; 126+ messages in thread
From: Sergej Proskurin @ 2016-07-05 21:55 UTC (permalink / raw)
  To: Julien Grall, xen-devel; +Cc: Stefano Stabellini

Hello Julien,

On 07/05/2016 02:49 PM, Julien Grall wrote:
> Hello Sergej,
> 
> On 04/07/16 12:45, Sergej Proskurin wrote:
>> +static inline
>> +int p2m_set_altp2m_mem_access(struct domain *d, struct p2m_domain *hp2m,
>> +                              struct p2m_domain *ap2m, p2m_access_t a,
>> +                              gfn_t gfn)
>> +{
> 
> [...]
> 
>> +    /* Set mem access attributes - currently supporting only one (4K)
>> page. */
>> +    mask = level_masks[3];
>> +    return apply_p2m_changes(d, ap2m, INSERT,
>> +                             gpa & mask,
>> +                             (gpa + level_sizes[level]) & mask,
>> +                             maddr & mask, mattr, 0, p2mt, a);
> 
> The operation INSERT will remove the previous mapping and decrease page
> reference for foreign mapping (see p2m_put_l3_page). So if you set the
> memory access for this kind of page, the reference count will be wrong
> afterward.
> 

I see your point. As far as I understand, the purpose of the function
p2m_put_l3_page is simply to decrement the ref count of the particular
pte and free the page once its ref count reaches zero. Since
p2m_put_l3_page is called only twice in p2m.c, we could insert a check
(p2m_is_hostp2m/altp2m) making sure that the ref count is decremented
only if the p2m in question is the hostp2m. This will make sure that
the ref count is maintained correctly.
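The guard described above might look like the following sketch; the
types and the p2m_is_hostp2m() predicate are simplified stand-ins for
illustration, not the actual Xen code:

```c
#include <assert.h>
#include <stdbool.h>

/* Simplified stand-ins (illustrative only). */
struct page_info { int count; };
struct p2m_domain { bool is_hostp2m; };

static bool p2m_is_hostp2m(const struct p2m_domain *p2m)
{
    return p2m->is_hostp2m;
}

static void put_page(struct page_info *pg)
{
    pg->count--;  /* the page is freed when the count reaches zero */
}

/* Sketch: only the host p2m owns a reference on the mapped page, so
 * tearing down an altp2m mapping must not touch the reference count. */
static void p2m_put_l3_page_sketch(struct p2m_domain *p2m,
                                   struct page_info *pg)
{
    if ( !p2m_is_hostp2m(p2m) )
        return;

    put_page(pg);
}
```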

Thank you for pointing that out.

Cheers,
~Sergej


* Re: [PATCH 15/18] arm/altp2m: Add altp2m paging mechanism.
  2016-07-04 20:53   ` Julien Grall
@ 2016-07-06  8:33     ` Sergej Proskurin
  2016-07-06 14:26       ` Julien Grall
  0 siblings, 1 reply; 126+ messages in thread
From: Sergej Proskurin @ 2016-07-06  8:33 UTC (permalink / raw)
  To: Julien Grall, xen-devel; +Cc: Stefano Stabellini, Tamas K Lengyel

Hi Julien,


On 07/04/2016 10:53 PM, Julien Grall wrote:
> (CC Tamas)
>
> Hello Sergej,
>
> On 04/07/2016 12:45, Sergej Proskurin wrote:
>> This commit adds the function p2m_altp2m_lazy_copy implementing the
>> altp2m paging mechanism. The function p2m_altp2m_lazy_copy lazily copies
>> the hostp2m's mapping into the currently active altp2m view on 2nd stage
>> instruction or data access violations. Every altp2m violation generates
>> a vm_event.
>
> I have been working on cleaning up the abort path (see [1]). Please
> rebase your code on top of it.
>

I will do that, thank you.

>> Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
>> ---
>> Cc: Stefano Stabellini <sstabellini@kernel.org>
>> Cc: Julien Grall <julien.grall@arm.com>
>> ---
>
> [...]
>
>> +/*
>> + * If the fault is for a not present entry:
>> + *     if the entry in the host p2m has a valid mfn, copy it and retry
>> + *     else indicate that outer handler should handle fault
>> + *
>> + * If the fault is for a present entry:
>> + *     indicate that outer handler should handle fault
>> + */
>> +bool_t p2m_altp2m_lazy_copy(struct vcpu *v, paddr_t gpa,
>> +                            unsigned long gva, struct npfec npfec,
>> +                            struct p2m_domain **ap2m)
>> +{
>> +    struct domain *d = v->domain;
>> +    struct p2m_domain *hp2m = p2m_get_hostp2m(v->domain);
>> +    p2m_type_t p2mt;
>> +    xenmem_access_t xma;
>> +    paddr_t maddr, mask = 0;
>> +    gfn_t gfn = _gfn(paddr_to_pfn(gpa));
>> +    unsigned int level;
>> +    unsigned long mattr;
>> +    int rc = 0;
>> +
>> +    static const p2m_access_t memaccess[] = {
>> +#define ACCESS(ac) [XENMEM_access_##ac] = p2m_access_##ac
>> +        ACCESS(n),
>> +        ACCESS(r),
>> +        ACCESS(w),
>> +        ACCESS(rw),
>> +        ACCESS(x),
>> +        ACCESS(rx),
>> +        ACCESS(wx),
>> +        ACCESS(rwx),
>> +        ACCESS(rx2rw),
>> +        ACCESS(n2rwx),
>> +#undef ACCESS
>> +    };
>> +
>> +    *ap2m = p2m_get_altp2m(v);
>> +    if ( *ap2m == NULL)
>> +        return 0;
>> +
>> +    /* Check if entry is part of the altp2m view */
>> +    spin_lock(&(*ap2m)->lock);
>> +    maddr = __p2m_lookup(*ap2m, gpa, NULL);
>> +    spin_unlock(&(*ap2m)->lock);
>> +    if ( maddr != INVALID_PADDR )
>> +        return 0;
>> +
>> +    /* Check if entry is part of the host p2m view */
>> +    spin_lock(&hp2m->lock);
>> +    maddr = __p2m_lookup(hp2m, gpa, &p2mt);
>> +    if ( maddr == INVALID_PADDR )
>> +        goto out;
>> +
>> +    rc = __p2m_get_mem_access(hp2m, gfn, &xma);
>> +    if ( rc )
>> +        goto out;
>> +
>> +    rc = p2m_get_gfn_level_and_attr(hp2m, gpa, &level, &mattr);
>> +    if ( rc )
>> +        goto out;
>
> Can we introduce a function which return the xma, mfn, order,
> attribute at once? It will avoid to browse the p2m 3 times which is
> really expensive on ARMv7 because the p2m is not mapped in the virtual
> address space of Xen.
>

I was already thinking of at least merging p2m_get_gfn_level_and_attr
with __p2m_lookup. But it would also make sense to introduce an
entirely new function which does just that. I believe increasing the
overhead of __p2m_lookup would not be a good solution. Thank you.
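One way to shape such a combined walk is to fill a result struct in a
single pass. Everything below (the names p2m_lookup_full and
p2m_lookup_info, and the toy one-entry "table") is a hypothetical
sketch of the idea, not the Xen implementation:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Simplified stand-ins (illustrative only). */
typedef int p2m_type_t;
typedef int p2m_access_t;

struct p2m_entry {
    uint64_t gpa, maddr;
    unsigned int level;
    unsigned long mattr;
    p2m_type_t type;
    p2m_access_t access;
};

/* Toy p2m: a single entry stands in for the real page-table walk. */
struct p2m_domain { struct p2m_entry entry; };

struct p2m_lookup_info {
    uint64_t maddr;
    unsigned int level;
    unsigned long mattr;
    p2m_type_t type;
};

/* Hypothetical combined lookup: one walk yields maddr, level,
 * attributes, and type; the access is filled only when the caller
 * passes a non-NULL pointer, so callers that do not need it pay
 * nothing extra. */
static int p2m_lookup_full(struct p2m_domain *p2m, uint64_t gpa,
                           struct p2m_lookup_info *info,
                           p2m_access_t *access)
{
    if ( p2m->entry.gpa != gpa )
        return -1;  /* not mapped */

    info->maddr = p2m->entry.maddr;
    info->level = p2m->entry.level;
    info->mattr = p2m->entry.mattr;
    info->type  = p2m->entry.type;

    if ( access != NULL )
        *access = p2m->entry.access;

    return 0;
}
```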

>> +    spin_unlock(&hp2m->lock);
>> +
>> +    mask = level_masks[level];
>> +
>> +    rc = apply_p2m_changes(d, *ap2m, INSERT,
>> +                           pfn_to_paddr(gfn_x(gfn)) & mask,
>> +                           (pfn_to_paddr(gfn_x(gfn)) +
>> level_sizes[level]) & mask,
>> +                           maddr & mask, mattr, 0, p2mt,
>> +                           memaccess[xma]);
>
> The page associated to the MFN is not locked, so another thread could
> decide to remove the page from the domain and then the altp2m would
> contain an entry to something that does not belong to the domain
> anymore. Note that x86 is doing the same. So I am not sure why it is
> considered safe there...
>

If I understand you correctly, unlocking the hp2m->lock after calling
apply_p2m_changes would already solve this issue, right? Thanks.

>> +    if ( rc )
>> +    {
>> +        gdprintk(XENLOG_ERR, "failed to set entry for %lx -> %lx p2m
>> %lx\n",
>> +                (unsigned long)pfn_to_paddr(gfn_x(gfn)), (unsigned
>> long)(maddr), (unsigned long)*ap2m);
>> +        domain_crash(hp2m->domain);
>> +    }
>> +
>> +    return 1;
>> +
>> +out:
>> +    spin_unlock(&hp2m->lock);
>> +    return 0;
>> +}
>> +
>>  static void p2m_init_altp2m_helper(struct domain *d, unsigned int i)
>>  {
>>      struct p2m_domain *p2m = d->arch.altp2m_p2m[i];
>
> [...]
>
>> @@ -2429,6 +2460,8 @@ static void do_trap_data_abort_guest(struct
>> cpu_user_regs *regs,
>>                                       const union hsr hsr)
>>  {
>>      const struct hsr_dabt dabt = hsr.dabt;
>> +    struct vcpu *v = current;
>> +    struct p2m_domain *p2m = NULL;
>>      int rc;
>>      mmio_info_t info;
>>
>> @@ -2449,6 +2482,12 @@ static void do_trap_data_abort_guest(struct
>> cpu_user_regs *regs,
>>          info.gpa = get_faulting_ipa();
>>      else
>>      {
>> +        /*
>> +         * When using altp2m, this flush is required to get rid of
>> old TLB
>> +         * entries and use the new, lazily copied, ap2m entries.
>> +         */
>> +        flush_tlb_local();
>
> Can you give more details why this flush is required?
>

Without the flush, the guest crashed due to an unresolved data abort.
To be more precise, after the first lazy-copy of a mapping from the
hostp2m to the currently active altp2m view, the system crashed because
it was not able to find the new mapping in its altp2m table. The
explicit flush solved this issue quite nicely.

As I answer your question, I am starting to think that the crash was a
result of a lack of a memory barrier, because even with the old
(hostp2m's) TLB entries present, the translation would not present an
issue (the mapping would be the same, as the p2m entry is simply copied
from the hostp2m to the active altp2m view). Also, if the guest reused
the old TLB entries, the system would have used the access rights of
the hostp2m, and hence again be trapped by Xen.

I will try to solve this issue by means of a memory barrier, thank you.

>> +
>
> Regards,
>
> [1] https://lists.xen.org/archives/html/xen-devel/2016-06/msg02853.html
>

Cheers,
~Sergej



* Re: [PATCH 17/18] arm/altp2m: Adjust debug information to altp2m.
  2016-07-04 20:58   ` Julien Grall
@ 2016-07-06  8:41     ` Sergej Proskurin
  0 siblings, 0 replies; 126+ messages in thread
From: Sergej Proskurin @ 2016-07-06  8:41 UTC (permalink / raw)
  To: Julien Grall, xen-devel; +Cc: Stefano Stabellini

Hi Julien,


On 07/04/2016 10:58 PM, Julien Grall wrote:
> Hello Sergej,
>
> On 04/07/2016 12:45, Sergej Proskurin wrote:
>> Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
>> ---
>> Cc: Stefano Stabellini <sstabellini@kernel.org>
>> Cc: Julien Grall <julien.grall@arm.com>
>> ---
>>  xen/arch/arm/p2m.c | 6 ++++--
>>  1 file changed, 4 insertions(+), 2 deletions(-)
>>
>> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
>> index 96892a5..de97a12 100644
>> --- a/xen/arch/arm/p2m.c
>> +++ b/xen/arch/arm/p2m.c
>> @@ -51,7 +51,8 @@ static bool_t p2m_mapping(lpae_t pte)
>>
>>  void p2m_dump_info(struct domain *d)
>>  {
>> -    struct p2m_domain *p2m = &d->arch.p2m;
>> +    struct vcpu *v = current;
>
> This is wrong, p2m_dump_info can be called with d != current->domain.
> Please try to look at how the caller may use the function before doing
> any modification.
>
> In this case, I think you want to dump the info for the hostp2m and
> every altp2ms.
>

Fair enough. I will provide the information of all altp2m views. Thank you.

>> +    struct p2m_domain *p2m = altp2m_active(d) ? p2m_get_altp2m(v) :
>> p2m_get_hostp2m(d);
>>
>>      spin_lock(&p2m->lock);
>>      printk("p2m mappings for domain %d (vmid %d):\n",
>> @@ -71,7 +72,8 @@ void memory_type_changed(struct domain *d)
>>
>>  void dump_p2m_lookup(struct domain *d, paddr_t addr)
>>  {
>> -    struct p2m_domain *p2m = &d->arch.p2m;
>> +    struct vcpu *v = current;
>
> Ditto.
>

Ok, thank you.

>> +    struct p2m_domain *p2m = altp2m_active(d) ? p2m_get_altp2m(v) :
>> p2m_get_hostp2m(d);
>>
>>      printk("dom%d IPA 0x%"PRIpaddr"\n", d->domain_id, addr);
>>
>>
>
> Regards,
>

Cheers,
~Sergej



* Re: [PATCH 18/18] arm/altp2m: Extend xen-access for altp2m on ARM.
  2016-07-04 13:38   ` Razvan Cojocaru
@ 2016-07-06  8:44     ` Sergej Proskurin
  0 siblings, 0 replies; 126+ messages in thread
From: Sergej Proskurin @ 2016-07-06  8:44 UTC (permalink / raw)
  To: xen-devel

Hi Razvan,


On 07/04/2016 03:38 PM, Razvan Cojocaru wrote:
> On 07/04/16 14:45, Sergej Proskurin wrote:
>> Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
>> ---
>> Cc: Razvan Cojocaru <rcojocaru@bitdefender.com>
>> Cc: Tamas K Lengyel <tamas@tklengyel.com>
>> Cc: Ian Jackson <ian.jackson@eu.citrix.com>
>> Cc: Wei Liu <wei.liu2@citrix.com>
>> ---
>>  tools/tests/xen-access/xen-access.c | 11 +++++++++--
>>  1 file changed, 9 insertions(+), 2 deletions(-)
> Fair enough, looks like a trivial change.
>
> Acked-by: Razvan Cojocaru <rcojocaru@bitdefender.com>
>

Great, thank you.

Cheers,
~Sergej


* Re: [PATCH 02/18] arm/altp2m: Add first altp2m HVMOP stubs.
  2016-07-05 10:19   ` Julien Grall
@ 2016-07-06  9:14     ` Sergej Proskurin
  2016-07-06 13:43       ` Julien Grall
  0 siblings, 1 reply; 126+ messages in thread
From: Sergej Proskurin @ 2016-07-06  9:14 UTC (permalink / raw)
  To: Julien Grall, xen-devel; +Cc: Stefano Stabellini

Hi Julien,


On 07/05/2016 12:19 PM, Julien Grall wrote:
> Hello Sergej,
>
> On 04/07/16 12:45, Sergej Proskurin wrote:
>> +static int do_altp2m_op(XEN_GUEST_HANDLE_PARAM(void) arg)
>> +{
>> +    struct xen_hvm_altp2m_op a;
>> +    struct domain *d = NULL;
>> +    int rc = 0;
>> +
>> +    if ( !hvm_altp2m_supported() )
>> +        return -EOPNOTSUPP;
>> +
>> +    if ( copy_from_guest(&a, arg, 1) )
>> +        return -EFAULT;
>> +
>> +    if ( a.pad1 || a.pad2 ||
>> +         (a.version != HVMOP_ALTP2M_INTERFACE_VERSION) ||
>> +         (a.cmd < HVMOP_altp2m_get_domain_state) ||
>> +         (a.cmd > HVMOP_altp2m_change_gfn) )
>> +        return -EINVAL;
>> +
>> +    d = (a.cmd != HVMOP_altp2m_vcpu_enable_notify) ?
>> +        rcu_lock_domain_by_any_id(a.domain) :
>> rcu_lock_current_domain();
>> +
>> +    if ( d == NULL )
>> +        return -ESRCH;
>> +
>> +    if ( (a.cmd != HVMOP_altp2m_get_domain_state) &&
>> +         (a.cmd != HVMOP_altp2m_set_domain_state) &&
>> +         !d->arch.altp2m_active )
>> +    {
>> +        rc = -EOPNOTSUPP;
>> +        goto out;
>> +    }
>> +
>> +    if ( (rc = xsm_hvm_altp2mhvm_op(XSM_TARGET, d)) )
>> +        goto out;
>
> I think this is the best place to ask a couple of questions related to
> who can access altp2m. Based on this call, a guest is allowed to
> manage its own altp2m. Can you explain why we would want a guest to do
> that?
>

On x86, altp2m might be used by the guest for #VE (Virtualization
Exception) handling. On ARM, it is indeed not necessary for a guest to
access altp2m. Could you provide me with information on how to best
restrict non-privileged guests (not only dom0) from accessing these
HVMOPs? Can this be done by means of XSM? Thank you.
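One simple restriction would be an explicit caller check in the HVMOP
handler itself. The sketch below uses stand-in types and a hypothetical
altp2m_check_caller() helper; it is an illustration of the idea, not
what Xen actually does:

```c
#include <assert.h>

#define EPERM 1

/* Simplified stand-in (illustrative only). */
struct domain { int domain_id; };

/* Hypothetical guard: on ARM, where the guest has no #VE-style use for
 * altp2m, refuse any attempt by a domain to manage its own views, so
 * only an external (privileged) domain may issue the altp2m HVMOPs. */
static int altp2m_check_caller(const struct domain *current_d,
                               const struct domain *target)
{
    if ( current_d == target )
        return -EPERM;

    return 0;
}
```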

> Also, I have noticed that a guest is allowed to disable ALTP2M on ARM
> because it set any param (x86 has some restriction on it). Similarly,
> the ALTP2M parameter can be set multiple time.
>

Same here.

Cheers,
~Sergej



* Re: [PATCH 02/18] arm/altp2m: Add first altp2m HVMOP stubs.
  2016-07-06  9:14     ` Sergej Proskurin
@ 2016-07-06 13:43       ` Julien Grall
  2016-07-06 15:23         ` Tamas K Lengyel
  0 siblings, 1 reply; 126+ messages in thread
From: Julien Grall @ 2016-07-06 13:43 UTC (permalink / raw)
  To: Sergej Proskurin, xen-devel; +Cc: Stefano Stabellini, Tamas K Lengyel

Hello Sergej,

On 06/07/16 10:14, Sergej Proskurin wrote:
> On 07/05/2016 12:19 PM, Julien Grall wrote:
>> Hello Sergej,
>>
>> On 04/07/16 12:45, Sergej Proskurin wrote:
>>> +static int do_altp2m_op(XEN_GUEST_HANDLE_PARAM(void) arg)
>>> +{
>>> +    struct xen_hvm_altp2m_op a;
>>> +    struct domain *d = NULL;
>>> +    int rc = 0;
>>> +
>>> +    if ( !hvm_altp2m_supported() )
>>> +        return -EOPNOTSUPP;
>>> +
>>> +    if ( copy_from_guest(&a, arg, 1) )
>>> +        return -EFAULT;
>>> +
>>> +    if ( a.pad1 || a.pad2 ||
>>> +         (a.version != HVMOP_ALTP2M_INTERFACE_VERSION) ||
>>> +         (a.cmd < HVMOP_altp2m_get_domain_state) ||
>>> +         (a.cmd > HVMOP_altp2m_change_gfn) )
>>> +        return -EINVAL;
>>> +
>>> +    d = (a.cmd != HVMOP_altp2m_vcpu_enable_notify) ?
>>> +        rcu_lock_domain_by_any_id(a.domain) :
>>> rcu_lock_current_domain();
>>> +
>>> +    if ( d == NULL )
>>> +        return -ESRCH;
>>> +
>>> +    if ( (a.cmd != HVMOP_altp2m_get_domain_state) &&
>>> +         (a.cmd != HVMOP_altp2m_set_domain_state) &&
>>> +         !d->arch.altp2m_active )
>>> +    {
>>> +        rc = -EOPNOTSUPP;
>>> +        goto out;
>>> +    }
>>> +
>>> +    if ( (rc = xsm_hvm_altp2mhvm_op(XSM_TARGET, d)) )
>>> +        goto out;
>>
>> I think this is the best place to ask a couple of questions related to
>> who can access altp2m. Based on this call, a guest is allowed to
>> manage its own altp2m. Can you explain why we would want a guest to do
>> that?
>>
>
> On x86, altp2m might be used by the guest for #VE (Virtualization
> Exception) handling. On ARM, it is indeed not necessary for a guest to
> access altp2m. Could you provide me with information on how to best
> restrict non-privileged guests (not only dom0) from accessing these
> HVMOPs? Can this be done by means of XSM? Thank you.

This does not look safe for both x86 and ARM. From my understanding,
malware would be able to modify an altp2m, switching between two views,
which would defeat the entire purpose of altp2m.

When XSM is not enabled (this is the default on Xen), XSM_TARGET allows 
the guest (see xsm_default_action) to call the operations. So I am not 
convinced XSM is the right way to go.

>
>> Also, I have noticed that a guest is allowed to disable ALTP2M on ARM
>> because it set any param (x86 has some restriction on it). Similarly,
>> the ALTP2M parameter can be set multiple time.
>>
>
> Same here.

Have a look at how x86 restricts writes to HVMOP_set_param.
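The kind of restriction referred to here can be sketched as a
write-once, toolstack-only parameter setter. The names and fields below
are hypothetical stand-ins, not the actual x86 HVM param code:

```c
#include <assert.h>

#define EPERM  1
#define EEXIST 2

/* Simplified stand-in (illustrative only). */
struct domain {
    int is_toolstack;        /* caller privilege, simplified */
    int altp2m_param_set;    /* write-once latch */
    unsigned long altp2m_param;
};

/* Hypothetical setter mirroring the style of restriction x86 applies
 * to sensitive HVM params: only the toolstack may write the value,
 * and it may only be written once. */
static int set_altp2m_param(struct domain *caller, struct domain *d,
                            unsigned long val)
{
    if ( !caller->is_toolstack )
        return -EPERM;   /* the guest may not flip its own setting */

    if ( d->altp2m_param_set )
        return -EEXIST;  /* the param may only be set once */

    d->altp2m_param = val;
    d->altp2m_param_set = 1;
    return 0;
}
```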

Regards,

-- 
Julien Grall


* Re: [PATCH 15/18] arm/altp2m: Add altp2m paging mechanism.
  2016-07-06  8:33     ` Sergej Proskurin
@ 2016-07-06 14:26       ` Julien Grall
  0 siblings, 0 replies; 126+ messages in thread
From: Julien Grall @ 2016-07-06 14:26 UTC (permalink / raw)
  To: Sergej Proskurin, xen-devel; +Cc: Stefano Stabellini, Tamas K Lengyel



On 06/07/16 09:33, Sergej Proskurin wrote:
> On 07/04/2016 10:53 PM, Julien Grall wrote:
>> Can we introduce a function which return the xma, mfn, order,
>> attribute at once? It will avoid to browse the p2m 3 times which is
>> really expensive on ARMv7 because the p2m is not mapped in the virtual
>> address space of Xen.
>>
>
> I was already thinking of at least merging p2m_get_gfn_level_and_attr
> with __p2m_lookup. But it would also make sense to introduce an entirely
> new function, which does just that.I believe increasing the overhead of
> __p2m_lookup would not be a good solution. Thank you.

How would the overhead of __p2m_lookup be increased? The level, mattr, 
and type are already available in p2m_lookup. For the mem access, since 
you pass a pointer, you can retrieve the access only when the pointer is 
not NULL.

>
>>> +    spin_unlock(&hp2m->lock);
>>> +
>>> +    mask = level_masks[level];
>>> +
>>> +    rc = apply_p2m_changes(d, *ap2m, INSERT,
>>> +                           pfn_to_paddr(gfn_x(gfn)) & mask,
>>> +                           (pfn_to_paddr(gfn_x(gfn)) +
>>> level_sizes[level]) & mask,
>>> +                           maddr & mask, mattr, 0, p2mt,
>>> +                           memaccess[xma]);
>>
>> The page associated to the MFN is not locked, so another thread could
>> decide to remove the page from the domain and then the altp2m would
>> contain an entry to something that does not belong to the domain
>> anymore. Note that x86 is doing the same. So I am not sure why it is
>> considered safe there...
>>
>
> If I understand you correctly, unlocking the hp2m->lock after calling
> apply_p2m_changes would already solve this issue, right? Thanks.

That would solve it. However, you will add a lot of contention on hp2m 
when each vCPU is using a different empty view.

>
>>> +    if ( rc )
>>> +    {
>>> +        gdprintk(XENLOG_ERR, "failed to set entry for %lx -> %lx p2m
>>> %lx\n",
>>> +                (unsigned long)pfn_to_paddr(gfn_x(gfn)), (unsigned
>>> long)(maddr), (unsigned long)*ap2m);
>>> +        domain_crash(hp2m->domain);
>>> +    }
>>> +
>>> +    return 1;
>>> +
>>> +out:
>>> +    spin_unlock(&hp2m->lock);
>>> +    return 0;
>>> +}
>>> +
>>>   static void p2m_init_altp2m_helper(struct domain *d, unsigned int i)
>>>   {
>>>       struct p2m_domain *p2m = d->arch.altp2m_p2m[i];
>>
>> [...]
>>
>>> @@ -2429,6 +2460,8 @@ static void do_trap_data_abort_guest(struct
>>> cpu_user_regs *regs,
>>>                                        const union hsr hsr)
>>>   {
>>>       const struct hsr_dabt dabt = hsr.dabt;
>>> +    struct vcpu *v = current;
>>> +    struct p2m_domain *p2m = NULL;
>>>       int rc;
>>>       mmio_info_t info;
>>>
>>> @@ -2449,6 +2482,12 @@ static void do_trap_data_abort_guest(struct
>>> cpu_user_regs *regs,
>>>           info.gpa = get_faulting_ipa();
>>>       else
>>>       {
>>> +        /*
>>> +         * When using altp2m, this flush is required to get rid of
>>> old TLB
>>> +         * entries and use the new, lazily copied, ap2m entries.
>>> +         */
>>> +        flush_tlb_local();
>>
>> Can you give more details why this flush is required?
>>
>
> Without the flush, the guest crashed due to an unresolved data abort. To be
> more precise, after the first lazy-copy of a mapping from the hostp2m to
> the currently active altp2m view, the system crashed because it was not
> able to find the new mapping in its altp2m table. The explicit flush
> solved this issue quite nicely.

This explicit flush likely means there is another issue in the code. 
This flush should not be necessary.

Can you give more details about the exact data abort problem? Is it 
because the translation VA -> IPA is failing?

>
> As I answer your question, I am starting to think that the crash was a
> result of a lack of a memory barrier, because even with the old
> (hostp2m's) TLB entries present, the translation would not present an
> issue (the mapping would be the same, as the p2m entry is simply copied
> from the hostp2m to the active altp2m view). Also, if the guest reused
> the old TLB entries, the system would have used the access rights of the
> hostp2m, and hence again be trapped by Xen.

I am sorry but I don't understand what you are trying to say.

> I will try to solve this issue by means of a memory barrier, thank you.

Can you details where you think there is a lack of memory barrier?

Regards,

-- 
Julien Grall


* Re: [PATCH 11/18] arm/altp2m: Make flush_tlb_domain ready for altp2m.
  2016-07-05 20:21         ` Sergej Proskurin
@ 2016-07-06 14:28           ` Julien Grall
  2016-07-06 14:39             ` Sergej Proskurin
  2016-07-07 17:24           ` Julien Grall
  1 sibling, 1 reply; 126+ messages in thread
From: Julien Grall @ 2016-07-06 14:28 UTC (permalink / raw)
  To: Sergej Proskurin, xen-devel; +Cc: Stefano Stabellini



On 05/07/16 21:21, Sergej Proskurin wrote:
> I agree on this point as well. However, we should maybe think of another
> name for flush_tlb_domain. Especially, if we do not flush the entire
> domain (including all currently active altp2m views). What do you think?

This function is only used within p2m.c, although it is exported. What 
about renaming this function to flush_tlb_p2m and instead pass the p2m 
in parameter?

Regards,

-- 
Julien Grall


* Re: [PATCH 14/18] arm/altp2m: Add HVMOP_altp2m_set_mem_access.
  2016-07-05 21:55     ` Sergej Proskurin
@ 2016-07-06 14:32       ` Julien Grall
  2016-07-06 16:12         ` Tamas K Lengyel
  0 siblings, 1 reply; 126+ messages in thread
From: Julien Grall @ 2016-07-06 14:32 UTC (permalink / raw)
  To: Sergej Proskurin, xen-devel; +Cc: Stefano Stabellini



On 05/07/16 22:55, Sergej Proskurin wrote:
> Hello Julien,
>
> On 07/05/2016 02:49 PM, Julien Grall wrote:
>> Hello Sergej,
>>
>> On 04/07/16 12:45, Sergej Proskurin wrote:
>>> +static inline
>>> +int p2m_set_altp2m_mem_access(struct domain *d, struct p2m_domain *hp2m,
>>> +                              struct p2m_domain *ap2m, p2m_access_t a,
>>> +                              gfn_t gfn)
>>> +{
>>
>> [...]
>>
>>> +    /* Set mem access attributes - currently supporting only one (4K)
>>> page. */
>>> +    mask = level_masks[3];
>>> +    return apply_p2m_changes(d, ap2m, INSERT,
>>> +                             gpa & mask,
>>> +                             (gpa + level_sizes[level]) & mask,
>>> +                             maddr & mask, mattr, 0, p2mt, a);
>>
>> The operation INSERT will remove the previous mapping and decrease page
>> reference for foreign mapping (see p2m_put_l3_page). So if you set the
>> memory access for this kind of page, the reference count will be wrong
>> afterward.
>>
>
> I see your point. As far as I understand, the purpose of the function
> p2m_put_l3_page is simply to decrement the ref count of the particular
> pte and free the page once its ref count reaches zero. Since
> p2m_put_l3_page is called only twice in p2m.c, we could insert a check
> (p2m_is_hostp2m/altp2m) making sure that the ref count is decremented
> only if the p2m in question is the hostp2m. This will make sure that the
> ref count is maintained correctly.

Why don't you use the operation MEMACCESS?

Regards,

-- 
Julien Grall


* Re: [PATCH 11/18] arm/altp2m: Make flush_tlb_domain ready for altp2m.
  2016-07-06 14:28           ` Julien Grall
@ 2016-07-06 14:39             ` Sergej Proskurin
  0 siblings, 0 replies; 126+ messages in thread
From: Sergej Proskurin @ 2016-07-06 14:39 UTC (permalink / raw)
  To: Julien Grall, xen-devel; +Cc: Stefano Stabellini

Hi Julien,


On 07/06/2016 04:28 PM, Julien Grall wrote:
>
>
> On 05/07/16 21:21, Sergej Proskurin wrote:
>> I agree on this point as well. However, we should maybe think of another
>> name for flush_tlb_domain. Especially, if we do not flush the entire
>> domain (including all currently active altp2m views). What do you think?
>
> This function is only used within p2m.c, although it is exported. What
> about renaming this function to flush_tlb_p2m and instead pass the p2m
> in parameter?
>
> Regards,
>

That sounds good to me :) I will do that. Thank you.
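The renamed function could take the p2m directly, along these lines.
This is a sketch with stand-in types: the real TLB maintenance (a TLBI
plus barriers under disabled interrupts) is modeled here by simply
recording which VTTBR was current at flush time:

```c
#include <assert.h>
#include <stdint.h>

/* Simplified stand-ins (illustrative only). */
struct p2m_domain { uint64_t vttbr; };

static uint64_t vttbr_el2;
static uint64_t last_flushed_vttbr;

static void flush_guest_tlb(void)
{
    /* TLBI by current VMID + barriers in real code; here we record
     * which VTTBR (and hence which VMID) was current for the flush. */
    last_flushed_vttbr = vttbr_el2;
}

/* Hypothetical replacement for flush_tlb_domain: flush the TLB entries
 * of one specific p2m by temporarily loading its VTTBR. */
static void flush_tlb_p2m(struct p2m_domain *p2m)
{
    uint64_t saved = vttbr_el2;

    vttbr_el2 = p2m->vttbr;  /* interrupts disabled in real code */
    flush_guest_tlb();
    vttbr_el2 = saved;
}
```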

Cheers,
~Sergej


* Re: [PATCH 02/18] arm/altp2m: Add first altp2m HVMOP stubs.
  2016-07-06 13:43       ` Julien Grall
@ 2016-07-06 15:23         ` Tamas K Lengyel
  2016-07-06 15:54           ` Julien Grall
  0 siblings, 1 reply; 126+ messages in thread
From: Tamas K Lengyel @ 2016-07-06 15:23 UTC (permalink / raw)
  To: Julien Grall; +Cc: Sergej Proskurin, Stefano Stabellini, Xen-devel

On Wed, Jul 6, 2016 at 7:43 AM, Julien Grall <julien.grall@arm.com> wrote:
> Hello Sergej,
>
>
> On 06/07/16 10:14, Sergej Proskurin wrote:
>>
>> On 07/05/2016 12:19 PM, Julien Grall wrote:
>>>
>>> Hello Sergej,
>>>
>>> On 04/07/16 12:45, Sergej Proskurin wrote:
>>>>
>>>> +static int do_altp2m_op(XEN_GUEST_HANDLE_PARAM(void) arg)
>>>> +{
>>>> +    struct xen_hvm_altp2m_op a;
>>>> +    struct domain *d = NULL;
>>>> +    int rc = 0;
>>>> +
>>>> +    if ( !hvm_altp2m_supported() )
>>>> +        return -EOPNOTSUPP;
>>>> +
>>>> +    if ( copy_from_guest(&a, arg, 1) )
>>>> +        return -EFAULT;
>>>> +
>>>> +    if ( a.pad1 || a.pad2 ||
>>>> +         (a.version != HVMOP_ALTP2M_INTERFACE_VERSION) ||
>>>> +         (a.cmd < HVMOP_altp2m_get_domain_state) ||
>>>> +         (a.cmd > HVMOP_altp2m_change_gfn) )
>>>> +        return -EINVAL;
>>>> +
>>>> +    d = (a.cmd != HVMOP_altp2m_vcpu_enable_notify) ?
>>>> +        rcu_lock_domain_by_any_id(a.domain) :
>>>> rcu_lock_current_domain();
>>>> +
>>>> +    if ( d == NULL )
>>>> +        return -ESRCH;
>>>> +
>>>> +    if ( (a.cmd != HVMOP_altp2m_get_domain_state) &&
>>>> +         (a.cmd != HVMOP_altp2m_set_domain_state) &&
>>>> +         !d->arch.altp2m_active )
>>>> +    {
>>>> +        rc = -EOPNOTSUPP;
>>>> +        goto out;
>>>> +    }
>>>> +
>>>> +    if ( (rc = xsm_hvm_altp2mhvm_op(XSM_TARGET, d)) )
>>>> +        goto out;
>>>
>>>
>>> I think this is the best place to ask a couple of questions related to
>>> who can access altp2m. Based on this call, a guest is allowed to
>>> manage its own altp2m. Can you explain why we would want a guest to do
>>> that?
>>>
>>
>> On x86, altp2m might be used by the guest for #VE (Virtualization
>> Exception) handling. On ARM, it is indeed not necessary for a guest to
>> access altp2m. Could you provide me with information on how to best
>> restrict non-privileged guests (not only dom0) from accessing these
>> HVMOPs? Can this be done by means of XSM? Thank you.
>
>
> This does not look safe for both x86 and ARM. From my understanding,
> malware would be able to modify an altp2m, switching between two views,
> which would defeat the entire purpose of altp2m.

Well, the whole purpose of the VMFUNC instruction right now is to be
able to switch the hypervisor's tables (like altp2m) from within the
guest. So yes, if you have a malicious guest, then it could control
its own altp2m views. I would assume there are other safeguards in
place for systems that use this to ensure kernel integrity and thus
prevent arbitrary use of these functions. AFAIK there are only
experimental systems based on this, so whether it is less or more
secure is debatable and likely depends on your implementation.

As for ARM - as there are no hardware features like this available -
our goal is to use altp2m in purely external use cases, so exposing
these ops to the guest is not required. For the first prototype it
made sense to mirror the x86 side to reduce the possibility of
introducing some bug.

Cheers,
Tamas


* Re: [PATCH 02/18] arm/altp2m: Add first altp2m HVMOP stubs.
  2016-07-06 15:23         ` Tamas K Lengyel
@ 2016-07-06 15:54           ` Julien Grall
  2016-07-06 16:05             ` Tamas K Lengyel
  0 siblings, 1 reply; 126+ messages in thread
From: Julien Grall @ 2016-07-06 15:54 UTC (permalink / raw)
  To: Tamas K Lengyel
  Cc: Sergej Proskurin, Stefano Stabellini, Andrew Cooper, Jan Beulich,
	Xen-devel

(Add x86 maintainers)

On 06/07/16 16:23, Tamas K Lengyel wrote:
> On Wed, Jul 6, 2016 at 7:43 AM, Julien Grall <julien.grall@arm.com> wrote:
>> Hello Sergej,
>>
>>
>> On 06/07/16 10:14, Sergej Proskurin wrote:
>>>
>>> On 07/05/2016 12:19 PM, Julien Grall wrote:
>>>>
>>>> Hello Sergej,
>>>>
>>>> On 04/07/16 12:45, Sergej Proskurin wrote:
>>>>>
>>>>> +static int do_altp2m_op(XEN_GUEST_HANDLE_PARAM(void) arg)
>>>>> +{
>>>>> +    struct xen_hvm_altp2m_op a;
>>>>> +    struct domain *d = NULL;
>>>>> +    int rc = 0;
>>>>> +
>>>>> +    if ( !hvm_altp2m_supported() )
>>>>> +        return -EOPNOTSUPP;
>>>>> +
>>>>> +    if ( copy_from_guest(&a, arg, 1) )
>>>>> +        return -EFAULT;
>>>>> +
>>>>> +    if ( a.pad1 || a.pad2 ||
>>>>> +         (a.version != HVMOP_ALTP2M_INTERFACE_VERSION) ||
>>>>> +         (a.cmd < HVMOP_altp2m_get_domain_state) ||
>>>>> +         (a.cmd > HVMOP_altp2m_change_gfn) )
>>>>> +        return -EINVAL;
>>>>> +
>>>>> +    d = (a.cmd != HVMOP_altp2m_vcpu_enable_notify) ?
>>>>> +        rcu_lock_domain_by_any_id(a.domain) :
>>>>> rcu_lock_current_domain();
>>>>> +
>>>>> +    if ( d == NULL )
>>>>> +        return -ESRCH;
>>>>> +
>>>>> +    if ( (a.cmd != HVMOP_altp2m_get_domain_state) &&
>>>>> +         (a.cmd != HVMOP_altp2m_set_domain_state) &&
>>>>> +         !d->arch.altp2m_active )
>>>>> +    {
>>>>> +        rc = -EOPNOTSUPP;
>>>>> +        goto out;
>>>>> +    }
>>>>> +
>>>>> +    if ( (rc = xsm_hvm_altp2mhvm_op(XSM_TARGET, d)) )
>>>>> +        goto out;
>>>>
>>>>
>>>> I think this is the best place to ask a couple of questions related to
>>>> who can access altp2m. Based on this call, a guest is allowed to
>>>> manage its own altp2m. Can you explain why we would want a guest to do
>>>> that?
>>>>
>>>
>>> On x86, altp2m might be used by the guest in the #VE (Virtualization
>>> Exception) handler. On ARM, it is indeed not necessary for a guest to
>>> access altp2m. Could you provide me with information on how to best
>>> restrict non-privileged guests (not only dom0) from accessing these
>>> HVMOPs? Can this be done by means of XSM? Thank you.
>>
>>
>> This does not look safe for either x86 or ARM. From my understanding,
>> malware would be able to modify an altp2m, switching between two
>> views, which would defeat the entire purpose of altp2m.
>
> Well, the whole purpose of the VMFUNC instruction right now is to be
> able to switch the hypervisor's tables (like altp2m) from within the
> guest. So yes, if you have a malicious guest then it could control
> its own altp2m views. I would assume there are other safeguards
> in place for systems that use this to ensure kernel integrity and thus
> prevent arbitrary use of these functions. AFAIK there are only
> experimental systems based on this, so whether it's less or more secure
> is debatable and likely depends on your implementation.

Setting aside VMFUNC, it looks insecure to expose an HVMOP to the
guest which could modify the memaccess attribute of a region.

I thought the whole purpose of VM introspection is to avoid trusting the
guest (kernel + userspace). The first thing malware will try to do is
get access to the kernel. Once it gets this access, it could remove all
the memory access restrictions to avoid trapping.

> As for ARM - as there are no hardware features like this available -
> our goal is to use altp2m in purely external use cases, so exposing
> these ops to the guest is not required. For the first prototype it
> made sense to mirror the x86 side to reduce the possibility of
> introducing some bug.

No, this is not the right approach. We should not introduce a potential
security issue just because the x86 side does it and it "reduces the
possibility of introducing some bug".

You will have to give me another reason to accept such a patch.

Regards,

-- 
Julien Grall

* Re: [PATCH 02/18] arm/altp2m: Add first altp2m HVMOP stubs.
  2016-07-06 15:54           ` Julien Grall
@ 2016-07-06 16:05             ` Tamas K Lengyel
  2016-07-06 16:29               ` Julien Grall
  0 siblings, 1 reply; 126+ messages in thread
From: Tamas K Lengyel @ 2016-07-06 16:05 UTC (permalink / raw)
  To: Julien Grall
  Cc: Sergej Proskurin, Stefano Stabellini, Andrew Cooper, Jan Beulich,
	Xen-devel

On Wed, Jul 6, 2016 at 9:54 AM, Julien Grall <julien.grall@arm.com> wrote:
> (Add x86 maintainers)
>
>
> On 06/07/16 16:23, Tamas K Lengyel wrote:
>>
>> On Wed, Jul 6, 2016 at 7:43 AM, Julien Grall <julien.grall@arm.com> wrote:
>>>
>>> Hello Sergej,
>>>
>>>
>>> On 06/07/16 10:14, Sergej Proskurin wrote:
>>>>
>>>>
>>>> On 07/05/2016 12:19 PM, Julien Grall wrote:
>>>>>
>>>>>
>>>>> Hello Sergej,
>>>>>
>>>>> On 04/07/16 12:45, Sergej Proskurin wrote:
>>>>>>
>>>>>>
>>>>>> +static int do_altp2m_op(XEN_GUEST_HANDLE_PARAM(void) arg)
>>>>>> +{
>>>>>> +    struct xen_hvm_altp2m_op a;
>>>>>> +    struct domain *d = NULL;
>>>>>> +    int rc = 0;
>>>>>> +
>>>>>> +    if ( !hvm_altp2m_supported() )
>>>>>> +        return -EOPNOTSUPP;
>>>>>> +
>>>>>> +    if ( copy_from_guest(&a, arg, 1) )
>>>>>> +        return -EFAULT;
>>>>>> +
>>>>>> +    if ( a.pad1 || a.pad2 ||
>>>>>> +         (a.version != HVMOP_ALTP2M_INTERFACE_VERSION) ||
>>>>>> +         (a.cmd < HVMOP_altp2m_get_domain_state) ||
>>>>>> +         (a.cmd > HVMOP_altp2m_change_gfn) )
>>>>>> +        return -EINVAL;
>>>>>> +
>>>>>> +    d = (a.cmd != HVMOP_altp2m_vcpu_enable_notify) ?
>>>>>> +        rcu_lock_domain_by_any_id(a.domain) :
>>>>>> rcu_lock_current_domain();
>>>>>> +
>>>>>> +    if ( d == NULL )
>>>>>> +        return -ESRCH;
>>>>>> +
>>>>>> +    if ( (a.cmd != HVMOP_altp2m_get_domain_state) &&
>>>>>> +         (a.cmd != HVMOP_altp2m_set_domain_state) &&
>>>>>> +         !d->arch.altp2m_active )
>>>>>> +    {
>>>>>> +        rc = -EOPNOTSUPP;
>>>>>> +        goto out;
>>>>>> +    }
>>>>>> +
>>>>>> +    if ( (rc = xsm_hvm_altp2mhvm_op(XSM_TARGET, d)) )
>>>>>> +        goto out;
>>>>>
>>>>>
>>>>>
>>>>> I think this is the best place to ask a couple of questions related to
>>>>> who can access altp2m. Based on this call, a guest is allowed to
>>>>> manage its own altp2m. Can you explain why we would want a guest to do
>>>>> that?
>>>>>
>>>>
>>>> On x86, altp2m might be used by the guest in the #VE (Virtualization
>>>> Exception) handler. On ARM, it is indeed not necessary for a guest to
>>>> access altp2m. Could you provide me with information on how to best
>>>> restrict non-privileged guests (not only dom0) from accessing these
>>>> HVMOPs? Can this be done by means of XSM? Thank you.
>>>
>>>
>>>
>>> This does not look safe for either x86 or ARM. From my understanding,
>>> malware would be able to modify an altp2m, switching between two
>>> views, which would defeat the entire purpose of altp2m.
>>
>>
>> Well, the whole purpose of the VMFUNC instruction right now is to be
>> able to switch the hypervisor's tables (like altp2m) from within the
>> guest. So yes, if you have a malicious guest then it could control
>> its own altp2m views. I would assume there are other safeguards
>> in place for systems that use this to ensure kernel integrity and thus
>> prevent arbitrary use of these functions. AFAIK there are only
>> experimental systems based on this, so whether it's less or more secure
>> is debatable and likely depends on your implementation.
>
>
> Setting aside VMFUNC, it looks insecure to expose an HVMOP to the
> guest which could modify the memaccess attribute of a region.
>
> I thought the whole purpose of VM introspection is to avoid trusting the
> guest (kernel + userspace). The first thing malware will try to do is
> get access to the kernel. Once it gets this access, it could remove all
> the memory access restrictions to avoid trapping.

That's why I'm saying systems that use this will likely do extra steps
to ensure kernel integrity. In use cases where this is to be used
exclusively for external monitoring, the system can be restricted with
XSM so that the guest is not allowed to issue the HVMOPs. And remember,
on x86 this system is not exclusively used for introspection.

>
>> As for ARM - as there is no hardware features like this available -
>> our goal is to use altp2m in purely external usecases so exposing
>> these ops to the guest is not required. For the first prototype it
>> made sense to mirror the x86 side to reduce the possibility of
>> introducing some bug.
>
>
> No, this is not the right approach. We should not introduce a potential
> security issue just because the x86 side does it and it "reduces the
> possibility of introducing some bug".
>
> You will have to give me another reason to accept such a patch.

The first revision of a large series is highly unlikely to get
accepted on the first run, so we have been working with the assumption
that there would be new revisions. The prototype has been working well
enough for our internal tests to warrant not submitting it as an RFC
PATCH. Since this is Sergej's first work with Xen, it helped to mirror
the x86 side to get him up to speed while working on the prototype,
reducing the complexity he has to keep track of. Now that this phase
is complete, the adjustments can be made as required, such as not
exposing these HVMOPs to ARM guests.

Cheers,
Tamas

* Re: [PATCH 14/18] arm/altp2m: Add HVMOP_altp2m_set_mem_access.
  2016-07-06 14:32       ` Julien Grall
@ 2016-07-06 16:12         ` Tamas K Lengyel
  2016-07-06 16:59           ` Julien Grall
  2016-07-06 17:03           ` Sergej Proskurin
  0 siblings, 2 replies; 126+ messages in thread
From: Tamas K Lengyel @ 2016-07-06 16:12 UTC (permalink / raw)
  To: Julien Grall; +Cc: Sergej Proskurin, Stefano Stabellini, Xen-devel

On Wed, Jul 6, 2016 at 8:32 AM, Julien Grall <julien.grall@arm.com> wrote:
>
>
> On 05/07/16 22:55, Sergej Proskurin wrote:
>>
>> Hello Julien,
>>
>> On 07/05/2016 02:49 PM, Julien Grall wrote:
>>>
>>> Hello Sergej,
>>>
>>> On 04/07/16 12:45, Sergej Proskurin wrote:
>>>>
>>>> +static inline
>>>> +int p2m_set_altp2m_mem_access(struct domain *d, struct p2m_domain
>>>> *hp2m,
>>>> +                              struct p2m_domain *ap2m, p2m_access_t a,
>>>> +                              gfn_t gfn)
>>>> +{
>>>
>>>
>>> [...]
>>>
>>>> +    /* Set mem access attributes - currently supporting only one (4K)
>>>> page. */
>>>> +    mask = level_masks[3];
>>>> +    return apply_p2m_changes(d, ap2m, INSERT,
>>>> +                             gpa & mask,
>>>> +                             (gpa + level_sizes[level]) & mask,
>>>> +                             maddr & mask, mattr, 0, p2mt, a);
>>>
>>>
>>> The operation INSERT will remove the previous mapping and decrease the
>>> page reference for foreign mappings (see p2m_put_l3_page). So if you
>>> set the memory access for this kind of page, the reference count will
>>> be wrong afterward.
>>>
>>
>> I see your point. As far as I understand, the purpose of the function
>> p2m_put_l3_page is simply to decrement the ref count of the particular
>> pte and free the page if its ref count reaches zero. Since
>> p2m_put_l3_page is called only twice in p2m.c, we could insert a check
>> (p2m_is_hostp2m/altp2m) making sure that the ref count is decremented
>> only if the p2m in question is the hostp2m. This will make sure that
>> the ref count is maintained correctly.
>
>
> Why don't you use the operation MEMACCESS?

I believe it would require duplicating some parts of the INSERT
routine as MEMACCESS assumes the pte is already present in the p2m,
which may not be the case for ap2m.

Tamas

* Re: [PATCH 02/18] arm/altp2m: Add first altp2m HVMOP stubs.
  2016-07-06 16:05             ` Tamas K Lengyel
@ 2016-07-06 16:29               ` Julien Grall
  2016-07-06 16:35                 ` Tamas K Lengyel
  0 siblings, 1 reply; 126+ messages in thread
From: Julien Grall @ 2016-07-06 16:29 UTC (permalink / raw)
  To: Tamas K Lengyel
  Cc: Sergej Proskurin, Stefano Stabellini, Andrew Cooper, Jan Beulich,
	Xen-devel



On 06/07/16 17:05, Tamas K Lengyel wrote:
> On Wed, Jul 6, 2016 at 9:54 AM, Julien Grall <julien.grall@arm.com> wrote:
>> Taken aside the VMFUNC, it looks like insecure to expose a HVMOP to the
>> guest which could modify the memaccess attribute of a region.
>>
>> I thought the whole purpose of VM introspection is to avoid trusting the
>> guest (kernel + userspace). The first thing a malware will do is trying to
>> do is getting access to the kernel. Once it gets this access, it could
>> remove all the memory access restriction to avoid trapping.
>
> That's why I'm saying systems that use this will likely do extra steps
> to ensure kernel integrity. In use-cases where this is to be used
> exclusively for external monitoring the system can be restricted with
> XSM to not allow the guest to issue the hvmops. And remember, on x86
> this system is not exclusively used for introspection.

I am not aware of how x86 is using altp2m. And this series didn't give
much explanation of how this is supposed to work...

>>
>>> As for ARM - as there is no hardware features like this available -
>>> our goal is to use altp2m in purely external usecases so exposing
>>> these ops to the guest is not required. For the first prototype it
>>> made sense to mirror the x86 side to reduce the possibility of
>>> introducing some bug.
>>
>>
>> No, this is not the right approach. We should not introduce potential
>> security issue just because x86 side does it and it "reduces the possibility
>> of introducing some bug".
>>
>> You will have to give me another reason to accept a such patch.
>
> The first revision of a large series is highly unlikely to get
> accepted on the first run so we have been working with the assumption
> that there will be new revisions. The prototype has been working well
> enough for our internal tests to warrant not submitting it as PATCH
> RFC. Since this is Sergej's first work with Xen it helped to mirror
> the x86 to get him up to speed while working on the prototype and
> reducing the complexity he has to keep track of. Now that this phase
> is complete the adjustments can be made as required, such as not
> exposing these hvmops to ARM guests.

Such a large series is already hard to review; it is even harder when
the contributor leaves code unfinished because he assumed there would be
a new revision. This is the whole purpose of tagging the series with
"RFC": it indicates that the series is not yet in a state where it could
be committed.

Regards,

-- 
Julien Grall

* Re: [PATCH 02/18] arm/altp2m: Add first altp2m HVMOP stubs.
  2016-07-06 16:29               ` Julien Grall
@ 2016-07-06 16:35                 ` Tamas K Lengyel
  2016-07-06 18:35                   ` Julien Grall
  0 siblings, 1 reply; 126+ messages in thread
From: Tamas K Lengyel @ 2016-07-06 16:35 UTC (permalink / raw)
  To: Julien Grall
  Cc: Sergej Proskurin, Stefano Stabellini, Andrew Cooper, Jan Beulich,
	Xen-devel

On Wed, Jul 6, 2016 at 10:29 AM, Julien Grall <julien.grall@arm.com> wrote:
>
>
> On 06/07/16 17:05, Tamas K Lengyel wrote:
>>
>> On Wed, Jul 6, 2016 at 9:54 AM, Julien Grall <julien.grall@arm.com> wrote:
>>>
>>> Setting aside VMFUNC, it looks insecure to expose an HVMOP to the
>>> guest which could modify the memaccess attribute of a region.
>>>
>>> I thought the whole purpose of VM introspection is to avoid trusting
>>> the guest (kernel + userspace). The first thing malware will try to do
>>> is get access to the kernel. Once it gets this access, it could remove
>>> all the memory access restrictions to avoid trapping.
>>
>>
>> That's why I'm saying systems that use this will likely do extra steps
>> to ensure kernel integrity. In use-cases where this is to be used
>> exclusively for external monitoring the system can be restricted with
>> XSM to not allow the guest to issue the hvmops. And remember, on x86
>> this system is not exclusively used for introspection.
>
>
> I am not aware of how x86 is using altp2m. And this series didn't give
> much explanation of how this is supposed to work...
>
>>>
>>>> As for ARM - as there is no hardware features like this available -
>>>> our goal is to use altp2m in purely external usecases so exposing
>>>> these ops to the guest is not required. For the first prototype it
>>>> made sense to mirror the x86 side to reduce the possibility of
>>>> introducing some bug.
>>>
>>>
>>>
>>> No, this is not the right approach. We should not introduce a
>>> potential security issue just because the x86 side does it and it
>>> "reduces the possibility of introducing some bug".
>>>
>>> You will have to give me another reason to accept such a patch.
>>
>>
>> The first revision of a large series is highly unlikely to get
>> accepted on the first run so we have been working with the assumption
>> that there will be new revisions. The prototype has been working well
>> enough for our internal tests to warrant not submitting it as PATCH
>> RFC. Since this is Sergej's first work with Xen it helped to mirror
>> the x86 to get him up to speed while working on the prototype and
>> reducing the complexity he has to keep track of. Now that this phase
>> is complete the adjustments can be made as required, such as not
>> exposing these hvmops to ARM guests.
>
>
> A such large series is already hard to review, it is even harder when the
> contributor leaves code unfinished because he assumed there will be a new
> revision. Actually this is the whole purpose of tagging the series with
> "RFC". It is used that the series is not in a state where it could
> potentially be committed.
>

The code is not in an unfinished state by any means, as it passes _our_
tests and works as expected. I think the assumption we made that there
would be required adjustments is very reasonable for any patch series.
So I'm not sure what the problem is.

Tamas

* Re: [PATCH 14/18] arm/altp2m: Add HVMOP_altp2m_set_mem_access.
  2016-07-06 16:12         ` Tamas K Lengyel
@ 2016-07-06 16:59           ` Julien Grall
  2016-07-06 17:03           ` Sergej Proskurin
  1 sibling, 0 replies; 126+ messages in thread
From: Julien Grall @ 2016-07-06 16:59 UTC (permalink / raw)
  To: Tamas K Lengyel; +Cc: Sergej Proskurin, Stefano Stabellini, Xen-devel



On 06/07/16 17:12, Tamas K Lengyel wrote:
> On Wed, Jul 6, 2016 at 8:32 AM, Julien Grall <julien.grall@arm.com> wrote:
>>
>>
>> On 05/07/16 22:55, Sergej Proskurin wrote:
>>>
>>> Hello Julien,
>>>
>>> On 07/05/2016 02:49 PM, Julien Grall wrote:
>>>>
>>>> Hello Sergej,
>>>>
>>>> On 04/07/16 12:45, Sergej Proskurin wrote:
>>>>>
>>>>> +static inline
>>>>> +int p2m_set_altp2m_mem_access(struct domain *d, struct p2m_domain
>>>>> *hp2m,
>>>>> +                              struct p2m_domain *ap2m, p2m_access_t a,
>>>>> +                              gfn_t gfn)
>>>>> +{
>>>>
>>>>
>>>> [...]
>>>>
>>>>> +    /* Set mem access attributes - currently supporting only one (4K)
>>>>> page. */
>>>>> +    mask = level_masks[3];
>>>>> +    return apply_p2m_changes(d, ap2m, INSERT,
>>>>> +                             gpa & mask,
>>>>> +                             (gpa + level_sizes[level]) & mask,
>>>>> +                             maddr & mask, mattr, 0, p2mt, a);
>>>>
>>>>
>>>> The operation INSERT will remove the previous mapping and decrease page
>>>> reference for foreign mapping (see p2m_put_l3_page). So if you set the
>>>> memory access for this kind of page, the reference count will be wrong
>>>> afterward.
>>>>
>>>
>>> I see your point. As far as I understand, the purpose of the function
>>> p2m_put_l3_page is simply to decrement the ref count of the particular
>>> pte and free the page if its ref count reaches zero. Since
>>> p2m_put_l3_page is called only twice in p2m.c, we could insert a check
>>> (p2m_is_hostp2m/altp2m) making sure that the ref count is decremented
>>> only if the p2m in question is the hostp2m. This will make sure that
>>> the ref count is maintained correctly.
>>
>>
>> Why don't you use the operation MEMACCESS?
>
> I believe it would require duplicating some parts of the INSERT
> routine as MEMACCESS assumes the pte is already present in the p2m,
> which may not be the case for ap2m.

Hmmm, right. The p2m code is already complex, so extending MEMACCESS
would be worse. I don't much like the idea of such a check, which I
think is very fragile, although it would be my preference for now.

Regards,

-- 
Julien Grall

* Re: [PATCH 14/18] arm/altp2m: Add HVMOP_altp2m_set_mem_access.
  2016-07-06 16:12         ` Tamas K Lengyel
  2016-07-06 16:59           ` Julien Grall
@ 2016-07-06 17:03           ` Sergej Proskurin
  1 sibling, 0 replies; 126+ messages in thread
From: Sergej Proskurin @ 2016-07-06 17:03 UTC (permalink / raw)
  To: xen-devel



On 07/06/2016 06:12 PM, Tamas K Lengyel wrote:
> On Wed, Jul 6, 2016 at 8:32 AM, Julien Grall <julien.grall@arm.com> wrote:
>>
>> On 05/07/16 22:55, Sergej Proskurin wrote:
>>> Hello Julien,
>>>
>>> On 07/05/2016 02:49 PM, Julien Grall wrote:
>>>> Hello Sergej,
>>>>
>>>> On 04/07/16 12:45, Sergej Proskurin wrote:
>>>>> +static inline
>>>>> +int p2m_set_altp2m_mem_access(struct domain *d, struct p2m_domain
>>>>> *hp2m,
>>>>> +                              struct p2m_domain *ap2m, p2m_access_t a,
>>>>> +                              gfn_t gfn)
>>>>> +{
>>>>
>>>> [...]
>>>>
>>>>> +    /* Set mem access attributes - currently supporting only one (4K)
>>>>> page. */
>>>>> +    mask = level_masks[3];
>>>>> +    return apply_p2m_changes(d, ap2m, INSERT,
>>>>> +                             gpa & mask,
>>>>> +                             (gpa + level_sizes[level]) & mask,
>>>>> +                             maddr & mask, mattr, 0, p2mt, a);
>>>>
>>>> The operation INSERT will remove the previous mapping and decrease page
>>>> reference for foreign mapping (see p2m_put_l3_page). So if you set the
>>>> memory access for this kind of page, the reference count will be wrong
>>>> afterward.
>>>>
>>> I see your point. As far as I understand, the purpose of the function
>>> p2m_put_l3_page is simply to decrement the ref count of the particular
>>> pte and free the page if its ref count reaches zero. Since
>>> p2m_put_l3_page is called only twice in p2m.c, we could insert a check
>>> (p2m_is_hostp2m/altp2m) making sure that the ref count is decremented
>>> only if the p2m in question is the hostp2m. This will make sure that
>>> the ref count is maintained correctly.
>>
>> Why don't you use the operation MEMACCESS?
> I believe it would require duplicating some parts of the INSERT
> routine as MEMACCESS assumes the pte is already present in the p2m,
> which may not be the case for ap2m.
>
Yes, exactly. MEMACCESS expects the particular entry to be present in
the associated p2m view. We could use it, but only in addition to
INSERT; otherwise we would not be able to insert the entry into the
targeted altp2m view. Because of this, IMHO it would make sense to
introduce an additional (unlikely) check making sure that the ref count
is not decremented if altp2m is active.
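The check discussed here could be sketched roughly as follows (a toy model with hypothetical structure and helper names, not the actual Xen p2m code): the reference for a foreign mapping is owned by the host p2m, so replacing an entry in an altp2m view must leave the count untouched.

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of the structures under discussion (hypothetical). */
struct p2m_domain { bool is_hostp2m; };
struct page { int ref_count; };

/*
 * Sketch of p2m_put_l3_page with the proposed guard: only the host
 * p2m holds the reference for a foreign mapping, so an altp2m view
 * must not drop it when one of its entries is replaced by INSERT.
 */
static void put_l3_page(struct p2m_domain *p2m, struct page *pg)
{
    if (!p2m->is_hostp2m)
        return;                 /* altp2m view: leave the refcount alone */

    if (--pg->ref_count == 0) {
        /* free_domheap_page(pg) would be called here */
    }
}

int demo(void)
{
    struct p2m_domain hostp2m = { .is_hostp2m = true };
    struct p2m_domain altp2m  = { .is_hostp2m = false };
    struct page pg = { .ref_count = 1 };

    put_l3_page(&altp2m, &pg);   /* INSERT into an altp2m view */
    assert(pg.ref_count == 1);   /* reference preserved */

    put_l3_page(&hostp2m, &pg);  /* removal from the host p2m */
    assert(pg.ref_count == 0);   /* reference correctly dropped */
    return 0;
}
```

As Julien notes, such a guard is fragile because it encodes an ownership rule implicitly rather than in the reference counting itself.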

Cheers,
~Sergej


* Re: [PATCH 14/18] arm/altp2m: Add HVMOP_altp2m_set_mem_access.
  2016-07-04 11:45 ` [PATCH 14/18] arm/altp2m: Add HVMOP_altp2m_set_mem_access Sergej Proskurin
  2016-07-05 12:49   ` Julien Grall
@ 2016-07-06 17:08   ` Julien Grall
  2016-07-07  9:16     ` Sergej Proskurin
  1 sibling, 1 reply; 126+ messages in thread
From: Julien Grall @ 2016-07-06 17:08 UTC (permalink / raw)
  To: Sergej Proskurin, xen-devel; +Cc: Stefano Stabellini

Hi,

On 04/07/16 12:45, Sergej Proskurin wrote:
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -2085,6 +2085,159 @@ bool_t p2m_mem_access_check(paddr_t gpa, vaddr_t gla, const struct npfec npfec)

[...]

> +static inline
> +int p2m_set_altp2m_mem_access(struct domain *d, struct p2m_domain *hp2m,
> +                              struct p2m_domain *ap2m, p2m_access_t a,
> +                              gfn_t gfn)
> +{
> +    p2m_type_t p2mt;
> +    xenmem_access_t xma_old;
> +    paddr_t gpa = pfn_to_paddr(gfn_x(gfn));
> +    paddr_t maddr, mask = 0;
> +    unsigned int level;
> +    unsigned long mattr;
> +    int rc;
> +
> +    static const p2m_access_t memaccess[] = {
> +#define ACCESS(ac) [XENMEM_access_##ac] = p2m_access_##ac
> +        ACCESS(n),
> +        ACCESS(r),
> +        ACCESS(w),
> +        ACCESS(rw),
> +        ACCESS(x),
> +        ACCESS(rx),
> +        ACCESS(wx),
> +        ACCESS(rwx),
> +        ACCESS(rx2rw),
> +        ACCESS(n2rwx),
> +#undef ACCESS
> +    };
> +
> +    /* Check if entry is part of the altp2m view. */
> +    spin_lock(&ap2m->lock);
> +    maddr = __p2m_lookup(ap2m, gpa, &p2mt);
> +    spin_unlock(&ap2m->lock);
> +
> +    /* Check host p2m if no valid entry in ap2m. */
> +    if ( maddr == INVALID_PADDR )
> +    {
> +        /* Check if entry is part of the host p2m view. */
> +        spin_lock(&hp2m->lock);
> +        maddr = __p2m_lookup(hp2m, gpa, &p2mt);
> +        if ( maddr == INVALID_PADDR || p2mt != p2m_ram_rw )
> +            goto out;
> +
> +        rc = __p2m_get_mem_access(hp2m, gfn, &xma_old);
> +        if ( rc )
> +            goto out;
> +
> +        rc = p2m_get_gfn_level_and_attr(hp2m, gpa, &level, &mattr);
> +        if ( rc )
> +            goto out;
> +        spin_unlock(&hp2m->lock);
> +
> +        mask = level_masks[level];
> +
> +        /* If this is a superpage, copy that first. */
> +        if ( level != 3 )
> +        {
> +            rc = apply_p2m_changes(d, ap2m, INSERT,
> +                                   gpa & mask,
> +                                   (gpa + level_sizes[level]) & mask,
> +                                   maddr & mask, mattr, 0, p2mt,
> +                                   memaccess[xma_old]);
> +            if ( rc < 0 )
> +                goto out;
> +        }
> +    }
> +    else
> +    {
> +        spin_lock(&ap2m->lock);
> +        rc = p2m_get_gfn_level_and_attr(ap2m, gpa, &level, &mattr);
> +        spin_unlock(&ap2m->lock);
> +        if ( rc )
> +            goto out;
> +    }
> +
> +    /* Set mem access attributes - currently supporting only one (4K) page. */
> +    mask = level_masks[3];
> +    return apply_p2m_changes(d, ap2m, INSERT,
> +                             gpa & mask,
> +                             (gpa + level_sizes[level]) & mask,
> +                             maddr & mask, mattr, 0, p2mt, a);
> +
> +out:
> +    if ( spin_is_locked(&hp2m->lock) )
> +        spin_unlock(&hp2m->lock);

I have not spotted this before. spin_is_locked() does not ensure that
the lock was taken by this CPU, so you may release a lock held by
another CPU.

> +
> +    return -ESRCH;
> +}
> +
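To make the hazard concrete, here is a minimal sketch in plain C11 (toy spinlock and hypothetical helper names, not the Xen implementation): instead of keying the unlock on the lock's global state, the error path remembers locally whether it took the lock.

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

/* Minimal spinlock model (single-threaded demo, no contention). */
static atomic_flag hp2m_lock = ATOMIC_FLAG_INIT;

static void spin_lock(atomic_flag *l)   { while (atomic_flag_test_and_set(l)) ; }
static void spin_unlock(atomic_flag *l) { atomic_flag_clear(l); }

/*
 * The quoted error path keys the unlock on spin_is_locked(), i.e. on
 * the lock's global state.  That state may reflect a hold by ANOTHER
 * CPU, so the path could release a lock it never took.  The robust
 * pattern is to track locally whether this code path took the lock.
 */
static int lookup_with_cleanup(bool need_hp2m)
{
    bool hp2m_locked = false;        /* did THIS path take the lock? */

    if (need_hp2m) {
        spin_lock(&hp2m_lock);
        hp2m_locked = true;
    }

    /* ... the p2m lookup would happen here; assume it fails ... */

    if (hp2m_locked)                 /* only unlock what we locked */
        spin_unlock(&hp2m_lock);
    return -1;                       /* stands in for -ESRCH */
}

int demo(void)
{
    assert(lookup_with_cleanup(true) == -1);
    assert(lookup_with_cleanup(false) == -1);
    /* The lock must be free again: test-and-set sees it clear. */
    assert(!atomic_flag_test_and_set(&hp2m_lock));
    atomic_flag_clear(&hp2m_lock);
    return 0;
}
```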

Regards,

-- 
Julien Grall

* Re: [PATCH 02/18] arm/altp2m: Add first altp2m HVMOP stubs.
  2016-07-06 16:35                 ` Tamas K Lengyel
@ 2016-07-06 18:35                   ` Julien Grall
  2016-07-07  9:14                     ` Sergej Proskurin
  0 siblings, 1 reply; 126+ messages in thread
From: Julien Grall @ 2016-07-06 18:35 UTC (permalink / raw)
  To: Tamas K Lengyel
  Cc: Sergej Proskurin, Stefano Stabellini, Andrew Cooper, Jan Beulich,
	Xen-devel



On 06/07/16 17:35, Tamas K Lengyel wrote:
> On Wed, Jul 6, 2016 at 10:29 AM, Julien Grall <julien.grall@arm.com> wrote:
>>
>>
>> On 06/07/16 17:05, Tamas K Lengyel wrote:
>>>
>>> On Wed, Jul 6, 2016 at 9:54 AM, Julien Grall <julien.grall@arm.com> wrote:
>>>>
>>>> Setting aside VMFUNC, it looks insecure to expose an HVMOP to the
>>>> guest which could modify the memaccess attribute of a region.
>>>>
>>>> I thought the whole purpose of VM introspection is to avoid trusting
>>>> the guest (kernel + userspace). The first thing malware will try to
>>>> do is get access to the kernel. Once it gets this access, it could
>>>> remove all the memory access restrictions to avoid trapping.
>>>
>>>
>>> That's why I'm saying systems that use this will likely do extra steps
>>> to ensure kernel integrity. In use-cases where this is to be used
>>> exclusively for external monitoring the system can be restricted with
>>> XSM to not allow the guest to issue the hvmops. And remember, on x86
>>> this system is not exclusively used for introspection.
>>
>>
>> I am not aware of how x86 is using altp2m. And this series didn't give
>> much explanation of how this is supposed to work...
>>
>>>>
>>>>> As for ARM - as there is no hardware features like this available -
>>>>> our goal is to use altp2m in purely external usecases so exposing
>>>>> these ops to the guest is not required. For the first prototype it
>>>>> made sense to mirror the x86 side to reduce the possibility of
>>>>> introducing some bug.
>>>>
>>>>
>>>>
>>>> No, this is not the right approach. We should not introduce a
>>>> potential security issue just because the x86 side does it and it
>>>> "reduces the possibility of introducing some bug".
>>>>
>>>> You will have to give me another reason to accept such a patch.
>>>
>>>
>>> The first revision of a large series is highly unlikely to get
>>> accepted on the first run so we have been working with the assumption
>>> that there will be new revisions. The prototype has been working well
>>> enough for our internal tests to warrant not submitting it as PATCH
>>> RFC. Since this is Sergej's first work with Xen it helped to mirror
>>> the x86 to get him up to speed while working on the prototype and
>>> reducing the complexity he has to keep track of. Now that this phase
>>> is complete the adjustments can be made as required, such as not
>>> exposing these hvmops to ARM guests.
>>
>>
>> Such a large series is already hard to review; it is even harder when
>> the contributor leaves code unfinished because he assumed there would
>> be a new revision. This is the whole purpose of tagging the series with
>> "RFC": it indicates that the series is not yet in a state where it
>> could be committed.
>>
>
> The code is not in an unfinished state by any means as it passes _our_
> tests and works as expected. I think the assumption we made that there
> will be required adjustments is very reasonable on any patch series.
> So I'm not sure what the problem is.

Well, I am a bit surprised that this series is passing your tests (you
may want to update them?). Before sending a new version, I would
recommend going through the locking of each path. I tried to comment on
every locking issue I spotted, although I may have missed some.

I would also recommend going through the ARM ARM to see how the TLBs
behave, because switching between page tables with the same VMID can be
harmful if not done correctly. I have mentioned that you could use a
different VMID per page table; however, this may have an impact on other
parts of the memory subsystem, such as the cache (this would need to be
investigated).
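The TLB concern can be illustrated with a toy model (purely schematic, not ARM semantics or Xen code): cached translations are tagged with a VMID, so after switching to a different set of tables under the same VMID, a stale entry can still hit unless that VMID is flushed.

```c
#include <assert.h>
#include <string.h>

/* Toy TLB: entries tagged (vmid, gfn) -> mfn; vmid -1 means invalid. */
#define TLB_SIZE 4
struct tlb_entry { int vmid, gfn, mfn; };
static struct tlb_entry tlb[TLB_SIZE];

static void tlb_flush_vmid(int vmid)
{
    for (int i = 0; i < TLB_SIZE; i++)
        if (tlb[i].vmid == vmid)
            tlb[i].vmid = -1;
}

/* A page-table walk would cache its result here. */
static void tlb_fill(int vmid, int gfn, int mfn)
{
    tlb[0] = (struct tlb_entry){ vmid, gfn, mfn };
}

static int tlb_lookup(int vmid, int gfn)
{
    for (int i = 0; i < TLB_SIZE; i++)
        if (tlb[i].vmid == vmid && tlb[i].gfn == gfn)
            return tlb[i].mfn;
    return -1;  /* miss: would trigger a fresh table walk */
}

int demo(void)
{
    memset(tlb, -1, sizeof(tlb));  /* all entries invalid */

    /* Host p2m maps gfn 5 -> mfn 100 under VMID 1; a walk caches it. */
    tlb_fill(1, 5, 100);
    assert(tlb_lookup(1, 5) == 100);

    /* Switch to an altp2m view mapping gfn 5 -> mfn 200, SAME VMID:
     * without a flush, the stale host translation still hits. */
    assert(tlb_lookup(1, 5) == 100);   /* wrong mapping served */

    /* Flushing the VMID on the switch removes the stale entry. */
    tlb_flush_vmid(1);
    assert(tlb_lookup(1, 5) == -1);    /* miss forces a fresh walk */
    return 0;
}
```

This is why either a flush on every view switch or a distinct VMID per view (with its own trade-offs, as noted above) is needed.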

I know it is not an easy task. The P2M code is complex, so it would
benefit all the reviewers to have an explanation in the cover letter of
how this is supposed to work.

Regards,

-- 
Julien Grall

* Re: [PATCH 02/18] arm/altp2m: Add first altp2m HVMOP stubs.
  2016-07-06 18:35                   ` Julien Grall
@ 2016-07-07  9:14                     ` Sergej Proskurin
  0 siblings, 0 replies; 126+ messages in thread
From: Sergej Proskurin @ 2016-07-07  9:14 UTC (permalink / raw)
  To: Julien Grall, Tamas K Lengyel
  Cc: Xen-devel, Stefano Stabellini, Jan Beulich, Andrew Cooper

Hi Julien,


On 07/06/2016 08:35 PM, Julien Grall wrote:
>
>
> On 06/07/16 17:35, Tamas K Lengyel wrote:
>> On Wed, Jul 6, 2016 at 10:29 AM, Julien Grall <julien.grall@arm.com>
>> wrote:
>>>
>>>
>>> On 06/07/16 17:05, Tamas K Lengyel wrote:
>>>>
>>>> On Wed, Jul 6, 2016 at 9:54 AM, Julien Grall <julien.grall@arm.com>
>>>> wrote:
>>>>>
>>>>> Leaving the VMFUNC aside, it looks insecure to expose an HVMOP to
>>>>> the guest which could modify the memaccess attribute of a region.
>>>>>
>>>>> I thought the whole purpose of VM introspection is to avoid
>>>>> trusting the guest (kernel + userspace). The first thing malware
>>>>> will try to do is gain access to the kernel. Once it has this
>>>>> access, it could remove all the memory access restrictions to
>>>>> avoid trapping.
>>>>
>>>>
>>>> That's why I'm saying systems that use this will likely take extra steps
>>>> to ensure kernel integrity. In use-cases where this is to be used
>>>> exclusively for external monitoring the system can be restricted with
>>>> XSM to not allow the guest to issue the hvmops. And remember, on x86
>>>> this system is not exclusively used for introspection.
>>>
>>>
>>> I am not aware of how x86 is using altp2m. And this series didn't
>>> give much explanation of how this is supposed to work...
>>>
>>>>>
>>>>>> As for ARM - as there are no hardware features like this available -
>>>>>> our goal is to use altp2m in purely external usecases so exposing
>>>>>> these ops to the guest is not required. For the first prototype it
>>>>>> made sense to mirror the x86 side to reduce the possibility of
>>>>>> introducing some bug.
>>>>>
>>>>>
>>>>>
>>>>> No, this is not the right approach. We should not introduce a
>>>>> potential security issue just because the x86 side does it and it
>>>>> "reduces the possibility of introducing some bug".
>>>>>
>>>>> You will have to give me another reason to accept such a patch.
>>>>
>>>>
>>>> The first revision of a large series is highly unlikely to get
>>>> accepted on the first run so we have been working with the assumption
>>>> that there will be new revisions. The prototype has been working well
>>>> enough for our internal tests to warrant not submitting it as PATCH
>>>> RFC. Since this is Sergej's first work with Xen it helped to mirror
>>>> the x86 to get him up to speed while working on the prototype and
>>>> reducing the complexity he has to keep track of. Now that this phase
>>>> is complete the adjustments can be made as required, such as not
>>>> exposing these hvmops to ARM guests.
>>>
>>>
>>> Such a large series is already hard to review; it is even harder
>>> when the contributor leaves code unfinished because he assumed there
>>> would be a new revision. Actually, this is the whole purpose of
>>> tagging the series with "RFC": it indicates that the series is not
>>> yet in a state where it could potentially be committed.
>>>
>>
>> The code is not in an unfinished state by any means as it passes _our_
>> tests and works as expected. I think the assumption we made that there
>> will be required adjustments is very reasonable on any patch series.
>> So I'm not sure what the problem is.
>
> Well, I am a bit surprised that this series is passing your tests (you
> may want to update them?). Before sending a new version, I would
> recommend going through the locking of each path. I tried to comment
> on every locking issue I spotted, although I may have missed some.
>
> I would also recommend going through the ARM ARM to see how the
> TLBs behave, because switching between page tables with the same VMID
> could be harmful if not done correctly. I have mentioned that you
> could use a different VMID per page table; however, this may have an
> impact on other parts of the memory subsystem, such as the cache (this
> would need to be investigated).
>
> I know it is not an easy task. The P2M code is complex, so it would
> benefit all the reviewers to have an explanation in the cover letter
> of how this is supposed to work.
>
> Regards,
>

Thank you, Julien, for your thorough review. We will try to address all of
the points you have mentioned and of course re-test the code before
re-submission. I will also provide a cover letter that will explain the
altp2m functionality in more detail.

Cheers,
~Sergej


* Re: [PATCH 14/18] arm/altp2m: Add HVMOP_altp2m_set_mem_access.
  2016-07-06 17:08   ` Julien Grall
@ 2016-07-07  9:16     ` Sergej Proskurin
  0 siblings, 0 replies; 126+ messages in thread
From: Sergej Proskurin @ 2016-07-07  9:16 UTC (permalink / raw)
  To: Julien Grall, xen-devel; +Cc: Stefano Stabellini

Hi Julien,


On 07/06/2016 07:08 PM, Julien Grall wrote:
> Hi,
>
> On 04/07/16 12:45, Sergej Proskurin wrote:
>> --- a/xen/arch/arm/p2m.c
>> +++ b/xen/arch/arm/p2m.c
>> @@ -2085,6 +2085,159 @@ bool_t p2m_mem_access_check(paddr_t gpa,
>> vaddr_t gla, const struct npfec npfec)
>
> [...]
>
>> +static inline
>> +int p2m_set_altp2m_mem_access(struct domain *d, struct p2m_domain
>> *hp2m,
>> +                              struct p2m_domain *ap2m, p2m_access_t a,
>> +                              gfn_t gfn)
>> +{
>> +    p2m_type_t p2mt;
>> +    xenmem_access_t xma_old;
>> +    paddr_t gpa = pfn_to_paddr(gfn_x(gfn));
>> +    paddr_t maddr, mask = 0;
>> +    unsigned int level;
>> +    unsigned long mattr;
>> +    int rc;
>> +
>> +    static const p2m_access_t memaccess[] = {
>> +#define ACCESS(ac) [XENMEM_access_##ac] = p2m_access_##ac
>> +        ACCESS(n),
>> +        ACCESS(r),
>> +        ACCESS(w),
>> +        ACCESS(rw),
>> +        ACCESS(x),
>> +        ACCESS(rx),
>> +        ACCESS(wx),
>> +        ACCESS(rwx),
>> +        ACCESS(rx2rw),
>> +        ACCESS(n2rwx),
>> +#undef ACCESS
>> +    };
>> +
>> +    /* Check if entry is part of the altp2m view. */
>> +    spin_lock(&ap2m->lock);
>> +    maddr = __p2m_lookup(ap2m, gpa, &p2mt);
>> +    spin_unlock(&ap2m->lock);
>> +
>> +    /* Check host p2m if no valid entry in ap2m. */
>> +    if ( maddr == INVALID_PADDR )
>> +    {
>> +        /* Check if entry is part of the host p2m view. */
>> +        spin_lock(&hp2m->lock);
>> +        maddr = __p2m_lookup(hp2m, gpa, &p2mt);
>> +        if ( maddr == INVALID_PADDR || p2mt != p2m_ram_rw )
>> +            goto out;
>> +
>> +        rc = __p2m_get_mem_access(hp2m, gfn, &xma_old);
>> +        if ( rc )
>> +            goto out;
>> +
>> +        rc = p2m_get_gfn_level_and_attr(hp2m, gpa, &level, &mattr);
>> +        if ( rc )
>> +            goto out;
>> +        spin_unlock(&hp2m->lock);
>> +
>> +        mask = level_masks[level];
>> +
>> +        /* If this is a superpage, copy that first. */
>> +        if ( level != 3 )
>> +        {
>> +            rc = apply_p2m_changes(d, ap2m, INSERT,
>> +                                   gpa & mask,
>> +                                   (gpa + level_sizes[level]) & mask,
>> +                                   maddr & mask, mattr, 0, p2mt,
>> +                                   memaccess[xma_old]);
>> +            if ( rc < 0 )
>> +                goto out;
>> +        }
>> +    }
>> +    else
>> +    {
>> +        spin_lock(&ap2m->lock);
>> +        rc = p2m_get_gfn_level_and_attr(ap2m, gpa, &level, &mattr);
>> +        spin_unlock(&ap2m->lock);
>> +        if ( rc )
>> +            goto out;
>> +    }
>> +
>> +    /* Set mem access attributes - currently supporting only one
>> (4K) page. */
>> +    mask = level_masks[3];
>> +    return apply_p2m_changes(d, ap2m, INSERT,
>> +                             gpa & mask,
>> +                             (gpa + level_sizes[level]) & mask,
>> +                             maddr & mask, mattr, 0, p2mt, a);
>> +
>> +out:
>> +    if ( spin_is_locked(&hp2m->lock) )
>> +        spin_unlock(&hp2m->lock);
>
> I have not spotted this before. spin_is_locked does not ensure that
> the lock was taken by this CPU. So you may unlock for another CPU.
>

Thank you. I will consider this in the next patch.

Cheers,
~Sergej



* Re: [PATCH 16/18] arm/altp2m: Extended libxl to activate altp2m on ARM.
  2016-07-04 11:45 ` [PATCH 16/18] arm/altp2m: Extended libxl to activate altp2m on ARM Sergej Proskurin
@ 2016-07-07 16:27   ` Wei Liu
  2016-07-24 16:06     ` Sergej Proskurin
  0 siblings, 1 reply; 126+ messages in thread
From: Wei Liu @ 2016-07-07 16:27 UTC (permalink / raw)
  To: Sergej Proskurin; +Cc: xen-devel, Ian Jackson, Wei Liu

On Mon, Jul 04, 2016 at 01:45:45PM +0200, Sergej Proskurin wrote:
> The current implementation allows setting the parameter HVM_PARAM_ALTP2M.
> This parameter enables further usage of altp2m on ARM. For this, we
> define an additional altp2m field for PV domains as part of the
> libxl_domain_type struct. This field can be set only on ARM systems
> through the "altp2m" switch in the domain's configuration file (i.e.
> set altp2m=1).
> 
> Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
> ---
> Cc: Ian Jackson <ian.jackson@eu.citrix.com>
> Cc: Wei Liu <wei.liu2@citrix.com>
> ---
>  tools/libxl/libxl_create.c  |  1 +
>  tools/libxl/libxl_dom.c     | 14 ++++++++++++++
>  tools/libxl/libxl_types.idl |  1 +
>  tools/libxl/xl_cmdimpl.c    |  5 +++++
>  4 files changed, 21 insertions(+)
> 
> diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
> index 1b99472..40b5f61 100644
> --- a/tools/libxl/libxl_create.c
> +++ b/tools/libxl/libxl_create.c
> @@ -400,6 +400,7 @@ int libxl__domain_build_info_setdefault(libxl__gc *gc,
>              b_info->cmdline = b_info->u.pv.cmdline;
>              b_info->u.pv.cmdline = NULL;
>          }
> +        libxl_defbool_setdefault(&b_info->u.pv.altp2m, false);
>          break;
>      default:
>          LOG(ERROR, "invalid domain type %s in create info",
> diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
> index ec29060..ab023a2 100644
> --- a/tools/libxl/libxl_dom.c
> +++ b/tools/libxl/libxl_dom.c
> @@ -277,6 +277,16 @@ err:
>  }
>  #endif
>  
> +#if defined(__arm__) || defined(__aarch64__)
> +static void pv_set_conf_params(xc_interface *handle, uint32_t domid,
> +                               libxl_domain_build_info *const info)
> +{
> +    if ( libxl_defbool_val(info->u.pv.altp2m) )

Coding style.

> +        xc_hvm_param_set(handle, domid, HVM_PARAM_ALTP2M,
> +                         libxl_defbool_val(info->u.pv.altp2m));
> +}
> +#endif
> +
>  static void hvm_set_conf_params(xc_interface *handle, uint32_t domid,
>                                  libxl_domain_build_info *const info)
>  {
> @@ -433,6 +443,10 @@ int libxl__build_pre(libxl__gc *gc, uint32_t domid,
>              return rc;
>  #endif
>      }
> +#if defined(__arm__) || defined(__aarch64__)
> +    else /* info->type == LIBXL_DOMAIN_TYPE_PV */
> +        pv_set_conf_params(ctx->xch, domid, info);
> +#endif
>  
>      rc = libxl__arch_domain_create(gc, d_config, domid);
>  
> diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
> index ef614be..0a164f9 100644
> --- a/tools/libxl/libxl_types.idl
> +++ b/tools/libxl/libxl_types.idl
> @@ -554,6 +554,7 @@ libxl_domain_build_info = Struct("domain_build_info",[
>                                        ("features", string, {'const': True}),
>                                        # Use host's E820 for PCI passthrough.
>                                        ("e820_host", libxl_defbool),
> +                                      ("altp2m", libxl_defbool),

Why is this placed in PV instead of arch_arm?

You would also need a LIBXL_HAVE macro in libxl.h for the new field.

>                                        ])),
>                   ("invalid", None),
>                   ], keyvar_init_val = "LIBXL_DOMAIN_TYPE_INVALID")),
> diff --git a/tools/libxl/xl_cmdimpl.c b/tools/libxl/xl_cmdimpl.c
> index 6459eec..12c6e48 100644
> --- a/tools/libxl/xl_cmdimpl.c
> +++ b/tools/libxl/xl_cmdimpl.c
> @@ -1718,6 +1718,11 @@ static void parse_config_data(const char *config_source,
>              exit(1);
>          }
>  
> +#if defined(__arm__) || defined(__aarch64__)
> +        /* Enable altp2m for PV guests solely on ARM */
> +        xlu_cfg_get_defbool(config, "altp2m", &b_info->u.pv.altp2m, 0);
> +#endif
> +
>          break;
>      }
>      default:
> -- 
> 2.8.3
> 
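[Editor's note: for reference, once wired through as in this patch, the toggle would be set from the guest configuration file as the commit message describes (fragment below is illustrative):]

```
# xl domain configuration fragment (illustrative)
altp2m = 1    # parsed by parse_config_data(), sets HVM_PARAM_ALTP2M on ARM
```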


* Re: [PATCH 11/18] arm/altp2m: Make flush_tlb_domain ready for altp2m.
  2016-07-05 20:21         ` Sergej Proskurin
  2016-07-06 14:28           ` Julien Grall
@ 2016-07-07 17:24           ` Julien Grall
  1 sibling, 0 replies; 126+ messages in thread
From: Julien Grall @ 2016-07-07 17:24 UTC (permalink / raw)
  To: Sergej Proskurin, xen-devel; +Cc: Stefano Stabellini



On 05/07/16 21:21, Sergej Proskurin wrote:
> On 07/05/2016 05:37 PM, Julien Grall wrote:
>>> When altp2m is active, the hostp2m is not used. All changes are applied
>>> directly to the individual altp2m views. As far as I know, the only
>>> situations, where a mapping is removed from a specific p2m view is in
>>> the functions unmap_regions_rw_cache, unmap_mmio_regions, and
>>> guest_physmap_remove_page. Each one of these functions provide, however
>>> the hostp2m to the function apply_p2m_changes, where the mapping is
>>> eventually removed. So, we definitely remove mappings from the hostp2m.
>>> However, these would not be removed from the associated altp2m views.
>>>
>>> Can you state a scenario, when and why pages might need to be removed at
>>> run time from the hostp2m? In that case, maybe it would make sense to
>>> extend the three functions (unmap_regions_rw_cache, unmap_mmio_regions,
>>> and guest_physmap_remove_page) to additionally remove the mappings from
>>> all available altp2m views?
>>
>> All the functions, you mentioned can be called after the domain has been
>> created. If you grep guest_physmap_remove_page in the code source, you
>> will find that it is used to unmap grant entry from the memory (see
>> replace_grant_host_mapping) or when decreasing the memory reservation
>> (see do_memory_op).
>>
>> Note that, from my understanding, x86 has the same problem.
>>
>
> Yes, the x86 implementation works similarly to the one for ARM. I will
> need to think about that. One solution could simply extend the
> previously mentioned functions by unmapping the particular entry in all
> currently available altp2m views (assuming, we allocate the altp2m views
> on demand, as discussed in patch #05).
>
> Whatever the solution will be, it will need to be ported to the x86
> implementation as well.

Actually, the x86 code is propagating the change (see 
p2m_altp2m_propagate_change). I missed it while going through the code 
the first time.

Cheers,

-- 
Julien Grall


* Re: [PATCH 12/18] arm/altp2m: Cosmetic fixes - function prototypes.
  2016-07-04 11:45 ` [PATCH 12/18] arm/altp2m: Cosmetic fixes - function prototypes Sergej Proskurin
@ 2016-07-15 13:45   ` Julien Grall
  2016-07-16 15:18     ` Sergej Proskurin
  0 siblings, 1 reply; 126+ messages in thread
From: Julien Grall @ 2016-07-15 13:45 UTC (permalink / raw)
  To: Sergej Proskurin, xen-devel; +Cc: Stefano Stabellini, Tamas K Lengyel

Hi Sergej,

On 04/07/16 12:45, Sergej Proskurin wrote:
> This commit changes the prototype of the following functions:
> - apply_p2m_changes
> - apply_one_level
> - p2m_shatter_page
> - p2m_create_table
> - __p2m_lookup
> - __p2m_get_mem_access
>
> These changes are required as our implementation reuses most of the
> existing ARM p2m implementation to set page table attributes of the
> individual altp2m views. Therefore, existing function prototypes have
> been extended to hold another argument (of type struct p2m_domain *).
> This allows to specify the p2m/altp2m domain that should be processed by
> the individual function -- instead of accessing the host's default p2m
> domain.

I am actually reworking the whole p2m code to be compliant with the ARM 
architecture (such as break-before-make) and to make it easier to 
implement new features such as altp2m.

Would it be possible to send this patch separately with nothing altp2m 
related in it?

>
> Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
> ---
> Cc: Stefano Stabellini <sstabellini@kernel.org>
> Cc: Julien Grall <julien.grall@arm.com>
> ---
>   xen/arch/arm/p2m.c | 80 +++++++++++++++++++++++++++++-------------------------
>   1 file changed, 43 insertions(+), 37 deletions(-)
>
> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> index 019f10e..9c8fefd 100644
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -200,9 +200,8 @@ void flush_tlb_domain(struct domain *d)
>    * There are no processor functions to do a stage 2 only lookup therefore we
>    * do a a software walk.
>    */
> -static paddr_t __p2m_lookup(struct domain *d, paddr_t paddr, p2m_type_t *t)
> +static paddr_t __p2m_lookup(struct p2m_domain *p2m, paddr_t paddr, p2m_type_t *t)
>   {
> -    struct p2m_domain *p2m = &d->arch.p2m;
>       const unsigned int offsets[4] = {
>           zeroeth_table_offset(paddr),
>           first_table_offset(paddr),
> @@ -282,10 +281,11 @@ err:
>   paddr_t p2m_lookup(struct domain *d, paddr_t paddr, p2m_type_t *t)
>   {
>       paddr_t ret;
> -    struct p2m_domain *p2m = &d->arch.p2m;
> +    struct vcpu *v = current;
> +    struct p2m_domain *p2m = altp2m_active(d) ? p2m_get_altp2m(v) : p2m_get_hostp2m(d);

This change is wrong, this function is called in hypercalls to translate 
an IPA for another domain to an MFN. So current->domain != d.

>
>       spin_lock(&p2m->lock);
> -    ret = __p2m_lookup(d, paddr, t);
> +    ret = __p2m_lookup(p2m, paddr, t);
>       spin_unlock(&p2m->lock);
>
>       return ret;
> @@ -441,10 +441,12 @@ static inline void p2m_remove_pte(lpae_t *p, bool_t flush_cache)
>    *
>    * level_shift is the number of bits at the level we want to create.
>    */
> -static int p2m_create_table(struct domain *d, lpae_t *entry,
> -                            int level_shift, bool_t flush_cache)
> +static int p2m_create_table(struct domain *d,

Please drop "struct domain *d", it was only here to get the p2m.

> +                            struct p2m_domain *p2m,
> +                            lpae_t *entry,
> +                            int level_shift,
> +                            bool_t flush_cache)
>   {
> -    struct p2m_domain *p2m = &d->arch.p2m;
>       struct page_info *page;
>       lpae_t *p;
>       lpae_t pte;
> @@ -502,10 +504,9 @@ static int p2m_create_table(struct domain *d, lpae_t *entry,
>       return 0;
>   }
>
> -static int __p2m_get_mem_access(struct domain *d, gfn_t gfn,
> +static int __p2m_get_mem_access(struct p2m_domain *p2m, gfn_t gfn,
>                                   xenmem_access_t *access)
>   {
> -    struct p2m_domain *p2m = p2m_get_hostp2m(d);
>       void *i;
>       unsigned int index;
>
> @@ -548,7 +549,7 @@ static int __p2m_get_mem_access(struct domain *d, gfn_t gfn,
>            * No setting was found in the Radix tree. Check if the
>            * entry exists in the page-tables.
>            */
> -        paddr_t maddr = __p2m_lookup(d, gfn_x(gfn) << PAGE_SHIFT, NULL);
> +        paddr_t maddr = __p2m_lookup(p2m, gfn_x(gfn) << PAGE_SHIFT, NULL);
>           if ( INVALID_PADDR == maddr )
>               return -ESRCH;
>
> @@ -677,17 +678,17 @@ static const paddr_t level_shifts[] =
>       { ZEROETH_SHIFT, FIRST_SHIFT, SECOND_SHIFT, THIRD_SHIFT };
>
>   static int p2m_shatter_page(struct domain *d,

Ditto.

> +                            struct p2m_domain *p2m,
>                               lpae_t *entry,
>                               unsigned int level,
>                               bool_t flush_cache)
>   {
>       const paddr_t level_shift = level_shifts[level];
> -    int rc = p2m_create_table(d, entry,
> +    int rc = p2m_create_table(d, p2m, entry,
>                                 level_shift - PAGE_SHIFT, flush_cache);
>
>       if ( !rc )
>       {
> -        struct p2m_domain *p2m = &d->arch.p2m;
>           p2m->stats.shattered[level]++;
>           p2m->stats.mappings[level]--;
>           p2m->stats.mappings[level+1] += LPAE_ENTRIES;
> @@ -704,6 +705,7 @@ static int p2m_shatter_page(struct domain *d,
>    * -ve == (-Exxx) error.
>    */
>   static int apply_one_level(struct domain *d,

Ditto.

> +                           struct p2m_domain *p2m,
>                              lpae_t *entry,
>                              unsigned int level,
>                              bool_t flush_cache,
> @@ -721,7 +723,6 @@ static int apply_one_level(struct domain *d,
>       const paddr_t level_mask = level_masks[level];
>       const paddr_t level_shift = level_shifts[level];
>
> -    struct p2m_domain *p2m = &d->arch.p2m;
>       lpae_t pte;
>       const lpae_t orig_pte = *entry;
>       int rc;

[...]

> @@ -1011,6 +1012,7 @@ static void update_reference_mapping(struct page_info *page,
>   }
>
>   static int apply_p2m_changes(struct domain *d,

I would like to see "struct domain *d" dropped completely. If we really 
need it, we could introduce a back pointer to domain.

> +                     struct p2m_domain *p2m,
>                        enum p2m_operation op,
>                        paddr_t start_gpaddr,
>                        paddr_t end_gpaddr,

[...]

> @@ -1743,6 +1745,9 @@ p2m_mem_access_check_and_get_page(vaddr_t gva, unsigned long flag)
>       xenmem_access_t xma;
>       p2m_type_t t;
>       struct page_info *page = NULL;
> +    struct vcpu *v = current;
> +    struct domain *d = v->domain;
> +    struct p2m_domain *p2m = altp2m_active(d) ? p2m_get_altp2m(v) : p2m_get_hostp2m(d);

This patch is supposed to be "cosmetic fixes", so this change does not 
belong to this patch.

However, the changes within this function look wrong to me because 
p2m_mem_access_check_and_get_page is used by Xen to get the page when 
copying data to/from the guest. If the entry is not yet replicated in 
the altp2m, you will fail to copy the data.

You may also want to consider how get_page_from_gva would work if an 
altp2m is used.

>
>       rc = gva_to_ipa(gva, &ipa, flag);

This is not related to this patch, but I think gva_to_ipa can fail to 
translate a VA to an IPA if the stage-1 page table resides in memory 
which was restricted by memaccess.

>       if ( rc < 0 )
> @@ -1752,7 +1757,7 @@ p2m_mem_access_check_and_get_page(vaddr_t gva, unsigned long flag)
>        * We do this first as this is faster in the default case when no
>        * permission is set on the page.
>        */
> -    rc = __p2m_get_mem_access(current->domain, _gfn(paddr_to_pfn(ipa)), &xma);
> +    rc = __p2m_get_mem_access(p2m, _gfn(paddr_to_pfn(ipa)), &xma);
>       if ( rc < 0 )
>           goto err;
>
> @@ -1801,7 +1806,7 @@ p2m_mem_access_check_and_get_page(vaddr_t gva, unsigned long flag)
>        * We had a mem_access permission limiting the access, but the page type
>        * could also be limiting, so we need to check that as well.
>        */
> -    maddr = __p2m_lookup(current->domain, ipa, &t);
> +    maddr = __p2m_lookup(p2m, ipa, &t);
>       if ( maddr == INVALID_PADDR )
>           goto err;
>
> @@ -2125,7 +2130,7 @@ long p2m_set_mem_access(struct domain *d, gfn_t gfn, uint32_t nr,
>           return 0;
>       }
>
> -    rc = apply_p2m_changes(d, MEMACCESS,
> +    rc = apply_p2m_changes(d, p2m, MEMACCESS,
>                              pfn_to_paddr(gfn_x(gfn) + start),
>                              pfn_to_paddr(gfn_x(gfn) + nr),
>                              0, MATTR_MEM, mask, 0, a);
> @@ -2141,10 +2146,11 @@ int p2m_get_mem_access(struct domain *d, gfn_t gfn,
>                          xenmem_access_t *access)
>   {
>       int ret;
> -    struct p2m_domain *p2m = p2m_get_hostp2m(d);
> +    struct vcpu *v = current;
> +    struct p2m_domain *p2m = altp2m_active(d) ? p2m_get_altp2m(v) : p2m_get_hostp2m(d);

Same remark as above: this patch is only "cosmetic", and functional 
changes do not belong to this patch.

>
>       spin_lock(&p2m->lock);
> -    ret = __p2m_get_mem_access(d, gfn, access);
> +    ret = __p2m_get_mem_access(p2m, gfn, access);
>       spin_unlock(&p2m->lock);
>
>       return ret;
>


Regards,

-- 
Julien Grall


* Re: [PATCH 12/18] arm/altp2m: Cosmetic fixes - function prototypes.
  2016-07-15 13:45   ` Julien Grall
@ 2016-07-16 15:18     ` Sergej Proskurin
  0 siblings, 0 replies; 126+ messages in thread
From: Sergej Proskurin @ 2016-07-16 15:18 UTC (permalink / raw)
  To: Julien Grall, xen-devel; +Cc: Stefano Stabellini, Tamas K Lengyel

Hi Julien,


On 07/15/2016 03:45 PM, Julien Grall wrote:
> Hi Sergej,
> 
> On 04/07/16 12:45, Sergej Proskurin wrote:
>> This commit changes the prototype of the following functions:
>> - apply_p2m_changes
>> - apply_one_level
>> - p2m_shatter_page
>> - p2m_create_table
>> - __p2m_lookup
>> - __p2m_get_mem_access
>>
>> These changes are required as our implementation reuses most of the
>> existing ARM p2m implementation to set page table attributes of the
>> individual altp2m views. Therefore, exiting function prototypes have
>> been extended to hold another argument (of type struct p2m_domain *).
>> This allows to specify the p2m/altp2m domain that should be processed by
>> the individual function -- instead of accessing the host's default p2m
>> domain.
> 
> I am actually reworking the whole p2m code to be compliant with the ARM
> architecture (such as break-before-make) and to make it easier to
> implement new features such as altp2m.
> 
> Would it be possible to send this patch separately with nothing altp2m
> related in it?
> 

I will look into that, thank you for asking. The current implementation
already provides more cosmetic fixes, so I need to briefly assess
which parts might be submitted independently of altp2m. I will respond
as quickly as possible.

>>
>> Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
>> ---
>> Cc: Stefano Stabellini <sstabellini@kernel.org>
>> Cc: Julien Grall <julien.grall@arm.com>
>> ---
>>   xen/arch/arm/p2m.c | 80
>> +++++++++++++++++++++++++++++-------------------------
>>   1 file changed, 43 insertions(+), 37 deletions(-)
>>
>> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
>> index 019f10e..9c8fefd 100644
>> --- a/xen/arch/arm/p2m.c
>> +++ b/xen/arch/arm/p2m.c
>> @@ -200,9 +200,8 @@ void flush_tlb_domain(struct domain *d)
>>    * There are no processor functions to do a stage 2 only lookup
>> therefore we
>>    * do a a software walk.
>>    */
>> -static paddr_t __p2m_lookup(struct domain *d, paddr_t paddr,
>> p2m_type_t *t)
>> +static paddr_t __p2m_lookup(struct p2m_domain *p2m, paddr_t paddr,
>> p2m_type_t *t)
>>   {
>> -    struct p2m_domain *p2m = &d->arch.p2m;
>>       const unsigned int offsets[4] = {
>>           zeroeth_table_offset(paddr),
>>           first_table_offset(paddr),
>> @@ -282,10 +281,11 @@ err:
>>   paddr_t p2m_lookup(struct domain *d, paddr_t paddr, p2m_type_t *t)
>>   {
>>       paddr_t ret;
>> -    struct p2m_domain *p2m = &d->arch.p2m;
>> +    struct vcpu *v = current;
>> +    struct p2m_domain *p2m = altp2m_active(d) ? p2m_get_altp2m(v) :
>> p2m_get_hostp2m(d);
> 
> This change is wrong, this function is called in hypercalls to translate
> an IPA for another domain to an MFN. So current->domain != d.
> 
>>
>>       spin_lock(&p2m->lock);
>> -    ret = __p2m_lookup(d, paddr, t);
>> +    ret = __p2m_lookup(p2m, paddr, t);
>>       spin_unlock(&p2m->lock);
>>
>>       return ret;
>> @@ -441,10 +441,12 @@ static inline void p2m_remove_pte(lpae_t *p,
>> bool_t flush_cache)
>>    *
>>    * level_shift is the number of bits at the level we want to create.
>>    */
>> -static int p2m_create_table(struct domain *d, lpae_t *entry,
>> -                            int level_shift, bool_t flush_cache)
>> +static int p2m_create_table(struct domain *d,
> 
> Please drop "struct domain *d", it was only here to get the p2m.
> 
>> +                            struct p2m_domain *p2m,
>> +                            lpae_t *entry,
>> +                            int level_shift,
>> +                            bool_t flush_cache)
>>   {
>> -    struct p2m_domain *p2m = &d->arch.p2m;
>>       struct page_info *page;
>>       lpae_t *p;
>>       lpae_t pte;
>> @@ -502,10 +504,9 @@ static int p2m_create_table(struct domain *d,
>> lpae_t *entry,
>>       return 0;
>>   }
>>
>> -static int __p2m_get_mem_access(struct domain *d, gfn_t gfn,
>> +static int __p2m_get_mem_access(struct p2m_domain *p2m, gfn_t gfn,
>>                                   xenmem_access_t *access)
>>   {
>> -    struct p2m_domain *p2m = p2m_get_hostp2m(d);
>>       void *i;
>>       unsigned int index;
>>
>> @@ -548,7 +549,7 @@ static int __p2m_get_mem_access(struct domain *d,
>> gfn_t gfn,
>>            * No setting was found in the Radix tree. Check if the
>>            * entry exists in the page-tables.
>>            */
>> -        paddr_t maddr = __p2m_lookup(d, gfn_x(gfn) << PAGE_SHIFT, NULL);
>> +        paddr_t maddr = __p2m_lookup(p2m, gfn_x(gfn) << PAGE_SHIFT,
>> NULL);
>>           if ( INVALID_PADDR == maddr )
>>               return -ESRCH;
>>
>> @@ -677,17 +678,17 @@ static const paddr_t level_shifts[] =
>>       { ZEROETH_SHIFT, FIRST_SHIFT, SECOND_SHIFT, THIRD_SHIFT };
>>
>>   static int p2m_shatter_page(struct domain *d,
> 
> Ditto.
> 
>> +                            struct p2m_domain *p2m,
>>                               lpae_t *entry,
>>                               unsigned int level,
>>                               bool_t flush_cache)
>>   {
>>       const paddr_t level_shift = level_shifts[level];
>> -    int rc = p2m_create_table(d, entry,
>> +    int rc = p2m_create_table(d, p2m, entry,
>>                                 level_shift - PAGE_SHIFT, flush_cache);
>>
>>       if ( !rc )
>>       {
>> -        struct p2m_domain *p2m = &d->arch.p2m;
>>           p2m->stats.shattered[level]++;
>>           p2m->stats.mappings[level]--;
>>           p2m->stats.mappings[level+1] += LPAE_ENTRIES;
>> @@ -704,6 +705,7 @@ static int p2m_shatter_page(struct domain *d,
>>    * -ve == (-Exxx) error.
>>    */
>>   static int apply_one_level(struct domain *d,
> 
> Ditto.
> 
>> +                           struct p2m_domain *p2m,
>>                              lpae_t *entry,
>>                              unsigned int level,
>>                              bool_t flush_cache,
>> @@ -721,7 +723,6 @@ static int apply_one_level(struct domain *d,
>>       const paddr_t level_mask = level_masks[level];
>>       const paddr_t level_shift = level_shifts[level];
>>
>> -    struct p2m_domain *p2m = &d->arch.p2m;
>>       lpae_t pte;
>>       const lpae_t orig_pte = *entry;
>>       int rc;
> 
> [...]
> 
>> @@ -1011,6 +1012,7 @@ static void update_reference_mapping(struct
>> page_info *page,
>>   }
>>
>>   static int apply_p2m_changes(struct domain *d,
> 
> I would like to see "struct domain *d" dropped completely. If we really
> need it, we could introduce a back pointer to domain.
> 
>> +                     struct p2m_domain *p2m,
>>                        enum p2m_operation op,
>>                        paddr_t start_gpaddr,
>>                        paddr_t end_gpaddr,
> 
> [...]
> 
>> @@ -1743,6 +1745,9 @@ p2m_mem_access_check_and_get_page(vaddr_t gva,
>> unsigned long flag)
>>       xenmem_access_t xma;
>>       p2m_type_t t;
>>       struct page_info *page = NULL;
>> +    struct vcpu *v = current;
>> +    struct domain *d = v->domain;
>> +    struct p2m_domain *p2m = altp2m_active(d) ? p2m_get_altp2m(v) :
>> p2m_get_hostp2m(d);
> 
> This patch is supposed to be "cosmetic fixes", so this change does not
> belong in this patch.
> 
> However, the changes within this function look wrong to me because
> p2m_mem_access_check_and_get_page is used by Xen to get the page when
> copying data to/from the guest. If the entry is not yet replicated in
> the altp2m, you will fail to copy the data.
> 
> You may also want to consider how get_page_from_gva would work if an
> altp2m is used.
> 
>>
>>       rc = gva_to_ipa(gva, &ipa, flag);
> 
> This is not related to this patch, but I think gva_to_ipa can fail to
> translate a VA to an IPA if the stage-1 page tables reside in memory
> whose access was restricted by memaccess.
> 
>>       if ( rc < 0 )
>> @@ -1752,7 +1757,7 @@ p2m_mem_access_check_and_get_page(vaddr_t gva,
>> unsigned long flag)
>>        * We do this first as this is faster in the default case when no
>>        * permission is set on the page.
>>        */
>> -    rc = __p2m_get_mem_access(current->domain,
>> _gfn(paddr_to_pfn(ipa)), &xma);
>> +    rc = __p2m_get_mem_access(p2m, _gfn(paddr_to_pfn(ipa)), &xma);
>>       if ( rc < 0 )
>>           goto err;
>>
>> @@ -1801,7 +1806,7 @@ p2m_mem_access_check_and_get_page(vaddr_t gva,
>> unsigned long flag)
>>        * We had a mem_access permission limiting the access, but the
>> page type
>>        * could also be limiting, so we need to check that as well.
>>        */
>> -    maddr = __p2m_lookup(current->domain, ipa, &t);
>> +    maddr = __p2m_lookup(p2m, ipa, &t);
>>       if ( maddr == INVALID_PADDR )
>>           goto err;
>>
>> @@ -2125,7 +2130,7 @@ long p2m_set_mem_access(struct domain *d, gfn_t
>> gfn, uint32_t nr,
>>           return 0;
>>       }
>>
>> -    rc = apply_p2m_changes(d, MEMACCESS,
>> +    rc = apply_p2m_changes(d, p2m, MEMACCESS,
>>                              pfn_to_paddr(gfn_x(gfn) + start),
>>                              pfn_to_paddr(gfn_x(gfn) + nr),
>>                              0, MATTR_MEM, mask, 0, a);
>> @@ -2141,10 +2146,11 @@ int p2m_get_mem_access(struct domain *d, gfn_t
>> gfn,
>>                          xenmem_access_t *access)
>>   {
>>       int ret;
>> -    struct p2m_domain *p2m = p2m_get_hostp2m(d);
>> +    struct vcpu *v = current;
>> +    struct p2m_domain *p2m = altp2m_active(d) ? p2m_get_altp2m(v) :
>> p2m_get_hostp2m(d);
> 
> Same remark as above, this patch is only "cosmetic" and functional
> changes do not belong in this patch.
> 
>>
>>       spin_lock(&p2m->lock);
>> -    ret = __p2m_get_mem_access(d, gfn, access);
>> +    ret = __p2m_get_mem_access(p2m, gfn, access);
>>       spin_unlock(&p2m->lock);
>>
>>       return ret;
>>

Best regards,
~Sergej

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 126+ messages in thread

* Re: [PATCH 16/18] arm/altp2m: Extended libxl to activate altp2m on ARM.
  2016-07-07 16:27   ` Wei Liu
@ 2016-07-24 16:06     ` Sergej Proskurin
  2016-07-25  8:32       ` Wei Liu
  0 siblings, 1 reply; 126+ messages in thread
From: Sergej Proskurin @ 2016-07-24 16:06 UTC (permalink / raw)
  To: xen-devel, wei.liu2

Hi Wei,

On 07/07/2016 06:27 PM, Wei Liu wrote:
> On Mon, Jul 04, 2016 at 01:45:45PM +0200, Sergej Proskurin wrote:
>> The current implementation allows setting the parameter HVM_PARAM_ALTP2M.
>> This parameter allows further usage of altp2m on ARM. For this, we
>> define an additional altp2m field for PV domains as part of the
>> libxl_domain_type struct. This field can be set only on ARM systems
>> through the "altp2m" switch in the domain's configuration file (i.e.
>> set altp2m=1).
>>
>> Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
>> ---
>> Cc: Ian Jackson <ian.jackson@eu.citrix.com>
>> Cc: Wei Liu <wei.liu2@citrix.com>
>> ---
>>  tools/libxl/libxl_create.c  |  1 +
>>  tools/libxl/libxl_dom.c     | 14 ++++++++++++++
>>  tools/libxl/libxl_types.idl |  1 +
>>  tools/libxl/xl_cmdimpl.c    |  5 +++++
>>  4 files changed, 21 insertions(+)
>>
>> diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
>> index 1b99472..40b5f61 100644
>> --- a/tools/libxl/libxl_create.c
>> +++ b/tools/libxl/libxl_create.c
>> @@ -400,6 +400,7 @@ int libxl__domain_build_info_setdefault(libxl__gc *gc,
>>              b_info->cmdline = b_info->u.pv.cmdline;
>>              b_info->u.pv.cmdline = NULL;
>>          }
>> +        libxl_defbool_setdefault(&b_info->u.pv.altp2m, false);
>>          break;
>>      default:
>>          LOG(ERROR, "invalid domain type %s in create info",
>> diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
>> index ec29060..ab023a2 100644
>> --- a/tools/libxl/libxl_dom.c
>> +++ b/tools/libxl/libxl_dom.c
>> @@ -277,6 +277,16 @@ err:
>>  }
>>  #endif
>>  
>> +#if defined(__arm__) || defined(__aarch64__)
>> +static void pv_set_conf_params(xc_interface *handle, uint32_t domid,
>> +                               libxl_domain_build_info *const info)
>> +{
>> +    if ( libxl_defbool_val(info->u.pv.altp2m) )
> 
> Coding style.
> 

I am not sure which part does not fulfill Xen's coding style. Could you
be more specific please? Thank you.

>> +        xc_hvm_param_set(handle, domid, HVM_PARAM_ALTP2M,
>> +                         libxl_defbool_val(info->u.pv.altp2m));
>> +}
>> +#endif
>> +
>>  static void hvm_set_conf_params(xc_interface *handle, uint32_t domid,
>>                                  libxl_domain_build_info *const info)
>>  {
>> @@ -433,6 +443,10 @@ int libxl__build_pre(libxl__gc *gc, uint32_t domid,
>>              return rc;
>>  #endif
>>      }
>> +#if defined(__arm__) || defined(__aarch64__)
>> +    else /* info->type == LIBXL_DOMAIN_TYPE_PV */
>> +        pv_set_conf_params(ctx->xch, domid, info);
>> +#endif
>>  
>>      rc = libxl__arch_domain_create(gc, d_config, domid);
>>  
>> diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
>> index ef614be..0a164f9 100644
>> --- a/tools/libxl/libxl_types.idl
>> +++ b/tools/libxl/libxl_types.idl
>> @@ -554,6 +554,7 @@ libxl_domain_build_info = Struct("domain_build_info",[
>>                                        ("features", string, {'const': True}),
>>                                        # Use host's E820 for PCI passthrough.
>>                                        ("e820_host", libxl_defbool),
>> +                                      ("altp2m", libxl_defbool),
> 
> Why is this placed in PV instead of arch_arm?
> 

The current implementation mirrors the x86's altp2m, where the altp2m
field represents a property of HVM guests in b_info->u.hvm.altp2m. Since
guests on ARM are marked as PV, I have placed the field altp2m into
b_info->u.pv.altp2m.

However, if you believe that it would make more sense to place altp2m
into b_info->arch_arm.altp2m, I will try to adapt the affected fields.

> You would also need a LIBXL_HAVE macro in libxl.h for the new field.
> 

There is already a LIBXL_HAVE_ALTP2M macro defined in libxl.h. Or did
you mean using an explicit one for ARM?

>>                                        ])),
>>                   ("invalid", None),
>>                   ], keyvar_init_val = "LIBXL_DOMAIN_TYPE_INVALID")),
>> diff --git a/tools/libxl/xl_cmdimpl.c b/tools/libxl/xl_cmdimpl.c
>> index 6459eec..12c6e48 100644
>> --- a/tools/libxl/xl_cmdimpl.c
>> +++ b/tools/libxl/xl_cmdimpl.c
>> @@ -1718,6 +1718,11 @@ static void parse_config_data(const char *config_source,
>>              exit(1);
>>          }
>>  
>> +#if defined(__arm__) || defined(__aarch64__)
>> +        /* Enable altp2m for PV guests solely on ARM */
>> +        xlu_cfg_get_defbool(config, "altp2m", &b_info->u.pv.altp2m, 0);
>> +#endif
>> +
>>          break;
>>      }
>>      default:
>> -- 
>> 2.8.3
>>

Thank you.

Best regards,
~Sergej


* Re: [PATCH 16/18] arm/altp2m: Extended libxl to activate altp2m on ARM.
  2016-07-24 16:06     ` Sergej Proskurin
@ 2016-07-25  8:32       ` Wei Liu
  2016-07-25  9:04         ` Sergej Proskurin
  0 siblings, 1 reply; 126+ messages in thread
From: Wei Liu @ 2016-07-25  8:32 UTC (permalink / raw)
  To: Sergej Proskurin; +Cc: wei.liu2, xen-devel

On Sun, Jul 24, 2016 at 06:06:00PM +0200, Sergej Proskurin wrote:
> Hi Wei,
> 
> On 07/07/2016 06:27 PM, Wei Liu wrote:
> > On Mon, Jul 04, 2016 at 01:45:45PM +0200, Sergej Proskurin wrote:
> >> The current implementation allows setting the parameter HVM_PARAM_ALTP2M.
> >> This parameter allows further usage of altp2m on ARM. For this, we
> >> define an additional altp2m field for PV domains as part of the
> >> libxl_domain_type struct. This field can be set only on ARM systems
> >> through the "altp2m" switch in the domain's configuration file (i.e.
> >> set altp2m=1).
> >>
> >> Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
> >> ---
> >> Cc: Ian Jackson <ian.jackson@eu.citrix.com>
> >> Cc: Wei Liu <wei.liu2@citrix.com>
> >> ---
> >>  tools/libxl/libxl_create.c  |  1 +
> >>  tools/libxl/libxl_dom.c     | 14 ++++++++++++++
> >>  tools/libxl/libxl_types.idl |  1 +
> >>  tools/libxl/xl_cmdimpl.c    |  5 +++++
> >>  4 files changed, 21 insertions(+)
> >>
> >> diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
> >> index 1b99472..40b5f61 100644
> >> --- a/tools/libxl/libxl_create.c
> >> +++ b/tools/libxl/libxl_create.c
> >> @@ -400,6 +400,7 @@ int libxl__domain_build_info_setdefault(libxl__gc *gc,
> >>              b_info->cmdline = b_info->u.pv.cmdline;
> >>              b_info->u.pv.cmdline = NULL;
> >>          }
> >> +        libxl_defbool_setdefault(&b_info->u.pv.altp2m, false);
> >>          break;
> >>      default:
> >>          LOG(ERROR, "invalid domain type %s in create info",
> >> diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
> >> index ec29060..ab023a2 100644
> >> --- a/tools/libxl/libxl_dom.c
> >> +++ b/tools/libxl/libxl_dom.c
> >> @@ -277,6 +277,16 @@ err:
> >>  }
> >>  #endif
> >>  
> >> +#if defined(__arm__) || defined(__aarch64__)
> >> +static void pv_set_conf_params(xc_interface *handle, uint32_t domid,
> >> +                               libxl_domain_build_info *const info)
> >> +{
> >> +    if ( libxl_defbool_val(info->u.pv.altp2m) )
> > 
> > Coding style.
> > 
> 
> I am not sure which part does not fulfill Xen's coding style. Could you
> be more specific please? Thank you.
> 

There is a CODING_STYLE file in libxl as well. The coding style in
tools/ is generally different from the coding style in the hypervisor,
with the exception of libxc.

Sorry for being too terse on this. Do ask questions! :-)

> >> +        xc_hvm_param_set(handle, domid, HVM_PARAM_ALTP2M,
> >> +                         libxl_defbool_val(info->u.pv.altp2m));
> >> +}
> >> +#endif
> >> +
> >>  static void hvm_set_conf_params(xc_interface *handle, uint32_t domid,
> >>                                  libxl_domain_build_info *const info)
> >>  {
> >> @@ -433,6 +443,10 @@ int libxl__build_pre(libxl__gc *gc, uint32_t domid,
> >>              return rc;
> >>  #endif
> >>      }
> >> +#if defined(__arm__) || defined(__aarch64__)
> >> +    else /* info->type == LIBXL_DOMAIN_TYPE_PV */
> >> +        pv_set_conf_params(ctx->xch, domid, info);
> >> +#endif
> >>  
> >>      rc = libxl__arch_domain_create(gc, d_config, domid);
> >>  
> >> diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
> >> index ef614be..0a164f9 100644
> >> --- a/tools/libxl/libxl_types.idl
> >> +++ b/tools/libxl/libxl_types.idl
> >> @@ -554,6 +554,7 @@ libxl_domain_build_info = Struct("domain_build_info",[
> >>                                        ("features", string, {'const': True}),
> >>                                        # Use host's E820 for PCI passthrough.
> >>                                        ("e820_host", libxl_defbool),
> >> +                                      ("altp2m", libxl_defbool),
> > 
> > Why is this placed in PV instead of arch_arm?
> > 
> 
> The current implementation mirrors the x86's altp2m, where the altp2m
> field represents a property of HVM guests in b_info->u.hvm.altp2m. Since
> guests on ARM are marked as PV, I have placed the field altp2m into
> b_info->u.pv.altp2m.
> 
> However, if you believe that it would make more sense to place altp2m
> into b_info->arch_arm.altp2m, I will try to adapt the affected fields.
> 

OK. I don't think that one should be placed in u.pv because that union
is for x86. However, it is also not suitable to simply promote it to
a common field, because x86 PV doesn't have altp2m support.

I think arch_arm would be a better location.

> > You would also need a LIBXL_HAVE macro in libxl.h for the new field.
> > 
> 
> There is already a LIBXL_HAVE_ALTP2M macro defined in libxl.h. Or did
> you mean using an explicit one for ARM?
> 

Yes, we need a new one for ARM.

Wei.


* Re: [PATCH 16/18] arm/altp2m: Extended libxl to activate altp2m on ARM.
  2016-07-25  8:32       ` Wei Liu
@ 2016-07-25  9:04         ` Sergej Proskurin
  2016-07-25  9:49           ` Julien Grall
  0 siblings, 1 reply; 126+ messages in thread
From: Sergej Proskurin @ 2016-07-25  9:04 UTC (permalink / raw)
  To: Wei Liu; +Cc: xen-devel

Hi Wei,


On 07/25/2016 10:32 AM, Wei Liu wrote:
> On Sun, Jul 24, 2016 at 06:06:00PM +0200, Sergej Proskurin wrote:
>> Hi Wei,
>>
>> On 07/07/2016 06:27 PM, Wei Liu wrote:
>>> On Mon, Jul 04, 2016 at 01:45:45PM +0200, Sergej Proskurin wrote:
>>>> The current implementation allows setting the parameter HVM_PARAM_ALTP2M.
>>>> This parameter allows further usage of altp2m on ARM. For this, we
>>>> define an additional altp2m field for PV domains as part of the
>>>> libxl_domain_type struct. This field can be set only on ARM systems
>>>> through the "altp2m" switch in the domain's configuration file (i.e.
>>>> set altp2m=1).
>>>>
>>>> Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
>>>> ---
>>>> Cc: Ian Jackson <ian.jackson@eu.citrix.com>
>>>> Cc: Wei Liu <wei.liu2@citrix.com>
>>>> ---
>>>>  tools/libxl/libxl_create.c  |  1 +
>>>>  tools/libxl/libxl_dom.c     | 14 ++++++++++++++
>>>>  tools/libxl/libxl_types.idl |  1 +
>>>>  tools/libxl/xl_cmdimpl.c    |  5 +++++
>>>>  4 files changed, 21 insertions(+)
>>>>
>>>> diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
>>>> index 1b99472..40b5f61 100644
>>>> --- a/tools/libxl/libxl_create.c
>>>> +++ b/tools/libxl/libxl_create.c
>>>> @@ -400,6 +400,7 @@ int libxl__domain_build_info_setdefault(libxl__gc *gc,
>>>>              b_info->cmdline = b_info->u.pv.cmdline;
>>>>              b_info->u.pv.cmdline = NULL;
>>>>          }
>>>> +        libxl_defbool_setdefault(&b_info->u.pv.altp2m, false);
>>>>          break;
>>>>      default:
>>>>          LOG(ERROR, "invalid domain type %s in create info",
>>>> diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
>>>> index ec29060..ab023a2 100644
>>>> --- a/tools/libxl/libxl_dom.c
>>>> +++ b/tools/libxl/libxl_dom.c
>>>> @@ -277,6 +277,16 @@ err:
>>>>  }
>>>>  #endif
>>>>  
>>>> +#if defined(__arm__) || defined(__aarch64__)
>>>> +static void pv_set_conf_params(xc_interface *handle, uint32_t domid,
>>>> +                               libxl_domain_build_info *const info)
>>>> +{
>>>> +    if ( libxl_defbool_val(info->u.pv.altp2m) )
>>> Coding style.
>>>
>> I am not sure which part does not fulfill Xen's coding style. Could you
>> be more specific please? Thank you.
>>
> There is a CODING_STYLE file in libxl as well. The coding style in
> tools/ is generally different from the coding style in the hypervisor,
> with the exception of libxc.
>
> Sorry for being too terse on this. Do ask questions! :-)

It's all good, no worries :) Thank you for the hint (it must be the
indentation inside the if-clause brackets). I was still using the Xen
hypervisor coding style when I wrote this part; now I know better, thank you :)

>>>> +        xc_hvm_param_set(handle, domid, HVM_PARAM_ALTP2M,
>>>> +                         libxl_defbool_val(info->u.pv.altp2m));
>>>> +}
>>>> +#endif
>>>> +
>>>>  static void hvm_set_conf_params(xc_interface *handle, uint32_t domid,
>>>>                                  libxl_domain_build_info *const info)
>>>>  {
>>>> @@ -433,6 +443,10 @@ int libxl__build_pre(libxl__gc *gc, uint32_t domid,
>>>>              return rc;
>>>>  #endif
>>>>      }
>>>> +#if defined(__arm__) || defined(__aarch64__)
>>>> +    else /* info->type == LIBXL_DOMAIN_TYPE_PV */
>>>> +        pv_set_conf_params(ctx->xch, domid, info);
>>>> +#endif
>>>>  
>>>>      rc = libxl__arch_domain_create(gc, d_config, domid);
>>>>  
>>>> diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
>>>> index ef614be..0a164f9 100644
>>>> --- a/tools/libxl/libxl_types.idl
>>>> +++ b/tools/libxl/libxl_types.idl
>>>> @@ -554,6 +554,7 @@ libxl_domain_build_info = Struct("domain_build_info",[
>>>>                                        ("features", string, {'const': True}),
>>>>                                        # Use host's E820 for PCI passthrough.
>>>>                                        ("e820_host", libxl_defbool),
>>>> +                                      ("altp2m", libxl_defbool),
>>> Why is this placed in PV instead of arch_arm?
>>>
>> The current implementation mirrors the x86's altp2m, where the altp2m
>> field represents a property of HVM guests in b_info->u.hvm.altp2m. Since
>> guests on ARM are marked as PV, I have placed the field altp2m into
>> b_info->u.pv.altp2m.
>>
>> However, if you believe that it would make more sense to place altp2m
>> into b_info->arch_arm.altp2m, I will try to adapt the affected fields.
>>
> OK. I don't think that one should be placed in u.pv because that union
> is for x86. However it is also not suitable to just promote that to
> a common field because x86 pv doesn't have altp2m support.
>
> I think arch_arm would be a better location.
>

Alright, I will move the field altp2m to arch_arm and adapt the
initialization routines.

>>> You would also need a LIBXL_HAVE macro in libxl.h for the new field.
>>>
>> There is already a LIBXL_HAVE_ALTP2M macro defined in libxl.h. Or did
>> you mean using an explicit one for ARM?
>>
> Yes, we need a new one for ARM.
>

Ok. I will provide a LIBXL_HAVE_ARM_ALTP2M macro.

Thank you.

Best regards,
~Sergej


* Re: [PATCH 16/18] arm/altp2m: Extended libxl to activate altp2m on ARM.
  2016-07-25  9:04         ` Sergej Proskurin
@ 2016-07-25  9:49           ` Julien Grall
  2016-07-25 10:08             ` Wei Liu
  0 siblings, 1 reply; 126+ messages in thread
From: Julien Grall @ 2016-07-25  9:49 UTC (permalink / raw)
  To: Sergej Proskurin, Wei Liu; +Cc: xen-devel

Hello,

On 25/07/16 10:04, Sergej Proskurin wrote:
> On 07/25/2016 10:32 AM, Wei Liu wrote:
>> On Sun, Jul 24, 2016 at 06:06:00PM +0200, Sergej Proskurin wrote:
>>>>> +        xc_hvm_param_set(handle, domid, HVM_PARAM_ALTP2M,
>>>>> +                         libxl_defbool_val(info->u.pv.altp2m));
>>>>> +}
>>>>> +#endif
>>>>> +
>>>>>  static void hvm_set_conf_params(xc_interface *handle, uint32_t domid,
>>>>>                                  libxl_domain_build_info *const info)
>>>>>  {
>>>>> @@ -433,6 +443,10 @@ int libxl__build_pre(libxl__gc *gc, uint32_t domid,
>>>>>              return rc;
>>>>>  #endif
>>>>>      }
>>>>> +#if defined(__arm__) || defined(__aarch64__)
>>>>> +    else /* info->type == LIBXL_DOMAIN_TYPE_PV */
>>>>> +        pv_set_conf_params(ctx->xch, domid, info);
>>>>> +#endif
>>>>>
>>>>>      rc = libxl__arch_domain_create(gc, d_config, domid);
>>>>>
>>>>> diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
>>>>> index ef614be..0a164f9 100644
>>>>> --- a/tools/libxl/libxl_types.idl
>>>>> +++ b/tools/libxl/libxl_types.idl
>>>>> @@ -554,6 +554,7 @@ libxl_domain_build_info = Struct("domain_build_info",[
>>>>>                                        ("features", string, {'const': True}),
>>>>>                                        # Use host's E820 for PCI passthrough.
>>>>>                                        ("e820_host", libxl_defbool),
>>>>> +                                      ("altp2m", libxl_defbool),
>>>> Why is this placed in PV instead of arch_arm?
>>>>
>>> The current implementation mirrors the x86's altp2m, where the altp2m
>>> field represents a property of HVM guests in b_info->u.hvm.altp2m. Since
>>> guests on ARM are marked as PV, I have placed the field altp2m into
>>> b_info->u.pv.altp2m.
>>>
>>> However, if you believe that it would make more sense to place altp2m
>>> into b_info->arch_arm.altp2m, I will try to adapt the affected fields.
>>>
>> OK. I don't think that one should be placed in u.pv because that union
>> is for x86. However it is also not suitable to just promote that to
>> a common field because x86 pv doesn't have altp2m support.
>>
>> I think arch_arm would be a better location.
>>
>
> Alright, I will move the field altp2m to arch_arm and adopt the
> initialization routines.

Wouldn't it make more sense to introduce a generic field 'altp2m' (i.e.
outside of hvm and arch_arm)?

Otherwise, toolstack such as libvirt will need to have specific code to 
handle ARM and x86.

Regards,

-- 
Julien Grall


* Re: [PATCH 16/18] arm/altp2m: Extended libxl to activate altp2m on ARM.
  2016-07-25  9:49           ` Julien Grall
@ 2016-07-25 10:08             ` Wei Liu
  2016-07-25 11:26               ` Sergej Proskurin
  0 siblings, 1 reply; 126+ messages in thread
From: Wei Liu @ 2016-07-25 10:08 UTC (permalink / raw)
  To: Julien Grall; +Cc: Sergej Proskurin, Wei Liu, xen-devel

On Mon, Jul 25, 2016 at 10:49:41AM +0100, Julien Grall wrote:
> Hello,
> 
> On 25/07/16 10:04, Sergej Proskurin wrote:
> >On 07/25/2016 10:32 AM, Wei Liu wrote:
> >>On Sun, Jul 24, 2016 at 06:06:00PM +0200, Sergej Proskurin wrote:
> >>>>>+        xc_hvm_param_set(handle, domid, HVM_PARAM_ALTP2M,
> >>>>>+                         libxl_defbool_val(info->u.pv.altp2m));
> >>>>>+}
> >>>>>+#endif
> >>>>>+
> >>>>> static void hvm_set_conf_params(xc_interface *handle, uint32_t domid,
> >>>>>                                 libxl_domain_build_info *const info)
> >>>>> {
> >>>>>@@ -433,6 +443,10 @@ int libxl__build_pre(libxl__gc *gc, uint32_t domid,
> >>>>>             return rc;
> >>>>> #endif
> >>>>>     }
> >>>>>+#if defined(__arm__) || defined(__aarch64__)
> >>>>>+    else /* info->type == LIBXL_DOMAIN_TYPE_PV */
> >>>>>+        pv_set_conf_params(ctx->xch, domid, info);
> >>>>>+#endif
> >>>>>
> >>>>>     rc = libxl__arch_domain_create(gc, d_config, domid);
> >>>>>
> >>>>>diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
> >>>>>index ef614be..0a164f9 100644
> >>>>>--- a/tools/libxl/libxl_types.idl
> >>>>>+++ b/tools/libxl/libxl_types.idl
> >>>>>@@ -554,6 +554,7 @@ libxl_domain_build_info = Struct("domain_build_info",[
> >>>>>                                       ("features", string, {'const': True}),
> >>>>>                                       # Use host's E820 for PCI passthrough.
> >>>>>                                       ("e820_host", libxl_defbool),
> >>>>>+                                      ("altp2m", libxl_defbool),
> >>>>Why is this placed in PV instead of arch_arm?
> >>>>
> >>>The current implementation mirrors the x86's altp2m, where the altp2m
> >>>field represents a property of HVM guests in b_info->u.hvm.altp2m. Since
> >>>guests on ARM are marked as PV, I have placed the field altp2m into
> >>>b_info->u.pv.altp2m.
> >>>
> >>>However, if you believe that it would make more sense to place altp2m
> >>>into b_info->arch_arm.altp2m, I will try to adapt the affected fields.
> >>>
> >>OK. I don't think that one should be placed in u.pv because that union
> >>is for x86. However it is also not suitable to just promote that to
> >>a common field because x86 pv doesn't have altp2m support.
> >>
> >>I think arch_arm would be a better location.
> >>
> >
> >Alright, I will move the field altp2m to arch_arm and adopt the
> >initialization routines.
> 
> Would not it make more sense to introduce a generic field 'altp2m' (i.e
> outside of hvm and arch_arm)?
> 
> Otherwise, toolstack such as libvirt will need to have specific code to
> handle ARM and x86.
> 

Hmm... this is a compelling reason.

After mulling over this issue a bit more, I think I'm fine with
promoting altp2m to a common field.

Sergej, this is a patch to get you started. Please make sure setting the
old field still works. If I'm not clear enough, please ask.


Wei.

---8<---
diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
index ef614be..81d3ae5 100644
--- a/tools/libxl/libxl_types.idl
+++ b/tools/libxl/libxl_types.idl
@@ -494,6 +494,7 @@ libxl_domain_build_info = Struct("domain_build_info",[
     # Note that the partial device tree should avoid to use the phandle
     # 65000 which is reserved by the toolstack.
     ("device_tree",      string),
+    ("altp2m",           libxl_defbool),
     ("u", KeyedUnion(None, libxl_domain_type, "type",
                 [("hvm", Struct(None, [("firmware",         string),
                                        ("bios",             libxl_bios_type),
@@ -512,7 +513,7 @@ libxl_domain_build_info = Struct("domain_build_info",[
                                        ("mmio_hole_memkb",  MemKB),
                                        ("timer_mode",       libxl_timer_mode),
                                        ("nested_hvm",       libxl_defbool),
-                                       ("altp2m",           libxl_defbool),
+                                       ("altp2m",           libxl_defbool), # deprecated, please use common field
                                        ("smbios_firmware",  string),
                                        ("acpi_firmware",    string),
                                        ("hdtype",           libxl_hdtype),


* Re: [PATCH 16/18] arm/altp2m: Extended libxl to activate altp2m on ARM.
  2016-07-25 10:08             ` Wei Liu
@ 2016-07-25 11:26               ` Sergej Proskurin
  2016-07-25 11:37                 ` Wei Liu
  0 siblings, 1 reply; 126+ messages in thread
From: Sergej Proskurin @ 2016-07-25 11:26 UTC (permalink / raw)
  To: Wei Liu, Julien Grall; +Cc: xen-devel


On 07/25/2016 12:08 PM, Wei Liu wrote:
> On Mon, Jul 25, 2016 at 10:49:41AM +0100, Julien Grall wrote:
>> Hello,
>>
>> On 25/07/16 10:04, Sergej Proskurin wrote:
>>> On 07/25/2016 10:32 AM, Wei Liu wrote:
>>>> On Sun, Jul 24, 2016 at 06:06:00PM +0200, Sergej Proskurin wrote:
>>>>>>> +        xc_hvm_param_set(handle, domid, HVM_PARAM_ALTP2M,
>>>>>>> +                         libxl_defbool_val(info->u.pv.altp2m));
>>>>>>> +}
>>>>>>> +#endif
>>>>>>> +
>>>>>>> static void hvm_set_conf_params(xc_interface *handle, uint32_t domid,
>>>>>>>                                 libxl_domain_build_info *const info)
>>>>>>> {
>>>>>>> @@ -433,6 +443,10 @@ int libxl__build_pre(libxl__gc *gc, uint32_t domid,
>>>>>>>             return rc;
>>>>>>> #endif
>>>>>>>     }
>>>>>>> +#if defined(__arm__) || defined(__aarch64__)
>>>>>>> +    else /* info->type == LIBXL_DOMAIN_TYPE_PV */
>>>>>>> +        pv_set_conf_params(ctx->xch, domid, info);
>>>>>>> +#endif
>>>>>>>
>>>>>>>     rc = libxl__arch_domain_create(gc, d_config, domid);
>>>>>>>
>>>>>>> diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
>>>>>>> index ef614be..0a164f9 100644
>>>>>>> --- a/tools/libxl/libxl_types.idl
>>>>>>> +++ b/tools/libxl/libxl_types.idl
>>>>>>> @@ -554,6 +554,7 @@ libxl_domain_build_info = Struct("domain_build_info",[
>>>>>>>                                       ("features", string, {'const': True}),
>>>>>>>                                       # Use host's E820 for PCI passthrough.
>>>>>>>                                       ("e820_host", libxl_defbool),
>>>>>>> +                                      ("altp2m", libxl_defbool),
>>>>>> Why is this placed in PV instead of arch_arm?
>>>>>>
>>>>> The current implementation mirrors the x86's altp2m, where the altp2m
>>>>> field represents a property of HVM guests in b_info->u.hvm.altp2m. Since
>>>>> guests on ARM are marked as PV, I have placed the field altp2m into
>>>>> b_info->u.pv.altp2m.
>>>>>
>>>>> However, if you believe that it would make more sense to place altp2m
>>>>> into b_info->arch_arm.altp2m, I will try to adapt the affected fields.
>>>>>
>>>> OK. I don't think that one should be placed in u.pv because that union
>>>> is for x86. However it is also not suitable to just promote that to
>>>> a common field because x86 pv doesn't have altp2m support.
>>>>
>>>> I think arch_arm would be a better location.
>>>>
>>> Alright, I will move the field altp2m to arch_arm and adopt the
>>> initialization routines.
>> Would not it make more sense to introduce a generic field 'altp2m' (i.e
>> outside of hvm and arch_arm)?
>>
>> Otherwise, toolstack such as libvirt will need to have specific code to
>> handle ARM and x86.
>>
> Hmm... this is a compelling reason.
>
> After mulling over this issue a bit more, I think I'm fine with
> promoting altp2m to a common field.
>
> Sergej, this is a patch to get you started. Please make sure setting the
> old field still works. If I'm not clear enough, please ask.
>

Alright, thank you. One question up front: Do we still need two
different LIBXL_HAVE macros then?

Also, concerning the domain configuration parameters: do you think we
should use two different parameters (the legacy "altp2mhvm" and the new
"altp2m"), or shall we merge both into "altp2m", given that we will
address only one common altp2m field in the struct?

Best regards
~Sergej



* Re: [PATCH 16/18] arm/altp2m: Extended libxl to activate altp2m on ARM.
  2016-07-25 11:26               ` Sergej Proskurin
@ 2016-07-25 11:37                 ` Wei Liu
  0 siblings, 0 replies; 126+ messages in thread
From: Wei Liu @ 2016-07-25 11:37 UTC (permalink / raw)
  To: Sergej Proskurin; +Cc: Julien Grall, Wei Liu, xen-devel

On Mon, Jul 25, 2016 at 01:26:29PM +0200, Sergej Proskurin wrote:
> 
> On 07/25/2016 12:08 PM, Wei Liu wrote:
> > On Mon, Jul 25, 2016 at 10:49:41AM +0100, Julien Grall wrote:
> >> Hello,
> >>
> >> On 25/07/16 10:04, Sergej Proskurin wrote:
> >>> On 07/25/2016 10:32 AM, Wei Liu wrote:
> >>>> On Sun, Jul 24, 2016 at 06:06:00PM +0200, Sergej Proskurin wrote:
> >>>>>>> +        xc_hvm_param_set(handle, domid, HVM_PARAM_ALTP2M,
> >>>>>>> +                         libxl_defbool_val(info->u.pv.altp2m));
> >>>>>>> +}
> >>>>>>> +#endif
> >>>>>>> +
> >>>>>>> static void hvm_set_conf_params(xc_interface *handle, uint32_t domid,
> >>>>>>>                                 libxl_domain_build_info *const info)
> >>>>>>> {
> >>>>>>> @@ -433,6 +443,10 @@ int libxl__build_pre(libxl__gc *gc, uint32_t domid,
> >>>>>>>             return rc;
> >>>>>>> #endif
> >>>>>>>     }
> >>>>>>> +#if defined(__arm__) || defined(__aarch64__)
> >>>>>>> +    else /* info->type == LIBXL_DOMAIN_TYPE_PV */
> >>>>>>> +        pv_set_conf_params(ctx->xch, domid, info);
> >>>>>>> +#endif
> >>>>>>>
> >>>>>>>     rc = libxl__arch_domain_create(gc, d_config, domid);
> >>>>>>>
> >>>>>>> diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
> >>>>>>> index ef614be..0a164f9 100644
> >>>>>>> --- a/tools/libxl/libxl_types.idl
> >>>>>>> +++ b/tools/libxl/libxl_types.idl
> >>>>>>> @@ -554,6 +554,7 @@ libxl_domain_build_info = Struct("domain_build_info",[
> >>>>>>>                                       ("features", string, {'const': True}),
> >>>>>>>                                       # Use host's E820 for PCI passthrough.
> >>>>>>>                                       ("e820_host", libxl_defbool),
> >>>>>>> +                                      ("altp2m", libxl_defbool),
> >>>>>> Why is this placed in PV instead of arch_arm?
> >>>>>>
> >>>>> The current implementation mirrors the x86's altp2m, where the altp2m
> >>>>> field represents a property of HVM guests in b_info->u.hvm.altp2m. Since
> >>>>> guests on ARM are marked as PV, I have placed the field altp2m into
> >>>>> b_info->u.pv.altp2m.
> >>>>>
> >>>>> However, if you believe that it would make more sense to place altp2m
> >>>>> into b_info->arch_arm.altp2m, I will try to adapt the affected fields.
> >>>>>
> >>>> OK. I don't think that one should be placed in u.pv because that union
> >>>> is for x86. However it is also not suitable to just promote that to
> >>>> a common field because x86 pv doesn't have altp2m support.
> >>>>
> >>>> I think arch_arm would be a better location.
> >>>>
> >>> Alright, I will move the field altp2m to arch_arm and adopt the
> >>> initialization routines.
> >> Wouldn't it make more sense to introduce a generic field 'altp2m' (i.e.
> >> outside of hvm and arch_arm)?
> >>
> >> Otherwise, toolstack such as libvirt will need to have specific code to
> >> handle ARM and x86.
> >>
> > Hmm... this is a compelling reason.
> >
> > After mulling over this issue a bit more, I think I'm fine with
> > promoting altp2m to a common field.
> >
> > Sergej, this is a patch to get you started. Please make sure setting the
> > old field still works. If I'm not clear enough, please ask.
> >
> 
> Alright, thank you. One question up front: Do we still need two
> different LIBXL_HAVE macros then?
> 

Yes, we still need one. Something like

  LIBXL_HAVE_COMMON_ALTP2M

?

I'm bad at naming things though.

> Also, concerning the dom configuration parameters: Do you think we
> should use two different dom configuration parameters (the legacy
> "altp2mhvm" and the new "altp2m") or shall we merge both into "altp2m"
> as we will address only one common altp2m field in the struct?
> 

I think using altp2m is fine. But altp2mhvm needs to continue to work,
and it is trivial to keep the old one.

In any case, document the new parameter, deprecate the old one, and
document the relationship between the two.
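
[Editorial note: a sketch of what the resulting xl domain configuration
might look like under the naming proposed above; the exact spelling and
semantics were still under discussion in this thread.]

```
# New, architecture-independent parameter proposed in this thread:
altp2m = 1

# Deprecated HVM-only spelling, kept working for compatibility;
# if both are set, the new parameter would be expected to take precedence:
# altp2mhvm = 1
```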

Wei.

> Best regards
> ~Sergej
> 



end of thread, other threads:[~2016-07-25 11:37 UTC | newest]

Thread overview: 126+ messages
-- links below jump to the message on this page --
2016-07-04 11:45 [PATCH 00/18] arm/altp2m: Introducing altp2m to ARM Sergej Proskurin
2016-07-04 11:45 ` [PATCH 01/18] arm/altp2m: Add cmd-line support for altp2m on ARM Sergej Proskurin
2016-07-04 12:15   ` Andrew Cooper
2016-07-04 13:02     ` Sergej Proskurin
2016-07-04 13:25   ` Julien Grall
2016-07-04 13:43     ` Sergej Proskurin
2016-07-04 17:42   ` Julien Grall
2016-07-04 17:56     ` Tamas K Lengyel
2016-07-04 21:08       ` Sergej Proskurin
2016-07-04 11:45 ` [PATCH 02/18] arm/altp2m: Add first altp2m HVMOP stubs Sergej Proskurin
2016-07-04 13:36   ` Julien Grall
2016-07-04 13:51     ` Sergej Proskurin
2016-07-05 10:19   ` Julien Grall
2016-07-06  9:14     ` Sergej Proskurin
2016-07-06 13:43       ` Julien Grall
2016-07-06 15:23         ` Tamas K Lengyel
2016-07-06 15:54           ` Julien Grall
2016-07-06 16:05             ` Tamas K Lengyel
2016-07-06 16:29               ` Julien Grall
2016-07-06 16:35                 ` Tamas K Lengyel
2016-07-06 18:35                   ` Julien Grall
2016-07-07  9:14                     ` Sergej Proskurin
2016-07-04 11:45 ` [PATCH 03/18] arm/altp2m: Add HVMOP_altp2m_get_domain_state Sergej Proskurin
2016-07-04 11:45 ` [PATCH 04/18] arm/altp2m: Add altp2m init/teardown routines Sergej Proskurin
2016-07-04 15:17   ` Julien Grall
2016-07-04 16:40     ` Sergej Proskurin
2016-07-04 16:43       ` Andrew Cooper
2016-07-04 16:56         ` Sergej Proskurin
2016-07-04 17:44           ` Julien Grall
2016-07-04 21:19             ` Sergej Proskurin
2016-07-04 21:35               ` Julien Grall
2016-07-04 21:46               ` Sergej Proskurin
2016-07-04 18:18         ` Julien Grall
2016-07-04 21:37           ` Sergej Proskurin
2016-07-04 18:30       ` Julien Grall
2016-07-04 21:56         ` Sergej Proskurin
2016-07-04 16:15   ` Julien Grall
2016-07-04 16:51     ` Sergej Proskurin
2016-07-04 18:34       ` Julien Grall
2016-07-05  7:45         ` Sergej Proskurin
2016-07-04 11:45 ` [PATCH 05/18] arm/altp2m: Add HVMOP_altp2m_set_domain_state Sergej Proskurin
2016-07-04 15:39   ` Julien Grall
2016-07-05  8:45     ` Sergej Proskurin
2016-07-05 10:11       ` Julien Grall
2016-07-05 12:05         ` Sergej Proskurin
2016-07-04 11:45 ` [PATCH 06/18] arm/altp2m: Add a(p2m) table flushing routines Sergej Proskurin
2016-07-04 12:12   ` Sergej Proskurin
2016-07-04 15:42     ` Julien Grall
2016-07-05  8:52       ` Sergej Proskurin
2016-07-04 15:55   ` Julien Grall
2016-07-05  9:51     ` Sergej Proskurin
2016-07-04 16:20   ` Julien Grall
2016-07-05  9:57     ` Sergej Proskurin
2016-07-04 11:45 ` [PATCH 07/18] arm/altp2m: Add HVMOP_altp2m_create_p2m Sergej Proskurin
2016-07-04 11:45 ` [PATCH 08/18] arm/altp2m: Add HVMOP_altp2m_destroy_p2m Sergej Proskurin
2016-07-04 16:32   ` Julien Grall
2016-07-05 11:37     ` Sergej Proskurin
2016-07-05 11:48       ` Julien Grall
2016-07-05 12:18         ` Sergej Proskurin
2016-07-04 11:45 ` [PATCH 09/18] arm/altp2m: Add HVMOP_altp2m_switch_p2m Sergej Proskurin
2016-07-04 11:45 ` [PATCH 10/18] arm/altp2m: Renamed and extended p2m_alloc_table Sergej Proskurin
2016-07-04 18:43   ` Julien Grall
2016-07-05 13:56     ` Sergej Proskurin
2016-07-04 11:45 ` [PATCH 11/18] arm/altp2m: Make flush_tlb_domain ready for altp2m Sergej Proskurin
2016-07-04 12:30   ` Sergej Proskurin
2016-07-04 20:32   ` Julien Grall
2016-07-05 14:48     ` Sergej Proskurin
2016-07-05 15:37       ` Julien Grall
2016-07-05 20:21         ` Sergej Proskurin
2016-07-06 14:28           ` Julien Grall
2016-07-06 14:39             ` Sergej Proskurin
2016-07-07 17:24           ` Julien Grall
2016-07-04 11:45 ` [PATCH 12/18] arm/altp2m: Cosmetic fixes - function prototypes Sergej Proskurin
2016-07-15 13:45   ` Julien Grall
2016-07-16 15:18     ` Sergej Proskurin
2016-07-04 11:45 ` [PATCH 13/18] arm/altp2m: Make get_page_from_gva ready for altp2m Sergej Proskurin
2016-07-04 20:34   ` Julien Grall
2016-07-05 20:31     ` Sergej Proskurin
2016-07-04 11:45 ` [PATCH 14/18] arm/altp2m: Add HVMOP_altp2m_set_mem_access Sergej Proskurin
2016-07-05 12:49   ` Julien Grall
2016-07-05 21:55     ` Sergej Proskurin
2016-07-06 14:32       ` Julien Grall
2016-07-06 16:12         ` Tamas K Lengyel
2016-07-06 16:59           ` Julien Grall
2016-07-06 17:03           ` Sergej Proskurin
2016-07-06 17:08   ` Julien Grall
2016-07-07  9:16     ` Sergej Proskurin
2016-07-04 11:45 ` [PATCH 15/18] arm/altp2m: Add altp2m paging mechanism Sergej Proskurin
2016-07-04 20:53   ` Julien Grall
2016-07-06  8:33     ` Sergej Proskurin
2016-07-06 14:26       ` Julien Grall
2016-07-04 11:45 ` [PATCH 16/18] arm/altp2m: Extended libxl to activate altp2m on ARM Sergej Proskurin
2016-07-07 16:27   ` Wei Liu
2016-07-24 16:06     ` Sergej Proskurin
2016-07-25  8:32       ` Wei Liu
2016-07-25  9:04         ` Sergej Proskurin
2016-07-25  9:49           ` Julien Grall
2016-07-25 10:08             ` Wei Liu
2016-07-25 11:26               ` Sergej Proskurin
2016-07-25 11:37                 ` Wei Liu
2016-07-04 11:45 ` [PATCH 17/18] arm/altp2m: Adjust debug information to altp2m Sergej Proskurin
2016-07-04 20:58   ` Julien Grall
2016-07-06  8:41     ` Sergej Proskurin
2016-07-04 11:45 ` [PATCH 18/18] arm/altp2m: Extend xen-access for altp2m on ARM Sergej Proskurin
2016-07-04 13:38   ` Razvan Cojocaru
2016-07-06  8:44     ` Sergej Proskurin
2016-07-04 11:45 ` [PATCH 01/18] arm/altp2m: Add cmd-line support " Sergej Proskurin
2016-07-04 11:45 ` [PATCH 02/18] arm/altp2m: Add first altp2m HVMOP stubs Sergej Proskurin
2016-07-04 11:45 ` [PATCH 03/18] arm/altp2m: Add HVMOP_altp2m_get_domain_state Sergej Proskurin
2016-07-04 11:45 ` [PATCH 04/18] arm/altp2m: Add altp2m init/teardown routines Sergej Proskurin
2016-07-04 11:45 ` [PATCH 05/18] arm/altp2m: Add HVMOP_altp2m_set_domain_state Sergej Proskurin
2016-07-04 11:45 ` [PATCH 06/18] arm/altp2m: Add a(p2m) table flushing routines Sergej Proskurin
2016-07-04 11:45 ` [PATCH 07/18] arm/altp2m: Add HVMOP_altp2m_create_p2m Sergej Proskurin
2016-07-04 11:45 ` [PATCH 08/18] arm/altp2m: Add HVMOP_altp2m_destroy_p2m Sergej Proskurin
2016-07-04 11:45 ` [PATCH 09/18] arm/altp2m: Add HVMOP_altp2m_switch_p2m Sergej Proskurin
2016-07-04 11:45 ` [PATCH 10/18] arm/altp2m: Renamed and extended p2m_alloc_table Sergej Proskurin
2016-07-04 11:45 ` [PATCH 11/18] arm/altp2m: Make flush_tlb_domain ready for altp2m Sergej Proskurin
2016-07-04 11:45 ` [PATCH 12/18] arm/altp2m: Cosmetic fixes - function prototypes Sergej Proskurin
2016-07-04 11:46 ` [PATCH 13/18] arm/altp2m: Make get_page_from_gva ready for altp2m Sergej Proskurin
2016-07-04 11:46 ` [PATCH 14/18] arm/altp2m: Add HVMOP_altp2m_set_mem_access Sergej Proskurin
2016-07-04 11:46 ` [PATCH 15/18] arm/altp2m: Add altp2m paging mechanism Sergej Proskurin
2016-07-04 11:46 ` [PATCH 16/18] arm/altp2m: Extended libxl to activate altp2m on ARM Sergej Proskurin
2016-07-04 11:46 ` [PATCH 17/18] arm/altp2m: Adjust debug information to altp2m Sergej Proskurin
2016-07-04 11:46 ` [PATCH 18/18] arm/altp2m: Extend xen-access for altp2m on ARM Sergej Proskurin
2016-07-04 12:52 ` [PATCH 00/18] arm/altp2m: Introducing altp2m to ARM Andrew Cooper
2016-07-04 13:05   ` Sergej Proskurin
