* [PATCH v2 00/25] arm/altp2m: Introducing altp2m to ARM.
@ 2016-08-01 17:10 Sergej Proskurin
  2016-08-01 17:10 ` [PATCH v2 01/25] arm/altp2m: Add first altp2m HVMOP stubs Sergej Proskurin
                   ` (25 more replies)
  0 siblings, 26 replies; 159+ messages in thread
From: Sergej Proskurin @ 2016-08-01 17:10 UTC (permalink / raw)
  To: xen-devel; +Cc: Sergej Proskurin


Hello all,

The following patch series can be found on Github[0] and is part of my
contribution to this year's Google Summer of Code (GSoC)[1]. My project is
managed by the organization The Honeynet Project. As part of GSoC, I am being
supervised by the Xen developer Tamas K. Lengyel <tamas@tklengyel.com>, George
D. Webster, and Steven Maresca.

In this patch series, we provide an implementation of the altp2m subsystem for
ARM. Our implementation is based on the altp2m subsystem for x86, providing
additional --alternate-- views on the guest's physical memory by means of the
ARM 2nd stage translation mechanism. The patches introduce new HVMOPs and
extend the p2m subsystem. Also, we extend libxl to support altp2m on ARM and
modify xen-access to test the suggested functionality.

To be more precise, altp2m allows the creation of and switching to
additional p2m views (i.e., gfn-to-mfn mappings). These views can be
manipulated and activated at will through the provided HVMOPs. In this
way, the guest instance in question can seamlessly continue execution
without noticing that anything has changed. The prime field of
application of altp2m is Virtual Machine Introspection (VMI), where
guest systems are analyzed from outside the VM.
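As an illustration of the intended workflow (a sketch, not code from this
series: it relies on the libxc altp2m wrappers already present for x86, and
the assumption that view 0 addresses the host p2m carries over to ARM), an
external VMI tool could drive the HVMOPs as follows:

```c
#include <stdint.h>
#include <xenctrl.h>

/* Sketch of an external monitor driving the altp2m HVMOPs through the
 * libxc wrappers, similar to what the extended xen-access test does.
 * Error handling is elided for brevity. */
static int restrict_gfn(domid_t domid, xen_pfn_t gfn)
{
    xc_interface *xch = xc_interface_open(NULL, NULL, 0);
    uint16_t view_id = 0;

    xc_altp2m_set_domain_state(xch, domid, 1);        /* enable altp2m */
    xc_altp2m_create_view(xch, domid, XENMEM_access_rwx, &view_id);

    /* Tighten access to a single gfn in the alternate view only;
     * the host's p2m remains unchanged. */
    xc_altp2m_set_mem_access(xch, domid, view_id, gfn, XENMEM_access_r);
    xc_altp2m_switch_to_view(xch, domid, view_id);    /* activate the view */

    /* ... handle the resulting mem_access events, then revert ... */

    xc_altp2m_switch_to_view(xch, domid, 0);          /* back to host view */
    xc_altp2m_destroy_view(xch, domid, view_id);
    xc_altp2m_set_domain_state(xch, domid, 0);
    xc_interface_close(xch);
    return 0;
}
```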

Altp2m can be activated by means of the guest control parameter "altp2m"
on both the x86 and ARM architectures. By design, the altp2m
functionality can by default also be used from within the guest. For
use-cases requiring purely external access to altp2m, a custom XSM
policy is necessary on both x86 and ARM.
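In an xl guest configuration, that control parameter would be set roughly as
follows (a hypothetical fragment; the exact option name and accepted values
are defined by the libxl patch at the end of this series):

```
# enable the altp2m interface for this guest (hypothetical fragment)
altp2m = 1
```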

The current code-base is based on Julien Grall's branch abort-handlers-v2[2].

[0] https://github.com/sergej-proskurin/xen (branch arm-altp2m-v2)
[1] https://summerofcode.withgoogle.com/projects/#4970052843470848
[2] git://xenbits.xen.org/people/julieng/xen-unstable.git (branch abort-handlers-v2)


Sergej Proskurin (25):
  arm/altp2m: Add first altp2m HVMOP stubs.
  arm/altp2m: Add HVMOP_altp2m_get_domain_state.
  arm/altp2m: Add struct vttbr.
  arm/altp2m: Move hostp2m init/teardown to individual functions.
  arm/altp2m: Rename and extend p2m_alloc_table.
  arm/altp2m: Cosmetic fixes - function prototypes.
  arm/altp2m: Add altp2m init/teardown routines.
  arm/altp2m: Add HVMOP_altp2m_set_domain_state.
  arm/altp2m: Add altp2m table flushing routine.
  arm/altp2m: Add HVMOP_altp2m_create_p2m.
  arm/altp2m: Add HVMOP_altp2m_destroy_p2m.
  arm/altp2m: Add HVMOP_altp2m_switch_p2m.
  arm/altp2m: Make p2m_restore_state ready for altp2m.
  arm/altp2m: Make get_page_from_gva ready for altp2m.
  arm/altp2m: Extend __p2m_lookup.
  arm/altp2m: Make p2m_mem_access_check ready for altp2m.
  arm/altp2m: Cosmetic fixes - function prototypes.
  arm/altp2m: Add HVMOP_altp2m_set_mem_access.
  arm/altp2m: Add altp2m_propagate_change.
  arm/altp2m: Add altp2m paging mechanism.
  arm/altp2m: Add HVMOP_altp2m_change_gfn.
  arm/altp2m: Adjust debug information to altp2m.
  arm/altp2m: Extend libxl to activate altp2m on ARM.
  arm/altp2m: Extend xen-access for altp2m on ARM.
  arm/altp2m: Add test of xc_altp2m_change_gfn.

 tools/libxl/libxl.h                 |   3 +-
 tools/libxl/libxl_create.c          |   8 +-
 tools/libxl/libxl_dom.c             |   4 +-
 tools/libxl/libxl_types.idl         |   4 +-
 tools/libxl/xl_cmdimpl.c            |  26 +-
 tools/tests/xen-access/xen-access.c | 162 ++++++++-
 xen/arch/arm/Makefile               |   1 +
 xen/arch/arm/altp2m.c               | 675 ++++++++++++++++++++++++++++++++++++
 xen/arch/arm/hvm.c                  | 129 +++++++
 xen/arch/arm/p2m.c                  | 430 ++++++++++++++++++-----
 xen/arch/arm/traps.c                | 126 +++++--
 xen/include/asm-arm/altp2m.h        |  79 ++++-
 xen/include/asm-arm/domain.h        |  16 +
 xen/include/asm-arm/flushtlb.h      |   4 +
 xen/include/asm-arm/p2m.h           |  68 +++-
 xen/include/asm-arm/processor.h     |  16 +
 16 files changed, 1594 insertions(+), 157 deletions(-)
 create mode 100644 xen/arch/arm/altp2m.c

-- 
2.9.0


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


* [PATCH v2 01/25] arm/altp2m: Add first altp2m HVMOP stubs.
  2016-08-01 17:10 [PATCH v2 00/25] arm/altp2m: Introducing altp2m to ARM Sergej Proskurin
@ 2016-08-01 17:10 ` Sergej Proskurin
  2016-08-03 16:54   ` Julien Grall
  2016-08-01 17:10 ` [PATCH v2 02/25] arm/altp2m: Add HVMOP_altp2m_get_domain_state Sergej Proskurin
                   ` (24 subsequent siblings)
  25 siblings, 1 reply; 159+ messages in thread
From: Sergej Proskurin @ 2016-08-01 17:10 UTC (permalink / raw)
  To: xen-devel; +Cc: Sergej Proskurin, Julien Grall, Stefano Stabellini

This commit adopts the altp2m-related code from x86 for ARM. Functions
that are not yet supported notify the caller or print a BUG message
stating their absence.

Also, the struct arch_domain is extended with the altp2m_active
attribute, representing the current altp2m activity configuration of the
domain.

Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
---
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Julien Grall <julien.grall@arm.com>
---
v2: Removed altp2m command-line option: Guard through HVM_PARAM_ALTP2M.
    Removed not used altp2m helper stubs in altp2m.h.
---
 xen/arch/arm/hvm.c           | 79 ++++++++++++++++++++++++++++++++++++++++++++
 xen/include/asm-arm/altp2m.h |  4 +--
 xen/include/asm-arm/domain.h |  3 ++
 3 files changed, 84 insertions(+), 2 deletions(-)

diff --git a/xen/arch/arm/hvm.c b/xen/arch/arm/hvm.c
index d999bde..eb524ae 100644
--- a/xen/arch/arm/hvm.c
+++ b/xen/arch/arm/hvm.c
@@ -32,6 +32,81 @@
 
 #include <asm/hypercall.h>
 
+#include <asm/altp2m.h>
+
+static int do_altp2m_op(XEN_GUEST_HANDLE_PARAM(void) arg)
+{
+    struct xen_hvm_altp2m_op a;
+    struct domain *d = NULL;
+    int rc = 0;
+
+    if ( copy_from_guest(&a, arg, 1) )
+        return -EFAULT;
+
+    if ( a.pad1 || a.pad2 ||
+         (a.version != HVMOP_ALTP2M_INTERFACE_VERSION) ||
+         (a.cmd < HVMOP_altp2m_get_domain_state) ||
+         (a.cmd > HVMOP_altp2m_change_gfn) )
+        return -EINVAL;
+
+    d = (a.cmd != HVMOP_altp2m_vcpu_enable_notify) ?
+        rcu_lock_domain_by_any_id(a.domain) : rcu_lock_current_domain();
+
+    if ( d == NULL )
+        return -ESRCH;
+
+    if ( (a.cmd != HVMOP_altp2m_get_domain_state) &&
+         (a.cmd != HVMOP_altp2m_set_domain_state) &&
+         !d->arch.altp2m_active )
+    {
+        rc = -EOPNOTSUPP;
+        goto out;
+    }
+
+    if ( (rc = xsm_hvm_altp2mhvm_op(XSM_TARGET, d)) )
+        goto out;
+
+    switch ( a.cmd )
+    {
+    case HVMOP_altp2m_get_domain_state:
+        rc = -EOPNOTSUPP;
+        break;
+
+    case HVMOP_altp2m_set_domain_state:
+        rc = -EOPNOTSUPP;
+        break;
+
+    case HVMOP_altp2m_vcpu_enable_notify:
+        rc = -EOPNOTSUPP;
+        break;
+
+    case HVMOP_altp2m_create_p2m:
+        rc = -EOPNOTSUPP;
+        break;
+
+    case HVMOP_altp2m_destroy_p2m:
+        rc = -EOPNOTSUPP;
+        break;
+
+    case HVMOP_altp2m_switch_p2m:
+        rc = -EOPNOTSUPP;
+        break;
+
+    case HVMOP_altp2m_set_mem_access:
+        rc = -EOPNOTSUPP;
+        break;
+
+    case HVMOP_altp2m_change_gfn:
+        rc = -EOPNOTSUPP;
+        break;
+    }
+
+out:
+    rcu_unlock_domain(d);
+
+    return rc;
+}
+
 long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     long rc = 0;
@@ -80,6 +155,10 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
             rc = -EINVAL;
         break;
 
+    case HVMOP_altp2m:
+        rc = do_altp2m_op(arg);
+        break;
+
     default:
     {
         gdprintk(XENLOG_DEBUG, "HVMOP op=%lu: not implemented\n", op);
diff --git a/xen/include/asm-arm/altp2m.h b/xen/include/asm-arm/altp2m.h
index a87747a..0711796 100644
--- a/xen/include/asm-arm/altp2m.h
+++ b/xen/include/asm-arm/altp2m.h
@@ -2,6 +2,7 @@
  * Alternate p2m
  *
  * Copyright (c) 2014, Intel Corporation.
+ * Copyright (c) 2016, Sergej Proskurin <proskurin@sec.in.tum.de>.
  *
  * This program is free software; you can redistribute it and/or modify it
  * under the terms and conditions of the GNU General Public License,
@@ -24,8 +25,7 @@
 /* Alternate p2m on/off per domain */
 static inline bool_t altp2m_active(const struct domain *d)
 {
-    /* Not implemented on ARM. */
-    return 0;
+    return d->arch.altp2m_active;
 }
 
 /* Alternate p2m VCPU */
diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
index 9452fcd..cc4bda0 100644
--- a/xen/include/asm-arm/domain.h
+++ b/xen/include/asm-arm/domain.h
@@ -126,6 +126,9 @@ struct arch_domain
     paddr_t efi_acpi_gpa;
     paddr_t efi_acpi_len;
 #endif
+
+    /* altp2m: allow multiple copies of host p2m */
+    bool_t altp2m_active;
 }  __cacheline_aligned;
 
 struct arch_vcpu
-- 
2.9.0



* [PATCH v2 02/25] arm/altp2m: Add HVMOP_altp2m_get_domain_state.
  2016-08-01 17:10 [PATCH v2 00/25] arm/altp2m: Introducing altp2m to ARM Sergej Proskurin
  2016-08-01 17:10 ` [PATCH v2 01/25] arm/altp2m: Add first altp2m HVMOP stubs Sergej Proskurin
@ 2016-08-01 17:10 ` Sergej Proskurin
  2016-08-01 17:21   ` Andrew Cooper
  2016-08-01 17:10 ` [PATCH v2 03/25] arm/altp2m: Add struct vttbr Sergej Proskurin
                   ` (23 subsequent siblings)
  25 siblings, 1 reply; 159+ messages in thread
From: Sergej Proskurin @ 2016-08-01 17:10 UTC (permalink / raw)
  To: xen-devel; +Cc: Sergej Proskurin, Julien Grall, Stefano Stabellini

This commit adopts the x86 HVMOP_altp2m_get_domain_state implementation.

Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
---
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Julien Grall <julien.grall@arm.com>
---
 xen/arch/arm/hvm.c           | 9 ++++++++-
 xen/include/asm-arm/altp2m.h | 2 ++
 2 files changed, 10 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/hvm.c b/xen/arch/arm/hvm.c
index eb524ae..01a3243 100644
--- a/xen/arch/arm/hvm.c
+++ b/xen/arch/arm/hvm.c
@@ -69,7 +69,14 @@ static int do_altp2m_op(XEN_GUEST_HANDLE_PARAM(void) arg)
     switch ( a.cmd )
     {
     case HVMOP_altp2m_get_domain_state:
-        rc = -EOPNOTSUPP;
+        if ( !altp2m_enabled(d) )
+        {
+            rc = -EINVAL;
+            break;
+        }
+
+        a.u.domain_state.state = altp2m_active(d);
+        rc = __copy_to_guest(arg, &a, 1) ? -EFAULT : 0;
         break;
 
     case HVMOP_altp2m_set_domain_state:
diff --git a/xen/include/asm-arm/altp2m.h b/xen/include/asm-arm/altp2m.h
index 0711796..d47b249 100644
--- a/xen/include/asm-arm/altp2m.h
+++ b/xen/include/asm-arm/altp2m.h
@@ -22,6 +22,8 @@
 
 #include <xen/sched.h>
 
+#define altp2m_enabled(d) ((d)->arch.hvm_domain.params[HVM_PARAM_ALTP2M])
+
 /* Alternate p2m on/off per domain */
 static inline bool_t altp2m_active(const struct domain *d)
 {
-- 
2.9.0



* [PATCH v2 03/25] arm/altp2m: Add struct vttbr.
  2016-08-01 17:10 [PATCH v2 00/25] arm/altp2m: Introducing altp2m to ARM Sergej Proskurin
  2016-08-01 17:10 ` [PATCH v2 01/25] arm/altp2m: Add first altp2m HVMOP stubs Sergej Proskurin
  2016-08-01 17:10 ` [PATCH v2 02/25] arm/altp2m: Add HVMOP_altp2m_get_domain_state Sergej Proskurin
@ 2016-08-01 17:10 ` Sergej Proskurin
  2016-08-03 17:04   ` Julien Grall
  2016-08-01 17:10 ` [PATCH v2 04/25] arm/altp2m: Move hostp2m init/teardown to individual functions Sergej Proskurin
                   ` (22 subsequent siblings)
  25 siblings, 1 reply; 159+ messages in thread
From: Sergej Proskurin @ 2016-08-01 17:10 UTC (permalink / raw)
  To: xen-devel; +Cc: Sergej Proskurin

The struct vttbr provides a simple way to precisely access the
individual fields of the VTTBR register value.
---
 xen/arch/arm/p2m.c              |  8 ++++----
 xen/arch/arm/traps.c            |  2 +-
 xen/include/asm-arm/p2m.h       |  2 +-
 xen/include/asm-arm/processor.h | 16 ++++++++++++++++
 4 files changed, 22 insertions(+), 6 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 40a0b80..cbc64a1 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -122,7 +122,7 @@ void p2m_restore_state(struct vcpu *n)
     WRITE_SYSREG(hcr & ~HCR_VM, HCR_EL2);
     isb();
 
-    WRITE_SYSREG64(p2m->vttbr, VTTBR_EL2);
+    WRITE_SYSREG64(p2m->vttbr.vttbr, VTTBR_EL2);
     isb();
 
     if ( is_32bit_domain(n->domain) )
@@ -147,10 +147,10 @@ static void p2m_flush_tlb(struct p2m_domain *p2m)
      * VMID. So switch to the VTTBR of a given P2M if different.
      */
     ovttbr = READ_SYSREG64(VTTBR_EL2);
-    if ( ovttbr != p2m->vttbr )
+    if ( ovttbr != p2m->vttbr.vttbr )
     {
         local_irq_save(flags);
-        WRITE_SYSREG64(p2m->vttbr, VTTBR_EL2);
+        WRITE_SYSREG64(p2m->vttbr.vttbr, VTTBR_EL2);
         isb();
     }
 
@@ -1293,7 +1293,7 @@ static int p2m_alloc_table(struct domain *d)
 
     p2m->root = page;
 
-    p2m->vttbr = page_to_maddr(p2m->root) | ((uint64_t)p2m->vmid & 0xff) << 48;
+    p2m->vttbr.vttbr = page_to_maddr(p2m->root) | ((uint64_t)p2m->vmid & 0xff) << 48;
 
     /*
      * Make sure that all TLBs corresponding to the new VMID are flushed
diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index 06f06e3..12be7c9 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -881,7 +881,7 @@ void vcpu_show_registers(const struct vcpu *v)
     ctxt.ifsr32_el2 = v->arch.ifsr;
 #endif
 
-    ctxt.vttbr_el2 = v->domain->arch.p2m.vttbr;
+    ctxt.vttbr_el2 = v->domain->arch.p2m.vttbr.vttbr;
 
     _show_registers(&v->arch.cpu_info->guest_cpu_user_regs, &ctxt, 1, v);
 }
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index 53c4d78..5c7cd1a 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -33,7 +33,7 @@ struct p2m_domain {
     uint8_t vmid;
 
     /* Current Translation Table Base Register for the p2m */
-    uint64_t vttbr;
+    struct vttbr vttbr;
 
     /*
      * Highest guest frame that's ever been mapped in the p2m
diff --git a/xen/include/asm-arm/processor.h b/xen/include/asm-arm/processor.h
index 15bf890..f8ca18c 100644
--- a/xen/include/asm-arm/processor.h
+++ b/xen/include/asm-arm/processor.h
@@ -529,6 +529,22 @@ union hsr {
 
 
 };
+
+/* VTTBR: Virtualization Translation Table Base Register */
+struct vttbr {
+    union {
+        struct {
+            u64 baddr :40, /* variable res0: from 0-(x-1) bit */
+                res1  :8,
+                vmid  :8,
+                res2  :8;
+        };
+        u64 vttbr;
+    };
+};
+
+#define INVALID_VTTBR (0UL)
+
 #endif
 
 /* HSR.EC == HSR_CP{15,14,10}_32 */
-- 
2.9.0



* [PATCH v2 04/25] arm/altp2m: Move hostp2m init/teardown to individual functions.
  2016-08-01 17:10 [PATCH v2 00/25] arm/altp2m: Introducing altp2m to ARM Sergej Proskurin
                   ` (2 preceding siblings ...)
  2016-08-01 17:10 ` [PATCH v2 03/25] arm/altp2m: Add struct vttbr Sergej Proskurin
@ 2016-08-01 17:10 ` Sergej Proskurin
  2016-08-03 17:40   ` Julien Grall
  2016-08-01 17:10 ` [PATCH v2 05/25] arm/altp2m: Rename and extend p2m_alloc_table Sergej Proskurin
                   ` (21 subsequent siblings)
  25 siblings, 1 reply; 159+ messages in thread
From: Sergej Proskurin @ 2016-08-01 17:10 UTC (permalink / raw)
  To: xen-devel; +Cc: Sergej Proskurin, Julien Grall, Stefano Stabellini

This commit pulls the generic init/teardown functionality out of
p2m_init and p2m_teardown into the functions p2m_init_one, p2m_free_one,
and p2m_flush_table. This allows our future implementation to reuse the
existing code for the initialization/teardown of altp2m views.

Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
---
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Julien Grall <julien.grall@arm.com>
---
v2: Added the function p2m_flush_table to the previous version.
---
 xen/arch/arm/p2m.c        | 74 +++++++++++++++++++++++++++++++++++++----------
 xen/include/asm-arm/p2m.h | 11 +++++++
 2 files changed, 70 insertions(+), 15 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index cbc64a1..17f3299 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1360,50 +1360,94 @@ static void p2m_free_vmid(struct domain *d)
     spin_unlock(&vmid_alloc_lock);
 }
 
-void p2m_teardown(struct domain *d)
+/* Reset this p2m table to be empty */
+void p2m_flush_table(struct p2m_domain *p2m)
 {
-    struct p2m_domain *p2m = &d->arch.p2m;
-    struct page_info *pg;
+    struct page_info *page, *pg;
+    unsigned int i;
+
+    page = p2m->root;
+
+    /* Clear all concatenated first level pages */
+    for ( i = 0; i < P2M_ROOT_PAGES; i++ )
+        clear_and_clean_page(page + i);
 
+    /* Free the rest of the trie pages back to the paging pool */
     while ( (pg = page_list_remove_head(&p2m->pages)) )
         free_domheap_page(pg);
+}
+
+static inline void p2m_free_one(struct p2m_domain *p2m)
+{
+    p2m_flush_table(p2m);
+
+    /* Free VMID and reset VTTBR */
+    p2m_free_vmid(p2m->domain);
+    p2m->vttbr.vttbr = INVALID_VTTBR;
 
     if ( p2m->root )
         free_domheap_pages(p2m->root, P2M_ROOT_ORDER);
 
     p2m->root = NULL;
 
-    p2m_free_vmid(d);
-
     radix_tree_destroy(&p2m->mem_access_settings, NULL);
 }
 
-int p2m_init(struct domain *d)
+int p2m_init_one(struct domain *d, struct p2m_domain *p2m)
 {
-    struct p2m_domain *p2m = &d->arch.p2m;
     int rc = 0;
 
     rwlock_init(&p2m->lock);
     INIT_PAGE_LIST_HEAD(&p2m->pages);
 
-    p2m->vmid = INVALID_VMID;
-
     rc = p2m_alloc_vmid(d);
     if ( rc != 0 )
         return rc;
 
-    p2m->max_mapped_gfn = _gfn(0);
-    p2m->lowest_mapped_gfn = _gfn(ULONG_MAX);
-
-    p2m->default_access = p2m_access_rwx;
+    p2m->domain = d;
+    p2m->access_required = false;
     p2m->mem_access_enabled = false;
+    p2m->default_access = p2m_access_rwx;
+    p2m->root = NULL;
+    p2m->max_mapped_gfn = _gfn(0);
+    p2m->lowest_mapped_gfn = INVALID_GFN;
+    p2m->vttbr.vttbr = INVALID_VTTBR;
     radix_tree_init(&p2m->mem_access_settings);
 
-    rc = p2m_alloc_table(d);
-
     return rc;
 }
 
+static void p2m_teardown_hostp2m(struct domain *d)
+{
+    struct p2m_domain *p2m = p2m_get_hostp2m(d);
+
+    p2m_free_one(p2m);
+}
+
+void p2m_teardown(struct domain *d)
+{
+    p2m_teardown_hostp2m(d);
+}
+
+static int p2m_init_hostp2m(struct domain *d)
+{
+    int rc;
+    struct p2m_domain *p2m = p2m_get_hostp2m(d);
+
+    p2m->p2m_class = p2m_host;
+
+    rc = p2m_init_one(d, p2m);
+    if ( rc )
+        return rc;
+
+    return p2m_alloc_table(d);
+}
+
+int p2m_init(struct domain *d)
+{
+    return p2m_init_hostp2m(d);
+}
+
 int relinquish_p2m_mapping(struct domain *d)
 {
     struct p2m_domain *p2m = &d->arch.p2m;
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index 5c7cd1a..1f9c370 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -18,6 +18,11 @@ struct domain;
 
 extern void memory_type_changed(struct domain *);
 
+typedef enum {
+    p2m_host,
+    p2m_alternate,
+} p2m_class_t;
+
 /* Per-p2m-table state */
 struct p2m_domain {
     /* Lock that protects updates to the p2m */
@@ -78,6 +83,12 @@ struct p2m_domain {
      * enough available bits to store this information.
      */
     struct radix_tree_root mem_access_settings;
+
+    /* Choose between: host/alternate */
+    p2m_class_t p2m_class;
+
+    /* Back pointer to domain */
+    struct domain *domain;
 };
 
 /*
-- 
2.9.0



* [PATCH v2 05/25] arm/altp2m: Rename and extend p2m_alloc_table.
  2016-08-01 17:10 [PATCH v2 00/25] arm/altp2m: Introducing altp2m to ARM Sergej Proskurin
                   ` (3 preceding siblings ...)
  2016-08-01 17:10 ` [PATCH v2 04/25] arm/altp2m: Move hostp2m init/teardown to individual functions Sergej Proskurin
@ 2016-08-01 17:10 ` Sergej Proskurin
  2016-08-03 17:57   ` Julien Grall
  2016-08-01 17:10 ` [PATCH v2 06/25] arm/altp2m: Cosmetic fixes - function prototypes Sergej Proskurin
                   ` (20 subsequent siblings)
  25 siblings, 1 reply; 159+ messages in thread
From: Sergej Proskurin @ 2016-08-01 17:10 UTC (permalink / raw)
  To: xen-devel; +Cc: Sergej Proskurin, Julien Grall, Stefano Stabellini

The function "p2m_alloc_table" originally allocated pages solely for the
host's p2m. The new implementation keeps the p2m-allocation-related
parts inside this function so that it can initialize p2m and altp2m
tables alike; this is possible because the host's p2m and the altp2m
tables are allocated in the same way. Since this function will be used
by the altp2m initialization routines, it is no longer static. In
addition, this commit provides the wrapper function "p2m_table_init"
that is used for the host's p2m allocation/initialization.

Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
---
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Julien Grall <julien.grall@arm.com>
---
v2: Removed altp2m table initialization from "p2m_table_init".
---
 xen/arch/arm/p2m.c | 29 +++++++++++++++++++++++------
 1 file changed, 23 insertions(+), 6 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 17f3299..63fe3d9 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1277,11 +1277,11 @@ void guest_physmap_remove_page(struct domain *d,
     p2m_remove_mapping(d, gfn, (1 << page_order), mfn);
 }
 
-static int p2m_alloc_table(struct domain *d)
+int p2m_alloc_table(struct p2m_domain *p2m)
 {
-    struct p2m_domain *p2m = &d->arch.p2m;
-    struct page_info *page;
     unsigned int i;
+    struct page_info *page;
+    struct vttbr *vttbr = &p2m->vttbr;
 
     page = alloc_domheap_pages(NULL, P2M_ROOT_ORDER, 0);
     if ( page == NULL )
@@ -1293,11 +1293,28 @@ static int p2m_alloc_table(struct domain *d)
 
     p2m->root = page;
 
-    p2m->vttbr.vttbr = page_to_maddr(p2m->root) | ((uint64_t)p2m->vmid & 0xff) << 48;
+    /* Initialize the VTTBR associated with the allocated p2m table. */
+    vttbr->vttbr = 0;
+    vttbr->vmid = p2m->vmid & 0xff;
+    vttbr->baddr = page_to_maddr(p2m->root);
+
+    return 0;
+}
+
+static int p2m_table_init(struct domain *d)
+{
+    int rc;
+    struct p2m_domain *p2m = p2m_get_hostp2m(d);
+
+    rc = p2m_alloc_table(p2m);
+    if ( rc != 0 )
+        return -ENOMEM;
+
+    d->arch.altp2m_active = false;
 
     /*
      * Make sure that all TLBs corresponding to the new VMID are flushed
-     * before using it
+     * before using it.
      */
     p2m_flush_tlb(p2m);
 
@@ -1440,7 +1457,7 @@ static int p2m_init_hostp2m(struct domain *d)
     if ( rc )
         return rc;
 
-    return p2m_alloc_table(d);
+    return p2m_table_init(d);
 }
 
 int p2m_init(struct domain *d)
-- 
2.9.0



* [PATCH v2 06/25] arm/altp2m: Cosmetic fixes - function prototypes.
  2016-08-01 17:10 [PATCH v2 00/25] arm/altp2m: Introducing altp2m to ARM Sergej Proskurin
                   ` (4 preceding siblings ...)
  2016-08-01 17:10 ` [PATCH v2 05/25] arm/altp2m: Rename and extend p2m_alloc_table Sergej Proskurin
@ 2016-08-01 17:10 ` Sergej Proskurin
  2016-08-03 18:02   ` Julien Grall
  2016-08-01 17:10 ` [PATCH v2 07/25] arm/altp2m: Add altp2m init/teardown routines Sergej Proskurin
                   ` (19 subsequent siblings)
  25 siblings, 1 reply; 159+ messages in thread
From: Sergej Proskurin @ 2016-08-01 17:10 UTC (permalink / raw)
  To: xen-devel; +Cc: Sergej Proskurin, Julien Grall, Stefano Stabellini

This commit changes the prototype of the following functions:
- p2m_alloc_vmid
- p2m_free_vmid

Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
---
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Julien Grall <julien.grall@arm.com>
---
 xen/arch/arm/p2m.c | 12 +++++-------
 1 file changed, 5 insertions(+), 7 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 63fe3d9..ff9c0d1 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1337,11 +1337,10 @@ void p2m_vmid_allocator_init(void)
     set_bit(INVALID_VMID, vmid_mask);
 }
 
-static int p2m_alloc_vmid(struct domain *d)
+static int p2m_alloc_vmid(struct p2m_domain *p2m)
 {
-    struct p2m_domain *p2m = &d->arch.p2m;
-
     int rc, nr;
+    struct domain *d = p2m->domain;
 
     spin_lock(&vmid_alloc_lock);
 
@@ -1367,9 +1366,8 @@ out:
     return rc;
 }
 
-static void p2m_free_vmid(struct domain *d)
+static void p2m_free_vmid(struct p2m_domain *p2m)
 {
-    struct p2m_domain *p2m = &d->arch.p2m;
     spin_lock(&vmid_alloc_lock);
     if ( p2m->vmid != INVALID_VMID )
         clear_bit(p2m->vmid, vmid_mask);
@@ -1399,7 +1397,7 @@ static inline void p2m_free_one(struct p2m_domain *p2m)
     p2m_flush_table(p2m);
 
     /* Free VMID and reset VTTBR */
-    p2m_free_vmid(p2m->domain);
+    p2m_free_vmid(p2m);
     p2m->vttbr.vttbr = INVALID_VTTBR;
 
     if ( p2m->root )
@@ -1417,7 +1415,7 @@ int p2m_init_one(struct domain *d, struct p2m_domain *p2m)
     rwlock_init(&p2m->lock);
     INIT_PAGE_LIST_HEAD(&p2m->pages);
 
-    rc = p2m_alloc_vmid(d);
+    rc = p2m_alloc_vmid(p2m);
     if ( rc != 0 )
         return rc;
 
-- 
2.9.0



* [PATCH v2 07/25] arm/altp2m: Add altp2m init/teardown routines.
  2016-08-01 17:10 [PATCH v2 00/25] arm/altp2m: Introducing altp2m to ARM Sergej Proskurin
                   ` (5 preceding siblings ...)
  2016-08-01 17:10 ` [PATCH v2 06/25] arm/altp2m: Cosmetic fixes - function prototypes Sergej Proskurin
@ 2016-08-01 17:10 ` Sergej Proskurin
  2016-08-03 18:12   ` Julien Grall
  2016-08-01 17:10 ` [PATCH v2 08/25] arm/altp2m: Add HVMOP_altp2m_set_domain_state Sergej Proskurin
                   ` (18 subsequent siblings)
  25 siblings, 1 reply; 159+ messages in thread
From: Sergej Proskurin @ 2016-08-01 17:10 UTC (permalink / raw)
  To: xen-devel; +Cc: Sergej Proskurin, Julien Grall, Stefano Stabellini

The p2m initialization now invokes routines responsible for allocating
and initializing the altp2m structures; the same applies to the teardown
path. The functionality has been adopted from the x86 altp2m
implementation.

Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
---
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Julien Grall <julien.grall@arm.com>
---
v2: Shared code between host/altp2m init/teardown functions.
    Added conditional init/teardown of altp2m.
    Altp2m related functions are moved to altp2m.c
---
 xen/arch/arm/Makefile        |  1 +
 xen/arch/arm/altp2m.c        | 71 ++++++++++++++++++++++++++++++++++++++++++++
 xen/arch/arm/p2m.c           | 28 +++++++++++++----
 xen/include/asm-arm/altp2m.h |  6 ++++
 xen/include/asm-arm/domain.h |  4 +++
 xen/include/asm-arm/p2m.h    |  5 ++++
 6 files changed, 110 insertions(+), 5 deletions(-)
 create mode 100644 xen/arch/arm/altp2m.c

diff --git a/xen/arch/arm/Makefile b/xen/arch/arm/Makefile
index 23aaf52..4a7f660 100644
--- a/xen/arch/arm/Makefile
+++ b/xen/arch/arm/Makefile
@@ -5,6 +5,7 @@ subdir-$(CONFIG_ARM_64) += efi
 subdir-$(CONFIG_ACPI) += acpi
 
 obj-$(CONFIG_ALTERNATIVE) += alternative.o
+obj-y += altp2m.o
 obj-y += bootfdt.o
 obj-y += cpu.o
 obj-y += cpuerrata.o
diff --git a/xen/arch/arm/altp2m.c b/xen/arch/arm/altp2m.c
new file mode 100644
index 0000000..abbd39a
--- /dev/null
+++ b/xen/arch/arm/altp2m.c
@@ -0,0 +1,71 @@
+/*
+ * arch/arm/altp2m.c
+ *
+ * Alternate p2m
+ * Copyright (c) 2016 Sergej Proskurin <proskurin@sec.in.tum.de>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License, version 2,
+ * as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT ANY
+ * WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
+ * FOR A PARTICULAR PURPOSE.  See the GNU General Public License for more
+ * details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <asm/p2m.h>
+#include <asm/altp2m.h>
+
+int altp2m_init(struct domain *d)
+{
+    unsigned int i;
+
+    spin_lock_init(&d->arch.altp2m_lock);
+
+    for ( i = 0; i < MAX_ALTP2M; i++ )
+    {
+        d->arch.altp2m_p2m[i] = NULL;
+        d->arch.altp2m_vttbr[i] = INVALID_VTTBR;
+    }
+
+    return 0;
+}
+
+void altp2m_teardown(struct domain *d)
+{
+    unsigned int i;
+    struct p2m_domain *p2m;
+
+    altp2m_lock(d);
+
+    for ( i = 0; i < MAX_ALTP2M; i++ )
+    {
+        if ( !d->arch.altp2m_p2m[i] )
+            continue;
+
+        p2m = d->arch.altp2m_p2m[i];
+        p2m_free_one(p2m);
+        xfree(p2m);
+
+        d->arch.altp2m_vttbr[i] = INVALID_VTTBR;
+        d->arch.altp2m_p2m[i] = NULL;
+    }
+
+    d->arch.altp2m_active = false;
+
+    altp2m_unlock(d);
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index ff9c0d1..29ec5e5 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -14,6 +14,8 @@
 #include <asm/hardirq.h>
 #include <asm/page.h>
 
+#include <asm/altp2m.h>
+
 #ifdef CONFIG_ARM_64
 static unsigned int __read_mostly p2m_root_order;
 static unsigned int __read_mostly p2m_root_level;
@@ -1392,7 +1394,7 @@ void p2m_flush_table(struct p2m_domain *p2m)
         free_domheap_page(pg);
 }
 
-static inline void p2m_free_one(struct p2m_domain *p2m)
+void p2m_free_one(struct p2m_domain *p2m)
 {
     p2m_flush_table(p2m);
 
@@ -1415,9 +1417,13 @@ int p2m_init_one(struct domain *d, struct p2m_domain *p2m)
     rwlock_init(&p2m->lock);
     INIT_PAGE_LIST_HEAD(&p2m->pages);
 
-    rc = p2m_alloc_vmid(p2m);
-    if ( rc != 0 )
-        return rc;
+    /* Reused altp2m views keep their VMID. */
+    if ( p2m->vmid == INVALID_VMID )
+    {
+        rc = p2m_alloc_vmid(p2m);
+        if ( rc != 0 )
+            return rc;
+    }
 
     p2m->domain = d;
     p2m->access_required = false;
@@ -1441,6 +1447,9 @@ static void p2m_teardown_hostp2m(struct domain *d)
 
 void p2m_teardown(struct domain *d)
 {
+    if ( altp2m_enabled(d) )
+        altp2m_teardown(d);
+
     p2m_teardown_hostp2m(d);
 }
 
@@ -1460,7 +1469,16 @@ static int p2m_init_hostp2m(struct domain *d)
 
 int p2m_init(struct domain *d)
 {
-    return p2m_init_hostp2m(d);
+    int rc;
+
+    rc = p2m_init_hostp2m(d);
+    if ( rc )
+        return rc;
+
+    if ( altp2m_enabled(d) )
+        rc = altp2m_init(d);
+
+    return rc;
 }
 
 int relinquish_p2m_mapping(struct domain *d)
diff --git a/xen/include/asm-arm/altp2m.h b/xen/include/asm-arm/altp2m.h
index d47b249..79ea66b 100644
--- a/xen/include/asm-arm/altp2m.h
+++ b/xen/include/asm-arm/altp2m.h
@@ -22,6 +22,9 @@
 
 #include <xen/sched.h>
 
+#define altp2m_lock(d)    spin_lock(&(d)->arch.altp2m_lock)
+#define altp2m_unlock(d)  spin_unlock(&(d)->arch.altp2m_lock)
+
 #define altp2m_enabled(d) ((d)->arch.hvm_domain.params[HVM_PARAM_ALTP2M])
 
 /* Alternate p2m on/off per domain */
@@ -38,4 +41,7 @@ static inline uint16_t altp2m_vcpu_idx(const struct vcpu *v)
     return 0;
 }
 
+int altp2m_init(struct domain *d);
+void altp2m_teardown(struct domain *d);
+
 #endif /* __ASM_ARM_ALTP2M_H */
diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
index cc4bda0..3c25ea5 100644
--- a/xen/include/asm-arm/domain.h
+++ b/xen/include/asm-arm/domain.h
@@ -129,6 +129,10 @@ struct arch_domain
 
     /* altp2m: allow multiple copies of host p2m */
     bool_t altp2m_active;
+    struct p2m_domain *altp2m_p2m[MAX_ALTP2M];
+    uint64_t altp2m_vttbr[MAX_ALTP2M];
+    /* Covers access to members of the struct altp2m. */
+    spinlock_t altp2m_lock;
 }  __cacheline_aligned;
 
 struct arch_vcpu
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index 1f9c370..24a1f61 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -9,6 +9,8 @@
 #include <xen/p2m-common.h>
 #include <public/memory.h>
 
+#define MAX_ALTP2M 10           /* ARM might contain an arbitrary number of
+                                   altp2m views. */
 #define paddr_bits PADDR_BITS
 
 /* Holds the bit size of IPAs in p2m tables.  */
@@ -212,6 +214,9 @@ void guest_physmap_remove_page(struct domain *d,
 
 mfn_t gfn_to_mfn(struct domain *d, gfn_t gfn);
 
+/* Release resources held by the p2m structure. */
+void p2m_free_one(struct p2m_domain *p2m);
+
 /*
  * Populate-on-demand
  */
-- 
2.9.0


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply related	[flat|nested] 159+ messages in thread

* [PATCH v2 08/25] arm/altp2m: Add HVMOP_altp2m_set_domain_state.
  2016-08-01 17:10 [PATCH v2 00/25] arm/altp2m: Introducing altp2m to ARM Sergej Proskurin
                   ` (6 preceding siblings ...)
  2016-08-01 17:10 ` [PATCH v2 07/25] arm/altp2m: Add altp2m init/teardown routines Sergej Proskurin
@ 2016-08-01 17:10 ` Sergej Proskurin
  2016-08-03 18:41   ` Julien Grall
  2016-08-01 17:10 ` [PATCH v2 09/25] arm/altp2m: Add altp2m table flushing routine Sergej Proskurin
                   ` (17 subsequent siblings)
  25 siblings, 1 reply; 159+ messages in thread
From: Sergej Proskurin @ 2016-08-01 17:10 UTC (permalink / raw)
  To: xen-devel; +Cc: Sergej Proskurin, Julien Grall, Stefano Stabellini

HVMOP_altp2m_set_domain_state allows activating altp2m on a specific
domain. This commit adapts the x86 HVMOP_altp2m_set_domain_state
implementation. Note that the function altp2m_flush is currently
implemented as a stub.
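
The state-toggle logic in the hvm.c hunk below can be sketched with a small, self-contained model: act only on an actual transition, bring up altp2m[0] on enable, and flush all views on disable. All names here are made up for illustration; this is not the Xen API.

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of the HVMOP_altp2m_set_domain_state transition logic. */
struct toy_domain {
    bool altp2m_active;
    bool view0_valid;   /* stands in for altp2m_init_by_id(d, 0) */
    int flushes;        /* counts calls to the (stubbed) altp2m_flush */
};

static void toy_set_domain_state(struct toy_domain *d, bool state)
{
    bool ostate = d->altp2m_active;

    d->altp2m_active = state;

    /* If the alternate p2m state has not changed, nothing to do. */
    if ( state == ostate )
        return;

    if ( !ostate )
        d->view0_valid = true;  /* enabling: initialise altp2m[0] */
    else
    {
        /* Disabling: now safe to flush all views, including altp2m[0]. */
        d->view0_valid = false;
        d->flushes++;
    }
}
```

The model captures why the real code checks `d->arch.altp2m_active != ostate` first: repeated enable (or disable) requests must not re-initialise or re-flush the views.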

Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
---
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Julien Grall <julien.grall@arm.com>
---
v2: Dynamically allocate memory for altp2m views only when needed.
    Move altp2m related helpers to altp2m.c.
    p2m_flush_tlb is made publicly accessible.
---
 xen/arch/arm/altp2m.c          | 116 +++++++++++++++++++++++++++++++++++++++++
 xen/arch/arm/hvm.c             |  34 +++++++++++-
 xen/arch/arm/p2m.c             |   2 +-
 xen/include/asm-arm/altp2m.h   |  15 ++++++
 xen/include/asm-arm/domain.h   |   9 ++++
 xen/include/asm-arm/flushtlb.h |   4 ++
 xen/include/asm-arm/p2m.h      |  11 ++++
 7 files changed, 189 insertions(+), 2 deletions(-)

diff --git a/xen/arch/arm/altp2m.c b/xen/arch/arm/altp2m.c
index abbd39a..767f233 100644
--- a/xen/arch/arm/altp2m.c
+++ b/xen/arch/arm/altp2m.c
@@ -19,6 +19,122 @@
 
 #include <asm/p2m.h>
 #include <asm/altp2m.h>
+#include <asm/flushtlb.h>
+
+struct p2m_domain *altp2m_get_altp2m(struct vcpu *v)
+{
+    unsigned int index = vcpu_altp2m(v).p2midx;
+
+    if ( index == INVALID_ALTP2M )
+        return NULL;
+
+    BUG_ON(index >= MAX_ALTP2M);
+
+    return v->domain->arch.altp2m_p2m[index];
+}
+
+static void altp2m_vcpu_reset(struct vcpu *v)
+{
+    struct altp2mvcpu *av = &vcpu_altp2m(v);
+
+    av->p2midx = INVALID_ALTP2M;
+}
+
+void altp2m_vcpu_initialise(struct vcpu *v)
+{
+    if ( v != current )
+        vcpu_pause(v);
+
+    altp2m_vcpu_reset(v);
+    vcpu_altp2m(v).p2midx = 0;
+    atomic_inc(&altp2m_get_altp2m(v)->active_vcpus);
+
+    if ( v != current )
+        vcpu_unpause(v);
+}
+
+void altp2m_vcpu_destroy(struct vcpu *v)
+{
+    struct p2m_domain *p2m;
+
+    if ( v != current )
+        vcpu_pause(v);
+
+    if ( (p2m = altp2m_get_altp2m(v)) )
+        atomic_dec(&p2m->active_vcpus);
+
+    altp2m_vcpu_reset(v);
+
+    if ( v != current )
+        vcpu_unpause(v);
+}
+
+static int altp2m_init_helper(struct domain *d, unsigned int idx)
+{
+    int rc;
+    struct p2m_domain *p2m = d->arch.altp2m_p2m[idx];
+
+    if ( p2m == NULL )
+    {
+        /* Allocate a new, zeroed altp2m view. */
+        p2m = xzalloc(struct p2m_domain);
+        if ( p2m == NULL )
+        {
+            rc = -ENOMEM;
+            goto err;
+        }
+    }
+
+    /* Initialize the new altp2m view. */
+    rc = p2m_init_one(d, p2m);
+    if ( rc )
+        goto err;
+
+    /* Allocate a root table for the altp2m view. */
+    rc = p2m_alloc_table(p2m);
+    if ( rc )
+        goto err;
+
+    p2m->p2m_class = p2m_alternate;
+    p2m->access_required = 1;
+    _atomic_set(&p2m->active_vcpus, 0);
+
+    d->arch.altp2m_p2m[idx] = p2m;
+    d->arch.altp2m_vttbr[idx] = p2m->vttbr.vttbr;
+
+    /*
+     * Make sure that all TLBs corresponding to the current VMID are flushed
+     * before using it.
+     */
+    p2m_flush_tlb(p2m);
+
+    return rc;
+
+err:
+    if ( p2m )
+        xfree(p2m);
+
+    d->arch.altp2m_p2m[idx] = NULL;
+
+    return rc;
+}
+
+int altp2m_init_by_id(struct domain *d, unsigned int idx)
+{
+    int rc = -EINVAL;
+
+    if ( idx >= MAX_ALTP2M )
+        return rc;
+
+    altp2m_lock(d);
+
+    if ( d->arch.altp2m_vttbr[idx] == INVALID_VTTBR )
+        rc = altp2m_init_helper(d, idx);
+
+    altp2m_unlock(d);
+
+    return rc;
+}
 
 int altp2m_init(struct domain *d)
 {
diff --git a/xen/arch/arm/hvm.c b/xen/arch/arm/hvm.c
index 01a3243..78370c6 100644
--- a/xen/arch/arm/hvm.c
+++ b/xen/arch/arm/hvm.c
@@ -80,8 +80,40 @@ static int do_altp2m_op(XEN_GUEST_HANDLE_PARAM(void) arg)
         break;
 
     case HVMOP_altp2m_set_domain_state:
-        rc = -EOPNOTSUPP;
+    {
+        struct vcpu *v;
+        bool_t ostate;
+
+        if ( !altp2m_enabled(d) )
+        {
+            rc = -EINVAL;
+            break;
+        }
+
+        ostate = d->arch.altp2m_active;
+        d->arch.altp2m_active = !!a.u.domain_state.state;
+
+        /* If the alternate p2m state has changed, handle appropriately */
+        if ( (d->arch.altp2m_active != ostate) &&
+             (ostate || !(rc = altp2m_init_by_id(d, 0))) )
+        {
+            for_each_vcpu( d, v )
+            {
+                if ( !ostate )
+                    altp2m_vcpu_initialise(v);
+                else
+                    altp2m_vcpu_destroy(v);
+            }
+
+            /*
+             * The altp2m_active state has been deactivated. It is now safe to
+             * flush all altp2m views -- including altp2m[0].
+             */
+            if ( ostate )
+                altp2m_flush(d);
+        }
         break;
+    }
 
     case HVMOP_altp2m_vcpu_enable_notify:
         rc = -EOPNOTSUPP;
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 29ec5e5..8afea11 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -139,7 +139,7 @@ void p2m_restore_state(struct vcpu *n)
     isb();
 }
 
-static void p2m_flush_tlb(struct p2m_domain *p2m)
+void p2m_flush_tlb(struct p2m_domain *p2m)
 {
     unsigned long flags = 0;
     uint64_t ovttbr;
diff --git a/xen/include/asm-arm/altp2m.h b/xen/include/asm-arm/altp2m.h
index 79ea66b..a33c740 100644
--- a/xen/include/asm-arm/altp2m.h
+++ b/xen/include/asm-arm/altp2m.h
@@ -22,6 +22,8 @@
 
 #include <xen/sched.h>
 
+#define INVALID_ALTP2M    0xffff
+
 #define altp2m_lock(d)    spin_lock(&(d)->arch.altp2m_lock)
 #define altp2m_unlock(d)  spin_unlock(&(d)->arch.altp2m_lock)
 
@@ -44,4 +46,17 @@ static inline uint16_t altp2m_vcpu_idx(const struct vcpu *v)
 int altp2m_init(struct domain *d);
 void altp2m_teardown(struct domain *d);
 
+void altp2m_vcpu_initialise(struct vcpu *v);
+void altp2m_vcpu_destroy(struct vcpu *v);
+
+/* Make a specific alternate p2m valid. */
+int altp2m_init_by_id(struct domain *d,
+                      unsigned int idx);
+
+/* Flush all the alternate p2m's for a domain */
+static inline void altp2m_flush(struct domain *d)
+{
+    /* Not yet implemented. */
+}
+
 #endif /* __ASM_ARM_ALTP2M_H */
diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
index 3c25ea5..63a9650 100644
--- a/xen/include/asm-arm/domain.h
+++ b/xen/include/asm-arm/domain.h
@@ -135,6 +135,12 @@ struct arch_domain
     spinlock_t altp2m_lock;
 }  __cacheline_aligned;
 
+struct altp2mvcpu {
+    uint16_t p2midx; /* alternate p2m index */
+};
+
+#define vcpu_altp2m(v) ((v)->arch.avcpu)
+
 struct arch_vcpu
 {
     struct {
@@ -264,6 +270,9 @@ struct arch_vcpu
     struct vtimer phys_timer;
     struct vtimer virt_timer;
     bool_t vtimer_initialized;
+
+    /* Alternate p2m context */
+    struct altp2mvcpu avcpu;
 }  __cacheline_aligned;
 
 void vcpu_show_execution_state(struct vcpu *);
diff --git a/xen/include/asm-arm/flushtlb.h b/xen/include/asm-arm/flushtlb.h
index 329fbb4..57c3c34 100644
--- a/xen/include/asm-arm/flushtlb.h
+++ b/xen/include/asm-arm/flushtlb.h
@@ -2,6 +2,7 @@
 #define __ASM_ARM_FLUSHTLB_H__
 
 #include <xen/cpumask.h>
+#include <asm/p2m.h>
 
 /*
  * Filter the given set of CPUs, removing those that definitely flushed their
@@ -25,6 +26,9 @@ do {                                                                    \
 /* Flush specified CPUs' TLBs */
 void flush_tlb_mask(const cpumask_t *mask);
 
+/* Flush CPU's TLBs for the specified domain */
+void p2m_flush_tlb(struct p2m_domain *p2m);
+
 #endif /* __ASM_ARM_FLUSHTLB_H__ */
 /*
  * Local variables:
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index 24a1f61..f13f285 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -9,6 +9,8 @@
 #include <xen/p2m-common.h>
 #include <public/memory.h>
 
+#include <asm/atomic.h>
+
 #define MAX_ALTP2M 10           /* ARM might contain an arbitrary number of
                                    altp2m views. */
 #define paddr_bits PADDR_BITS
@@ -86,6 +88,9 @@ struct p2m_domain {
      */
     struct radix_tree_root mem_access_settings;
 
+    /* Alternate p2m: count of vcpu's currently using this p2m. */
+    atomic_t active_vcpus;
+
     /* Choose between: host/alternate */
     p2m_class_t p2m_class;
 
@@ -214,6 +219,12 @@ void guest_physmap_remove_page(struct domain *d,
 
 mfn_t gfn_to_mfn(struct domain *d, gfn_t gfn);
 
+/* Allocates page table for a p2m. */
+int p2m_alloc_table(struct p2m_domain *p2m);
+
+/* Initialize the p2m structure. */
+int p2m_init_one(struct domain *d, struct p2m_domain *p2m);
+
 /* Release resources held by the p2m structure. */
 void p2m_free_one(struct p2m_domain *p2m);
 
-- 
2.9.0



* [PATCH v2 09/25] arm/altp2m: Add altp2m table flushing routine.
  2016-08-01 17:10 [PATCH v2 00/25] arm/altp2m: Introducing altp2m to ARM Sergej Proskurin
                   ` (7 preceding siblings ...)
  2016-08-01 17:10 ` [PATCH v2 08/25] arm/altp2m: Add HVMOP_altp2m_set_domain_state Sergej Proskurin
@ 2016-08-01 17:10 ` Sergej Proskurin
  2016-08-03 18:44   ` Julien Grall
  2016-08-01 17:10 ` [PATCH v2 10/25] arm/altp2m: Add HVMOP_altp2m_create_p2m Sergej Proskurin
                   ` (16 subsequent siblings)
  25 siblings, 1 reply; 159+ messages in thread
From: Sergej Proskurin @ 2016-08-01 17:10 UTC (permalink / raw)
  To: xen-devel; +Cc: Sergej Proskurin, Julien Grall, Stefano Stabellini

The current implementation differentiates between flushing and
destroying altp2m views. This commit adds the function altp2m_flush,
which flushes all altp2m views without destroying them entirely. In
this way, altp2m views can be reused at a later point in time.
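
The flush-versus-destroy distinction can be illustrated with a minimal model (hypothetical names; only the keep-the-VMID property of the real code is modelled): a flush drops a view's tables and invalidates its VTTBR slot, while the VMID survives so the view can be re-initialised cheaply.

```c
#include <assert.h>

#define TOY_MAX_ALTP2M    10
#define TOY_INVALID_VTTBR (~0ULL)
#define TOY_INVALID_VMID  0xffff

struct toy_view {
    unsigned long long vttbr;
    unsigned short vmid;   /* kept across flushes so the view is reusable */
    unsigned int pages;    /* stands in for the view's page tables */
};

/* Flush every valid view: drop its tables and VTTBR, keep its VMID. */
static void toy_flush_all(struct toy_view v[TOY_MAX_ALTP2M])
{
    for ( unsigned int i = 0; i < TOY_MAX_ALTP2M; i++ )
    {
        if ( v[i].vttbr == TOY_INVALID_VTTBR )
            continue;

        v[i].pages = 0;
        v[i].vttbr = TOY_INVALID_VTTBR;
        /* v[i].vmid is deliberately left untouched. */
    }
}
```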

Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
---
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Julien Grall <julien.grall@arm.com>
---
v2: Pages in p2m->pages are not cleared in p2m_flush_table anymore.
    VMID is freed in p2m_free_one.
    Cosmetic fixes.
---
 xen/arch/arm/altp2m.c        | 38 ++++++++++++++++++++++++++++++++++++++
 xen/include/asm-arm/altp2m.h |  5 +----
 xen/include/asm-arm/p2m.h    |  3 +++
 3 files changed, 42 insertions(+), 4 deletions(-)

diff --git a/xen/arch/arm/altp2m.c b/xen/arch/arm/altp2m.c
index 767f233..e73424c 100644
--- a/xen/arch/arm/altp2m.c
+++ b/xen/arch/arm/altp2m.c
@@ -151,6 +151,44 @@ int altp2m_init(struct domain *d)
     return 0;
 }
 
+void altp2m_flush(struct domain *d)
+{
+    unsigned int i;
+    struct p2m_domain *p2m;
+
+    /*
+     * If altp2m is active, we are not allowed to flush altp2m[0]. This special
+     * view is treated as the hostp2m as long as altp2m is active.
+     */
+    ASSERT(!altp2m_active(d));
+
+    altp2m_lock(d);
+
+    for ( i = 0; i < MAX_ALTP2M; i++ )
+    {
+        if ( d->arch.altp2m_vttbr[i] == INVALID_VTTBR )
+            continue;
+
+        p2m = d->arch.altp2m_p2m[i];
+
+        read_lock(&p2m->lock);
+
+        p2m_flush_table(p2m);
+
+        /*
+         * Reset VTTBR.
+         *
+         * Note that VMID is not freed so that it can be reused later.
+         */
+        p2m->vttbr.vttbr = INVALID_VTTBR;
+        d->arch.altp2m_vttbr[i] = INVALID_VTTBR;
+
+        read_unlock(&p2m->lock);
+    }
+
+    altp2m_unlock(d);
+}
+
 void altp2m_teardown(struct domain *d)
 {
     unsigned int i;
diff --git a/xen/include/asm-arm/altp2m.h b/xen/include/asm-arm/altp2m.h
index a33c740..3ba82a8 100644
--- a/xen/include/asm-arm/altp2m.h
+++ b/xen/include/asm-arm/altp2m.h
@@ -54,9 +54,6 @@ int altp2m_init_by_id(struct domain *d,
                       unsigned int idx);
 
 /* Flush all the alternate p2m's for a domain */
-static inline void altp2m_flush(struct domain *d)
-{
-    /* Not yet implemented. */
-}
+void altp2m_flush(struct domain *d);
 
 #endif /* __ASM_ARM_ALTP2M_H */
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index f13f285..32326cb 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -222,6 +222,9 @@ mfn_t gfn_to_mfn(struct domain *d, gfn_t gfn);
 /* Allocates page table for a p2m. */
 int p2m_alloc_table(struct p2m_domain *p2m);
 
+/* Flushes the page table held by the p2m. */
+void p2m_flush_table(struct p2m_domain *p2m);
+
 /* Initialize the p2m structure. */
 int p2m_init_one(struct domain *d, struct p2m_domain *p2m);
 
-- 
2.9.0



* [PATCH v2 10/25] arm/altp2m: Add HVMOP_altp2m_create_p2m.
  2016-08-01 17:10 [PATCH v2 00/25] arm/altp2m: Introducing altp2m to ARM Sergej Proskurin
                   ` (8 preceding siblings ...)
  2016-08-01 17:10 ` [PATCH v2 09/25] arm/altp2m: Add altp2m table flushing routine Sergej Proskurin
@ 2016-08-01 17:10 ` Sergej Proskurin
  2016-08-03 18:48   ` Julien Grall
  2016-08-01 17:10 ` [PATCH v2 11/25] arm/altp2m: Add HVMOP_altp2m_destroy_p2m Sergej Proskurin
                   ` (15 subsequent siblings)
  25 siblings, 1 reply; 159+ messages in thread
From: Sergej Proskurin @ 2016-08-01 17:10 UTC (permalink / raw)
  To: xen-devel; +Cc: Sergej Proskurin, Julien Grall, Stefano Stabellini

Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
---
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Julien Grall <julien.grall@arm.com>
---
v2: Cosmetic fixes.
---
 xen/arch/arm/altp2m.c        | 23 +++++++++++++++++++++++
 xen/arch/arm/hvm.c           |  3 ++-
 xen/include/asm-arm/altp2m.h |  4 ++++
 3 files changed, 29 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/altp2m.c b/xen/arch/arm/altp2m.c
index e73424c..c22d2e4 100644
--- a/xen/arch/arm/altp2m.c
+++ b/xen/arch/arm/altp2m.c
@@ -136,6 +136,29 @@ int altp2m_init_by_id(struct domain *d, unsigned int idx)
     return rc;
 }
 
+int altp2m_init_next(struct domain *d, uint16_t *idx)
+{
+    int rc = -EINVAL;
+    unsigned int i;
+
+    altp2m_lock(d);
+
+    for ( i = 0; i < MAX_ALTP2M; i++ )
+    {
+        if ( d->arch.altp2m_vttbr[i] != INVALID_VTTBR )
+            continue;
+
+        rc = altp2m_init_helper(d, i);
+        *idx = (uint16_t) i;
+
+        break;
+    }
+
+    altp2m_unlock(d);
+
+    return rc;
+}
+
 int altp2m_init(struct domain *d)
 {
     unsigned int i;
diff --git a/xen/arch/arm/hvm.c b/xen/arch/arm/hvm.c
index 78370c6..063a06b 100644
--- a/xen/arch/arm/hvm.c
+++ b/xen/arch/arm/hvm.c
@@ -120,7 +120,8 @@ static int do_altp2m_op(XEN_GUEST_HANDLE_PARAM(void) arg)
         break;
 
     case HVMOP_altp2m_create_p2m:
-        rc = -EOPNOTSUPP;
+        if ( !(rc = altp2m_init_next(d, &a.u.view.view)) )
+            rc = __copy_to_guest(arg, &a, 1) ? -EFAULT : 0;
         break;
 
     case HVMOP_altp2m_destroy_p2m:
diff --git a/xen/include/asm-arm/altp2m.h b/xen/include/asm-arm/altp2m.h
index 3ba82a8..3ecae27 100644
--- a/xen/include/asm-arm/altp2m.h
+++ b/xen/include/asm-arm/altp2m.h
@@ -53,6 +53,10 @@ void altp2m_vcpu_destroy(struct vcpu *v);
 int altp2m_init_by_id(struct domain *d,
                       unsigned int idx);
 
+/* Find an available alternate p2m and make it valid */
+int altp2m_init_next(struct domain *d,
+                     uint16_t *idx);
+
 /* Flush all the alternate p2m's for a domain */
 void altp2m_flush(struct domain *d);
 
-- 
2.9.0



* [PATCH v2 11/25] arm/altp2m: Add HVMOP_altp2m_destroy_p2m.
  2016-08-01 17:10 [PATCH v2 00/25] arm/altp2m: Introducing altp2m to ARM Sergej Proskurin
                   ` (9 preceding siblings ...)
  2016-08-01 17:10 ` [PATCH v2 10/25] arm/altp2m: Add HVMOP_altp2m_create_p2m Sergej Proskurin
@ 2016-08-01 17:10 ` Sergej Proskurin
  2016-08-04 11:46   ` Julien Grall
  2016-08-01 17:10 ` [PATCH v2 12/25] arm/altp2m: Add HVMOP_altp2m_switch_p2m Sergej Proskurin
                   ` (14 subsequent siblings)
  25 siblings, 1 reply; 159+ messages in thread
From: Sergej Proskurin @ 2016-08-01 17:10 UTC (permalink / raw)
  To: xen-devel; +Cc: Sergej Proskurin, Julien Grall, Stefano Stabellini

Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
---
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Julien Grall <julien.grall@arm.com>
---
v2: Substituted the call to tlb_flush for p2m_flush_table.
    Added comments.
    Cosmetic fixes.
---
 xen/arch/arm/altp2m.c        | 50 ++++++++++++++++++++++++++++++++++++++++++++
 xen/arch/arm/hvm.c           |  2 +-
 xen/include/asm-arm/altp2m.h |  4 ++++
 3 files changed, 55 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/altp2m.c b/xen/arch/arm/altp2m.c
index c22d2e4..80ed553 100644
--- a/xen/arch/arm/altp2m.c
+++ b/xen/arch/arm/altp2m.c
@@ -212,6 +212,56 @@ void altp2m_flush(struct domain *d)
     altp2m_unlock(d);
 }
 
+int altp2m_destroy_by_id(struct domain *d, unsigned int idx)
+{
+    struct p2m_domain *p2m;
+    int rc = -EBUSY;
+
+    /*
+     * altp2m[0] is treated as the hostp2m and serves as a safe harbor to
+     * which we can switch as long as altp2m is active. After deactivating
+     * altp2m, the system switches back to the original hostp2m view. That
+     * is, altp2m[0] should only be destroyed/flushed/freed when altp2m is
+     * deactivated.
+     */
+    if ( !idx || idx >= MAX_ALTP2M )
+        return rc;
+
+    domain_pause_except_self(d);
+
+    altp2m_lock(d);
+
+    if ( d->arch.altp2m_vttbr[idx] != INVALID_VTTBR )
+    {
+        p2m = d->arch.altp2m_p2m[idx];
+
+        if ( !_atomic_read(p2m->active_vcpus) )
+        {
+            read_lock(&p2m->lock);
+
+            p2m_flush_table(p2m);
+
+            /*
+             * Reset VTTBR.
+             *
+             * Note that VMID is not freed so that it can be reused later.
+             */
+            p2m->vttbr.vttbr = INVALID_VTTBR;
+            d->arch.altp2m_vttbr[idx] = INVALID_VTTBR;
+
+            read_unlock(&p2m->lock);
+
+            rc = 0;
+        }
+    }
+
+    altp2m_unlock(d);
+
+    domain_unpause_except_self(d);
+
+    return rc;
+}
+
 void altp2m_teardown(struct domain *d)
 {
     unsigned int i;
diff --git a/xen/arch/arm/hvm.c b/xen/arch/arm/hvm.c
index 063a06b..df29cdc 100644
--- a/xen/arch/arm/hvm.c
+++ b/xen/arch/arm/hvm.c
@@ -125,7 +125,7 @@ static int do_altp2m_op(XEN_GUEST_HANDLE_PARAM(void) arg)
         break;
 
     case HVMOP_altp2m_destroy_p2m:
-        rc = -EOPNOTSUPP;
+        rc = altp2m_destroy_by_id(d, a.u.view.view);
         break;
 
     case HVMOP_altp2m_switch_p2m:
diff --git a/xen/include/asm-arm/altp2m.h b/xen/include/asm-arm/altp2m.h
index 3ecae27..afa1580 100644
--- a/xen/include/asm-arm/altp2m.h
+++ b/xen/include/asm-arm/altp2m.h
@@ -60,4 +60,8 @@ int altp2m_init_next(struct domain *d,
 /* Flush all the alternate p2m's for a domain */
 void altp2m_flush(struct domain *d);
 
+/* Make a specific alternate p2m invalid */
+int altp2m_destroy_by_id(struct domain *d,
+                         unsigned int idx);
+
 #endif /* __ASM_ARM_ALTP2M_H */
-- 
2.9.0



* [PATCH v2 12/25] arm/altp2m: Add HVMOP_altp2m_switch_p2m.
  2016-08-01 17:10 [PATCH v2 00/25] arm/altp2m: Introducing altp2m to ARM Sergej Proskurin
                   ` (10 preceding siblings ...)
  2016-08-01 17:10 ` [PATCH v2 11/25] arm/altp2m: Add HVMOP_altp2m_destroy_p2m Sergej Proskurin
@ 2016-08-01 17:10 ` Sergej Proskurin
  2016-08-04 11:51   ` Julien Grall
  2016-08-01 17:10 ` [PATCH v2 13/25] arm/altp2m: Make p2m_restore_state ready for altp2m Sergej Proskurin
                   ` (13 subsequent siblings)
  25 siblings, 1 reply; 159+ messages in thread
From: Sergej Proskurin @ 2016-08-01 17:10 UTC (permalink / raw)
  To: xen-devel; +Cc: Sergej Proskurin, Julien Grall, Stefano Stabellini

Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
---
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Julien Grall <julien.grall@arm.com>
---
 xen/arch/arm/altp2m.c        | 32 ++++++++++++++++++++++++++++++++
 xen/arch/arm/hvm.c           |  2 +-
 xen/include/asm-arm/altp2m.h |  4 ++++
 3 files changed, 37 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/altp2m.c b/xen/arch/arm/altp2m.c
index 80ed553..7404f42 100644
--- a/xen/arch/arm/altp2m.c
+++ b/xen/arch/arm/altp2m.c
@@ -33,6 +33,38 @@ struct p2m_domain *altp2m_get_altp2m(struct vcpu *v)
     return v->domain->arch.altp2m_p2m[index];
 }
 
+int altp2m_switch_domain_altp2m_by_id(struct domain *d, unsigned int idx)
+{
+    struct vcpu *v;
+    int rc = -EINVAL;
+
+    if ( idx >= MAX_ALTP2M )
+        return rc;
+
+    domain_pause_except_self(d);
+
+    altp2m_lock(d);
+
+    if ( d->arch.altp2m_vttbr[idx] != INVALID_VTTBR )
+    {
+        for_each_vcpu( d, v )
+            if ( idx != vcpu_altp2m(v).p2midx )
+            {
+                atomic_dec(&altp2m_get_altp2m(v)->active_vcpus);
+                vcpu_altp2m(v).p2midx = idx;
+                atomic_inc(&altp2m_get_altp2m(v)->active_vcpus);
+            }
+
+        rc = 0;
+    }
+
+    altp2m_unlock(d);
+
+    domain_unpause_except_self(d);
+
+    return rc;
+}
+
 static void altp2m_vcpu_reset(struct vcpu *v)
 {
     struct altp2mvcpu *av = &vcpu_altp2m(v);
diff --git a/xen/arch/arm/hvm.c b/xen/arch/arm/hvm.c
index df29cdc..3b508df 100644
--- a/xen/arch/arm/hvm.c
+++ b/xen/arch/arm/hvm.c
@@ -129,7 +129,7 @@ static int do_altp2m_op(XEN_GUEST_HANDLE_PARAM(void) arg)
         break;
 
     case HVMOP_altp2m_switch_p2m:
-        rc = -EOPNOTSUPP;
+        rc = altp2m_switch_domain_altp2m_by_id(d, a.u.view.view);
         break;
 
     case HVMOP_altp2m_set_mem_access:
diff --git a/xen/include/asm-arm/altp2m.h b/xen/include/asm-arm/altp2m.h
index afa1580..790bb33 100644
--- a/xen/include/asm-arm/altp2m.h
+++ b/xen/include/asm-arm/altp2m.h
@@ -49,6 +49,10 @@ void altp2m_teardown(struct domain *d);
 void altp2m_vcpu_initialise(struct vcpu *v);
 void altp2m_vcpu_destroy(struct vcpu *v);
 
+/* Switch alternate p2m for entire domain */
+int altp2m_switch_domain_altp2m_by_id(struct domain *d,
+                                      unsigned int idx);
+
 /* Make a specific alternate p2m valid. */
 int altp2m_init_by_id(struct domain *d,
                       unsigned int idx);
-- 
2.9.0



* [PATCH v2 13/25] arm/altp2m: Make p2m_restore_state ready for altp2m.
  2016-08-01 17:10 [PATCH v2 00/25] arm/altp2m: Introducing altp2m to ARM Sergej Proskurin
                   ` (11 preceding siblings ...)
  2016-08-01 17:10 ` [PATCH v2 12/25] arm/altp2m: Add HVMOP_altp2m_switch_p2m Sergej Proskurin
@ 2016-08-01 17:10 ` Sergej Proskurin
  2016-08-04 11:55   ` Julien Grall
  2016-08-01 17:10 ` [PATCH v2 14/25] arm/altp2m: Make get_page_from_gva " Sergej Proskurin
                   ` (12 subsequent siblings)
  25 siblings, 1 reply; 159+ messages in thread
From: Sergej Proskurin @ 2016-08-01 17:10 UTC (permalink / raw)
  To: xen-devel; +Cc: Sergej Proskurin, Julien Grall, Stefano Stabellini

This commit adapts the function "p2m_restore_state" so that the
currently active altp2m table is considered during state restoration.
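
The p2m selection that the adapted p2m_restore_state performs can be sketched as follows (a simplified model with made-up types, not the real Xen structures): when altp2m is active, the vCPU's current altp2m view takes precedence over the host p2m.

```c
#include <assert.h>
#include <stdbool.h>

#define TOY_MAX_ALTP2M 10

/* Toy model of the ternary added to p2m_restore_state. */
struct toy_p2m { unsigned long long vttbr; };

struct toy_domain {
    bool altp2m_active;
    struct toy_p2m hostp2m;
    struct toy_p2m *altp2m[TOY_MAX_ALTP2M];
};

struct toy_vcpu {
    struct toy_domain *domain;
    unsigned int p2midx;   /* index of the vCPU's active altp2m view */
};

static struct toy_p2m *toy_effective_p2m(struct toy_vcpu *v)
{
    struct toy_domain *d = v->domain;

    /* An active altp2m view wins over the host p2m on restore. */
    return d->altp2m_active ? d->altp2m[v->p2midx] : &d->hostp2m;
}
```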

Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
---
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Julien Grall <julien.grall@arm.com>
---
 xen/arch/arm/p2m.c           | 4 +++-
 xen/include/asm-arm/altp2m.h | 3 +++
 2 files changed, 6 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 8afea11..bcad51f 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -115,7 +115,9 @@ void p2m_save_state(struct vcpu *p)
 void p2m_restore_state(struct vcpu *n)
 {
     register_t hcr;
-    struct p2m_domain *p2m = &n->domain->arch.p2m;
+    struct domain *d = n->domain;
+    struct p2m_domain *p2m = unlikely(altp2m_active(d)) ?
+                             altp2m_get_altp2m(n) : p2m_get_hostp2m(d);
 
     if ( is_idle_vcpu(n) )
         return;
diff --git a/xen/include/asm-arm/altp2m.h b/xen/include/asm-arm/altp2m.h
index 790bb33..a6496b7 100644
--- a/xen/include/asm-arm/altp2m.h
+++ b/xen/include/asm-arm/altp2m.h
@@ -49,6 +49,9 @@ void altp2m_teardown(struct domain *d);
 void altp2m_vcpu_initialise(struct vcpu *v);
 void altp2m_vcpu_destroy(struct vcpu *v);
 
+/* Get current alternate p2m table. */
+struct p2m_domain *altp2m_get_altp2m(struct vcpu *v);
+
 /* Switch alternate p2m for entire domain */
 int altp2m_switch_domain_altp2m_by_id(struct domain *d,
                                       unsigned int idx);
-- 
2.9.0



* [PATCH v2 14/25] arm/altp2m: Make get_page_from_gva ready for altp2m.
  2016-08-01 17:10 [PATCH v2 00/25] arm/altp2m: Introducing altp2m to ARM Sergej Proskurin
                   ` (12 preceding siblings ...)
  2016-08-01 17:10 ` [PATCH v2 13/25] arm/altp2m: Make p2m_restore_state ready for altp2m Sergej Proskurin
@ 2016-08-01 17:10 ` Sergej Proskurin
  2016-08-04 11:59   ` Julien Grall
  2016-08-01 17:10 ` [PATCH v2 15/25] arm/altp2m: Extend __p2m_lookup Sergej Proskurin
                   ` (11 subsequent siblings)
  25 siblings, 1 reply; 159+ messages in thread
From: Sergej Proskurin @ 2016-08-01 17:10 UTC (permalink / raw)
  To: xen-devel; +Cc: Sergej Proskurin, Julien Grall, Stefano Stabellini

The function get_page_from_gva uses ARM's hardware support to translate
guest virtual addresses (GVAs) to machine addresses. This function is
used, among others, for memory regulation purposes, e.g., within the
context of memory ballooning. To ensure correct behavior while altp2m
is in use, we use the host's p2m table for the GVA-to-MA translation.
This is required at this point, as an altp2m view lazily copies pages
from the host's p2m and might even be flushed due to changes to the
host's p2m (as is done within the context of memory ballooning).
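
The save/switch/restore pattern around gvirt_to_maddr can be modelled as follows. The system register and the translation are faked and all names are illustrative; the point is only that the host p2m's VTTBR is installed for the duration of the walk and the previous value is restored afterwards.

```c
#include <assert.h>

/* Toy model of the VTTBR switch in get_page_from_gva. */
static unsigned long long toy_vttbr_el2;            /* stands in for VTTBR_EL2 */
static const unsigned long long TOY_HOST_VTTBR = 0x100;

static int toy_gvirt_to_maddr(unsigned long va, unsigned long *ma)
{
    /* Pretend the walk only succeeds under the host p2m, modelling an
     * altp2m view that has not lazily copied the mapping yet. */
    if ( toy_vttbr_el2 != TOY_HOST_VTTBR )
        return -1;

    *ma = va + 0x1000;   /* fake gva -> ma translation */
    return 0;
}

static int toy_translate_via_hostp2m(unsigned long va, unsigned long *ma)
{
    unsigned long long ovttbr = toy_vttbr_el2;
    int rc;

    if ( ovttbr != TOY_HOST_VTTBR )
        toy_vttbr_el2 = TOY_HOST_VTTBR;   /* + isb() on real hardware */

    rc = toy_gvirt_to_maddr(va, ma);

    if ( ovttbr != TOY_HOST_VTTBR )
        toy_vttbr_el2 = ovttbr;           /* restore the altp2m view */

    return rc;
}
```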

Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
---
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Julien Grall <julien.grall@arm.com>
---
 xen/arch/arm/p2m.c | 31 +++++++++++++++++++++++++++++--
 1 file changed, 29 insertions(+), 2 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index bcad51f..784f8da 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1614,7 +1614,7 @@ struct page_info *get_page_from_gva(struct vcpu *v, vaddr_t va,
                                     unsigned long flags)
 {
     struct domain *d = v->domain;
-    struct p2m_domain *p2m = &d->arch.p2m;
+    struct p2m_domain *p2m = p2m_get_hostp2m(d);
     struct page_info *page = NULL;
     paddr_t maddr = 0;
     int rc;
@@ -1628,7 +1628,34 @@ struct page_info *get_page_from_gva(struct vcpu *v, vaddr_t va,
 
     p2m_read_lock(p2m);
 
-    rc = gvirt_to_maddr(va, &maddr, flags);
+    /*
+     * If altp2m is active, we still translate the gva via the hostp2m, as it
+     * contains all valid mappings, while the currently active altp2m view might
+     * not contain the required gva mapping yet.
+     */
+    if ( unlikely(altp2m_active(d)) )
+    {
+        unsigned long irq_flags = 0;
+        uint64_t ovttbr = READ_SYSREG64(VTTBR_EL2);
+
+        if ( ovttbr != p2m->vttbr.vttbr )
+        {
+            local_irq_save(irq_flags);
+            WRITE_SYSREG64(p2m->vttbr.vttbr, VTTBR_EL2);
+            isb();
+        }
+
+        rc = gvirt_to_maddr(va, &maddr, flags);
+
+        if ( ovttbr != p2m->vttbr.vttbr )
+        {
+            WRITE_SYSREG64(ovttbr, VTTBR_EL2);
+            isb();
+            local_irq_restore(irq_flags);
+        }
+    }
+    else
+        rc = gvirt_to_maddr(va, &maddr, flags);
 
     if ( rc )
         goto err;
-- 
2.9.0



* [PATCH v2 15/25] arm/altp2m: Extend __p2m_lookup.
  2016-08-01 17:10 [PATCH v2 00/25] arm/altp2m: Introducing altp2m to ARM Sergej Proskurin
                   ` (13 preceding siblings ...)
  2016-08-01 17:10 ` [PATCH v2 14/25] arm/altp2m: Make get_page_from_gva " Sergej Proskurin
@ 2016-08-01 17:10 ` Sergej Proskurin
  2016-08-04 12:04   ` Julien Grall
  2016-08-01 17:10 ` [PATCH v2 16/25] arm/altp2m: Make p2m_mem_access_check ready for altp2m Sergej Proskurin
                   ` (10 subsequent siblings)
  25 siblings, 1 reply; 159+ messages in thread
From: Sergej Proskurin @ 2016-08-01 17:10 UTC (permalink / raw)
  To: xen-devel; +Cc: Sergej Proskurin, Julien Grall, Stefano Stabellini

This commit extends the function "__p2m_lookup". The function
"__p2m_lookup" performs the steps required to gather information about
the memory attributes and the p2m table level a specific gfn is mapped
to. Thus, we extend the function's prototype so that the caller can
optionally retrieve this information for further processing.

Also, we extend the prototype of "__p2m_lookup" to take an argument of
type "struct p2m_domain*", as we need to distinguish between the host's
p2m and the different altp2m views. While doing so, we also had to
extend the prototypes of the following functions:

* __p2m_get_mem_access
* p2m_mem_access_check_and_get_page

Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
---
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Julien Grall <julien.grall@arm.com>
---
 xen/arch/arm/p2m.c | 59 ++++++++++++++++++++++++++++++++++++------------------
 1 file changed, 39 insertions(+), 20 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 784f8da..326e343 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -168,15 +168,22 @@ void p2m_flush_tlb(struct p2m_domain *p2m)
     }
 }
 
+static int __p2m_get_mem_access(struct p2m_domain*, gfn_t, xenmem_access_t*);
+
 /*
  * Lookup the MFN corresponding to a domain's GFN.
  *
  * There are no processor functions to do a stage 2 only lookup therefore we
  * do a a software walk.
+ *
+ * Optionally, __p2m_lookup takes arguments to provide information about the
+ * p2m type, the p2m table level the paddr is mapped to, associated mem
+ * attributes, and memory access rights.
  */
-static mfn_t __p2m_lookup(struct domain *d, gfn_t gfn, p2m_type_t *t)
+static mfn_t __p2m_lookup(struct p2m_domain *p2m, gfn_t gfn, p2m_type_t *t,
+                          unsigned int *level, unsigned int *mattr,
+                          xenmem_access_t *xma)
 {
-    struct p2m_domain *p2m = &d->arch.p2m;
     const paddr_t paddr = pfn_to_paddr(gfn_x(gfn));
     const unsigned int offsets[4] = {
         zeroeth_table_offset(paddr),
@@ -191,13 +198,16 @@ static mfn_t __p2m_lookup(struct domain *d, gfn_t gfn, p2m_type_t *t)
     mfn_t mfn = INVALID_MFN;
     paddr_t mask = 0;
     p2m_type_t _t;
-    unsigned int level, root_table;
+    unsigned int _level, _mattr, root_table;
+    int rc;
 
     ASSERT(p2m_is_locked(p2m));
     BUILD_BUG_ON(THIRD_MASK != PAGE_MASK);
 
-    /* Allow t to be NULL */
+    /* Allow t, level, and mattr to be NULL */
     t = t ?: &_t;
+    level = level ?: &_level;
+    mattr = mattr ?: &_mattr;
 
     *t = p2m_invalid;
 
@@ -220,20 +230,20 @@ static mfn_t __p2m_lookup(struct domain *d, gfn_t gfn, p2m_type_t *t)
 
     ASSERT(P2M_ROOT_LEVEL < 4);
 
-    for ( level = P2M_ROOT_LEVEL ; level < 4 ; level++ )
+    for ( *level = P2M_ROOT_LEVEL ; *level < 4 ; (*level)++ )
     {
-        mask = masks[level];
+        mask = masks[*level];
 
-        pte = map[offsets[level]];
+        pte = map[offsets[*level]];
 
-        if ( level == 3 && !p2m_table(pte) )
+        if ( *level == 3 && !p2m_table(pte) )
             /* Invalid, clobber the pte */
             pte.bits = 0;
-        if ( level == 3 || !p2m_table(pte) )
+        if ( *level == 3 || !p2m_table(pte) )
             /* Done */
             break;
 
-        ASSERT(level < 3);
+        ASSERT(*level < 3);
 
         /* Map for next level */
         unmap_domain_page(map);
@@ -249,6 +259,16 @@ static mfn_t __p2m_lookup(struct domain *d, gfn_t gfn, p2m_type_t *t)
         mfn = _mfn(paddr_to_pfn((pte.bits & PADDR_MASK & mask) |
                                 (paddr & ~mask)));
         *t = pte.p2m.type;
+        *mattr = pte.p2m.mattr;
+
+        if ( xma )
+        {
+            /* Get mem access attributes if requested. */
+            rc = __p2m_get_mem_access(p2m, gfn, xma);
+            if ( rc )
+                /* Set invalid mfn on error. */
+                mfn = INVALID_MFN;
+        }
     }
 
 err:
@@ -258,10 +278,10 @@ err:
 mfn_t p2m_lookup(struct domain *d, gfn_t gfn, p2m_type_t *t)
 {
     mfn_t ret;
-    struct p2m_domain *p2m = &d->arch.p2m;
+    struct p2m_domain *p2m = p2m_get_hostp2m(d);
 
     p2m_read_lock(p2m);
-    ret = __p2m_lookup(d, gfn, t);
+    ret = __p2m_lookup(p2m, gfn, t, NULL, NULL, NULL);
     p2m_read_unlock(p2m);
 
     return ret;
@@ -479,10 +499,9 @@ static int p2m_create_table(struct p2m_domain *p2m, lpae_t *entry,
     return 0;
 }
 
-static int __p2m_get_mem_access(struct domain *d, gfn_t gfn,
+static int __p2m_get_mem_access(struct p2m_domain *p2m, gfn_t gfn,
                                 xenmem_access_t *access)
 {
-    struct p2m_domain *p2m = p2m_get_hostp2m(d);
     void *i;
     unsigned int index;
 
@@ -525,7 +544,7 @@ static int __p2m_get_mem_access(struct domain *d, gfn_t gfn,
          * No setting was found in the Radix tree. Check if the
          * entry exists in the page-tables.
          */
-        mfn_t mfn = __p2m_lookup(d, gfn, NULL);
+        mfn_t mfn = __p2m_lookup(p2m, gfn, NULL, NULL, NULL, NULL);
 
         if ( mfn_eq(mfn, INVALID_MFN) )
             return -ESRCH;
@@ -1519,7 +1538,7 @@ mfn_t gfn_to_mfn(struct domain *d, gfn_t gfn)
  * we indeed found a conflicting mem_access setting.
  */
 static struct page_info*
-p2m_mem_access_check_and_get_page(vaddr_t gva, unsigned long flag)
+p2m_mem_access_check_and_get_page(struct p2m_domain *p2m, vaddr_t gva, unsigned long flag)
 {
     long rc;
     paddr_t ipa;
@@ -1539,7 +1558,7 @@ p2m_mem_access_check_and_get_page(vaddr_t gva, unsigned long flag)
      * We do this first as this is faster in the default case when no
      * permission is set on the page.
      */
-    rc = __p2m_get_mem_access(current->domain, gfn, &xma);
+    rc = __p2m_get_mem_access(p2m, gfn, &xma);
     if ( rc < 0 )
         goto err;
 
@@ -1588,7 +1607,7 @@ p2m_mem_access_check_and_get_page(vaddr_t gva, unsigned long flag)
      * We had a mem_access permission limiting the access, but the page type
      * could also be limiting, so we need to check that as well.
      */
-    mfn = __p2m_lookup(current->domain, gfn, &t);
+    mfn = __p2m_lookup(p2m, gfn, &t, NULL, NULL, NULL);
     if ( mfn_eq(mfn, INVALID_MFN) )
         goto err;
 
@@ -1671,7 +1690,7 @@ struct page_info *get_page_from_gva(struct vcpu *v, vaddr_t va,
 
 err:
     if ( !page && p2m->mem_access_enabled )
-        page = p2m_mem_access_check_and_get_page(va, flags);
+        page = p2m_mem_access_check_and_get_page(p2m, va, flags);
 
     p2m_read_unlock(p2m);
 
@@ -1948,7 +1967,7 @@ int p2m_get_mem_access(struct domain *d, gfn_t gfn,
     struct p2m_domain *p2m = p2m_get_hostp2m(d);
 
     p2m_read_lock(p2m);
-    ret = __p2m_get_mem_access(d, gfn, access);
+    ret = __p2m_get_mem_access(p2m, gfn, access);
     p2m_read_unlock(p2m);
 
     return ret;
-- 
2.9.0



* [PATCH v2 16/25] arm/altp2m: Make p2m_mem_access_check ready for altp2m.
  2016-08-01 17:10 [PATCH v2 00/25] arm/altp2m: Introducing altp2m to ARM Sergej Proskurin
                   ` (14 preceding siblings ...)
  2016-08-01 17:10 ` [PATCH v2 15/25] arm/altp2m: Extend __p2m_lookup Sergej Proskurin
@ 2016-08-01 17:10 ` Sergej Proskurin
  2016-08-01 17:10 ` [PATCH v2 17/25] arm/altp2m: Cosmetic fixes - function prototypes Sergej Proskurin
                   ` (9 subsequent siblings)
  25 siblings, 0 replies; 159+ messages in thread
From: Sergej Proskurin @ 2016-08-01 17:10 UTC (permalink / raw)
  To: xen-devel; +Cc: Sergej Proskurin, Julien Grall, Stefano Stabellini

This commit extends the function "p2m_mem_access_check" to consider
altp2m. The new implementation also fills the request buffer with
altp2m-related information.

Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
---
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Julien Grall <julien.grall@arm.com>
---
 xen/arch/arm/p2m.c | 17 +++++++++++++++--
 1 file changed, 15 insertions(+), 2 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 326e343..53258e1 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -14,6 +14,7 @@
 #include <asm/hardirq.h>
 #include <asm/page.h>
 
+#include <asm/vm_event.h>
 #include <asm/altp2m.h>
 
 #ifdef CONFIG_ARM_64
@@ -1775,13 +1776,17 @@ bool_t p2m_mem_access_check(paddr_t gpa, vaddr_t gla, const struct npfec npfec)
     xenmem_access_t xma;
     vm_event_request_t *req;
     struct vcpu *v = current;
-    struct p2m_domain *p2m = p2m_get_hostp2m(v->domain);
+    struct domain *d = v->domain;
+    struct p2m_domain *p2m = unlikely(altp2m_active(d)) ?
+                             altp2m_get_altp2m(v) : p2m_get_hostp2m(d);
 
     /* Mem_access is not in use. */
     if ( !p2m->mem_access_enabled )
         return true;
 
-    rc = p2m_get_mem_access(v->domain, _gfn(paddr_to_pfn(gpa)), &xma);
+    p2m_read_lock(p2m);
+    rc = __p2m_get_mem_access(p2m, _gfn(paddr_to_pfn(gpa)), &xma);
+    p2m_read_unlock(p2m);
     if ( rc )
         return true;
 
@@ -1887,6 +1892,14 @@ bool_t p2m_mem_access_check(paddr_t gpa, vaddr_t gla, const struct npfec npfec)
         req->u.mem_access.flags |= npfec.insn_fetch     ? MEM_ACCESS_X : 0;
         req->vcpu_id = v->vcpu_id;
 
+        vm_event_fill_regs(req);
+
+        if ( unlikely(altp2m_active(d)) )
+        {
+            req->flags |= VM_EVENT_FLAG_ALTERNATE_P2M;
+            req->altp2m_idx = vcpu_altp2m(v).p2midx;
+        }
+
         mem_access_send_req(v->domain, req);
         xfree(req);
     }
-- 
2.9.0



* [PATCH v2 17/25] arm/altp2m: Cosmetic fixes - function prototypes.
  2016-08-01 17:10 [PATCH v2 00/25] arm/altp2m: Introducing altp2m to ARM Sergej Proskurin
                   ` (15 preceding siblings ...)
  2016-08-01 17:10 ` [PATCH v2 16/25] arm/altp2m: Make p2m_mem_access_check ready for altp2m Sergej Proskurin
@ 2016-08-01 17:10 ` Sergej Proskurin
  2016-08-04 12:06   ` Julien Grall
  2016-08-01 17:10 ` [PATCH v2 18/25] arm/altp2m: Add HVMOP_altp2m_set_mem_access Sergej Proskurin
                   ` (8 subsequent siblings)
  25 siblings, 1 reply; 159+ messages in thread
From: Sergej Proskurin @ 2016-08-01 17:10 UTC (permalink / raw)
  To: xen-devel; +Cc: Sergej Proskurin, Julien Grall, Stefano Stabellini

This commit changes the prototypes of the following functions:
- apply_p2m_changes
- apply_one_level
- p2m_insert_mapping
- p2m_remove_mapping

These changes are required because our implementation reuses most of the
existing ARM p2m implementation to set page table attributes of the
individual altp2m views. Therefore, the existing function prototypes
have been extended to take another argument (of type struct p2m_domain
*). This allows the caller to specify the p2m/altp2m domain that should
be processed by the individual function -- instead of always accessing
the host's default p2m domain.

Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
---
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Julien Grall <julien.grall@arm.com>
---
v2: Adoption of the functions "__p2m_lookup" and "__p2m_get_mem_access"
    have been moved out of this commit.
---
 xen/arch/arm/p2m.c | 49 +++++++++++++++++++++++++------------------------
 1 file changed, 25 insertions(+), 24 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 53258e1..d4b7c92 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -702,6 +702,7 @@ static int p2m_shatter_page(struct p2m_domain *p2m,
  * -ve == (-Exxx) error.
  */
 static int apply_one_level(struct domain *d,
+                           struct p2m_domain *p2m,
                            lpae_t *entry,
                            unsigned int level,
                            bool_t flush_cache,
@@ -717,7 +718,6 @@ static int apply_one_level(struct domain *d,
     const paddr_t level_size = level_sizes[level];
     const paddr_t level_mask = level_masks[level];
 
-    struct p2m_domain *p2m = &d->arch.p2m;
     lpae_t pte;
     const lpae_t orig_pte = *entry;
     int rc;
@@ -955,6 +955,7 @@ static void update_reference_mapping(struct page_info *page,
 }
 
 static int apply_p2m_changes(struct domain *d,
+                     struct p2m_domain *p2m,
                      enum p2m_operation op,
                      gfn_t sgfn,
                      unsigned long nr,
@@ -967,7 +968,6 @@ static int apply_p2m_changes(struct domain *d,
     paddr_t end_gpaddr = pfn_to_paddr(gfn_x(sgfn) + nr);
     paddr_t maddr = pfn_to_paddr(mfn_x(smfn));
     int rc, ret;
-    struct p2m_domain *p2m = &d->arch.p2m;
     lpae_t *mappings[4] = { NULL, NULL, NULL, NULL };
     struct page_info *pages[4] = { NULL, NULL, NULL, NULL };
     paddr_t addr;
@@ -1093,7 +1093,7 @@ static int apply_p2m_changes(struct domain *d,
             lpae_t *entry = &mappings[level][offset];
             lpae_t old_entry = *entry;
 
-            ret = apply_one_level(d, entry,
+            ret = apply_one_level(d, p2m, entry,
                                   level, flush_pt, op,
                                   start_gpaddr, end_gpaddr,
                                   &addr, &maddr, &flush,
@@ -1178,7 +1178,7 @@ static int apply_p2m_changes(struct domain *d,
 out:
     if ( flush )
     {
-        p2m_flush_tlb(&d->arch.p2m);
+        p2m_flush_tlb(p2m);
         ret = iommu_iotlb_flush(d, gfn_x(sgfn), nr);
         if ( !rc )
             rc = ret;
@@ -1205,31 +1205,33 @@ out:
          * addr keeps the address of the end of the last successfully-inserted
          * mapping.
          */
-        apply_p2m_changes(d, REMOVE, sgfn, gfn - gfn_x(sgfn), smfn,
-                          0, p2m_invalid, d->arch.p2m.default_access);
+        apply_p2m_changes(d, p2m, REMOVE, sgfn, gfn - gfn_x(sgfn), smfn,
+                          0, p2m_invalid, p2m->default_access);
     }
 
     return rc;
 }
 
 static inline int p2m_insert_mapping(struct domain *d,
+                                     struct p2m_domain *p2m,
                                      gfn_t start_gfn,
                                      unsigned long nr,
                                      mfn_t mfn,
                                      p2m_type_t t)
 {
-    return apply_p2m_changes(d, INSERT, start_gfn, nr, mfn,
-                             0, t, d->arch.p2m.default_access);
+    return apply_p2m_changes(d, p2m, INSERT, start_gfn, nr, mfn,
+                             0, t, p2m->default_access);
 }
 
 static inline int p2m_remove_mapping(struct domain *d,
+                                     struct p2m_domain *p2m,
                                      gfn_t start_gfn,
                                      unsigned long nr,
                                      mfn_t mfn)
 {
-    return apply_p2m_changes(d, REMOVE, start_gfn, nr, mfn,
+    return apply_p2m_changes(d, p2m, REMOVE, start_gfn, nr, mfn,
                              /* arguments below not used when removing mapping */
-                             0, p2m_invalid, d->arch.p2m.default_access);
+                             0, p2m_invalid, p2m->default_access);
 }
 
 int map_regions_rw_cache(struct domain *d,
@@ -1237,7 +1239,7 @@ int map_regions_rw_cache(struct domain *d,
                          unsigned long nr,
                          mfn_t mfn)
 {
-    return p2m_insert_mapping(d, gfn, nr, mfn, p2m_mmio_direct_c);
+    return p2m_insert_mapping(d, p2m_get_hostp2m(d), gfn, nr, mfn, p2m_mmio_direct_c);
 }
 
 int unmap_regions_rw_cache(struct domain *d,
@@ -1245,7 +1247,7 @@ int unmap_regions_rw_cache(struct domain *d,
                            unsigned long nr,
                            mfn_t mfn)
 {
-    return p2m_remove_mapping(d, gfn, nr, mfn);
+    return p2m_remove_mapping(d, p2m_get_hostp2m(d), gfn, nr, mfn);
 }
 
 int map_mmio_regions(struct domain *d,
@@ -1253,7 +1255,7 @@ int map_mmio_regions(struct domain *d,
                      unsigned long nr,
                      mfn_t mfn)
 {
-    return p2m_insert_mapping(d, start_gfn, nr, mfn, p2m_mmio_direct_nc);
+    return p2m_insert_mapping(d, p2m_get_hostp2m(d), start_gfn, nr, mfn, p2m_mmio_direct_nc);
 }
 
 int unmap_mmio_regions(struct domain *d,
@@ -1261,7 +1263,7 @@ int unmap_mmio_regions(struct domain *d,
                        unsigned long nr,
                        mfn_t mfn)
 {
-    return p2m_remove_mapping(d, start_gfn, nr, mfn);
+    return p2m_remove_mapping(d, p2m_get_hostp2m(d), start_gfn, nr, mfn);
 }
 
 int map_dev_mmio_region(struct domain *d,
@@ -1291,14 +1293,14 @@ int guest_physmap_add_entry(struct domain *d,
                             unsigned long page_order,
                             p2m_type_t t)
 {
-    return p2m_insert_mapping(d, gfn, (1 << page_order), mfn, t);
+    return p2m_insert_mapping(d, p2m_get_hostp2m(d), gfn, (1 << page_order), mfn, t);
 }
 
 void guest_physmap_remove_page(struct domain *d,
                                gfn_t gfn,
                                mfn_t mfn, unsigned int page_order)
 {
-    p2m_remove_mapping(d, gfn, (1 << page_order), mfn);
+    p2m_remove_mapping(d, p2m_get_hostp2m(d), gfn, (1 << page_order), mfn);
 }
 
 int p2m_alloc_table(struct p2m_domain *p2m)
@@ -1505,26 +1507,25 @@ int p2m_init(struct domain *d)
 
 int relinquish_p2m_mapping(struct domain *d)
 {
-    struct p2m_domain *p2m = &d->arch.p2m;
+    struct p2m_domain *p2m = p2m_get_hostp2m(d);
     unsigned long nr;
 
     nr = gfn_x(p2m->max_mapped_gfn) - gfn_x(p2m->lowest_mapped_gfn);
 
-    return apply_p2m_changes(d, RELINQUISH, p2m->lowest_mapped_gfn, nr,
-                             INVALID_MFN, 0, p2m_invalid,
-                             d->arch.p2m.default_access);
+    return apply_p2m_changes(d, p2m, RELINQUISH, p2m->lowest_mapped_gfn, nr,
+                             INVALID_MFN, 0, p2m_invalid, p2m->default_access);
 }
 
 int p2m_cache_flush(struct domain *d, gfn_t start, unsigned long nr)
 {
-    struct p2m_domain *p2m = &d->arch.p2m;
+    struct p2m_domain *p2m = p2m_get_hostp2m(d);
     gfn_t end = gfn_add(start, nr);
 
     start = gfn_max(start, p2m->lowest_mapped_gfn);
     end = gfn_min(end, p2m->max_mapped_gfn);
 
-    return apply_p2m_changes(d, CACHEFLUSH, start, nr, INVALID_MFN,
-                             0, p2m_invalid, d->arch.p2m.default_access);
+    return apply_p2m_changes(d, p2m, CACHEFLUSH, start, nr, INVALID_MFN,
+                             0, p2m_invalid, p2m->default_access);
 }
 
 mfn_t gfn_to_mfn(struct domain *d, gfn_t gfn)
@@ -1963,7 +1964,7 @@ long p2m_set_mem_access(struct domain *d, gfn_t gfn, uint32_t nr,
         return 0;
     }
 
-    rc = apply_p2m_changes(d, MEMACCESS, gfn_add(gfn, start),
+    rc = apply_p2m_changes(d, p2m, MEMACCESS, gfn_add(gfn, start),
                            (nr - start), INVALID_MFN, mask, 0, a);
     if ( rc < 0 )
         return rc;
-- 
2.9.0



* [PATCH v2 18/25] arm/altp2m: Add HVMOP_altp2m_set_mem_access.
  2016-08-01 17:10 [PATCH v2 00/25] arm/altp2m: Introducing altp2m to ARM Sergej Proskurin
                   ` (16 preceding siblings ...)
  2016-08-01 17:10 ` [PATCH v2 17/25] arm/altp2m: Cosmetic fixes - function prototypes Sergej Proskurin
@ 2016-08-01 17:10 ` Sergej Proskurin
  2016-08-04 14:19   ` Julien Grall
  2016-08-01 17:10 ` [PATCH v2 19/25] arm/altp2m: Add altp2m_propagate_change Sergej Proskurin
                   ` (7 subsequent siblings)
  25 siblings, 1 reply; 159+ messages in thread
From: Sergej Proskurin @ 2016-08-01 17:10 UTC (permalink / raw)
  To: xen-devel; +Cc: Sergej Proskurin, Julien Grall, Stefano Stabellini

The HVMOP HVMOP_altp2m_set_mem_access allows setting gfn permissions
(currently one page at a time) of a specific altp2m view. In case the
view does not hold the requested gfn entry, the entry is first copied
from the hostp2m table and then modified as requested.

Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
---
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Julien Grall <julien.grall@arm.com>
---
v2: Prevent the page reference count from being falsely updated on
    altp2m modification. Therefore, we add a check determining whether
    the target p2m is a hostp2m before p2m_put_l3_page is called.
---
 xen/arch/arm/altp2m.c        | 68 +++++++++++++++++++++++++++++++++++++
 xen/arch/arm/hvm.c           |  7 +++-
 xen/arch/arm/p2m.c           | 81 +++++++++++++++++++++++++++++++++++++-------
 xen/include/asm-arm/altp2m.h | 10 ++++++
 xen/include/asm-arm/p2m.h    | 21 ++++++++++++
 5 files changed, 173 insertions(+), 14 deletions(-)

diff --git a/xen/arch/arm/altp2m.c b/xen/arch/arm/altp2m.c
index 7404f42..f98fd73 100644
--- a/xen/arch/arm/altp2m.c
+++ b/xen/arch/arm/altp2m.c
@@ -65,6 +65,74 @@ int altp2m_switch_domain_altp2m_by_id(struct domain *d, unsigned int idx)
     return rc;
 }
 
+int altp2m_set_mem_access(struct domain *d,
+                          struct p2m_domain *hp2m,
+                          struct p2m_domain *ap2m,
+                          p2m_access_t a,
+                          gfn_t gfn)
+{
+    p2m_type_t p2mt;
+    xenmem_access_t xma_old;
+    paddr_t gpa = pfn_to_paddr(gfn_x(gfn));
+    mfn_t mfn;
+    unsigned int level;
+    int rc;
+
+    static const p2m_access_t memaccess[] = {
+#define ACCESS(ac) [XENMEM_access_##ac] = p2m_access_##ac
+        ACCESS(n),
+        ACCESS(r),
+        ACCESS(w),
+        ACCESS(rw),
+        ACCESS(x),
+        ACCESS(rx),
+        ACCESS(wx),
+        ACCESS(rwx),
+        ACCESS(rx2rw),
+        ACCESS(n2rwx),
+#undef ACCESS
+    };
+
+    altp2m_lock(d);
+
+    /* Check if entry is part of the altp2m view. */
+    mfn = p2m_lookup_attr(ap2m, gfn, &p2mt, &level, NULL, NULL);
+
+    /* Check host p2m if no valid entry in ap2m. */
+    if ( mfn_eq(mfn, INVALID_MFN) )
+    {
+        /* Check if entry is part of the host p2m view. */
+        mfn = p2m_lookup_attr(hp2m, gfn, &p2mt, &level, NULL, &xma_old);
+        if ( mfn_eq(mfn, INVALID_MFN) || p2mt != p2m_ram_rw )
+        {
+            rc = -ESRCH;
+            goto out;
+        }
+
+        /* If this is a superpage, copy that first. */
+        if ( level != 3 )
+        {
+            rc = modify_altp2m_entry(d, ap2m, gpa, pfn_to_paddr(mfn_x(mfn)),
+                                     level, p2mt, memaccess[xma_old]);
+            if ( rc < 0 )
+            {
+                rc = -ESRCH;
+                goto out;
+            }
+        }
+    }
+
+    /* Set mem access attributes - currently supporting only one (4K) page. */
+    level = 3;
+    rc = modify_altp2m_entry(d, ap2m, gpa, pfn_to_paddr(mfn_x(mfn)),
+                             level, p2mt, a);
+
+out:
+    altp2m_unlock(d);
+
+    return rc;
+}
+
 static void altp2m_vcpu_reset(struct vcpu *v)
 {
     struct altp2mvcpu *av = &vcpu_altp2m(v);
diff --git a/xen/arch/arm/hvm.c b/xen/arch/arm/hvm.c
index 3b508df..00a244a 100644
--- a/xen/arch/arm/hvm.c
+++ b/xen/arch/arm/hvm.c
@@ -133,7 +133,12 @@ static int do_altp2m_op(XEN_GUEST_HANDLE_PARAM(void) arg)
         break;
 
     case HVMOP_altp2m_set_mem_access:
-        rc = -EOPNOTSUPP;
+        if ( a.u.set_mem_access.pad )
+            rc = -EINVAL;
+        else
+            rc = p2m_set_mem_access(d, _gfn(a.u.set_mem_access.gfn), 1, 0, 0,
+                                    a.u.set_mem_access.hvmmem_access,
+                                    a.u.set_mem_access.view);
         break;
 
     case HVMOP_altp2m_change_gfn:
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index d4b7c92..e0a7f38 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -288,6 +288,19 @@ mfn_t p2m_lookup(struct domain *d, gfn_t gfn, p2m_type_t *t)
     return ret;
 }
 
+mfn_t p2m_lookup_attr(struct p2m_domain *p2m, gfn_t gfn, p2m_type_t *t,
+                      unsigned int *level, unsigned int *mattr,
+                      xenmem_access_t *xma)
+{
+    mfn_t ret;
+
+    p2m_read_lock(p2m);
+    ret = __p2m_lookup(p2m, gfn, t, level, mattr, xma);
+    p2m_read_unlock(p2m);
+
+    return ret;
+}
+
 int guest_physmap_mark_populate_on_demand(struct domain *d,
                                           unsigned long gfn,
                                           unsigned int order)
@@ -760,7 +773,7 @@ static int apply_one_level(struct domain *d,
                  * of the p2m tree which we would be about to lop off.
                  */
                 BUG_ON(level < 3 && p2m_table(orig_pte));
-                if ( level == 3 )
+                if ( level == 3 && p2m_is_hostp2m(p2m) )
                     p2m_put_l3_page(orig_pte);
             }
             else /* New mapping */
@@ -859,7 +872,7 @@ static int apply_one_level(struct domain *d,
 
         p2m->stats.mappings[level]--;
 
-        if ( level == 3 )
+        if ( level == 3 && p2m_is_hostp2m(p2m) )
             p2m_put_l3_page(orig_pte);
 
         /*
@@ -1303,6 +1316,21 @@ void guest_physmap_remove_page(struct domain *d,
     p2m_remove_mapping(d, p2m_get_hostp2m(d), gfn, (1 << page_order), mfn);
 }
 
+int modify_altp2m_entry(struct domain *d, struct p2m_domain *ap2m,
+                        paddr_t gpa, paddr_t maddr, unsigned int level,
+                        p2m_type_t t, p2m_access_t a)
+{
+    paddr_t size = level_sizes[level];
+    paddr_t mask = level_masks[level];
+    gfn_t gfn = _gfn(paddr_to_pfn(gpa & mask));
+    mfn_t mfn = _mfn(paddr_to_pfn(maddr & mask));
+    unsigned long nr = paddr_to_pfn(size);
+
+    ASSERT(p2m_is_altp2m(ap2m));
+
+    return apply_p2m_changes(d, ap2m, INSERT, gfn, nr, mfn, 0, t, a);
+}
+
 int p2m_alloc_table(struct p2m_domain *p2m)
 {
     unsigned int i;
@@ -1920,7 +1948,7 @@ long p2m_set_mem_access(struct domain *d, gfn_t gfn, uint32_t nr,
                         uint32_t start, uint32_t mask, xenmem_access_t access,
                         unsigned int altp2m_idx)
 {
-    struct p2m_domain *p2m = p2m_get_hostp2m(d);
+    struct p2m_domain *hp2m = p2m_get_hostp2m(d), *ap2m = NULL;
     p2m_access_t a;
     long rc = 0;
 
@@ -1939,33 +1967,60 @@ long p2m_set_mem_access(struct domain *d, gfn_t gfn, uint32_t nr,
 #undef ACCESS
     };
 
+    /* altp2m view 0 is treated as the hostp2m */
+    if ( altp2m_idx )
+    {
+        if ( altp2m_idx >= MAX_ALTP2M ||
+             d->arch.altp2m_vttbr[altp2m_idx] == INVALID_VTTBR )
+            return -EINVAL;
+
+        ap2m = d->arch.altp2m_p2m[altp2m_idx];
+    }
+
     switch ( access )
     {
     case 0 ... ARRAY_SIZE(memaccess) - 1:
         a = memaccess[access];
         break;
     case XENMEM_access_default:
-        a = p2m->default_access;
+        a = hp2m->default_access;
         break;
     default:
         return -EINVAL;
     }
 
-    /*
-     * Flip mem_access_enabled to true when a permission is set, as to prevent
-     * allocating or inserting super-pages.
-     */
-    p2m->mem_access_enabled = true;
-
     /* If request to set default access. */
     if ( gfn_eq(gfn, INVALID_GFN) )
     {
-        p2m->default_access = a;
+        hp2m->default_access = a;
         return 0;
     }
 
-    rc = apply_p2m_changes(d, p2m, MEMACCESS, gfn_add(gfn, start),
-                           (nr - start), INVALID_MFN, mask, 0, a);
+    if ( ap2m )
+    {
+        /*
+         * Flip mem_access_enabled to true when a permission is set, as to prevent
+         * allocating or inserting super-pages.
+         */
+        ap2m->mem_access_enabled = true;
+
+        /*
+         * ARM altp2m currently supports setting memory access rights
+         * of only one (4K) page at a time.
+         */
+        rc = altp2m_set_mem_access(d, hp2m, ap2m, a, gfn);
+    }
+    else
+    {
+        /*
+         * Flip mem_access_enabled to true when a permission is set, as to prevent
+         * allocating or inserting super-pages.
+         */
+        hp2m->mem_access_enabled = true;
+
+        rc = apply_p2m_changes(d, hp2m, MEMACCESS, gfn_add(gfn, start),
+                               (nr - start), INVALID_MFN, mask, 0, a);
+    }
     if ( rc < 0 )
         return rc;
     else if ( rc > 0 )
diff --git a/xen/include/asm-arm/altp2m.h b/xen/include/asm-arm/altp2m.h
index a6496b7..dc41f93 100644
--- a/xen/include/asm-arm/altp2m.h
+++ b/xen/include/asm-arm/altp2m.h
@@ -71,4 +71,14 @@ void altp2m_flush(struct domain *d);
 int altp2m_destroy_by_id(struct domain *d,
                          unsigned int idx);
 
+/* Set memory access attributes of the gfn in the altp2m view. If the altp2m
+ * view does not contain the particular entry, copy it first from the hostp2m.
+ *
+ * Currently supports setting memory attributes of only one (4K) page. */
+int altp2m_set_mem_access(struct domain *d,
+                          struct p2m_domain *hp2m,
+                          struct p2m_domain *ap2m,
+                          p2m_access_t a,
+                          gfn_t gfn);
+
 #endif /* __ASM_ARM_ALTP2M_H */
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index 32326cb..9859ad1 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -180,6 +180,17 @@ void p2m_dump_info(struct domain *d);
 /* Look up the MFN corresponding to a domain's GFN. */
 mfn_t p2m_lookup(struct domain *d, gfn_t gfn, p2m_type_t *t);
 
+/* Lookup the MFN, memory attributes, and page table level corresponding to a
+ * domain's GFN. */
+mfn_t p2m_lookup_attr(struct p2m_domain *p2m, gfn_t gfn, p2m_type_t *t,
+                      unsigned int *level, unsigned int *mattr,
+                      xenmem_access_t *xma);
+
+/* Modify an altp2m view's entry or its attributes. */
+int modify_altp2m_entry(struct domain *d, struct p2m_domain *p2m,
+                        paddr_t gpa, paddr_t maddr, unsigned int level,
+                        p2m_type_t t, p2m_access_t a);
+
 /* Clean & invalidate caches corresponding to a region of guest address space */
 int p2m_cache_flush(struct domain *d, gfn_t start, unsigned long nr);
 
@@ -303,6 +314,16 @@ static inline int get_page_and_type(struct page_info *page,
 /* get host p2m table */
 #define p2m_get_hostp2m(d) (&(d)->arch.p2m)
 
+static inline bool_t p2m_is_hostp2m(const struct p2m_domain *p2m)
+{
+    return p2m->p2m_class == p2m_host;
+}
+
+static inline bool_t p2m_is_altp2m(const struct p2m_domain *p2m)
+{
+    return p2m->p2m_class == p2m_alternate;
+}
+
 /* vm_event and mem_access are supported on any ARM guest */
 static inline bool_t p2m_mem_access_sanity_check(struct domain *d)
 {
-- 
2.9.0



* [PATCH v2 19/25] arm/altp2m: Add altp2m_propagate_change.
  2016-08-01 17:10 [PATCH v2 00/25] arm/altp2m: Introducing altp2m to ARM Sergej Proskurin
                   ` (17 preceding siblings ...)
  2016-08-01 17:10 ` [PATCH v2 18/25] arm/altp2m: Add HVMOP_altp2m_set_mem_access Sergej Proskurin
@ 2016-08-01 17:10 ` Sergej Proskurin
  2016-08-04 14:50   ` Julien Grall
  2016-08-01 17:10 ` [PATCH v2 20/25] arm/altp2m: Add altp2m paging mechanism Sergej Proskurin
                   ` (6 subsequent siblings)
  25 siblings, 1 reply; 159+ messages in thread
From: Sergej Proskurin @ 2016-08-01 17:10 UTC (permalink / raw)
  To: xen-devel; +Cc: Sergej Proskurin, Julien Grall, Stefano Stabellini

This commit introduces the function "altp2m_propagate_change", which is
responsible for propagating changes applied to the host's p2m to a
specific altp2m view or to all altp2m views. In this way, Xen can
increase or decrease the guest's physmem at run-time without leaving the
altp2m views with stale/invalid entries.
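
The reset policy described above (reset the first affected view individually; as soon as a second view is affected, reset every remaining active view) can be modelled as a small standalone C sketch. All names below are hypothetical illustration-only identifiers, not Xen code:

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Hypothetical model of the reset policy in altp2m_propagate_change():
 * when a page is dropped from the host p2m, the first affected view is
 * reset individually; on a second affected view, all remaining active
 * views are reset as well.
 */
#define MODEL_MAX_ALTP2M 10

/* Returns how many views end up reset, given which views are active
 * and which of them currently map the dropped gfn. */
static int model_propagate_drop(const bool active[MODEL_MAX_ALTP2M],
                                const bool maps_gfn[MODEL_MAX_ALTP2M])
{
    int resets = 0, last_reset_idx = -1;

    for ( int i = 0; i < MODEL_MAX_ALTP2M; i++ )
    {
        if ( !active[i] || !maps_gfn[i] )
            continue;

        if ( resets++ == 0 )
            last_reset_idx = i;          /* first hit: reset this view only */
        else
        {
            /* Second hit: reset every other active view and stop. */
            resets = 1;                  /* the first view is already reset */
            for ( int j = 0; j < MODEL_MAX_ALTP2M; j++ )
                if ( j != last_reset_idx && active[j] )
                    resets++;
            break;
        }
    }

    return resets;
}
```

One consequence of this design, visible in the sketch, is that a drop touching two views resets every active view, not just the two affected ones.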

Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
---
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Julien Grall <julien.grall@arm.com>
---
 xen/arch/arm/altp2m.c        | 75 ++++++++++++++++++++++++++++++++++++++++++++
 xen/arch/arm/p2m.c           | 14 +++++++++
 xen/include/asm-arm/altp2m.h |  9 ++++++
 xen/include/asm-arm/p2m.h    |  5 +++
 4 files changed, 103 insertions(+)

diff --git a/xen/arch/arm/altp2m.c b/xen/arch/arm/altp2m.c
index f98fd73..f3c1cff 100644
--- a/xen/arch/arm/altp2m.c
+++ b/xen/arch/arm/altp2m.c
@@ -133,6 +133,81 @@ out:
     return rc;
 }
 
+static inline void altp2m_reset(struct p2m_domain *p2m)
+{
+    read_lock(&p2m->lock);
+
+    p2m_flush_table(p2m);
+    p2m_flush_tlb(p2m);
+
+    p2m->lowest_mapped_gfn = INVALID_GFN;
+    p2m->max_mapped_gfn = _gfn(0);
+
+    read_unlock(&p2m->lock);
+}
+
+void altp2m_propagate_change(struct domain *d,
+                             gfn_t sgfn,
+                             unsigned long nr,
+                             mfn_t smfn,
+                             uint32_t mask,
+                             p2m_type_t p2mt,
+                             p2m_access_t p2ma)
+{
+    struct p2m_domain *p2m;
+    mfn_t m;
+    unsigned int i;
+    unsigned int reset_count = 0;
+    unsigned int last_reset_idx = ~0;
+
+    if ( !altp2m_active(d) )
+        return;
+
+    altp2m_lock(d);
+
+    for ( i = 0; i < MAX_ALTP2M; i++ )
+    {
+        if ( d->arch.altp2m_vttbr[i] == INVALID_VTTBR )
+            continue;
+
+        p2m = d->arch.altp2m_p2m[i];
+
+        m = p2m_lookup_attr(p2m, sgfn, NULL, NULL, NULL, NULL);
+
+        /* Check for a dropped page that may impact this altp2m. */
+        if ( (mfn_eq(smfn, INVALID_MFN) || p2mt == p2m_invalid) &&
+             gfn_x(sgfn) >= gfn_x(p2m->lowest_mapped_gfn) &&
+             gfn_x(sgfn) <= gfn_x(p2m->max_mapped_gfn) )
+        {
+            if ( !reset_count++ )
+            {
+                altp2m_reset(p2m);
+                last_reset_idx = i;
+            }
+            else
+            {
+                /* At least 2 altp2m's impacted, so reset everything. */
+                for ( i = 0; i < MAX_ALTP2M; i++ )
+                {
+                    if ( i == last_reset_idx ||
+                         d->arch.altp2m_vttbr[i] == INVALID_VTTBR )
+                        continue;
+
+                    p2m = d->arch.altp2m_p2m[i];
+                    altp2m_reset(p2m);
+                }
+                goto out;
+            }
+        }
+        else if ( !mfn_eq(m, INVALID_MFN) )
+            modify_altp2m_range(d, p2m, sgfn, nr, smfn,
+                                mask, p2mt, p2ma);
+    }
+
+out:
+    altp2m_unlock(d);
+}
+
 static void altp2m_vcpu_reset(struct vcpu *v)
 {
     struct altp2mvcpu *av = &vcpu_altp2m(v);
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index e0a7f38..31810e6 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -992,6 +992,7 @@ static int apply_p2m_changes(struct domain *d,
     const bool_t preempt = !is_idle_vcpu(current);
     bool_t flush = false;
     bool_t flush_pt;
+    bool_t entry_written = false;
     PAGE_LIST_HEAD(free_pages);
     struct page_info *pg;
 
@@ -1112,6 +1113,7 @@ static int apply_p2m_changes(struct domain *d,
                                   &addr, &maddr, &flush,
                                   t, a);
             if ( ret < 0 ) { rc = ret ; goto out; }
+            if ( ret ) entry_written = true;
             count += ret;
 
             if ( ret != P2M_ONE_PROGRESS_NOP )
@@ -1208,6 +1210,9 @@ out:
 
     p2m_write_unlock(p2m);
 
+    if ( rc >= 0 && entry_written && p2m_is_hostp2m(p2m) )
+        altp2m_propagate_change(d, sgfn, nr, smfn, mask, t, a);
+
     if ( rc < 0 && ( op == INSERT ) &&
          addr != start_gpaddr )
     {
@@ -1331,6 +1336,15 @@ int modify_altp2m_entry(struct domain *d, struct p2m_domain *ap2m,
     return apply_p2m_changes(d, ap2m, INSERT, gfn, nr, mfn, 0, t, a);
 }
 
+int modify_altp2m_range(struct domain *d, struct p2m_domain *ap2m,
+                        gfn_t sgfn, unsigned long nr, mfn_t smfn,
+                        uint32_t m, p2m_type_t t, p2m_access_t a)
+{
+    ASSERT(p2m_is_altp2m(ap2m));
+
+    return apply_p2m_changes(d, ap2m, INSERT, sgfn, nr, smfn, m, t, a);
+}
+
 int p2m_alloc_table(struct p2m_domain *p2m)
 {
     unsigned int i;
diff --git a/xen/include/asm-arm/altp2m.h b/xen/include/asm-arm/altp2m.h
index dc41f93..9aeb7d6 100644
--- a/xen/include/asm-arm/altp2m.h
+++ b/xen/include/asm-arm/altp2m.h
@@ -81,4 +81,13 @@ int altp2m_set_mem_access(struct domain *d,
                           p2m_access_t a,
                           gfn_t gfn);
 
+/* Propagates changes made to hostp2m to affected altp2m views. */
+void altp2m_propagate_change(struct domain *d,
+                             gfn_t sgfn,
+                             unsigned long nr,
+                             mfn_t smfn,
+                             uint32_t mask,
+                             p2m_type_t p2mt,
+                             p2m_access_t p2ma);
+
 #endif /* __ASM_ARM_ALTP2M_H */
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index 9859ad1..59186c9 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -191,6 +191,11 @@ int modify_altp2m_entry(struct domain *d, struct p2m_domain *p2m,
                         paddr_t gpa, paddr_t maddr, unsigned int level,
                         p2m_type_t t, p2m_access_t a);
 
+/* Modify an altp2m view's range of entries or their attributes. */
+int modify_altp2m_range(struct domain *d, struct p2m_domain *p2m,
+                        gfn_t sgfn, unsigned long nr, mfn_t smfn,
+                        uint32_t mask, p2m_type_t t, p2m_access_t a);
+
 /* Clean & invalidate caches corresponding to a region of guest address space */
 int p2m_cache_flush(struct domain *d, gfn_t start, unsigned long nr);
 
-- 
2.9.0



* [PATCH v2 20/25] arm/altp2m: Add altp2m paging mechanism.
  2016-08-01 17:10 [PATCH v2 00/25] arm/altp2m: Introducing altp2m to ARM Sergej Proskurin
                   ` (18 preceding siblings ...)
  2016-08-01 17:10 ` [PATCH v2 19/25] arm/altp2m: Add altp2m_propagate_change Sergej Proskurin
@ 2016-08-01 17:10 ` Sergej Proskurin
  2016-08-04 13:50   ` Julien Grall
  2016-08-04 16:59   ` Julien Grall
  2016-08-01 17:10 ` [PATCH v2 21/25] arm/altp2m: Add HVMOP_altp2m_change_gfn Sergej Proskurin
                   ` (5 subsequent siblings)
  25 siblings, 2 replies; 159+ messages in thread
From: Sergej Proskurin @ 2016-08-01 17:10 UTC (permalink / raw)
  To: xen-devel; +Cc: Sergej Proskurin, Julien Grall, Stefano Stabellini

This commit adds the function "altp2m_lazy_copy", which implements the
altp2m paging mechanism. On 2nd stage translation violations caused by
instruction or data accesses, it lazily copies the hostp2m's mapping
into the currently active altp2m view. Every altp2m violation generates
a vm_event.
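
The decision logic can be sketched as a flat standalone C model (hypothetical names throughout; each table maps a small gfn index to an mfn, with MODEL_INV_MFN playing the role of INVALID_MFN):

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical flat model of the altp2m_lazy_copy() decision logic. */
#define MODEL_GFNS    8
#define MODEL_INV_MFN (-1L)

/* Returns true iff the fault was handled by copying the host mapping
 * into the active altp2m view (mirroring the 1/0 return of the real
 * function); false means the fault is left to mem_access handling. */
static bool model_lazy_copy(const long host[MODEL_GFNS],
                            long alt[MODEL_GFNS], unsigned int gfn)
{
    /* Entry already present in the altp2m view: the violation was not
     * caused by a missing altp2m entry, fall through to mem_access. */
    if ( alt[gfn] != MODEL_INV_MFN )
        return false;

    /* Not mapped by the host p2m either: nothing to copy. */
    if ( host[gfn] == MODEL_INV_MFN )
        return false;

    /* Lazily copy the host mapping into the altp2m view. */
    alt[gfn] = host[gfn];
    return true;
}
```

A second fault on the same gfn returns false, since the entry is now present in the view and only mem_access can be responsible for it.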

Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
---
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Julien Grall <julien.grall@arm.com>
---
 xen/arch/arm/altp2m.c        |  86 ++++++++++++++++++++++++++++++
 xen/arch/arm/p2m.c           |   6 +++
 xen/arch/arm/traps.c         | 124 +++++++++++++++++++++++++++++++------------
 xen/include/asm-arm/altp2m.h |  15 ++++--
 xen/include/asm-arm/p2m.h    |   6 +--
 5 files changed, 196 insertions(+), 41 deletions(-)

diff --git a/xen/arch/arm/altp2m.c b/xen/arch/arm/altp2m.c
index f3c1cff..78fc1d5 100644
--- a/xen/arch/arm/altp2m.c
+++ b/xen/arch/arm/altp2m.c
@@ -33,6 +33,32 @@ struct p2m_domain *altp2m_get_altp2m(struct vcpu *v)
     return v->domain->arch.altp2m_p2m[index];
 }
 
+bool_t altp2m_switch_vcpu_altp2m_by_id(struct vcpu *v, unsigned int idx)
+{
+    struct domain *d = v->domain;
+    bool_t rc = 0;
+
+    if ( idx >= MAX_ALTP2M )
+        return rc;
+
+    altp2m_lock(d);
+
+    if ( d->arch.altp2m_vttbr[idx] != INVALID_VTTBR )
+    {
+        if ( idx != vcpu_altp2m(v).p2midx )
+        {
+            atomic_dec(&altp2m_get_altp2m(v)->active_vcpus);
+            vcpu_altp2m(v).p2midx = idx;
+            atomic_inc(&altp2m_get_altp2m(v)->active_vcpus);
+        }
+        rc = 1;
+    }
+
+    altp2m_unlock(d);
+
+    return rc;
+}
+
 int altp2m_switch_domain_altp2m_by_id(struct domain *d, unsigned int idx)
 {
     struct vcpu *v;
@@ -133,6 +159,66 @@ out:
     return rc;
 }
 
+bool_t altp2m_lazy_copy(struct vcpu *v,
+                        paddr_t gpa,
+                        unsigned long gva,
+                        struct npfec npfec,
+                        struct p2m_domain **ap2m)
+{
+    struct domain *d = v->domain;
+    struct p2m_domain *hp2m = p2m_get_hostp2m(v->domain);
+    p2m_type_t p2mt;
+    xenmem_access_t xma;
+    gfn_t gfn = _gfn(paddr_to_pfn(gpa));
+    mfn_t mfn;
+    unsigned int level;
+    int rc = 0;
+
+    static const p2m_access_t memaccess[] = {
+#define ACCESS(ac) [XENMEM_access_##ac] = p2m_access_##ac
+        ACCESS(n),
+        ACCESS(r),
+        ACCESS(w),
+        ACCESS(rw),
+        ACCESS(x),
+        ACCESS(rx),
+        ACCESS(wx),
+        ACCESS(rwx),
+        ACCESS(rx2rw),
+        ACCESS(n2rwx),
+#undef ACCESS
+    };
+
+    *ap2m = altp2m_get_altp2m(v);
+    if ( *ap2m == NULL )
+        return 0;
+
+    /* Check if entry is part of the altp2m view */
+    mfn = p2m_lookup_attr(*ap2m, gfn, NULL, NULL, NULL, NULL);
+    if ( !mfn_eq(mfn, INVALID_MFN) )
+        goto out;
+
+    /* Check if entry is part of the host p2m view */
+    mfn = p2m_lookup_attr(hp2m, gfn, &p2mt, &level, NULL, &xma);
+    if ( mfn_eq(mfn, INVALID_MFN) )
+        goto out;
+
+    rc = modify_altp2m_entry(d, *ap2m, gpa, pfn_to_paddr(mfn_x(mfn)), level,
+                             p2mt, memaccess[xma]);
+    if ( rc )
+    {
+        gdprintk(XENLOG_ERR, "failed to set entry for %lx -> %lx p2m %lx\n",
+                (unsigned long)gpa, (unsigned long)(paddr_to_pfn(mfn_x(mfn))),
+                (unsigned long)*ap2m);
+        domain_crash(hp2m->domain);
+    }
+
+    rc = 1;
+
+out:
+    return rc;
+}
+
 static inline void altp2m_reset(struct p2m_domain *p2m)
 {
     read_lock(&p2m->lock);
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 31810e6..bee8be7 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1812,6 +1812,12 @@ void __init setup_virt_paging(void)
     smp_call_function(setup_virt_paging_one, (void *)val, 1);
 }
 
+void p2m_altp2m_check(struct vcpu *v, uint16_t idx)
+{
+    if ( altp2m_active(v->domain) )
+        altp2m_switch_vcpu_altp2m_by_id(v, idx);
+}
+
 bool_t p2m_mem_access_check(paddr_t gpa, vaddr_t gla, const struct npfec npfec)
 {
     int rc;
diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index 12be7c9..628abd7 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -48,6 +48,8 @@
 #include <asm/vgic.h>
 #include <asm/cpuerrata.h>
 
+#include <asm/altp2m.h>
+
 /* The base of the stack must always be double-word aligned, which means
  * that both the kernel half of struct cpu_user_regs (which is pushed in
  * entry.S) and struct cpu_info (which lives at the bottom of a Xen
@@ -2403,35 +2405,64 @@ static void do_trap_instr_abort_guest(struct cpu_user_regs *regs,
     int rc;
     register_t gva = READ_SYSREG(FAR_EL2);
     uint8_t fsc = hsr.iabt.ifsc & ~FSC_LL_MASK;
+    struct vcpu *v = current;
+    struct domain *d = v->domain;
+    struct p2m_domain *p2m = NULL;
+    paddr_t gpa;
+
+    if ( hpfar_is_valid(hsr.iabt.s1ptw, fsc) )
+        gpa = get_faulting_ipa(gva);
+    else
+    {
+        /*
+         * Flush the TLB to make sure the DTLB is clear before
+         * doing GVA->IPA translation. If we got here because of
+         * an entry only present in the ITLB, this translation may
+         * still be inaccurate.
+         */
+        flush_tlb_local();
+
+        rc = gva_to_ipa(gva, &gpa, GV2M_READ);
+        if ( rc == -EFAULT )
+            return; /* Try again */
+    }
 
     switch ( fsc )
     {
+    case FSC_FLT_TRANS:
+    {
+        if ( altp2m_active(d) )
+        {
+            const struct npfec npfec = {
+                .insn_fetch = 1,
+                .gla_valid = 1,
+                .kind = hsr.iabt.s1ptw ? npfec_kind_in_gpt : npfec_kind_with_gla
+            };
+
+            /*
+             * Copy the entire page of the failing instruction into the
+             * currently active altp2m view.
+             */
+            if ( altp2m_lazy_copy(v, gpa, gva, npfec, &p2m) )
+                return;
+
+            rc = p2m_mem_access_check(gpa, gva, npfec);
+
+            /* Trap was triggered by mem_access, work here is done */
+            if ( !rc )
+                return;
+        }
+
+        break;
+    }
     case FSC_FLT_PERM:
     {
-        paddr_t gpa;
         const struct npfec npfec = {
             .insn_fetch = 1,
             .gla_valid = 1,
             .kind = hsr.iabt.s1ptw ? npfec_kind_in_gpt : npfec_kind_with_gla
         };
 
-        if ( hpfar_is_valid(hsr.iabt.s1ptw, fsc) )
-            gpa = get_faulting_ipa(gva);
-        else
-        {
-            /*
-             * Flush the TLB to make sure the DTLB is clear before
-             * doing GVA->IPA translation. If we got here because of
-             * an entry only present in the ITLB, this translation may
-             * still be inaccurate.
-             */
-            flush_tlb_local();
-
-            rc = gva_to_ipa(gva, &gpa, GV2M_READ);
-            if ( rc == -EFAULT )
-                return; /* Try again */
-        }
-
         rc = p2m_mem_access_check(gpa, gva, npfec);
 
         /* Trap was triggered by mem_access, work here is done */
@@ -2451,6 +2482,8 @@ static void do_trap_data_abort_guest(struct cpu_user_regs *regs,
     int rc;
     mmio_info_t info;
     uint8_t fsc = hsr.dabt.dfsc & ~FSC_LL_MASK;
+    struct vcpu *v = current;
+    struct p2m_domain *p2m = NULL;
 
     info.dabt = dabt;
 #ifdef CONFIG_ARM_32
@@ -2459,7 +2492,7 @@ static void do_trap_data_abort_guest(struct cpu_user_regs *regs,
     info.gva = READ_SYSREG64(FAR_EL2);
 #endif
 
-    if ( hpfar_is_valid(hsr.iabt.s1ptw, fsc) )
+    if ( hpfar_is_valid(hsr.dabt.s1ptw, fsc) )
         info.gpa = get_faulting_ipa(info.gva);
     else
     {
@@ -2470,23 +2503,31 @@ static void do_trap_data_abort_guest(struct cpu_user_regs *regs,
 
     switch ( fsc )
     {
-    case FSC_FLT_PERM:
+    case FSC_FLT_TRANS:
     {
-        const struct npfec npfec = {
-            .read_access = !dabt.write,
-            .write_access = dabt.write,
-            .gla_valid = 1,
-            .kind = dabt.s1ptw ? npfec_kind_in_gpt : npfec_kind_with_gla
-        };
+        if ( altp2m_active(current->domain) )
+        {
+            const struct npfec npfec = {
+                .read_access = !dabt.write,
+                .write_access = dabt.write,
+                .gla_valid = 1,
+                .kind = dabt.s1ptw ? npfec_kind_in_gpt : npfec_kind_with_gla
+            };
 
-        rc = p2m_mem_access_check(info.gpa, info.gva, npfec);
+            /*
+             * Copy the entire page of the failing data access into the
+             * currently active altp2m view.
+             */
+            if ( altp2m_lazy_copy(v, info.gpa, info.gva, npfec, &p2m) )
+                return;
+
+            rc = p2m_mem_access_check(info.gpa, info.gva, npfec);
+
+            /* Trap was triggered by mem_access, work here is done */
+            if ( !rc )
+                return;
+        }
 
-        /* Trap was triggered by mem_access, work here is done */
-        if ( !rc )
-            return;
-        break;
-    }
-    case FSC_FLT_TRANS:
         if ( dabt.s1ptw )
             goto bad_data_abort;
 
@@ -2515,6 +2556,23 @@ static void do_trap_data_abort_guest(struct cpu_user_regs *regs,
             return;
         }
         break;
+    }
+    case FSC_FLT_PERM:
+    {
+        const struct npfec npfec = {
+            .read_access = !dabt.write,
+            .write_access = dabt.write,
+            .gla_valid = 1,
+            .kind = dabt.s1ptw ? npfec_kind_in_gpt : npfec_kind_with_gla
+        };
+
+        rc = p2m_mem_access_check(info.gpa, info.gva, npfec);
+
+        /* Trap was triggered by mem_access, work here is done */
+        if ( !rc )
+            return;
+        break;
+    }
     default:
         gprintk(XENLOG_WARNING, "Unsupported DFSC: HSR=%#x DFSC=%#x\n",
                 hsr.bits, dabt.dfsc);
diff --git a/xen/include/asm-arm/altp2m.h b/xen/include/asm-arm/altp2m.h
index 9aeb7d6..8bfbc6a 100644
--- a/xen/include/asm-arm/altp2m.h
+++ b/xen/include/asm-arm/altp2m.h
@@ -38,9 +38,7 @@ static inline bool_t altp2m_active(const struct domain *d)
 /* Alternate p2m VCPU */
 static inline uint16_t altp2m_vcpu_idx(const struct vcpu *v)
 {
-    /* Not implemented on ARM, should not be reached. */
-    BUG();
-    return 0;
+    return vcpu_altp2m(v).p2midx;
 }
 
 int altp2m_init(struct domain *d);
@@ -52,6 +50,10 @@ void altp2m_vcpu_destroy(struct vcpu *v);
 /* Get current alternate p2m table. */
 struct p2m_domain *altp2m_get_altp2m(struct vcpu *v);
 
+/* Switch alternate p2m for a single vcpu. */
+bool_t altp2m_switch_vcpu_altp2m_by_id(struct vcpu *v,
+                                       unsigned int idx);
+
 /* Switch alternate p2m for entire domain */
 int altp2m_switch_domain_altp2m_by_id(struct domain *d,
                                       unsigned int idx);
@@ -81,6 +83,13 @@ int altp2m_set_mem_access(struct domain *d,
                           p2m_access_t a,
                           gfn_t gfn);
 
+/* Alternate p2m paging mechanism. */
+bool_t altp2m_lazy_copy(struct vcpu *v,
+                        paddr_t gpa,
+                        unsigned long gla,
+                        struct npfec npfec,
+                        struct p2m_domain **ap2m);
+
 /* Propagates changes made to hostp2m to affected altp2m views. */
 void altp2m_propagate_change(struct domain *d,
                              gfn_t sgfn,
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index 59186c9..16e33ca 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -145,11 +145,7 @@ void p2m_mem_access_emulate_check(struct vcpu *v,
     /* Not supported on ARM. */
 }
 
-static inline
-void p2m_altp2m_check(struct vcpu *v, uint16_t idx)
-{
-    /* Not supported on ARM. */
-}
+void p2m_altp2m_check(struct vcpu *v, uint16_t idx);
 
 /* Initialise vmid allocator */
 void p2m_vmid_allocator_init(void);
-- 
2.9.0



* [PATCH v2 21/25] arm/altp2m: Add HVMOP_altp2m_change_gfn.
  2016-08-01 17:10 [PATCH v2 00/25] arm/altp2m: Introducing altp2m to ARM Sergej Proskurin
                   ` (19 preceding siblings ...)
  2016-08-01 17:10 ` [PATCH v2 20/25] arm/altp2m: Add altp2m paging mechanism Sergej Proskurin
@ 2016-08-01 17:10 ` Sergej Proskurin
  2016-08-04 14:04   ` Julien Grall
  2016-08-01 17:10 ` [PATCH v2 22/25] arm/altp2m: Adjust debug information to altp2m Sergej Proskurin
                   ` (4 subsequent siblings)
  25 siblings, 1 reply; 159+ messages in thread
From: Sergej Proskurin @ 2016-08-01 17:10 UTC (permalink / raw)
  To: xen-devel; +Cc: Sergej Proskurin, Julien Grall, Stefano Stabellini

This commit adds the functionality to change mfn mappings for specified
gfns in altp2m views. This mechanism can be used within the context of
VMI, e.g., to establish stealthy debugging.
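
The remapping semantics can be sketched as a standalone C model (all names hypothetical; a negative new_gfn plays the role of INVALID_GFN, MODEL_INV marks an absent entry):

```c
#include <assert.h>

/* Hypothetical model of the altp2m_change_gfn() semantics. */
#define MODEL_GFNS 8
#define MODEL_INV  (-1L)

static int model_change_gfn(const long host[MODEL_GFNS],
                            long alt[MODEL_GFNS],
                            unsigned int old_gfn, long new_gfn)
{
    long mfn;

    /* INVALID_GFN: drop any altp2m override so old_gfn falls back to
     * the host view on the next lazy copy. */
    if ( new_gfn < 0 )
    {
        alt[old_gfn] = MODEL_INV;
        return 0;
    }

    /* Prefer the altp2m's own mapping of new_gfn, else the host's. */
    mfn = (alt[new_gfn] != MODEL_INV) ? alt[new_gfn] : host[new_gfn];
    if ( mfn == MODEL_INV )
        return -1;                       /* new_gfn unmapped: -EINVAL */

    /* old_gfn now translates to new_gfn's backing mfn. */
    alt[old_gfn] = mfn;
    return 0;
}
```

For stealthy debugging, a VMI tool would typically remap the gfn holding a breakpointed page to a shadow copy, and pass INVALID_GFN later to restore the original translation.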

Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
---
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Julien Grall <julien.grall@arm.com>
---
 xen/arch/arm/altp2m.c        | 116 +++++++++++++++++++++++++++++++++++++++++++
 xen/arch/arm/hvm.c           |   7 ++-
 xen/arch/arm/p2m.c           |  14 ++++++
 xen/include/asm-arm/altp2m.h |   6 +++
 xen/include/asm-arm/p2m.h    |   4 ++
 5 files changed, 146 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/altp2m.c b/xen/arch/arm/altp2m.c
index 78fc1d5..db86c14 100644
--- a/xen/arch/arm/altp2m.c
+++ b/xen/arch/arm/altp2m.c
@@ -294,6 +294,122 @@ out:
     altp2m_unlock(d);
 }
 
+int altp2m_change_gfn(struct domain *d,
+                      unsigned int idx,
+                      gfn_t old_gfn,
+                      gfn_t new_gfn)
+{
+    struct p2m_domain *hp2m, *ap2m;
+    paddr_t old_gpa = pfn_to_paddr(gfn_x(old_gfn));
+    mfn_t mfn;
+    xenmem_access_t xma;
+    p2m_type_t p2mt;
+    unsigned int level;
+    int rc = -EINVAL;
+
+    static const p2m_access_t memaccess[] = {
+#define ACCESS(ac) [XENMEM_access_##ac] = p2m_access_##ac
+        ACCESS(n),
+        ACCESS(r),
+        ACCESS(w),
+        ACCESS(rw),
+        ACCESS(x),
+        ACCESS(rx),
+        ACCESS(wx),
+        ACCESS(rwx),
+        ACCESS(rx2rw),
+        ACCESS(n2rwx),
+#undef ACCESS
+    };
+
+    if ( idx >= MAX_ALTP2M || d->arch.altp2m_vttbr[idx] == INVALID_VTTBR )
+        return rc;
+
+    hp2m = p2m_get_hostp2m(d);
+    ap2m = d->arch.altp2m_p2m[idx];
+
+    altp2m_lock(d);
+
+    /*
+     * Flip mem_access_enabled to true when a permission is set, as to prevent
+     * allocating or inserting super-pages.
+     */
+    ap2m->mem_access_enabled = true;
+
+    mfn = p2m_lookup_attr(ap2m, old_gfn, &p2mt, &level, NULL, NULL);
+
+    /* Check whether the page needs to be reset. */
+    if ( gfn_eq(new_gfn, INVALID_GFN) )
+    {
+        /* If mfn is mapped by old_gpa, remove old_gpa from the altp2m table. */
+        if ( !mfn_eq(mfn, INVALID_MFN) )
+        {
+            rc = remove_altp2m_entry(d, ap2m, old_gpa, pfn_to_paddr(mfn_x(mfn)), level);
+            if ( rc )
+            {
+                rc = -EINVAL;
+                goto out;
+            }
+        }
+
+        rc = 0;
+        goto out;
+    }
+
+    /* Check host p2m if no valid entry in altp2m present. */
+    if ( mfn_eq(mfn, INVALID_MFN) )
+    {
+        mfn = p2m_lookup_attr(hp2m, old_gfn, &p2mt, &level, NULL, &xma);
+        if ( mfn_eq(mfn, INVALID_MFN) || (p2mt != p2m_ram_rw) )
+        {
+            rc = -EINVAL;
+            goto out;
+        }
+
+        /* If this is a superpage, copy that first. */
+        if ( level != 3 )
+        {
+            rc = modify_altp2m_entry(d, ap2m, old_gpa, pfn_to_paddr(mfn_x(mfn)),
+                                     level, p2mt, memaccess[xma]);
+            if ( rc )
+            {
+                rc = -EINVAL;
+                goto out;
+            }
+        }
+    }
+
+    mfn = p2m_lookup_attr(ap2m, new_gfn, &p2mt, &level, NULL, &xma);
+
+    /* If new_gfn is not part of altp2m, get the mapping information from hp2m */
+    if ( mfn_eq(mfn, INVALID_MFN) )
+        mfn = p2m_lookup_attr(hp2m, new_gfn, &p2mt, &level, NULL, &xma);
+
+    if ( mfn_eq(mfn, INVALID_MFN) || (p2mt != p2m_ram_rw) )
+    {
+        rc = -EINVAL;
+        goto out;
+    }
+
+    /* Set mem access attributes - currently supporting only one (4K) page. */
+    level = 3;
+    rc = modify_altp2m_entry(d, ap2m, old_gpa, pfn_to_paddr(mfn_x(mfn)),
+                             level, p2mt, memaccess[xma]);
+    if ( rc )
+    {
+        rc = -EINVAL;
+        goto out;
+    }
+
+    rc = 0;
+
+out:
+    altp2m_unlock(d);
+
+    return rc;
+}
+
+
 static void altp2m_vcpu_reset(struct vcpu *v)
 {
     struct altp2mvcpu *av = &vcpu_altp2m(v);
diff --git a/xen/arch/arm/hvm.c b/xen/arch/arm/hvm.c
index 00a244a..38b32de 100644
--- a/xen/arch/arm/hvm.c
+++ b/xen/arch/arm/hvm.c
@@ -142,7 +142,12 @@ static int do_altp2m_op(XEN_GUEST_HANDLE_PARAM(void) arg)
         break;
 
     case HVMOP_altp2m_change_gfn:
-        rc = -EOPNOTSUPP;
+        if ( a.u.change_gfn.pad1 || a.u.change_gfn.pad2 )
+            rc = -EINVAL;
+        else
+            rc = altp2m_change_gfn(d, a.u.change_gfn.view,
+                                   _gfn(a.u.change_gfn.old_gfn),
+                                   _gfn(a.u.change_gfn.new_gfn));
         break;
     }
 
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index bee8be7..2f4751b 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1321,6 +1321,20 @@ void guest_physmap_remove_page(struct domain *d,
     p2m_remove_mapping(d, p2m_get_hostp2m(d), gfn, (1 << page_order), mfn);
 }
 
+int remove_altp2m_entry(struct domain *d, struct p2m_domain *ap2m,
+                        paddr_t gpa, paddr_t maddr, unsigned int level)
+{
+    paddr_t size = level_sizes[level];
+    paddr_t mask = level_masks[level];
+    gfn_t gfn = _gfn(paddr_to_pfn(gpa & mask));
+    mfn_t mfn = _mfn(paddr_to_pfn(maddr & mask));
+    unsigned long nr = paddr_to_pfn(size);
+
+    ASSERT(p2m_is_altp2m(ap2m));
+
+    return p2m_remove_mapping(d, ap2m, gfn, nr, mfn);
+}
+
 int modify_altp2m_entry(struct domain *d, struct p2m_domain *ap2m,
                         paddr_t gpa, paddr_t maddr, unsigned int level,
                         p2m_type_t t, p2m_access_t a)
diff --git a/xen/include/asm-arm/altp2m.h b/xen/include/asm-arm/altp2m.h
index 8bfbc6a..64fbff7 100644
--- a/xen/include/asm-arm/altp2m.h
+++ b/xen/include/asm-arm/altp2m.h
@@ -99,4 +99,10 @@ void altp2m_propagate_change(struct domain *d,
                              p2m_type_t p2mt,
                              p2m_access_t p2ma);
 
+/* Change a gfn->mfn mapping */
+int altp2m_change_gfn(struct domain *d,
+                      unsigned int idx,
+                      gfn_t old_gfn,
+                      gfn_t new_gfn);
+
 #endif /* __ASM_ARM_ALTP2M_H */
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index 16e33ca..8433d66 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -182,6 +182,10 @@ mfn_t p2m_lookup_attr(struct p2m_domain *p2m, gfn_t gfn, p2m_type_t *t,
                       unsigned int *level, unsigned int *mattr,
                       xenmem_access_t *xma);
 
+/* Remove an altp2m view's entry. */
+int remove_altp2m_entry(struct domain *d, struct p2m_domain *p2m,
+                        paddr_t gpa, paddr_t maddr, unsigned int level);
+
 /* Modify an altp2m view's entry or its attributes. */
 int modify_altp2m_entry(struct domain *d, struct p2m_domain *p2m,
                         paddr_t gpa, paddr_t maddr, unsigned int level,
-- 
2.9.0



* [PATCH v2 22/25] arm/altp2m: Adjust debug information to altp2m.
  2016-08-01 17:10 [PATCH v2 00/25] arm/altp2m: Introducing altp2m to ARM Sergej Proskurin
                   ` (20 preceding siblings ...)
  2016-08-01 17:10 ` [PATCH v2 21/25] arm/altp2m: Add HVMOP_altp2m_change_gfn Sergej Proskurin
@ 2016-08-01 17:10 ` Sergej Proskurin
  2016-08-01 17:10 ` [PATCH v2 23/25] arm/altp2m: Extend libxl to activate altp2m on ARM Sergej Proskurin
                   ` (3 subsequent siblings)
  25 siblings, 0 replies; 159+ messages in thread
From: Sergej Proskurin @ 2016-08-01 17:10 UTC (permalink / raw)
  To: xen-devel; +Cc: Sergej Proskurin, Julien Grall, Stefano Stabellini

Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
---
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Julien Grall <julien.grall@arm.com>
---
v2: Dump p2m information of the hostp2m and all altp2m views.
---
 xen/arch/arm/p2m.c | 20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 2f4751b..84421b5 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -106,6 +106,26 @@ void dump_p2m_lookup(struct domain *d, paddr_t addr)
 
     dump_pt_walk(page_to_maddr(p2m->root), addr,
                  P2M_ROOT_LEVEL, P2M_ROOT_PAGES);
+    printk("\n");
+
+    if ( altp2m_active(d) )
+    {
+        unsigned int i;
+
+        for ( i = 0; i < MAX_ALTP2M; i++ )
+        {
+            if ( d->arch.altp2m_vttbr[i] == INVALID_VTTBR )
+                continue;
+
+            p2m = d->arch.altp2m_p2m[i];
+
+            printk("AP2M[%u] @ %p mfn:0x%lx\n",
+                    i, p2m->root, page_to_mfn(p2m->root));
+
+            dump_pt_walk(page_to_maddr(p2m->root), addr, P2M_ROOT_LEVEL, P2M_ROOT_PAGES);
+            printk("\n");
+        }
+    }
 }
 
 void p2m_save_state(struct vcpu *p)
-- 
2.9.0



* [PATCH v2 23/25] arm/altp2m: Extend libxl to activate altp2m on ARM.
  2016-08-01 17:10 [PATCH v2 00/25] arm/altp2m: Introducing altp2m to ARM Sergej Proskurin
                   ` (21 preceding siblings ...)
  2016-08-01 17:10 ` [PATCH v2 22/25] arm/altp2m: Adjust debug information to altp2m Sergej Proskurin
@ 2016-08-01 17:10 ` Sergej Proskurin
  2016-08-02 11:59   ` Wei Liu
  2016-08-01 17:10 ` [PATCH v2 24/25] arm/altp2m: Extend xen-access for " Sergej Proskurin
                   ` (2 subsequent siblings)
  25 siblings, 1 reply; 159+ messages in thread
From: Sergej Proskurin @ 2016-08-01 17:10 UTC (permalink / raw)
  To: xen-devel; +Cc: Sergej Proskurin, Ian Jackson, Wei Liu

The current implementation allows setting the parameter HVM_PARAM_ALTP2M.
This parameter enables further usage of altp2m on ARM. For this, we
define an additional, common altp2m field as part of the
libxl_domain_build_info struct. This field can be set on x86 and on ARM
systems through the "altp2m" switch in the domain's configuration file
(i.e., altp2m=1).

Note that the old parameter "altp2mhvm" is still valid for x86. Since
this commit marks the old parameter as deprecated, libxl will generate a
warning while processing it.
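
Assuming the new switch behaves like the other boolean guest options, a domain configuration fragment enabling altp2m on either architecture might look as follows (guest name and memory value are hypothetical):

```
# Hypothetical xl domain configuration fragment.
name   = "vmi-guest"
memory = 512

# New common switch, valid for x86 HVM and ARM guests:
altp2m = 1

# Deprecated x86-only spelling; still accepted, but libxl warns:
# altp2mhvm = 1
```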

Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
---
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Wei Liu <wei.liu2@citrix.com>
---
v2: The macro LIBXL_HAVE_ALTP2M is now valid for x86 and ARM.

    Moved the field altp2m out of info->u.pv.altp2m into the common
    field info->altp2m, where it can be accessed independent by the
    underlying architecture (x86 or ARM). Now, altp2m can be activated
    by the guest control parameter "altp2m".

    Adopted initialization routines accordingly.
---
 tools/libxl/libxl.h         |  3 ++-
 tools/libxl/libxl_create.c  |  8 +++++---
 tools/libxl/libxl_dom.c     |  4 ++--
 tools/libxl/libxl_types.idl |  4 +++-
 tools/libxl/xl_cmdimpl.c    | 26 +++++++++++++++++++++++++-
 5 files changed, 37 insertions(+), 8 deletions(-)

diff --git a/tools/libxl/libxl.h b/tools/libxl/libxl.h
index 48a43ce..a2cbd34 100644
--- a/tools/libxl/libxl.h
+++ b/tools/libxl/libxl.h
@@ -839,7 +839,8 @@ typedef struct libxl__ctx libxl_ctx;
 
 /*
  * LIBXL_HAVE_ALTP2M
- * If this is defined, then libxl supports alternate p2m functionality.
+ * If this is defined, then libxl supports alternate p2m functionality for
+ * x86 HVM and ARM PV guests.
  */
 #define LIBXL_HAVE_ALTP2M 1
 
diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
index d7db9e9..16d3b52 100644
--- a/tools/libxl/libxl_create.c
+++ b/tools/libxl/libxl_create.c
@@ -319,7 +319,6 @@ int libxl__domain_build_info_setdefault(libxl__gc *gc,
         libxl_defbool_setdefault(&b_info->u.hvm.hpet,               true);
         libxl_defbool_setdefault(&b_info->u.hvm.vpt_align,          true);
         libxl_defbool_setdefault(&b_info->u.hvm.nested_hvm,         false);
-        libxl_defbool_setdefault(&b_info->u.hvm.altp2m,             false);
         libxl_defbool_setdefault(&b_info->u.hvm.usb,                false);
         libxl_defbool_setdefault(&b_info->u.hvm.xen_platform_pci,   true);
 
@@ -406,6 +405,9 @@ int libxl__domain_build_info_setdefault(libxl__gc *gc,
             libxl_domain_type_to_string(b_info->type));
         return ERROR_INVAL;
     }
+
+    libxl_defbool_setdefault(&b_info->altp2m, false);
+
     return 0;
 }
 
@@ -901,8 +903,8 @@ static void initiate_domain_create(libxl__egc *egc,
 
     if (d_config->c_info.type == LIBXL_DOMAIN_TYPE_HVM &&
         (libxl_defbool_val(d_config->b_info.u.hvm.nested_hvm) &&
-         libxl_defbool_val(d_config->b_info.u.hvm.altp2m))) {
-        LOG(ERROR, "nestedhvm and altp2mhvm cannot be used together");
+         libxl_defbool_val(d_config->b_info.altp2m))) {
+        LOG(ERROR, "nestedhvm and altp2m cannot be used together");
         goto error_out;
     }
 
diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
index ec29060..1550ef8 100644
--- a/tools/libxl/libxl_dom.c
+++ b/tools/libxl/libxl_dom.c
@@ -291,8 +291,6 @@ static void hvm_set_conf_params(xc_interface *handle, uint32_t domid,
                     libxl_defbool_val(info->u.hvm.vpt_align));
     xc_hvm_param_set(handle, domid, HVM_PARAM_NESTEDHVM,
                     libxl_defbool_val(info->u.hvm.nested_hvm));
-    xc_hvm_param_set(handle, domid, HVM_PARAM_ALTP2M,
-                    libxl_defbool_val(info->u.hvm.altp2m));
 }
 
 int libxl__build_pre(libxl__gc *gc, uint32_t domid,
@@ -434,6 +432,8 @@ int libxl__build_pre(libxl__gc *gc, uint32_t domid,
 #endif
     }
 
+    xc_hvm_param_set(ctx->xch, domid, HVM_PARAM_ALTP2M, libxl_defbool_val(info->altp2m));
+
     rc = libxl__arch_domain_create(gc, d_config, domid);
 
     return rc;
diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
index ef614be..42e7c95 100644
--- a/tools/libxl/libxl_types.idl
+++ b/tools/libxl/libxl_types.idl
@@ -512,7 +512,6 @@ libxl_domain_build_info = Struct("domain_build_info",[
                                        ("mmio_hole_memkb",  MemKB),
                                        ("timer_mode",       libxl_timer_mode),
                                        ("nested_hvm",       libxl_defbool),
-                                       ("altp2m",           libxl_defbool),
                                        ("smbios_firmware",  string),
                                        ("acpi_firmware",    string),
                                        ("hdtype",           libxl_hdtype),
@@ -561,6 +560,9 @@ libxl_domain_build_info = Struct("domain_build_info",[
 
     ("arch_arm", Struct(None, [("gic_version", libxl_gic_version),
                               ])),
+    # Alternate p2m is not bound to any architecture or guest type, as it is
+    # supported by x86 HVM and ARM PV guests.
+    ("altp2m", libxl_defbool),
 
     ], dir=DIR_IN
 )
diff --git a/tools/libxl/xl_cmdimpl.c b/tools/libxl/xl_cmdimpl.c
index 51dc7a0..f4a49ee 100644
--- a/tools/libxl/xl_cmdimpl.c
+++ b/tools/libxl/xl_cmdimpl.c
@@ -1667,7 +1667,12 @@ static void parse_config_data(const char *config_source,
 
         xlu_cfg_get_defbool(config, "nestedhvm", &b_info->u.hvm.nested_hvm, 0);
 
-        xlu_cfg_get_defbool(config, "altp2mhvm", &b_info->u.hvm.altp2m, 0);
+        /* The config parameter "altp2mhvm" is deprecated but still accepted
+         * for legacy reasons. The config parameter "altp2m" shall be used
+         * instead. */
+        if (!xlu_cfg_get_defbool(config, "altp2mhvm", &b_info->altp2m, 0))
+            fprintf(stderr, "WARNING: Specifying \"altp2mhvm\" is deprecated. "
                    "Please use \"altp2m\" instead.\n");
 
         xlu_cfg_replace_string(config, "smbios_firmware",
                                &b_info->u.hvm.smbios_firmware, 0);
@@ -1727,6 +1732,25 @@ static void parse_config_data(const char *config_source,
         abort();
     }
 
+    bool altp2m_support = false;
+#if defined(__i386__) || defined(__x86_64__)
+    /* Alternate p2m support on x86 is available only for HVM guests. */
+    if (b_info->type == LIBXL_DOMAIN_TYPE_HVM)
+        altp2m_support = true;
+#elif defined(__arm__) || defined(__aarch64__)
+    /* Alternate p2m support on ARM is available for all guests. */
+    altp2m_support = true;
+#endif
+
+    if (altp2m_support) {
+        /* The config parameter "altp2m" replaces the parameter "altp2mhvm".
+         * For legacy reasons, both parameters are accepted on x86 HVM guests
+         * (only "altp2m" is accepted on ARM guests). If both parameters are
+         * given, the config parameter "altp2m" always takes priority over
+         * "altp2mhvm". */
+        xlu_cfg_get_defbool(config, "altp2m", &b_info->altp2m, 0);
+    }
+
     if (!xlu_cfg_get_list(config, "ioports", &ioports, &num_ioports, 0)) {
         b_info->num_ioports = num_ioports;
         b_info->ioports = calloc(num_ioports, sizeof(*b_info->ioports));
-- 
2.9.0


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply related	[flat|nested] 159+ messages in thread

* [PATCH v2 24/25] arm/altp2m: Extend xen-access for altp2m on ARM.
  2016-08-01 17:10 [PATCH v2 00/25] arm/altp2m: Introducing altp2m to ARM Sergej Proskurin
                   ` (22 preceding siblings ...)
  2016-08-01 17:10 ` [PATCH v2 23/25] arm/altp2m: Extend libxl to activate altp2m on ARM Sergej Proskurin
@ 2016-08-01 17:10 ` Sergej Proskurin
  2016-08-01 17:10 ` [PATCH v2 25/25] arm/altp2m: Add test of xc_altp2m_change_gfn Sergej Proskurin
  2016-08-01 18:15 ` [PATCH v2 00/25] arm/altp2m: Introducing altp2m to ARM Julien Grall
  25 siblings, 0 replies; 159+ messages in thread
From: Sergej Proskurin @ 2016-08-01 17:10 UTC (permalink / raw)
  To: xen-devel
  Cc: Sergej Proskurin, Tamas K Lengyel, Ian Jackson, Wei Liu, Razvan Cojocaru

Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
Acked-by: Razvan Cojocaru <rcojocaru@bitdefender.com>
---
Cc: Razvan Cojocaru <rcojocaru@bitdefender.com>
Cc: Tamas K Lengyel <tamas@tklengyel.com>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Wei Liu <wei.liu2@citrix.com>
---
 tools/tests/xen-access/xen-access.c | 27 +++++++++++++++++----------
 1 file changed, 17 insertions(+), 10 deletions(-)

diff --git a/tools/tests/xen-access/xen-access.c b/tools/tests/xen-access/xen-access.c
index ebb63b1..eafd7d6 100644
--- a/tools/tests/xen-access/xen-access.c
+++ b/tools/tests/xen-access/xen-access.c
@@ -337,8 +337,9 @@ void usage(char* progname)
 {
     fprintf(stderr, "Usage: %s [-m] <domain_id> write|exec", progname);
 #if defined(__i386__) || defined(__x86_64__)
-            fprintf(stderr, "|breakpoint|altp2m_write|altp2m_exec|debug|cpuid");
+            fprintf(stderr, "|breakpoint|debug|cpuid");
 #endif
+            fprintf(stderr, "|altp2m_write|altp2m_exec");
             fprintf(stderr,
             "\n"
             "Logs first page writes, execs, or breakpoint traps that occur on the domain.\n"
@@ -411,6 +412,15 @@ int main(int argc, char *argv[])
     {
         breakpoint = 1;
     }
+    else if ( !strcmp(argv[0], "debug") )
+    {
+        debug = 1;
+    }
+    else if ( !strcmp(argv[0], "cpuid") )
+    {
+        cpuid = 1;
+    }
+#endif
     else if ( !strcmp(argv[0], "altp2m_write") )
     {
         default_access = XENMEM_access_rx;
@@ -423,15 +433,6 @@ int main(int argc, char *argv[])
         altp2m = 1;
         memaccess = 1;
     }
-    else if ( !strcmp(argv[0], "debug") )
-    {
-        debug = 1;
-    }
-    else if ( !strcmp(argv[0], "cpuid") )
-    {
-        cpuid = 1;
-    }
-#endif
     else
     {
         usage(argv[0]);
@@ -504,12 +505,14 @@ int main(int argc, char *argv[])
             goto exit;
         }
 
+#if defined(__i386__) || defined(__x86_64__)
         rc = xc_monitor_singlestep( xch, domain_id, 1 );
         if ( rc < 0 )
         {
             ERROR("Error %d failed to enable singlestep monitoring!\n", rc);
             goto exit;
         }
+#endif
     }
 
     if ( memaccess && !altp2m )
@@ -583,7 +586,9 @@ int main(int argc, char *argv[])
                 rc = xc_altp2m_switch_to_view( xch, domain_id, 0 );
                 rc = xc_altp2m_destroy_view(xch, domain_id, altp2m_view_id);
                 rc = xc_altp2m_set_domain_state(xch, domain_id, 0);
+#if defined(__i386__) || defined(__x86_64__)
                 rc = xc_monitor_singlestep(xch, domain_id, 0);
+#endif
             } else {
                 rc = xc_set_mem_access(xch, domain_id, XENMEM_access_rwx, ~0ull, 0);
                 rc = xc_set_mem_access(xch, domain_id, XENMEM_access_rwx, START_PFN,
@@ -773,9 +778,11 @@ int main(int argc, char *argv[])
 exit:
     if ( altp2m )
     {
+#if defined(__i386__) || defined(__x86_64__)
         uint32_t vcpu_id;
         for ( vcpu_id = 0; vcpu_id<XEN_LEGACY_MAX_VCPUS; vcpu_id++)
             rc = control_singlestep(xch, domain_id, vcpu_id, 0);
+#endif
     }
 
     /* Tear down domain xenaccess */
-- 
2.9.0



* [PATCH v2 25/25] arm/altp2m: Add test of xc_altp2m_change_gfn.
  2016-08-01 17:10 [PATCH v2 00/25] arm/altp2m: Introducing altp2m to ARM Sergej Proskurin
                   ` (23 preceding siblings ...)
  2016-08-01 17:10 ` [PATCH v2 24/25] arm/altp2m: Extend xen-access for " Sergej Proskurin
@ 2016-08-01 17:10 ` Sergej Proskurin
  2016-08-02  9:14   ` Razvan Cojocaru
  2016-08-01 18:15 ` [PATCH v2 00/25] arm/altp2m: Introducing altp2m to ARM Julien Grall
  25 siblings, 1 reply; 159+ messages in thread
From: Sergej Proskurin @ 2016-08-01 17:10 UTC (permalink / raw)
  To: xen-devel
  Cc: Sergej Proskurin, Tamas K Lengyel, Ian Jackson, Wei Liu, Razvan Cojocaru

This commit extends xen-access with a simple test of the functionality
provided by "xc_altp2m_change_gfn". The idea is to dynamically remap a
trapping gfn to another mfn that holds the same content as the
original mfn.

Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
---
Cc: Razvan Cojocaru <rcojocaru@bitdefender.com>
Cc: Tamas K Lengyel <tamas@tklengyel.com>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Wei Liu <wei.liu2@citrix.com>
---
 tools/tests/xen-access/xen-access.c | 135 +++++++++++++++++++++++++++++++++++-
 1 file changed, 132 insertions(+), 3 deletions(-)

diff --git a/tools/tests/xen-access/xen-access.c b/tools/tests/xen-access/xen-access.c
index eafd7d6..39b7ddf 100644
--- a/tools/tests/xen-access/xen-access.c
+++ b/tools/tests/xen-access/xen-access.c
@@ -38,6 +38,7 @@
 #include <sys/mman.h>
 #include <sys/poll.h>
 
+#define XC_WANT_COMPAT_MAP_FOREIGN_API
 #include <xenctrl.h>
 #include <xenevtchn.h>
 #include <xen/vm_event.h>
@@ -49,6 +50,8 @@
 #define START_PFN 0ULL
 #endif
 
+#define INVALID_GFN ~(0UL)
+
 #define DPRINTF(a, b...) fprintf(stderr, a, ## b)
 #define ERROR(a, b...) fprintf(stderr, a "\n", ## b)
 #define PERROR(a, b...) fprintf(stderr, a ": %s\n", ## b, strerror(errno))
@@ -72,9 +75,14 @@ typedef struct xenaccess {
     xen_pfn_t max_gpfn;
 
     vm_event_t vm_event;
+
+    unsigned int ap2m_idx;
+    xen_pfn_t gfn_old;
+    xen_pfn_t gfn_new;
 } xenaccess_t;
 
 static int interrupted;
+static int gfn_changed = 0;
 bool evtchn_bind = 0, evtchn_open = 0, mem_access_enable = 0;
 
 static void close_handler(int sig)
@@ -82,6 +90,94 @@ static void close_handler(int sig)
     interrupted = sig;
 }
 
+static int copy_gfn(xc_interface *xch, domid_t domain_id,
+                    xen_pfn_t dst_gfn, xen_pfn_t src_gfn)
+{
+    void *src_vaddr = NULL;
+    void *dst_vaddr = NULL;
+
+    src_vaddr = xc_map_foreign_range(xch, domain_id, XC_PAGE_SIZE,
+                                     PROT_READ, src_gfn);
+    if ( src_vaddr == MAP_FAILED || src_vaddr == NULL)
+    {
+        return -1;
+    }
+
+    dst_vaddr = xc_map_foreign_range(xch, domain_id, XC_PAGE_SIZE,
+                                     PROT_WRITE, dst_gfn);
+    if ( dst_vaddr == MAP_FAILED || dst_vaddr == NULL)
+    {
+        munmap(src_vaddr, XC_PAGE_SIZE);
+        return -1;
+    }
+
+    memcpy(dst_vaddr, src_vaddr, XC_PAGE_SIZE);
+
+    munmap(src_vaddr, XC_PAGE_SIZE);
+    munmap(dst_vaddr, XC_PAGE_SIZE);
+
+    return 0;
+}
+
+/*
+ * This function allocates and populates a page in the guest's physmap that is
+ * subsequently filled with contents of the trapping address. Finally, through
+ * the invocation of xc_altp2m_change_gfn, the altp2m subsystem changes the gfn
+ * to mfn mapping of the target altp2m view.
+ */
+static int xenaccess_change_gfn(xc_interface *xch,
+                                domid_t domain_id,
+                                unsigned int ap2m_idx,
+                                xen_pfn_t gfn_old,
+                                xen_pfn_t *gfn_new)
+{
+    int rc;
+
+    /*
+     * This function is called only once, as it is intended for testing
+     * and demonstration purposes. Thus, we signal that further
+     * altp2m-related traps will not change trapping gfns.
+     */
+    gfn_changed = 1;
+
+    rc = xc_domain_increase_reservation_exact(xch, domain_id, 1, 0, 0, gfn_new);
+    if ( rc < 0 )
+        return -1;
+
+    rc = xc_domain_populate_physmap_exact(xch, domain_id, 1, 0, 0, gfn_new);
+    if ( rc < 0 )
+        return -1;
+
+    /* Copy content of the old gfn into the newly allocated gfn */
+    rc = copy_gfn(xch, domain_id, *gfn_new, gfn_old);
+    if ( rc < 0 )
+        return -1;
+
+    xc_altp2m_change_gfn(xch, domain_id, ap2m_idx, gfn_old, *gfn_new);
+
+    return 0;
+}
+
+static int xenaccess_reset_gfn(xc_interface *xch,
+                               domid_t domain_id,
+                               unsigned int ap2m_idx,
+                               xen_pfn_t gfn_old,
+                               xen_pfn_t gfn_new)
+{
+    int rc;
+
+    /* Reset previous state */
+    xc_altp2m_change_gfn(xch, domain_id, ap2m_idx, gfn_old, INVALID_GFN);
+
+    /* Invalidate the new gfn */
+    xc_altp2m_change_gfn(xch, domain_id, ap2m_idx, gfn_new, INVALID_GFN);
+
+    rc = xc_domain_decrease_reservation_exact(xch, domain_id, 1, 0, &gfn_new);
+    if ( rc < 0 )
+        return -1;
+
+    return 0;
+}
+
 int xc_wait_for_event_or_timeout(xc_interface *xch, xenevtchn_handle *xce, unsigned long ms)
 {
     struct pollfd fd = { .fd = xenevtchn_fd(xce), .events = POLLIN | POLLERR };
@@ -227,6 +323,10 @@ xenaccess_t *xenaccess_init(xc_interface **xch_r, domid_t domain_id)
     }
     mem_access_enable = 1;
 
+    xenaccess->ap2m_idx = ~(0);
+    xenaccess->gfn_old = INVALID_GFN;
+    xenaccess->gfn_new = INVALID_GFN;
+
     /* Open event channel */
     xenaccess->vm_event.xce_handle = xenevtchn_open(NULL, 0);
     if ( xenaccess->vm_event.xce_handle == NULL )
@@ -662,10 +762,27 @@ int main(int argc, char *argv[])
 
                 if ( altp2m && req.flags & VM_EVENT_FLAG_ALTERNATE_P2M)
                 {
-                    DPRINTF("\tSwitching back to default view!\n");
-
                     rsp.flags |= (VM_EVENT_FLAG_ALTERNATE_P2M | VM_EVENT_FLAG_TOGGLE_SINGLESTEP);
-                    rsp.altp2m_idx = 0;
+
+                    if ( !gfn_changed )
+                    {
+                        /* Store trapping gfn and ap2m index for cleanup. */
+                        xenaccess->gfn_old = req.u.mem_access.gfn;
+                        xenaccess->ap2m_idx = req.altp2m_idx;
+
+                        /* Note that this function is called only once. */
+                        xenaccess_change_gfn(xenaccess->xc_handle, domain_id, req.altp2m_idx,
+                                             xenaccess->gfn_old, &xenaccess->gfn_new);
+
+                        /* Do not change the currently active altp2m view, yet. */
+                        rsp.altp2m_idx = req.altp2m_idx;
+                    }
+                    else
+                    {
+                        DPRINTF("\tSwitching back to default view!\n");
+
+                        rsp.altp2m_idx = 0;
+                    }
                 }
                 else if ( default_access != after_first_access )
                 {
@@ -783,6 +900,18 @@ exit:
         for ( vcpu_id = 0; vcpu_id<XEN_LEGACY_MAX_VCPUS; vcpu_id++)
             rc = control_singlestep(xch, domain_id, vcpu_id, 0);
 #endif
+
+        /* Reset changed gfn. */
+        if ( xenaccess->gfn_new != INVALID_GFN )
+        {
+            rc = xenaccess_reset_gfn(xenaccess->xc_handle, xenaccess->vm_event.domain_id,
+                                     xenaccess->ap2m_idx, xenaccess->gfn_old, xenaccess->gfn_new);
+            if ( rc != 0 )
+            {
+                ERROR("Error resetting the remapped gfn");
+                return rc;
+            }
+        }
     }
 
     /* Tear down domain xenaccess */
-- 
2.9.0



* Re: [PATCH v2 02/25] arm/altp2m: Add HVMOP_altp2m_get_domain_state.
  2016-08-01 17:10 ` [PATCH v2 02/25] arm/altp2m: Add HVMOP_altp2m_get_domain_state Sergej Proskurin
@ 2016-08-01 17:21   ` Andrew Cooper
  2016-08-01 17:34     ` Sergej Proskurin
  0 siblings, 1 reply; 159+ messages in thread
From: Andrew Cooper @ 2016-08-01 17:21 UTC (permalink / raw)
  To: Sergej Proskurin, xen-devel; +Cc: Julien Grall, Stefano Stabellini

On 01/08/16 18:10, Sergej Proskurin wrote:
> diff --git a/xen/include/asm-arm/altp2m.h b/xen/include/asm-arm/altp2m.h
> index 0711796..d47b249 100644
> --- a/xen/include/asm-arm/altp2m.h
> +++ b/xen/include/asm-arm/altp2m.h
> @@ -22,6 +22,8 @@
>  
>  #include <xen/sched.h>
>  
> +#define altp2m_enabled(d) ((d)->arch.hvm_domain.params[HVM_PARAM_ALTP2M])

This macro expects to be used as a predicate, but may return non-boolean
values.  You should use
(!!(d)->arch.hvm_domain.params[HVM_PARAM_ALTP2M]) to make sure it is
strictly 1 or 0 being returned.

~Andrew


* Re: [PATCH v2 02/25] arm/altp2m: Add HVMOP_altp2m_get_domain_state.
  2016-08-01 17:21   ` Andrew Cooper
@ 2016-08-01 17:34     ` Sergej Proskurin
  0 siblings, 0 replies; 159+ messages in thread
From: Sergej Proskurin @ 2016-08-01 17:34 UTC (permalink / raw)
  To: Andrew Cooper, xen-devel; +Cc: Julien Grall, Stefano Stabellini

Hi Andrew,

On 08/01/2016 07:21 PM, Andrew Cooper wrote:
> On 01/08/16 18:10, Sergej Proskurin wrote:
>> diff --git a/xen/include/asm-arm/altp2m.h b/xen/include/asm-arm/altp2m.h
>> index 0711796..d47b249 100644
>> --- a/xen/include/asm-arm/altp2m.h
>> +++ b/xen/include/asm-arm/altp2m.h
>> @@ -22,6 +22,8 @@
>>  
>>  #include <xen/sched.h>
>>  
>> +#define altp2m_enabled(d) ((d)->arch.hvm_domain.params[HVM_PARAM_ALTP2M])
> 
> This macro expects to be used as a predicate, but may return non-boolean
> values.  You should use
> (!!(d)->arch.hvm_domain.params[HVM_PARAM_ALTP2M]) to make sure it is
> strictly 1 or 0 being returned.
> 

I will fix that, thank you.

Best regards,
~Sergej


* Re: [PATCH v2 00/25] arm/altp2m: Introducing altp2m to ARM.
  2016-08-01 17:10 [PATCH v2 00/25] arm/altp2m: Introducing altp2m to ARM Sergej Proskurin
                   ` (24 preceding siblings ...)
  2016-08-01 17:10 ` [PATCH v2 25/25] arm/altp2m: Add test of xc_altp2m_change_gfn Sergej Proskurin
@ 2016-08-01 18:15 ` Julien Grall
  2016-08-01 19:20   ` Tamas K Lengyel
  2016-08-01 23:14   ` Andrew Cooper
  25 siblings, 2 replies; 159+ messages in thread
From: Julien Grall @ 2016-08-01 18:15 UTC (permalink / raw)
  To: Sergej Proskurin, xen-devel, Stefano Stabellini

On 01/08/16 18:10, Sergej Proskurin wrote:
>
> Hello all,

Hello Sergej,

> The following patch series can be found on Github[0] and is part of my
> contribution to this year's Google Summer of Code (GSoC)[1]. My project is
> managed by the organization The Honeynet Project. As part of GSoC, I am being
> supervised by the Xen developer Tamas K. Lengyel <tamas@tklengyel.com>, George
> D. Webster, and Steven Maresca.
>
> In this patch series, we provide an implementation of the altp2m subsystem for
> ARM. Our implementation is based on the altp2m subsystem for x86, providing
> additional --alternate-- views on the guest's physical memory by means of the
> ARM 2nd stage translation mechanism. The patches introduce new HVMOPs and
> extend the p2m subsystem. Also, we extend libxl to support altp2m on ARM and
> modify xen-access to test the suggested functionality.
>
> To be more precise, altp2m allows to create and switch to additional p2m views
> (i.e. gfn to mfn mappings). These views can be manipulated and activated as
> will through the provided HVMOPs. In this way, the active guest instance in
> question can seamlessly proceed execution without noticing that anything has
> changed. The prime scope of application of altp2m is Virtual Machine
> Introspection, where guest systems are analyzed from the outside of the VM.
>
> Altp2m can be activated by means of the guest control parameter "altp2m" on x86
> and ARM architectures.  The altp2m functionality by default can also be used
> from within the guest by design. For use-cases requiring purely external access
> to altp2m, a custom XSM policy is necessary on both x86 and ARM.

As said on the previous version, altp2m operations *should not* be 
exposed to ARM guests. Any design written for x86 may not fit exactly for 
ARM (and vice versa), so you will need to explain why you think we should 
follow the same pattern.

Speaking about security, I skimmed through the series and noticed that a 
lot of my previous comments have not been addressed. For instance, there 
is still no locking on the altp2m operations, and a guest could disable 
altp2m.

I will give a look to the rest of the series once this is fixed.

Regards,

> The current code-base is based on Julien Grall's branch abort-handlers-v2[2].
>
> [0] https://github.com/sergej-proskurin/xen (branch arm-altp2m-v2)
> [1] https://summerofcode.withgoogle.com/projects/#4970052843470848
> [2] git://xenbits.xen.org/people/julieng/xen-unstable.git (branch abort-handlers-v2)
>
>
> Sergej Proskurin (25):
>   arm/altp2m: Add first altp2m HVMOP stubs.
>   arm/altp2m: Add HVMOP_altp2m_get_domain_state.
>   arm/altp2m: Add struct vttbr.
>   arm/altp2m: Move hostp2m init/teardown to individual functions.
>   arm/altp2m: Rename and extend p2m_alloc_table.
>   arm/altp2m: Cosmetic fixes - function prototypes.
>   arm/altp2m: Add altp2m init/teardown routines.
>   arm/altp2m: Add HVMOP_altp2m_set_domain_state.
>   arm/altp2m: Add altp2m table flushing routine.
>   arm/altp2m: Add HVMOP_altp2m_create_p2m.
>   arm/altp2m: Add HVMOP_altp2m_destroy_p2m.
>   arm/altp2m: Add HVMOP_altp2m_switch_p2m.
>   arm/altp2m: Make p2m_restore_state ready for altp2m.
>   arm/altp2m: Make get_page_from_gva ready for altp2m.
>   arm/altp2m: Extend __p2m_lookup.
>   arm/altp2m: Make p2m_mem_access_check ready for altp2m.
>   arm/altp2m: Cosmetic fixes - function prototypes.
>   arm/altp2m: Add HVMOP_altp2m_set_mem_access.
>   arm/altp2m: Add altp2m_propagate_change.
>   arm/altp2m: Add altp2m paging mechanism.
>   arm/altp2m: Add HVMOP_altp2m_change_gfn.
>   arm/altp2m: Adjust debug information to altp2m.
>   arm/altp2m: Extend libxl to activate altp2m on ARM.
>   arm/altp2m: Extend xen-access for altp2m on ARM.
>   arm/altp2m: Add test of xc_altp2m_change_gfn.
>
>  tools/libxl/libxl.h                 |   3 +-
>  tools/libxl/libxl_create.c          |   8 +-
>  tools/libxl/libxl_dom.c             |   4 +-
>  tools/libxl/libxl_types.idl         |   4 +-
>  tools/libxl/xl_cmdimpl.c            |  26 +-
>  tools/tests/xen-access/xen-access.c | 162 ++++++++-
>  xen/arch/arm/Makefile               |   1 +
>  xen/arch/arm/altp2m.c               | 675 ++++++++++++++++++++++++++++++++++++
>  xen/arch/arm/hvm.c                  | 129 +++++++
>  xen/arch/arm/p2m.c                  | 430 ++++++++++++++++++-----
>  xen/arch/arm/traps.c                | 126 +++++--
>  xen/include/asm-arm/altp2m.h        |  79 ++++-
>  xen/include/asm-arm/domain.h        |  16 +
>  xen/include/asm-arm/flushtlb.h      |   4 +
>  xen/include/asm-arm/p2m.h           |  68 +++-
>  xen/include/asm-arm/processor.h     |  16 +
>  16 files changed, 1594 insertions(+), 157 deletions(-)
>  create mode 100644 xen/arch/arm/altp2m.c
>

-- 
Julien Grall


* Re: [PATCH v2 00/25] arm/altp2m: Introducing altp2m to ARM.
  2016-08-01 18:15 ` [PATCH v2 00/25] arm/altp2m: Introducing altp2m to ARM Julien Grall
@ 2016-08-01 19:20   ` Tamas K Lengyel
  2016-08-01 19:55     ` Julien Grall
  2016-08-01 23:14   ` Andrew Cooper
  1 sibling, 1 reply; 159+ messages in thread
From: Tamas K Lengyel @ 2016-08-01 19:20 UTC (permalink / raw)
  To: Julien Grall
  Cc: Stefano Stabellini, George Dunlap, Sergej Proskurin, Tim Deegan,
	Jan Beulich, Andrew Cooper, Xen-devel

On Mon, Aug 1, 2016 at 12:15 PM, Julien Grall <julien.grall@arm.com> wrote:
> On 01/08/16 18:10, Sergej Proskurin wrote:
>>
>>
>> Hello all,
>
>
> Hello Sergej,
>
>> The following patch series can be found on Github[0] and is part of my
>> contribution to this year's Google Summer of Code (GSoC)[1]. My project is
>> managed by the organization The Honeynet Project. As part of GSoC, I am
>> being
>> supervised by the Xen developer Tamas K. Lengyel <tamas@tklengyel.com>,
>> George
>> D. Webster, and Steven Maresca.
>>
>> In this patch series, we provide an implementation of the altp2m subsystem
>> for
>> ARM. Our implementation is based on the altp2m subsystem for x86,
>> providing
>> additional --alternate-- views on the guest's physical memory by means of
>> the
>> ARM 2nd stage translation mechanism. The patches introduce new HVMOPs and
>> extend the p2m subsystem. Also, we extend libxl to support altp2m on ARM
>> and
>> modify xen-access to test the suggested functionality.
>>
>> To be more precise, altp2m allows to create and switch to additional p2m
>> views
>> (i.e. gfn to mfn mappings). These views can be manipulated and activated
>> as
>> will through the provided HVMOPs. In this way, the active guest instance
>> in
>> question can seamlessly proceed execution without noticing that anything
>> has
>> changed. The prime scope of application of altp2m is Virtual Machine
>> Introspection, where guest systems are analyzed from the outside of the
>> VM.
>>
>> Altp2m can be activated by means of the guest control parameter "altp2m"
>> on x86
>> and ARM architectures.  The altp2m functionality by default can also be
>> used
>> from within the guest by design. For use-cases requiring purely external
>> access
>> to altp2m, a custom XSM policy is necessary on both x86 and ARM.
>
>
> As said on the previous version, altp2m operation *should not* be exposed to
> ARM guest. Any design written for x86 may not fit exactly for ARM (and vice
> versa), you will need to explain why you think we should follow the same
> pattern.
>
> Speaking about security, I skimmed through the series and noticed that a lot
> of my previous comments have not been addressed. For instance there are
> still no locking on the altp2m operations and a guest could disable altp2m.
>
> I will give a look to the rest of the series once this is fixed.
>

Julien,
we did discuss whether altp2m on ARM should be exposed to guests or
not, but we did not agree on whether restricting it on ARM is absolutely
necessary. Altp2m was designed, even on x86, to be accessible from
within the guest on all systems, irrespective of actual hardware
support for it. Thus, this design fits ARM as well, where there is no
dedicated hardware support; from the altp2m perspective there is no
difference.
itself is *not* a security issue - it is by design. There are usecases
where the guest is allowed to use altp2m on itself - like having a
security kernel module managing VMM memory permissions from within the
guest. For use-cases where such operations are undesirable - like in a
purely external usecase - the in-guest operations can be readily
restricted by XSM. Thus, it is unreasonable to design a completely
separate altp2m mode for ARM when the current design from x86 works
just as well.

If you consider this design unacceptable for ARM we will need the
other Maintainers' input on the topic as well before we proceed. It
would be possible to introduce another set of domctl's for altp2m and
only implement that for ARM, but unless there is strong consensus
about that being absolutely necessary, changing the design of altp2m
should be avoided. I certainly see no reason to do that right now.

As for locking on altp2m paths, we haven't encountered any issues with
the current version during our tests but it is possible we missed
something. If you have spotted missed locking, please point it out.

Thanks,
Tamas


* Re: [PATCH v2 00/25] arm/altp2m: Introducing altp2m to ARM.
  2016-08-01 19:20   ` Tamas K Lengyel
@ 2016-08-01 19:55     ` Julien Grall
  2016-08-01 20:35       ` Sergej Proskurin
  2016-08-01 20:41       ` Tamas K Lengyel
  0 siblings, 2 replies; 159+ messages in thread
From: Julien Grall @ 2016-08-01 19:55 UTC (permalink / raw)
  To: Tamas K Lengyel
  Cc: Stefano Stabellini, George Dunlap, Sergej Proskurin, Tim Deegan,
	Jan Beulich, Andrew Cooper, Xen-devel



On 01/08/2016 20:20, Tamas K Lengyel wrote:
> On Mon, Aug 1, 2016 at 12:15 PM, Julien Grall <julien.grall@arm.com> wrote:
>> On 01/08/16 18:10, Sergej Proskurin wrote:
>>>
>>>
>>> Hello all,
>>
>>
>> Hello Sergej,
>>
>>> The following patch series can be found on Github[0] and is part of my
>>> contribution to this year's Google Summer of Code (GSoC)[1]. My project is
>>> managed by the organization The Honeynet Project. As part of GSoC, I am
>>> being
>>> supervised by the Xen developer Tamas K. Lengyel <tamas@tklengyel.com>,
>>> George
>>> D. Webster, and Steven Maresca.
>>>
>>> In this patch series, we provide an implementation of the altp2m subsystem
>>> for
>>> ARM. Our implementation is based on the altp2m subsystem for x86,
>>> providing
>>> additional --alternate-- views on the guest's physical memory by means of
>>> the
>>> ARM 2nd stage translation mechanism. The patches introduce new HVMOPs and
>>> extend the p2m subsystem. Also, we extend libxl to support altp2m on ARM
>>> and
>>> modify xen-access to test the suggested functionality.
>>>
>>> To be more precise, altp2m allows to create and switch to additional p2m
>>> views
>>> (i.e. gfn to mfn mappings). These views can be manipulated and activated
>>> as
>>> will through the provided HVMOPs. In this way, the active guest instance
>>> in
>>> question can seamlessly proceed execution without noticing that anything
>>> has
>>> changed. The prime scope of application of altp2m is Virtual Machine
>>> Introspection, where guest systems are analyzed from the outside of the
>>> VM.
>>>
>>> Altp2m can be activated by means of the guest control parameter "altp2m"
>>> on x86
>>> and ARM architectures.  The altp2m functionality by default can also be
>>> used
>>> from within the guest by design. For use-cases requiring purely external
>>> access
>>> to altp2m, a custom XSM policy is necessary on both x86 and ARM.
>>
>>
>> As said on the previous version, altp2m operation *should not* be exposed to
>> ARM guest. Any design written for x86 may not fit exactly for ARM (and vice
>> versa), you will need to explain why you think we should follow the same
>> pattern.
>>
>> Speaking about security, I skimmed through the series and noticed that a lot
>> of my previous comments have not been addressed. For instance there are
>> still no locking on the altp2m operations and a guest could disable altp2m.
>>
>> I will give a look to the rest of the series once this is fixed.
>>
>
> Julien,
> we did discuss whether altp2m on ARM should be exposed to guests or
> not but we did not agree whether restricting it on ARM is absolutely
> necessary. Altp2m was designed even on the x86 to be accessible from
> within the guest on all systems irrespective of actual hardware
> support for it.  Thus, this design fits ARM as well where there is no
> dedicated hardware support, from the altp2m perspective there is no
> difference.

Really? I looked at the design document [1], which is Intel-focused. 
The same goes for the code (see p2m_flush_altp2m in arch/x86/mm/p2m.c).

> The fact that a guest can disable altp2m by default on
> itself is *not* a security issue - it is by design. There are usecases
> where the guest is allowed to use altp2m on itself - like having a
> security kernel module managing VMM memory permissions from within the
> guest. For use-cases where such operations are undesirable - like in a
> purely external usecase - the in-guest operations can be readily
> restricted by XSM. Thus, it is unreasonable to design a completely
> separate altp2m mode for ARM when the current design from x86 works
> just as well.

x86 does not allow disabling the altp2m feature via setparam. Please 
look at the code (see hvmop_set_param in arch/x86/hvm/hvm.c). This is 
clearly an issue that needs to be fixed for ARM, which I already pointed 
out in the previous version.

>
> If you consider this design unacceptable for ARM we will need the
> other Maintainers' input on the topic as well before we proceed. It
> would be possible to introduce another set of domctl's for altp2m and
> only implement that for ARM, but unless there is strong consensus
> about that being absolutely necessary, changing the design of altp2m
> should be avoided. I certainly see no reason to do that right now.

I don't see any reason to expose the altp2m feature to the guest, 
regardless of how x86 does it. We have diverged on some features between 
ARM and x86 where they do not fit (I have in mind the pirq to event 
channel code). Some of the hypercalls are not implemented on ARM for 
guests...

>
> As for locking on altp2m paths, we haven't encountered any issues with
> the current version during our tests but it is possible we missed
> something. If you have spotted missed locking, please point it out.

I already pointed them out in the previous series. Please read the 
comments I made there.

Regards,

[1] 
https://lists.xenproject.org/archives/html/xen-devel/2015-06/msg01319.html

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 159+ messages in thread

* Re: [PATCH v2 00/25] arm/altp2m: Introducing altp2m to ARM.
  2016-08-01 19:55     ` Julien Grall
@ 2016-08-01 20:35       ` Sergej Proskurin
  2016-08-01 20:41       ` Tamas K Lengyel
  1 sibling, 0 replies; 159+ messages in thread
From: Sergej Proskurin @ 2016-08-01 20:35 UTC (permalink / raw)
  To: Julien Grall, Tamas K Lengyel
  Cc: Stefano Stabellini, George Dunlap, Andrew Cooper, Tim Deegan,
	Jan Beulich, Xen-devel

Hi Julien,

On 08/01/2016 09:55 PM, Julien Grall wrote:
> 
> 
> On 01/08/2016 20:20, Tamas K Lengyel wrote:
>> On Mon, Aug 1, 2016 at 12:15 PM, Julien Grall <julien.grall@arm.com>
>> wrote:
>>> On 01/08/16 18:10, Sergej Proskurin wrote:
>>>>
>>>>
>>>> Hello all,
>>>
>>>
>>> Hello Sergej,
>>>
>>>> The following patch series can be found on Github[0] and is part of my
>>>> contribution to this year's Google Summer of Code (GSoC)[1]. My
>>>> project is
>>>> managed by the organization The Honeynet Project. As part of GSoC, I am
>>>> being
>>>> supervised by the Xen developer Tamas K. Lengyel <tamas@tklengyel.com>,
>>>> George
>>>> D. Webster, and Steven Maresca.
>>>>
>>>> In this patch series, we provide an implementation of the altp2m
>>>> subsystem
>>>> for
>>>> ARM. Our implementation is based on the altp2m subsystem for x86,
>>>> providing
>>>> additional --alternate-- views on the guest's physical memory by
>>>> means of
>>>> the
>>>> ARM 2nd stage translation mechanism. The patches introduce new
>>>> HVMOPs and
>>>> extend the p2m subsystem. Also, we extend libxl to support altp2m on
>>>> ARM
>>>> and
>>>> modify xen-access to test the suggested functionality.
>>>>
>>>> To be more precise, altp2m allows to create and switch to additional
>>>> p2m
>>>> views
>>>> (i.e. gfn to mfn mappings). These views can be manipulated and
>>>> activated
>>>> as
>>>> will through the provided HVMOPs. In this way, the active guest
>>>> instance
>>>> in
>>>> question can seamlessly proceed execution without noticing that
>>>> anything
>>>> has
>>>> changed. The prime scope of application of altp2m is Virtual Machine
>>>> Introspection, where guest systems are analyzed from the outside of the
>>>> VM.
>>>>
>>>> Altp2m can be activated by means of the guest control parameter
>>>> "altp2m"
>>>> on x86
>>>> and ARM architectures.  The altp2m functionality by default can also be
>>>> used
>>>> from within the guest by design. For use-cases requiring purely
>>>> external
>>>> access
>>>> to altp2m, a custom XSM policy is necessary on both x86 and ARM.
>>>
>>>
>>> As said on the previous version, altp2m operation *should not* be
>>> exposed to
>>> ARM guest. Any design written for x86 may not fit exactly for ARM
>>> (and vice
>>> versa), you will need to explain why you think we should follow the same
>>> pattern.
>>>
>>> Speaking about security, I skimmed through the series and noticed
>>> that a lot
>>> of my previous comments have not been addressed. For instance there are
>>> still no locking on the altp2m operations and a guest could disable
>>> altp2m.
>>>
>>> I will give a look to the rest of the series once this is fixed.
>>>
>>
>> Julien,
>> we did discuss whether altp2m on ARM should be exposed to guests or
>> not but we did not agree whether restricting it on ARM is absolutely
>> necessary. Altp2m was designed even on the x86 to be accessible from
>> within the guest on all systems irrespective of actual hardware
>> support for it.  Thus, this design fits ARM as well where there is no
>> dedicated hardware support, from the altp2m perspective there is no
>> difference.
> 
> Really? I looked at the design document [1] which is Intel focus.
> Similar think to the code (see p2m_flush_altp2m in arch/x86/mm/p2m.c).
> 
>> The fact that a guest can disable altp2m by default on
>> itself is *not* a security issue - it is by design. There are usecases
>> where the guest is allowed to use altp2m on itself - like having a
>> security kernel module managing VMM memory permissions from within the
>> guest. For use-cases where such operations are undesirable - like in a
>> purely external usecase - the in-guest operations can be readily
>> restricted by XSM. Thus, it is unreasonable to design a completely
>> separate altp2m mode for ARM when the current design from x86 works
>> just as well.
> 
> x86 does not allow to disable the altp2m feature via setparam. Please
> look at the code (see hvmop_set_param in arch/x86/hvm/hvm.c). It is
> clearly an issue that needs to be fixed for ARM which I already pointed
> out in the previous version.
> 

Thank you for pointing that out. The x86 implementation provides an
explicit filter controlling which HVM params a guest domain is allowed
to set, and it thereby restricts re-setting of the altp2m HVM param.

We have indeed missed that check. I will provide a fix as part of the
next series.

>>
>> If you consider this design unacceptable for ARM we will need the
>> other Maintainers' input on the topic as well before we proceed. It
>> would be possible to introduce another set of domctl's for altp2m and
>> only implement that for ARM, but unless there is strong consensus
>> about that being absolutely necessary, changing the design of altp2m
>> should be avoided. I certainly see no reason to do that right now.
> 
> I don't see any reason to expose the altp2m feature to the guest
> regardless how x86 does. We diverged on some feature between ARM and x86
> because they do not fit (I have in mind the pirq to event channel code).
> Some of the hypercalls are not implemented by ARM for guest...
> 
>>
>> As for locking on altp2m paths, we haven't encountered any issues with
>> the current version during our tests but it is possible we missed
>> something. If you have spotted missed locking, please point it out.
> 
> I already pointed them out in the previous series. Please read the
> comments I made there.
> 
> Regards,
> 
> [1]
> https://lists.xenproject.org/archives/html/xen-devel/2015-06/msg01319.html
> 

Thanks again for the thorough review.

Best regards,
~Sergej


* Re: [PATCH v2 00/25] arm/altp2m: Introducing altp2m to ARM.
  2016-08-01 19:55     ` Julien Grall
  2016-08-01 20:35       ` Sergej Proskurin
@ 2016-08-01 20:41       ` Tamas K Lengyel
  2016-08-02  7:38         ` Julien Grall
  1 sibling, 1 reply; 159+ messages in thread
From: Tamas K Lengyel @ 2016-08-01 20:41 UTC (permalink / raw)
  To: Julien Grall
  Cc: Stefano Stabellini, George Dunlap, Sergej Proskurin, Tim Deegan,
	Jan Beulich, Andrew Cooper, Xen-devel

On Mon, Aug 1, 2016 at 1:55 PM, Julien Grall <julien.grall@arm.com> wrote:
>
>
> On 01/08/2016 20:20, Tamas K Lengyel wrote:
>>
>> On Mon, Aug 1, 2016 at 12:15 PM, Julien Grall <julien.grall@arm.com>
>> wrote:
>>>
>>> On 01/08/16 18:10, Sergej Proskurin wrote:
>>>>
>>>>
>>>>
>>>> Hello all,
>>>
>>>
>>>
>>> Hello Sergej,
>>>
>>>> The following patch series can be found on Github[0] and is part of my
>>>> contribution to this year's Google Summer of Code (GSoC)[1]. My project
>>>> is
>>>> managed by the organization The Honeynet Project. As part of GSoC, I am
>>>> being
>>>> supervised by the Xen developer Tamas K. Lengyel <tamas@tklengyel.com>,
>>>> George
>>>> D. Webster, and Steven Maresca.
>>>>
>>>> In this patch series, we provide an implementation of the altp2m
>>>> subsystem
>>>> for
>>>> ARM. Our implementation is based on the altp2m subsystem for x86,
>>>> providing
>>>> additional --alternate-- views on the guest's physical memory by means
>>>> of
>>>> the
>>>> ARM 2nd stage translation mechanism. The patches introduce new HVMOPs
>>>> and
>>>> extend the p2m subsystem. Also, we extend libxl to support altp2m on ARM
>>>> and
>>>> modify xen-access to test the suggested functionality.
>>>>
>>>> To be more precise, altp2m allows to create and switch to additional p2m
>>>> views
>>>> (i.e. gfn to mfn mappings). These views can be manipulated and activated
>>>> as
>>>> will through the provided HVMOPs. In this way, the active guest instance
>>>> in
>>>> question can seamlessly proceed execution without noticing that anything
>>>> has
>>>> changed. The prime scope of application of altp2m is Virtual Machine
>>>> Introspection, where guest systems are analyzed from the outside of the
>>>> VM.
>>>>
>>>> Altp2m can be activated by means of the guest control parameter "altp2m"
>>>> on x86
>>>> and ARM architectures.  The altp2m functionality by default can also be
>>>> used
>>>> from within the guest by design. For use-cases requiring purely external
>>>> access
>>>> to altp2m, a custom XSM policy is necessary on both x86 and ARM.
>>>
>>>
>>>
>>> As said on the previous version, altp2m operation *should not* be exposed
>>> to
>>> ARM guest. Any design written for x86 may not fit exactly for ARM (and
>>> vice
>>> versa), you will need to explain why you think we should follow the same
>>> pattern.
>>>
>>> Speaking about security, I skimmed through the series and noticed that a
>>> lot
>>> of my previous comments have not been addressed. For instance there are
>>> still no locking on the altp2m operations and a guest could disable
>>> altp2m.
>>>
>>> I will give a look to the rest of the series once this is fixed.
>>>
>>
>> Julien,
>> we did discuss whether altp2m on ARM should be exposed to guests or
>> not but we did not agree whether restricting it on ARM is absolutely
>> necessary. Altp2m was designed even on the x86 to be accessible from
>> within the guest on all systems irrespective of actual hardware
>> support for it.  Thus, this design fits ARM as well where there is no
>> dedicated hardware support, from the altp2m perspective there is no
>> difference.
>
>
> Really? I looked at the design document [1] which is Intel focus. Similar
> think to the code (see p2m_flush_altp2m in arch/x86/mm/p2m.c).

That design cover letter specifically mentions that "Both VMFUNC and #VE
are designed such that a VMM can emulate them on legacy CPUs". While
they certainly had only Intel hardware in mind, the software route can
also be taken on ARM. As our primary use-case is purely external use of
altp2m, we have not implemented the bits that enable the injection of
mem_access faults into the guest (the equivalent of #VE). Whether altp2m
switching from within the guest makes sense without that is beyond the
scope of this series, but as it could technically be implemented in the
future, I don't see a reason to disable that possibility right away.

Tamas


* Re: [PATCH v2 00/25] arm/altp2m: Introducing altp2m to ARM.
  2016-08-01 18:15 ` [PATCH v2 00/25] arm/altp2m: Introducing altp2m to ARM Julien Grall
  2016-08-01 19:20   ` Tamas K Lengyel
@ 2016-08-01 23:14   ` Andrew Cooper
  2016-08-02  7:34     ` Julien Grall
  1 sibling, 1 reply; 159+ messages in thread
From: Andrew Cooper @ 2016-08-01 23:14 UTC (permalink / raw)
  To: Julien Grall, Sergej Proskurin, xen-devel, Stefano Stabellini

On 01/08/2016 19:15, Julien Grall wrote:
> On 01/08/16 18:10, Sergej Proskurin wrote:
>>
>> Hello all,
>
> Hello Sergej,
>
>> The following patch series can be found on Github[0] and is part of my
>> contribution to this year's Google Summer of Code (GSoC)[1]. My
>> project is
>> managed by the organization The Honeynet Project. As part of GSoC, I
>> am being
>> supervised by the Xen developer Tamas K. Lengyel
>> <tamas@tklengyel.com>, George
>> D. Webster, and Steven Maresca.
>>
>> In this patch series, we provide an implementation of the altp2m
>> subsystem for
>> ARM. Our implementation is based on the altp2m subsystem for x86,
>> providing
>> additional --alternate-- views on the guest's physical memory by
>> means of the
>> ARM 2nd stage translation mechanism. The patches introduce new HVMOPs
>> and
>> extend the p2m subsystem. Also, we extend libxl to support altp2m on
>> ARM and
>> modify xen-access to test the suggested functionality.
>>
>> To be more precise, altp2m allows to create and switch to additional
>> p2m views
>> (i.e. gfn to mfn mappings). These views can be manipulated and
>> activated as
>> will through the provided HVMOPs. In this way, the active guest
>> instance in
>> question can seamlessly proceed execution without noticing that
>> anything has
>> changed. The prime scope of application of altp2m is Virtual Machine
>> Introspection, where guest systems are analyzed from the outside of
>> the VM.
>>
>> Altp2m can be activated by means of the guest control parameter
>> "altp2m" on x86
>> and ARM architectures.  The altp2m functionality by default can also
>> be used
>> from within the guest by design. For use-cases requiring purely
>> external access
>> to altp2m, a custom XSM policy is necessary on both x86 and ARM.
>
> As said on the previous version, altp2m operation *should not* be
> exposed to ARM guest. Any design written for x86 may not fit exactly
> for ARM (and vice versa), you will need to explain why you think we
> should follow the same pattern.

Sorry, but I am going to step in here and disagree.  All the x86
justifications for altp2m being accessible to guests apply equally to
ARM, as they are hardware independent.

I realise you are the maintainer, but the onus is on you to justify why
the behaviour should be different between x86 and ARM, rather than simply
to complain about it being the same.

Naturally, technical issues about the details of the implementation, the
algorithms etc. are of course fine, but I don't see any plausible
reason why ARM should be purposefully different from x86 in terms of
available functionality, and several good reasons why it should be the
same (not least, feature parity across architectures).

~Andrew


* Re: [PATCH v2 00/25] arm/altp2m: Introducing altp2m to ARM.
  2016-08-01 23:14   ` Andrew Cooper
@ 2016-08-02  7:34     ` Julien Grall
  2016-08-02 16:08       ` Andrew Cooper
  0 siblings, 1 reply; 159+ messages in thread
From: Julien Grall @ 2016-08-02  7:34 UTC (permalink / raw)
  To: Andrew Cooper, Sergej Proskurin, xen-devel, Stefano Stabellini

Hi Andrew,

On 02/08/2016 00:14, Andrew Cooper wrote:
> On 01/08/2016 19:15, Julien Grall wrote:
>> On 01/08/16 18:10, Sergej Proskurin wrote:
>>>
>>> Hello all,
>>
>> Hello Sergej,
>>
>>> The following patch series can be found on Github[0] and is part of my
>>> contribution to this year's Google Summer of Code (GSoC)[1]. My
>>> project is
>>> managed by the organization The Honeynet Project. As part of GSoC, I
>>> am being
>>> supervised by the Xen developer Tamas K. Lengyel
>>> <tamas@tklengyel.com>, George
>>> D. Webster, and Steven Maresca.
>>>
>>> In this patch series, we provide an implementation of the altp2m
>>> subsystem for
>>> ARM. Our implementation is based on the altp2m subsystem for x86,
>>> providing
>>> additional --alternate-- views on the guest's physical memory by
>>> means of the
>>> ARM 2nd stage translation mechanism. The patches introduce new HVMOPs
>>> and
>>> extend the p2m subsystem. Also, we extend libxl to support altp2m on
>>> ARM and
>>> modify xen-access to test the suggested functionality.
>>>
>>> To be more precise, altp2m allows to create and switch to additional
>>> p2m views
>>> (i.e. gfn to mfn mappings). These views can be manipulated and
>>> activated as
>>> will through the provided HVMOPs. In this way, the active guest
>>> instance in
>>> question can seamlessly proceed execution without noticing that
>>> anything has
>>> changed. The prime scope of application of altp2m is Virtual Machine
>>> Introspection, where guest systems are analyzed from the outside of
>>> the VM.
>>>
>>> Altp2m can be activated by means of the guest control parameter
>>> "altp2m" on x86
>>> and ARM architectures.  The altp2m functionality by default can also
>>> be used
>>> from within the guest by design. For use-cases requiring purely
>>> external access
>>> to altp2m, a custom XSM policy is necessary on both x86 and ARM.
>>
>> As said on the previous version, altp2m operation *should not* be
>> exposed to ARM guest. Any design written for x86 may not fit exactly
>> for ARM (and vice versa), you will need to explain why you think we
>> should follow the same pattern.
>
> Sorry, but I am going to step in here and disagree.  All the x86
> justifications for altp2m being accessible to guests apply equally to
> ARM, as they are hardware independent.
>
> I realise you are maintainer, but the onus is on you to justify why the
> behaviour should be different between x86 and ARM, rather than simply to
> complain at it being the same.
>
> Naturally, technical issues about the details of the implementation, or
> the algorithms etc. are of course fine, but I don't see any plausible
> reason why ARM should purposefully different from x86 in terms of
> available functionality, and several good reasons why it should be the
> same (least of all, feature parity across architectures).

The question here is how a guest could take advantage of access to 
altp2m on ARM today. Whilst on x86 a guest can be notified about a 
memaccess change, this is not yet the case on ARM.

So, from my understanding, exposing this feature to a guest is like 
exposing a no-op with side effects. We should avoid exposing features to 
the guest until there is a real use case and the guest can do something 
useful with them.

This has always been the case: some features were not fully ported 
to ARM until there was an actual use case (or we deferred them because 
there would be no usage). The interface is already there, so a future 
Xen release can decide to give the guest access when (and only when) 
this will be useful.

Regards,

-- 
Julien Grall


* Re: [PATCH v2 00/25] arm/altp2m: Introducing altp2m to ARM.
  2016-08-01 20:41       ` Tamas K Lengyel
@ 2016-08-02  7:38         ` Julien Grall
  2016-08-02 11:17           ` George Dunlap
  2016-08-02 16:00           ` Tamas K Lengyel
  0 siblings, 2 replies; 159+ messages in thread
From: Julien Grall @ 2016-08-02  7:38 UTC (permalink / raw)
  To: Tamas K Lengyel
  Cc: Stefano Stabellini, George Dunlap, Sergej Proskurin, Tim Deegan,
	Jan Beulich, Andrew Cooper, Xen-devel

Hello Tamas,

On 01/08/2016 21:41, Tamas K Lengyel wrote:
> On Mon, Aug 1, 2016 at 1:55 PM, Julien Grall <julien.grall@arm.com> wrote:
>>> we did discuss whether altp2m on ARM should be exposed to guests or
>>> not but we did not agree whether restricting it on ARM is absolutely
>>> necessary. Altp2m was designed even on the x86 to be accessible from
>>> within the guest on all systems irrespective of actual hardware
>>> support for it.  Thus, this design fits ARM as well where there is no
>>> dedicated hardware support, from the altp2m perspective there is no
>>> difference.
>>
>>
>> Really? I looked at the design document [1] which is Intel focus. Similar
>> think to the code (see p2m_flush_altp2m in arch/x86/mm/p2m.c).
>
> That design cover letter mentions specifically "Both VMFUNC and #VE
> are designed such that a VMM can emulate them on legacy CPUs". While
> they certainly had only Intel hardware in-mind, the software route can
> also be taken on ARM as well. As our primary use-case is purely
> external use of altp2m we have not implemented the bits that enable
> the injection of mem_access faults into the guest (equivalent of #VE).
> Whether without that the altp2m switching from within the guest make
> sense or not is beyond the scope of this series but as it could
> technically be implemented in the future, I don't see a reason to
> disable that possibility right away.

The question here is how a guest could take advantage of access to 
altp2m on ARM today. Whilst on x86 a guest can be notified about a 
memaccess change, this is not yet the case on ARM.

So, from my understanding, exposing this feature to a guest is like 
exposing a no-op with side effects. We should avoid exposing features to 
the guest until there is a real use case and the guest can do something 
useful with them.

This has always been the case: some features were not fully ported 
to ARM until there was an actual use case (or we deferred them because 
there would be no usage). The interface is already there, so a future 
Xen release can decide to give the guest access when (and only when) 
this will be useful.

Regards,

>
> Tamas
>

-- 
Julien Grall


* Re: [PATCH v2 25/25] arm/altp2m: Add test of xc_altp2m_change_gfn.
  2016-08-01 17:10 ` [PATCH v2 25/25] arm/altp2m: Add test of xc_altp2m_change_gfn Sergej Proskurin
@ 2016-08-02  9:14   ` Razvan Cojocaru
  2016-08-02  9:50     ` Sergej Proskurin
  0 siblings, 1 reply; 159+ messages in thread
From: Razvan Cojocaru @ 2016-08-02  9:14 UTC (permalink / raw)
  To: Sergej Proskurin, xen-devel; +Cc: Wei Liu, Tamas K Lengyel, Ian Jackson

On 08/01/2016 08:10 PM, Sergej Proskurin wrote:
> This commit extends xen-access by a simple test of the functionality
> provided by "xc_altp2m_change_gfn". The idea is to dynamically remap a
> trapping gfn to another mfn, which holds the same content as the
> original mfn.
> 
> Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
> ---
> Cc: Razvan Cojocaru <rcojocaru@bitdefender.com>
> Cc: Tamas K Lengyel <tamas@tklengyel.com>
> Cc: Ian Jackson <ian.jackson@eu.citrix.com>
> Cc: Wei Liu <wei.liu2@citrix.com>
> ---
>  tools/tests/xen-access/xen-access.c | 135 +++++++++++++++++++++++++++++++++++-
>  1 file changed, 132 insertions(+), 3 deletions(-)
> 
> diff --git a/tools/tests/xen-access/xen-access.c b/tools/tests/xen-access/xen-access.c
> index eafd7d6..39b7ddf 100644
> --- a/tools/tests/xen-access/xen-access.c
> +++ b/tools/tests/xen-access/xen-access.c
> @@ -38,6 +38,7 @@
>  #include <sys/mman.h>
>  #include <sys/poll.h>
>  
> +#define XC_WANT_COMPAT_MAP_FOREIGN_API
>  #include <xenctrl.h>
>  #include <xenevtchn.h>
>  #include <xen/vm_event.h>
> @@ -49,6 +50,8 @@
>  #define START_PFN 0ULL
>  #endif
>  
> +#define INVALID_GFN ~(0UL)
> +
>  #define DPRINTF(a, b...) fprintf(stderr, a, ## b)
>  #define ERROR(a, b...) fprintf(stderr, a "\n", ## b)
>  #define PERROR(a, b...) fprintf(stderr, a ": %s\n", ## b, strerror(errno))
> @@ -72,9 +75,14 @@ typedef struct xenaccess {
>      xen_pfn_t max_gpfn;
>  
>      vm_event_t vm_event;
> +
> +    unsigned int ap2m_idx;
> +    xen_pfn_t gfn_old;
> +    xen_pfn_t gfn_new;
>  } xenaccess_t;
>  
>  static int interrupted;
> +static int gfn_changed = 0;
>  bool evtchn_bind = 0, evtchn_open = 0, mem_access_enable = 0;
>  
>  static void close_handler(int sig)
> @@ -82,6 +90,94 @@ static void close_handler(int sig)
>      interrupted = sig;
>  }
>  
> +static int copy_gfn(xc_interface *xch, domid_t domain_id,
> +                    xen_pfn_t dst_gfn, xen_pfn_t src_gfn)
> +{
> +    void *src_vaddr = NULL;
> +    void *dst_vaddr = NULL;
> +
> +    src_vaddr = xc_map_foreign_range(xch, domain_id, XC_PAGE_SIZE,
> +                                     PROT_READ, src_gfn);
> +    if ( src_vaddr == MAP_FAILED || src_vaddr == NULL)
> +    {
> +        return -1;
> +    }

Please don't use scopes for single return statements (i.e. no { ... }
around them).

> +
> +    dst_vaddr = xc_map_foreign_range(xch, domain_id, XC_PAGE_SIZE,
> +                                     PROT_WRITE, dst_gfn);
> +    if ( dst_vaddr == MAP_FAILED || dst_vaddr == NULL)
> +    {
> +        return -1;
> +    }

But if this one fails while the first one succeeded, shouldn't we
munmap() the first one before the return?


Thanks,
Razvan


* Re: [PATCH v2 25/25] arm/altp2m: Add test of xc_altp2m_change_gfn.
  2016-08-02  9:14   ` Razvan Cojocaru
@ 2016-08-02  9:50     ` Sergej Proskurin
  0 siblings, 0 replies; 159+ messages in thread
From: Sergej Proskurin @ 2016-08-02  9:50 UTC (permalink / raw)
  To: Razvan Cojocaru, xen-devel; +Cc: Wei Liu, Tamas K Lengyel, Ian Jackson

Hi Razvan,


On 08/02/2016 11:14 AM, Razvan Cojocaru wrote:
> On 08/01/2016 08:10 PM, Sergej Proskurin wrote:
>> This commit extends xen-access by a simple test of the functionality
>> provided by "xc_altp2m_change_gfn". The idea is to dynamically remap a
>> trapping gfn to another mfn, which holds the same content as the
>> original mfn.
>>
>> Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
>> ---
>> Cc: Razvan Cojocaru <rcojocaru@bitdefender.com>
>> Cc: Tamas K Lengyel <tamas@tklengyel.com>
>> Cc: Ian Jackson <ian.jackson@eu.citrix.com>
>> Cc: Wei Liu <wei.liu2@citrix.com>
>> ---
>>  tools/tests/xen-access/xen-access.c | 135 +++++++++++++++++++++++++++++++++++-
>>  1 file changed, 132 insertions(+), 3 deletions(-)
>>
>> diff --git a/tools/tests/xen-access/xen-access.c b/tools/tests/xen-access/xen-access.c
>> index eafd7d6..39b7ddf 100644
>> --- a/tools/tests/xen-access/xen-access.c
>> +++ b/tools/tests/xen-access/xen-access.c
>> @@ -38,6 +38,7 @@
>>  #include <sys/mman.h>
>>  #include <sys/poll.h>
>>  
>> +#define XC_WANT_COMPAT_MAP_FOREIGN_API
>>  #include <xenctrl.h>
>>  #include <xenevtchn.h>
>>  #include <xen/vm_event.h>
>> @@ -49,6 +50,8 @@
>>  #define START_PFN 0ULL
>>  #endif
>>  
>> +#define INVALID_GFN ~(0UL)
>> +
>>  #define DPRINTF(a, b...) fprintf(stderr, a, ## b)
>>  #define ERROR(a, b...) fprintf(stderr, a "\n", ## b)
>>  #define PERROR(a, b...) fprintf(stderr, a ": %s\n", ## b, strerror(errno))
>> @@ -72,9 +75,14 @@ typedef struct xenaccess {
>>      xen_pfn_t max_gpfn;
>>  
>>      vm_event_t vm_event;
>> +
>> +    unsigned int ap2m_idx;
>> +    xen_pfn_t gfn_old;
>> +    xen_pfn_t gfn_new;
>>  } xenaccess_t;
>>  
>>  static int interrupted;
>> +static int gfn_changed = 0;
>>  bool evtchn_bind = 0, evtchn_open = 0, mem_access_enable = 0;
>>  
>>  static void close_handler(int sig)
>> @@ -82,6 +90,94 @@ static void close_handler(int sig)
>>      interrupted = sig;
>>  }
>>  
>> +static int copy_gfn(xc_interface *xch, domid_t domain_id,
>> +                    xen_pfn_t dst_gfn, xen_pfn_t src_gfn)
>> +{
>> +    void *src_vaddr = NULL;
>> +    void *dst_vaddr = NULL;
>> +
>> +    src_vaddr = xc_map_foreign_range(xch, domain_id, XC_PAGE_SIZE,
>> +                                     PROT_READ, src_gfn);
>> +    if ( src_vaddr == MAP_FAILED || src_vaddr == NULL)
>> +    {
>> +        return -1;
>> +    }
> Please don't use scopes for single return statements (i.e. no { ... }
> around them).
>

I will adapt the scopes to the coding style, thank you. These are
leftovers from my test outputs.

>> +
>> +    dst_vaddr = xc_map_foreign_range(xch, domain_id, XC_PAGE_SIZE,
>> +                                     PROT_WRITE, dst_gfn);
>> +    if ( dst_vaddr == MAP_FAILED || dst_vaddr == NULL)
>> +    {
>> +        return -1;
>> +    }
> But if this failed and the first one succeeds, shouldn't we munmap() the
> first one before the return?
>

You are absolutely right. I will adapt the implementation accordingly.

Thank you very much.

Best regards,
~Sergej



* Re: [PATCH v2 00/25] arm/altp2m: Introducing altp2m to ARM.
  2016-08-02  7:38         ` Julien Grall
@ 2016-08-02 11:17           ` George Dunlap
  2016-08-02 15:48             ` Tamas K Lengyel
  2016-08-02 16:00           ` Tamas K Lengyel
  1 sibling, 1 reply; 159+ messages in thread
From: George Dunlap @ 2016-08-02 11:17 UTC (permalink / raw)
  To: Julien Grall, Tamas K Lengyel
  Cc: Stefano Stabellini, George Dunlap, Sergej Proskurin, Tim Deegan,
	Jan Beulich, Andrew Cooper, Xen-devel

On 02/08/16 08:38, Julien Grall wrote:
> Hello Tamas,
> 
> On 01/08/2016 21:41, Tamas K Lengyel wrote:
>> On Mon, Aug 1, 2016 at 1:55 PM, Julien Grall <julien.grall@arm.com>
>> wrote:
>>>> we did discuss whether altp2m on ARM should be exposed to guests or
>>>> not but we did not agree whether restricting it on ARM is absolutely
>>>> necessary. Altp2m was designed even on the x86 to be accessible from
>>>> within the guest on all systems irrespective of actual hardware
>>>> support for it.  Thus, this design fits ARM as well where there is no
>>>> dedicated hardware support, from the altp2m perspective there is no
>>>> difference.
>>>
>>>
>>> Really? I looked at the design document [1] which is Intel focus.
>>> Similar
>>> think to the code (see p2m_flush_altp2m in arch/x86/mm/p2m.c).
>>
>> That design cover letter mentions specifically "Both VMFUNC and #VE
>> are designed such that a VMM can emulate them on legacy CPUs". While
>> they certainly had only Intel hardware in-mind, the software route can
>> also be taken on ARM as well. As our primary use-case is purely
>> external use of altp2m we have not implemented the bits that enable
>> the injection of mem_access faults into the guest (equivalent of #VE).
>> Whether without that the altp2m switching from within the guest make
>> sense or not is beyond the scope of this series but as it could
>> technically be implemented in the future, I don't see a reason to
>> disable that possibility right away.
> 
> The question here, is how a guest could take advantage to access to
> altp2m on ARM today? Whilst on x86 a guest could be notified about
> memaccess change, this is not yet the case on ARM.
> 
> So, from my understanding, exposing this feature to a guest is like
> exposing a no-op with side effects. We should avoid to expose feature to
> the guest until there is a real usage and the guest could do something
> useful with it.

Having guest altp2m support without the equivalent of a #VE does seem
pretty useless.  Would you disagree with this assessment, Tamas?

Every interface we expose to the guest increases the attack surface;
so it seems like until there is a use case for guest altp2m, we should
probably disable it.

 -George



* Re: [PATCH v2 23/25] arm/altp2m: Extend libxl to activate altp2m on ARM.
  2016-08-01 17:10 ` [PATCH v2 23/25] arm/altp2m: Extend libxl to activate altp2m on ARM Sergej Proskurin
@ 2016-08-02 11:59   ` Wei Liu
  2016-08-02 14:07     ` Sergej Proskurin
  0 siblings, 1 reply; 159+ messages in thread
From: Wei Liu @ 2016-08-02 11:59 UTC (permalink / raw)
  To: Sergej Proskurin; +Cc: xen-devel, Ian Jackson, Wei Liu

On Mon, Aug 01, 2016 at 07:10:26PM +0200, Sergej Proskurin wrote:
> The current implementation allows to set the parameter HVM_PARAM_ALTP2M.
> This parameter allows further usage of altp2m on ARM. For this, we
> define an additional, common altp2m field as part of the
> libxl_domain_type struct. This field can be set on x86 and on ARM systems
> through the "altp2m" switch in the domain's configuration file (i.e.
> set altp2m=1).
> 
> Note, that the old parameter "altp2mhvm" is still valid for x86. Since
> this commit defines this old parameter as deprecated, libxl will
> generate a warning during processing.
> 
> Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
> ---
> Cc: Ian Jackson <ian.jackson@eu.citrix.com>
> Cc: Wei Liu <wei.liu2@citrix.com>
> ---
> v2: The macro LIBXL_HAVE_ALTP2M is now valid for x86 and ARM.
> 
>     Moved the field altp2m out of info->u.pv.altp2m into the common
>     field info->altp2m, where it can be accessed independent by the
>     underlying architecture (x86 or ARM). Now, altp2m can be activated
>     by the guest control parameter "altp2m".
> 
>     Adopted initialization routines accordingly.
> ---
>  tools/libxl/libxl.h         |  3 ++-
>  tools/libxl/libxl_create.c  |  8 +++++---
>  tools/libxl/libxl_dom.c     |  4 ++--
>  tools/libxl/libxl_types.idl |  4 +++-
>  tools/libxl/xl_cmdimpl.c    | 26 +++++++++++++++++++++++++-
>  5 files changed, 37 insertions(+), 8 deletions(-)
> 
> diff --git a/tools/libxl/libxl.h b/tools/libxl/libxl.h
> index 48a43ce..a2cbd34 100644
> --- a/tools/libxl/libxl.h
> +++ b/tools/libxl/libxl.h
> @@ -839,7 +839,8 @@ typedef struct libxl__ctx libxl_ctx;
>  
>  /*
>   * LIBXL_HAVE_ALTP2M
> - * If this is defined, then libxl supports alternate p2m functionality.
> + * If this is defined, then libxl supports alternate p2m functionality for
> + * x86 HVM and ARM PV guests.
>   */
>  #define LIBXL_HAVE_ALTP2M 1

Either you misunderstood or I said something wrong.

These macros have defined semantics that shouldn't be changed because
application code uses them to detect the presence / absence of certain
things.

We need a new macro for ARM altp2m. I think

   LIBXL_HAVE_ARM_ALTP2M

would do.

>  
> diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
> index d7db9e9..16d3b52 100644
> --- a/tools/libxl/libxl_create.c
> +++ b/tools/libxl/libxl_create.c
> @@ -319,7 +319,6 @@ int libxl__domain_build_info_setdefault(libxl__gc *gc,
>          libxl_defbool_setdefault(&b_info->u.hvm.hpet,               true);
>          libxl_defbool_setdefault(&b_info->u.hvm.vpt_align,          true);
>          libxl_defbool_setdefault(&b_info->u.hvm.nested_hvm,         false);
> -        libxl_defbool_setdefault(&b_info->u.hvm.altp2m,             false);
>          libxl_defbool_setdefault(&b_info->u.hvm.usb,                false);
>          libxl_defbool_setdefault(&b_info->u.hvm.xen_platform_pci,   true);
>  
> @@ -406,6 +405,9 @@ int libxl__domain_build_info_setdefault(libxl__gc *gc,
>              libxl_domain_type_to_string(b_info->type));
>          return ERROR_INVAL;
>      }
> +
> +    libxl_defbool_setdefault(&b_info->altp2m, false);
> +
>      return 0;
>  }
>  
> @@ -901,8 +903,8 @@ static void initiate_domain_create(libxl__egc *egc,
>  
>      if (d_config->c_info.type == LIBXL_DOMAIN_TYPE_HVM &&
>          (libxl_defbool_val(d_config->b_info.u.hvm.nested_hvm) &&
> -         libxl_defbool_val(d_config->b_info.u.hvm.altp2m))) {
> -        LOG(ERROR, "nestedhvm and altp2mhvm cannot be used together");
> +         libxl_defbool_val(d_config->b_info.altp2m))) {
> +        LOG(ERROR, "nestedhvm and altp2m cannot be used together");
>          goto error_out;
>      }
>  
> diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
> index ec29060..1550ef8 100644
> --- a/tools/libxl/libxl_dom.c
> +++ b/tools/libxl/libxl_dom.c
> @@ -291,8 +291,6 @@ static void hvm_set_conf_params(xc_interface *handle, uint32_t domid,
>                      libxl_defbool_val(info->u.hvm.vpt_align));
>      xc_hvm_param_set(handle, domid, HVM_PARAM_NESTEDHVM,
>                      libxl_defbool_val(info->u.hvm.nested_hvm));
> -    xc_hvm_param_set(handle, domid, HVM_PARAM_ALTP2M,
> -                    libxl_defbool_val(info->u.hvm.altp2m));
>  }
>  
>  int libxl__build_pre(libxl__gc *gc, uint32_t domid,
> @@ -434,6 +432,8 @@ int libxl__build_pre(libxl__gc *gc, uint32_t domid,
>  #endif
>      }
>  
> +    xc_hvm_param_set(ctx->xch, domid, HVM_PARAM_ALTP2M, libxl_defbool_val(info->altp2m));
> +

And the reason for moving this call to this function is?

>      rc = libxl__arch_domain_create(gc, d_config, domid);
>  
>      return rc;
> diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
> index ef614be..42e7c95 100644
> --- a/tools/libxl/libxl_types.idl
> +++ b/tools/libxl/libxl_types.idl
> @@ -512,7 +512,6 @@ libxl_domain_build_info = Struct("domain_build_info",[
>                                         ("mmio_hole_memkb",  MemKB),
>                                         ("timer_mode",       libxl_timer_mode),
>                                         ("nested_hvm",       libxl_defbool),
> -                                       ("altp2m",           libxl_defbool),

No, you can't remove existing field -- that would break old
applications which use the old field.

And please handle compatibility in libxl with old applications in mind.

>                                         ("smbios_firmware",  string),
>                                         ("acpi_firmware",    string),
>                                         ("hdtype",           libxl_hdtype),
> @@ -561,6 +560,9 @@ libxl_domain_build_info = Struct("domain_build_info",[
>  
>      ("arch_arm", Struct(None, [("gic_version", libxl_gic_version),
>                                ])),
> +    # Alternate p2m is not bound to any architecture or guest type, as it is
> +    # supported by x86 HVM and ARM PV guests.

Just "ARM guests" would do. ARM doesn't have notion of PV vs HVM.

> +    ("altp2m", libxl_defbool),
>  
>      ], dir=DIR_IN
>  )
> diff --git a/tools/libxl/xl_cmdimpl.c b/tools/libxl/xl_cmdimpl.c
> index 51dc7a0..f4a49ee 100644
> --- a/tools/libxl/xl_cmdimpl.c
> +++ b/tools/libxl/xl_cmdimpl.c
> @@ -1667,7 +1667,12 @@ static void parse_config_data(const char *config_source,
>  
>          xlu_cfg_get_defbool(config, "nestedhvm", &b_info->u.hvm.nested_hvm, 0);
>  
> -        xlu_cfg_get_defbool(config, "altp2mhvm", &b_info->u.hvm.altp2m, 0);
> +        /* The config parameter "altp2mhvm" is considered deprecated, however
> +         * further considered because of legacy reasons. The config parameter
> +         * "altp2m" shall be used instead. */
> +        if (!xlu_cfg_get_defbool(config, "altp2mhvm", &b_info->altp2m, 0))
> +            fprintf(stderr, "WARNING: Specifying \"altp2mhvm\" is deprecated. "
> +                    "Please use a \"altp2m\" instead.\n");

In this case you should:

 if both altp2mhvm and altp2m are present, use the latter.
 if only altp2mhvm is present, honour it.

Note that we have not yet removed the old option. Ideally we would give
users a transition period before removing the option.

Also you need to patch docs/man/xl.pod.1.in for the new option.

>  
>          xlu_cfg_replace_string(config, "smbios_firmware",
>                                 &b_info->u.hvm.smbios_firmware, 0);
> @@ -1727,6 +1732,25 @@ static void parse_config_data(const char *config_source,
>          abort();
>      }
>  
> +    bool altp2m_support = false;
> +#if defined(__i386__) || defined(__x86_64__)
> +    /* Alternate p2m support on x86 is available only for HVM guests. */
> +    if (b_info->type == LIBXL_DOMAIN_TYPE_HVM)
> +        altp2m_support = true;
> +#elif defined(__arm__) || defined(__aarch64__)
> +    /* Alternate p2m support on ARM is available for all guests. */
> +    altp2m_support = true;
> +#endif
> +

I don't think you need to care too much about dead option here.
Xl should be able to set altp2m field all the time. And there should be
code in libxl to handle situation when altp2m is not available.

> +    if (altp2m_support) {
> +        /* The config parameter "altp2m" replaces the parameter "altp2mhvm".
> +         * For legacy reasons, both parameters are accepted on x86 HVM guests
> +         * (only "altp2m" is accepted on ARM guests). If both parameters are
> +         * given, it must be considered that the config parameter "altp2m" will
> +         * always have priority over "altp2mhvm". */
> +        xlu_cfg_get_defbool(config, "altp2m", &b_info->altp2m, 0);
> +    }
> +

As always, if what I said above doesn't make sense to you, feel free to
ask.

Wei.

>      if (!xlu_cfg_get_list(config, "ioports", &ioports, &num_ioports, 0)) {
>          b_info->num_ioports = num_ioports;
>          b_info->ioports = calloc(num_ioports, sizeof(*b_info->ioports));
> -- 
> 2.9.0
> 


* Re: [PATCH v2 23/25] arm/altp2m: Extend libxl to activate altp2m on ARM.
  2016-08-02 11:59   ` Wei Liu
@ 2016-08-02 14:07     ` Sergej Proskurin
  2016-08-11 16:00       ` Wei Liu
  0 siblings, 1 reply; 159+ messages in thread
From: Sergej Proskurin @ 2016-08-02 14:07 UTC (permalink / raw)
  To: xen-devel, wei.liu2

Hi Wei,


On 08/02/2016 01:59 PM, Wei Liu wrote:
> On Mon, Aug 01, 2016 at 07:10:26PM +0200, Sergej Proskurin wrote:
>> The current implementation allows to set the parameter HVM_PARAM_ALTP2M.
>> This parameter allows further usage of altp2m on ARM. For this, we
>> define an additional, common altp2m field as part of the
>> libxl_domain_type struct. This field can be set on x86 and on ARM systems
>> through the "altp2m" switch in the domain's configuration file (i.e.
>> set altp2m=1).
>>
>> Note, that the old parameter "altp2mhvm" is still valid for x86. Since
>> this commit defines this old parameter as deprecated, libxl will
>> generate a warning during processing.
>>
>> Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
>> ---
>> Cc: Ian Jackson <ian.jackson@eu.citrix.com>
>> Cc: Wei Liu <wei.liu2@citrix.com>
>> ---
>> v2: The macro LIBXL_HAVE_ALTP2M is now valid for x86 and ARM.
>>
>>     Moved the field altp2m out of info->u.pv.altp2m into the common
>>     field info->altp2m, where it can be accessed independent by the
>>     underlying architecture (x86 or ARM). Now, altp2m can be activated
>>     by the guest control parameter "altp2m".
>>
>>     Adopted initialization routines accordingly.
>> ---
>>  tools/libxl/libxl.h         |  3 ++-
>>  tools/libxl/libxl_create.c  |  8 +++++---
>>  tools/libxl/libxl_dom.c     |  4 ++--
>>  tools/libxl/libxl_types.idl |  4 +++-
>>  tools/libxl/xl_cmdimpl.c    | 26 +++++++++++++++++++++++++-
>>  5 files changed, 37 insertions(+), 8 deletions(-)
>>
>> diff --git a/tools/libxl/libxl.h b/tools/libxl/libxl.h
>> index 48a43ce..a2cbd34 100644
>> --- a/tools/libxl/libxl.h
>> +++ b/tools/libxl/libxl.h
>> @@ -839,7 +839,8 @@ typedef struct libxl__ctx libxl_ctx;
>>  
>>  /*
>>   * LIBXL_HAVE_ALTP2M
>> - * If this is defined, then libxl supports alternate p2m functionality.
>> + * If this is defined, then libxl supports alternate p2m functionality for
>> + * x86 HVM and ARM PV guests.
>>   */
>>  #define LIBXL_HAVE_ALTP2M 1
> Either you misunderstood or I said something wrong.
>
> These macros have defined semantics that shouldn't be changed because
> application code uses them to detect the presence / absence of certain
> things.
>
> We need a new macro for ARM altp2m. I think
>
>    LIBXL_HAVE_ARM_ALTP2M
>
> would do.

Sorry, this is entirely my fault. Although I had explicitly asked
whether we need two different LIBXL_HAVE_* macros, I somehow omitted
that one. I will fix that right now and provide two LIBXL_HAVE_* macros
in the next patch.

>>  
>> diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
>> index d7db9e9..16d3b52 100644
>> --- a/tools/libxl/libxl_create.c
>> +++ b/tools/libxl/libxl_create.c
>> @@ -319,7 +319,6 @@ int libxl__domain_build_info_setdefault(libxl__gc *gc,
>>          libxl_defbool_setdefault(&b_info->u.hvm.hpet,               true);
>>          libxl_defbool_setdefault(&b_info->u.hvm.vpt_align,          true);
>>          libxl_defbool_setdefault(&b_info->u.hvm.nested_hvm,         false);
>> -        libxl_defbool_setdefault(&b_info->u.hvm.altp2m,             false);
>>          libxl_defbool_setdefault(&b_info->u.hvm.usb,                false);
>>          libxl_defbool_setdefault(&b_info->u.hvm.xen_platform_pci,   true);
>>  
>> @@ -406,6 +405,9 @@ int libxl__domain_build_info_setdefault(libxl__gc *gc,
>>              libxl_domain_type_to_string(b_info->type));
>>          return ERROR_INVAL;
>>      }
>> +
>> +    libxl_defbool_setdefault(&b_info->altp2m, false);
>> +
>>      return 0;
>>  }
>>  
>> @@ -901,8 +903,8 @@ static void initiate_domain_create(libxl__egc *egc,
>>  
>>      if (d_config->c_info.type == LIBXL_DOMAIN_TYPE_HVM &&
>>          (libxl_defbool_val(d_config->b_info.u.hvm.nested_hvm) &&
>> -         libxl_defbool_val(d_config->b_info.u.hvm.altp2m))) {
>> -        LOG(ERROR, "nestedhvm and altp2mhvm cannot be used together");
>> +         libxl_defbool_val(d_config->b_info.altp2m))) {
>> +        LOG(ERROR, "nestedhvm and altp2m cannot be used together");
>>          goto error_out;
>>      }
>>  
>> diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
>> index ec29060..1550ef8 100644
>> --- a/tools/libxl/libxl_dom.c
>> +++ b/tools/libxl/libxl_dom.c
>> @@ -291,8 +291,6 @@ static void hvm_set_conf_params(xc_interface *handle, uint32_t domid,
>>                      libxl_defbool_val(info->u.hvm.vpt_align));
>>      xc_hvm_param_set(handle, domid, HVM_PARAM_NESTEDHVM,
>>                      libxl_defbool_val(info->u.hvm.nested_hvm));
>> -    xc_hvm_param_set(handle, domid, HVM_PARAM_ALTP2M,
>> -                    libxl_defbool_val(info->u.hvm.altp2m));
>>  }
>>  
>>  int libxl__build_pre(libxl__gc *gc, uint32_t domid,
>> @@ -434,6 +432,8 @@ int libxl__build_pre(libxl__gc *gc, uint32_t domid,
>>  #endif
>>      }
>>  
>> +    xc_hvm_param_set(ctx->xch, domid, HVM_PARAM_ALTP2M, libxl_defbool_val(info->altp2m));
>> +
> And the reason for moving this call to this function is?

Since this implementation removes the field info->u.hvm.altp2m and
instead uses the common field info->altp2m, I wanted to set the altp2m
parameter outside of a function that is associated with an HVM domain
(the function hvm_set_conf_params is called only if the field
info->type == LIBXL_DOMAIN_TYPE_HVM). The idea was to have only one
call to xc_hvm_param_set independent of the domain type, as we no
longer distinguish between the underlying architectures. If you believe
that we nevertheless need two calls in the code, I will move the
function call in question back to hvm_set_conf_params and add an
additional call to xc_hvm_param_set for the general field info->altp2m.
Yet, IMHO the architecture would benefit if we had only one call to
xc_hvm_param_set.

>>      rc = libxl__arch_domain_create(gc, d_config, domid);
>>  
>>      return rc;
>> diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
>> index ef614be..42e7c95 100644
>> --- a/tools/libxl/libxl_types.idl
>> +++ b/tools/libxl/libxl_types.idl
>> @@ -512,7 +512,6 @@ libxl_domain_build_info = Struct("domain_build_info",[
>>                                         ("mmio_hole_memkb",  MemKB),
>>                                         ("timer_mode",       libxl_timer_mode),
>>                                         ("nested_hvm",       libxl_defbool),
>> -                                       ("altp2m",           libxl_defbool),
> No, you can't remove existing field -- that would break old
> applications which use the old field.
>
> And please handle compatibility in libxl with old applications in mind.

I did not expect other applications outside of libxl to use this
field, but of course you are right. My next patch will contain the
legacy info->u.hvm.altp2m field in addition to the general/common field
info->altp2m. Thank you for pointing that out.

>>                                         ("smbios_firmware",  string),
>>                                         ("acpi_firmware",    string),
>>                                         ("hdtype",           libxl_hdtype),
>> @@ -561,6 +560,9 @@ libxl_domain_build_info = Struct("domain_build_info",[
>>  
>>      ("arch_arm", Struct(None, [("gic_version", libxl_gic_version),
>>                                ])),
>> +    # Alternate p2m is not bound to any architecture or guest type, as it is
>> +    # supported by x86 HVM and ARM PV guests.
> Just "ARM guests" would do. ARM doesn't have notion of PV vs HVM.

I will change that, thank you. I mentioned ARM PV as it is currently
the type registered for guests on ARM.

>> +    ("altp2m", libxl_defbool),
>>  
>>      ], dir=DIR_IN
>>  )
>> diff --git a/tools/libxl/xl_cmdimpl.c b/tools/libxl/xl_cmdimpl.c
>> index 51dc7a0..f4a49ee 100644
>> --- a/tools/libxl/xl_cmdimpl.c
>> +++ b/tools/libxl/xl_cmdimpl.c
>> @@ -1667,7 +1667,12 @@ static void parse_config_data(const char *config_source,
>>  
>>          xlu_cfg_get_defbool(config, "nestedhvm", &b_info->u.hvm.nested_hvm, 0);
>>  
>> -        xlu_cfg_get_defbool(config, "altp2mhvm", &b_info->u.hvm.altp2m, 0);
>> +        /* The config parameter "altp2mhvm" is considered deprecated, however
>> +         * further considered because of legacy reasons. The config parameter
>> +         * "altp2m" shall be used instead. */
>> +        if (!xlu_cfg_get_defbool(config, "altp2mhvm", &b_info->altp2m, 0))
>> +            fprintf(stderr, "WARNING: Specifying \"altp2mhvm\" is deprecated. "
>> +                    "Please use a \"altp2m\" instead.\n");
> In this case you should:
>
>  if both altp2mhvm and altp2m are present, use the latter.
>  if only altp2mhvm is present, honour it.

This is exactly the behavior right now (see comment below):

+ /* The config parameter "altp2m" replaces the parameter "altp2mhvm".
+  * For legacy reasons, both parameters are accepted on x86 HVM guests
+  * (only "altp2m" is accepted on ARM guests). If both parameters are
+  * given, it must be considered that the config parameter "altp2m" will
+  * always have priority over "altp2mhvm". */

The warning is just displayed at this point; "altp2mhvm" is still
considered a valid parameter.

>
> Note that we have not yet removed the old option. Ideally we would give
> users a transition period before removing the option.
>
> Also you need to patch docs/man/xl.pod.1.in for the new option.

I cannot find any entry concerning the current "altp2mhvm" option.
Please correct me if I am wrong, but as far as I understand, this
document holds information about the "xl" tool. Since altp2m is
currently not controlled through the xl tool, I am actually not sure
whether it is the right place for it. I believe you meant the file
docs/man/xl.cfg.pod.5. If yes, I will gladly extend it, thank you.

>>  
>>          xlu_cfg_replace_string(config, "smbios_firmware",
>>                                 &b_info->u.hvm.smbios_firmware, 0);
>> @@ -1727,6 +1732,25 @@ static void parse_config_data(const char *config_source,
>>          abort();
>>      }
>>  
>> +    bool altp2m_support = false;
>> +#if defined(__i386__) || defined(__x86_64__)
>> +    /* Alternate p2m support on x86 is available only for HVM guests. */
>> +    if (b_info->type == LIBXL_DOMAIN_TYPE_HVM)
>> +        altp2m_support = true;
>> +#elif defined(__arm__) || defined(__aarch64__)
>> +    /* Alternate p2m support on ARM is available for all guests. */
>> +    altp2m_support = true;
>> +#endif
>> +
> I don't think you need to care too much about dead option here.
> Xl should be able to set altp2m field all the time. 

I am actually not sure what you mean by the dead option. Could you be
more specific, please?
 
Also, in the last patch, we came to the agreement to guard the altp2m
functionality solely through the HVM param (instead of the additional
Xen cmd-line option altp2m), which is set through libxl. Because of
this, the entire set of domU initialization routines depends on this
option being set at the moment of domain creation -- not after it. That
is, being able to set the altp2m option at any time would actually
break the system.

> And there should be
> code in libxl to handle situation when altp2m is not available.

I am not so sure about that either:

Currently, altp2m is supported on x86 HVM and on ARM.
 * On x86, altp2m depends on the HW supporting altp2m. Therefore, the
cmd-line option "altp2m" is used to activate altp2m. All libxl
interaction with the altp2m subsystem will be discarded at this point.
 * On ARM, altp2m is implemented purely in SW. That is, there are no
ARM architectures that would not support altp2m -- so the idea.
 * All other architectures should not be able to activate altp2m, which
is why I chose the #defines (__arm__, __x86_64__, ...) to guard the
option above.

>> +    if (altp2m_support) {
>> +        /* The config parameter "altp2m" replaces the parameter "altp2mhvm".
>> +         * For legacy reasons, both parameters are accepted on x86 HVM guests
>> +         * (only "altp2m" is accepted on ARM guests). If both parameters are
>> +         * given, it must be considered that the config parameter "altp2m" will
>> +         * always have priority over "altp2mhvm". */
>> +        xlu_cfg_get_defbool(config, "altp2m", &b_info->altp2m, 0);
>> +    }
>> +
> As always, if what I said above doesn't make sense to you, feel free to
> ask.

Thank you very much for your review.

Best regards,
~Sergej



* Re: [PATCH v2 00/25] arm/altp2m: Introducing altp2m to ARM.
  2016-08-02 11:17           ` George Dunlap
@ 2016-08-02 15:48             ` Tamas K Lengyel
  2016-08-02 16:05               ` George Dunlap
  0 siblings, 1 reply; 159+ messages in thread
From: Tamas K Lengyel @ 2016-08-02 15:48 UTC (permalink / raw)
  To: George Dunlap
  Cc: Stefano Stabellini, George Dunlap, Sergej Proskurin, Tim Deegan,
	Julien Grall, Jan Beulich, Andrew Cooper, Xen-devel

On Tue, Aug 2, 2016 at 5:17 AM, George Dunlap <george.dunlap@citrix.com> wrote:
> On 02/08/16 08:38, Julien Grall wrote:
>> Hello Tamas,
>>
>> On 01/08/2016 21:41, Tamas K Lengyel wrote:
>>> On Mon, Aug 1, 2016 at 1:55 PM, Julien Grall <julien.grall@arm.com>
>>> wrote:
>>>>> we did discuss whether altp2m on ARM should be exposed to guests or
>>>>> not but we did not agree whether restricting it on ARM is absolutely
>>>>> necessary. Altp2m was designed even on the x86 to be accessible from
>>>>> within the guest on all systems irrespective of actual hardware
>>>>> support for it.  Thus, this design fits ARM as well where there is no
>>>>> dedicated hardware support, from the altp2m perspective there is no
>>>>> difference.
>>>>
>>>>
>>>> Really? I looked at the design document [1] which is Intel focus.
>>>> Similar
>>>> think to the code (see p2m_flush_altp2m in arch/x86/mm/p2m.c).
>>>
>>> That design cover letter mentions specifically "Both VMFUNC and #VE
>>> are designed such that a VMM can emulate them on legacy CPUs". While
>>> they certainly had only Intel hardware in-mind, the software route can
>>> also be taken on ARM as well. As our primary use-case is purely
>>> external use of altp2m we have not implemented the bits that enable
>>> the injection of mem_access faults into the guest (equivalent of #VE).
>>> Whether without that the altp2m switching from within the guest make
>>> sense or not is beyond the scope of this series but as it could
>>> technically be implemented in the future, I don't see a reason to
>>> disable that possibility right away.
>>
>> The question here, is how a guest could take advantage to access to
>> altp2m on ARM today? Whilst on x86 a guest could be notified about
>> memaccess change, this is not yet the case on ARM.
>>
>> So, from my understanding, exposing this feature to a guest is like
>> exposing a no-op with side effects. We should avoid to expose feature to
>> the guest until there is a real usage and the guest could do something
>> useful with it.
>
> It seems like having guest altp2m support without the equivalent of a
> #VE does seem pretty useless.  Would you disagree with this assessment,
> Tamas?
>
> Every interface we expose to the guest increases the surface of attack;
> so it seems like until there is a usecase for guest altp2m, we should
> probably disable it.
>

Hi George,
I disagree. On x86 using VMFUNC EPTP switching is not bound to #VE in
any way. The two can certainly benefit from being used together but
there is no enforced interdependence between the two. It is certainly
possible to derive a use-case for just having the altp2m switch
operations available to the guest. For example, I could imagine the
gfn remapping be used to protect kernel memory areas against
information disclosure by only switching to the accessible mapping
when certain conditions are met.

As our use case is purely external, implementing the emulated #VE has
been deemed out of scope at this time, but it could certainly be
implemented for ARM as well. Now that I'm thinking about it, it might
actually not be necessary to implement #VE the way x86 does, by
injecting an interrupt, as we might just be able to allow the domain
to enable the existing mem_access ring directly.

Tamas


* Re: [PATCH v2 00/25] arm/altp2m: Introducing altp2m to ARM.
  2016-08-02  7:38         ` Julien Grall
  2016-08-02 11:17           ` George Dunlap
@ 2016-08-02 16:00           ` Tamas K Lengyel
  2016-08-02 16:11             ` Julien Grall
  1 sibling, 1 reply; 159+ messages in thread
From: Tamas K Lengyel @ 2016-08-02 16:00 UTC (permalink / raw)
  To: Julien Grall
  Cc: Stefano Stabellini, George Dunlap, Sergej Proskurin, Tim Deegan,
	Jan Beulich, Andrew Cooper, Xen-devel

On Tue, Aug 2, 2016 at 1:38 AM, Julien Grall <julien.grall@arm.com> wrote:
> Hello Tamas,
>
> On 01/08/2016 21:41, Tamas K Lengyel wrote:
>>
>> On Mon, Aug 1, 2016 at 1:55 PM, Julien Grall <julien.grall@arm.com> wrote:
>>>>
>>>> we did discuss whether altp2m on ARM should be exposed to guests or
>>>> not but we did not agree whether restricting it on ARM is absolutely
>>>> necessary. Altp2m was designed even on the x86 to be accessible from
>>>> within the guest on all systems irrespective of actual hardware
>>>> support for it.  Thus, this design fits ARM as well where there is no
>>>> dedicated hardware support, from the altp2m perspective there is no
>>>> difference.
>>>
>>>
>>>
>>> Really? I looked at the design document [1] which is Intel focus. Similar
>>> think to the code (see p2m_flush_altp2m in arch/x86/mm/p2m.c).
>>
>>
>> That design cover letter mentions specifically "Both VMFUNC and #VE
>> are designed such that a VMM can emulate them on legacy CPUs". While
>> they certainly had only Intel hardware in-mind, the software route can
>> also be taken on ARM as well. As our primary use-case is purely
>> external use of altp2m we have not implemented the bits that enable
>> the injection of mem_access faults into the guest (equivalent of #VE).
>> Whether without that the altp2m switching from within the guest make
>> sense or not is beyond the scope of this series but as it could
>> technically be implemented in the future, I don't see a reason to
>> disable that possibility right away.
>
>
> The question here, is how a guest could take advantage to access to altp2m
> on ARM today? Whilst on x86 a guest could be notified about memaccess
> change, this is not yet the case on ARM.
>
> So, from my understanding, exposing this feature to a guest is like exposing
> a no-op with side effects. We should avoid to expose feature to the guest
> until there is a real usage and the guest could do something useful with it.
>
> This has always been the case where some features were not fully ported on
> ARM until there is an actual usage (or we differ because there will be
> no-usage). The interface is already there, so a future Xen release can
> decide to give access to the guest when (and only when) this will be useful.
>

Hi Julien,
as I said, our use case is purely external, so I don't have an actual
use case for anything being accessible from within the guest. However,
I could imagine the gfn remapping being used to protect kernel memory
areas against information disclosure by only switching to the
accessible mapping when certain conditions are met. Also, I have been
able to use mem_access from domUs with the use of XSM, so I believe it
would be possible for a domain to enable mem_access on itself that way
and thus not have to implement #VE exactly the way x86 does while still
having feature parity.

Tamas


* Re: [PATCH v2 00/25] arm/altp2m: Introducing altp2m to ARM.
  2016-08-02 15:48             ` Tamas K Lengyel
@ 2016-08-02 16:05               ` George Dunlap
  2016-08-02 16:09                 ` Tamas K Lengyel
  2016-08-02 16:40                 ` Julien Grall
  0 siblings, 2 replies; 159+ messages in thread
From: George Dunlap @ 2016-08-02 16:05 UTC (permalink / raw)
  To: Tamas K Lengyel
  Cc: Stefano Stabellini, George Dunlap, Sergej Proskurin, Tim Deegan,
	Julien Grall, Jan Beulich, Andrew Cooper, Xen-devel

On 02/08/16 16:48, Tamas K Lengyel wrote:
> On Tue, Aug 2, 2016 at 5:17 AM, George Dunlap <george.dunlap@citrix.com> wrote:
>> On 02/08/16 08:38, Julien Grall wrote:
>>> Hello Tamas,
>>>
>>> On 01/08/2016 21:41, Tamas K Lengyel wrote:
>>>> On Mon, Aug 1, 2016 at 1:55 PM, Julien Grall <julien.grall@arm.com>
>>>> wrote:
>>>>>> we did discuss whether altp2m on ARM should be exposed to guests or
>>>>>> not but we did not agree whether restricting it on ARM is absolutely
>>>>>> necessary. Altp2m was designed even on the x86 to be accessible from
>>>>>> within the guest on all systems irrespective of actual hardware
>>>>>> support for it.  Thus, this design fits ARM as well where there is no
>>>>>> dedicated hardware support, from the altp2m perspective there is no
>>>>>> difference.
>>>>>
>>>>>
>>>>> Really? I looked at the design document [1] which is Intel focus.
>>>>> Similar
>>>>> think to the code (see p2m_flush_altp2m in arch/x86/mm/p2m.c).
>>>>
>>>> That design cover letter mentions specifically "Both VMFUNC and #VE
>>>> are designed such that a VMM can emulate them on legacy CPUs". While
>>>> they certainly had only Intel hardware in-mind, the software route can
>>>> also be taken on ARM as well. As our primary use-case is purely
>>>> external use of altp2m we have not implemented the bits that enable
>>>> the injection of mem_access faults into the guest (equivalent of #VE).
>>>> Whether without that the altp2m switching from within the guest make
>>>> sense or not is beyond the scope of this series but as it could
>>>> technically be implemented in the future, I don't see a reason to
>>>> disable that possibility right away.
>>>
>>> The question here, is how a guest could take advantage to access to
>>> altp2m on ARM today? Whilst on x86 a guest could be notified about
>>> memaccess change, this is not yet the case on ARM.
>>>
>>> So, from my understanding, exposing this feature to a guest is like
>>> exposing a no-op with side effects. We should avoid to expose feature to
>>> the guest until there is a real usage and the guest could do something
>>> useful with it.
>>
>> It seems like having guest altp2m support without the equivalent of a
>> #VE does seem pretty useless.  Would you disagree with this assessment,
>> Tamas?
>>
>> Every interface we expose to the guest increases the surface of attack;
>> so it seems like until there is a usecase for guest altp2m, we should
>> probably disable it.
>>
> 
> Hi George,
> I disagree. On x86 using VMFUNC EPTP switching is not bound to #VE in
> any way. The two can certainly benefit from being used together but
> there is no enforced interdependence between the two. It is certainly
> possible to derive a use-case for just having the altp2m switch
> operations available to the guest. For example, I could imagine the
> gfn remapping be used to protect kernel memory areas against
> information disclosure by only switching to the accessible mapping
> when certain conditions are met.

That's true -- I suppose gfn remapping is something that would be useful
even without #VE.

> As our usecase is purely external implementing the emulated #VE at
> this time has been deemed out-of-scope but it could be certainly
> implemented for ARM as well. Now that I'm thinking about it it might
> actually not be necessary to implement the #VE at all the way x86 does
> by injecting an interrupt as we might just be able to allow the domain
> to enable the existing mem_access ring directly.

That would be a possibility, but before that could be considered a
feature we'd need someone to go through and make sure that this
self-mem_access functionality worked properly.  (And I take it at the
moment that's not work you're volunteering to do.)

But the gfn remapping is something that could be used immediately.

I'd leave it to Julien to determine the window of functionality he wants
to support.

 -George


* Re: [PATCH v2 00/25] arm/altp2m: Introducing altp2m to ARM.
  2016-08-02  7:34     ` Julien Grall
@ 2016-08-02 16:08       ` Andrew Cooper
  2016-08-02 16:30         ` Tamas K Lengyel
  2016-08-03 11:53         ` Julien Grall
  0 siblings, 2 replies; 159+ messages in thread
From: Andrew Cooper @ 2016-08-02 16:08 UTC (permalink / raw)
  To: Julien Grall, Sergej Proskurin, xen-devel, Stefano Stabellini

On 02/08/16 08:34, Julien Grall wrote:
> Hi Andrew,
>
> On 02/08/2016 00:14, Andrew Cooper wrote:
>> On 01/08/2016 19:15, Julien Grall wrote:
>>> On 01/08/16 18:10, Sergej Proskurin wrote:
>>>>
>>>> Hello all,
>>>
>>> Hello Sergej,
>>>
>>>> The following patch series can be found on Github[0] and is part of my
>>>> contribution to this year's Google Summer of Code (GSoC)[1]. My
>>>> project is
>>>> managed by the organization The Honeynet Project. As part of GSoC, I
>>>> am being
>>>> supervised by the Xen developer Tamas K. Lengyel
>>>> <tamas@tklengyel.com>, George
>>>> D. Webster, and Steven Maresca.
>>>>
>>>> In this patch series, we provide an implementation of the altp2m
>>>> subsystem for
>>>> ARM. Our implementation is based on the altp2m subsystem for x86,
>>>> providing
>>>> additional --alternate-- views on the guest's physical memory by
>>>> means of the
>>>> ARM 2nd stage translation mechanism. The patches introduce new HVMOPs
>>>> and
>>>> extend the p2m subsystem. Also, we extend libxl to support altp2m on
>>>> ARM and
>>>> modify xen-access to test the suggested functionality.
>>>>
>>>> To be more precise, altp2m allows to create and switch to additional
>>>> p2m views
>>>> (i.e. gfn to mfn mappings). These views can be manipulated and
>>>> activated as
>>>> will through the provided HVMOPs. In this way, the active guest
>>>> instance in
>>>> question can seamlessly proceed execution without noticing that
>>>> anything has
>>>> changed. The prime scope of application of altp2m is Virtual Machine
>>>> Introspection, where guest systems are analyzed from the outside of
>>>> the VM.
>>>>
>>>> Altp2m can be activated by means of the guest control parameter
>>>> "altp2m" on x86
>>>> and ARM architectures.  The altp2m functionality by default can also
>>>> be used
>>>> from within the guest by design. For use-cases requiring purely
>>>> external access
>>>> to altp2m, a custom XSM policy is necessary on both x86 and ARM.
>>>
>>> As said on the previous version, altp2m operation *should not* be
>>> exposed to ARM guest. Any design written for x86 may not fit exactly
>>> for ARM (and vice versa), you will need to explain why you think we
>>> should follow the same pattern.
>>
>> Sorry, but I am going to step in here and disagree.  All the x86
>> justifications for altp2m being accessible to guests apply equally to
>> ARM, as they are hardware independent.
>>
>> I realise you are maintainer, but the onus is on you to justify why the
>> behaviour should be different between x86 and ARM, rather than simply to
>> complain at it being the same.
>>
>> Naturally, technical issues about the details of the implementation, or
>> the algorithms etc. are of course fine, but I don't see any plausible
>> reason why ARM should purposefully different from x86 in terms of
>> available functionality, and several good reasons why it should be the
>> same (least of all, feature parity across architectures).
>
> The question here, is how a guest could take advantage to access to
> altp2m on ARM today? Whilst on x86 a guest could be notified about
> memaccess change, this is not yet the case on ARM.

Does ARM have anything like #VE whereby an in-guest entity can receive
notification of violations?

If not, then I can't see any way that an in-guest entity can usefully
use altp2m, and by that logic, shouldn't have access to something it
can't usefully use.

I suppose something could be synthesized with an event channel, if
in-guest use is wanted/needed.

~Andrew


* Re: [PATCH v2 00/25] arm/altp2m: Introducing altp2m to ARM.
  2016-08-02 16:05               ` George Dunlap
@ 2016-08-02 16:09                 ` Tamas K Lengyel
  2016-08-02 16:40                 ` Julien Grall
  1 sibling, 0 replies; 159+ messages in thread
From: Tamas K Lengyel @ 2016-08-02 16:09 UTC (permalink / raw)
  To: George Dunlap
  Cc: Stefano Stabellini, George Dunlap, Sergej Proskurin, Tim Deegan,
	Julien Grall, Jan Beulich, Andrew Cooper, Xen-devel

On Tue, Aug 2, 2016 at 10:05 AM, George Dunlap <george.dunlap@citrix.com> wrote:
> On 02/08/16 16:48, Tamas K Lengyel wrote:
>> On Tue, Aug 2, 2016 at 5:17 AM, George Dunlap <george.dunlap@citrix.com> wrote:
>>> On 02/08/16 08:38, Julien Grall wrote:
>>>> Hello Tamas,
>>>>
>>>> On 01/08/2016 21:41, Tamas K Lengyel wrote:
>>>>> On Mon, Aug 1, 2016 at 1:55 PM, Julien Grall <julien.grall@arm.com>
>>>>> wrote:
>>>>>>> we did discuss whether altp2m on ARM should be exposed to guests or
>>>>>>> not but we did not agree whether restricting it on ARM is absolutely
>>>>>>> necessary. Altp2m was designed even on the x86 to be accessible from
>>>>>>> within the guest on all systems irrespective of actual hardware
>>>>>>> support for it.  Thus, this design fits ARM as well where there is no
>>>>>>> dedicated hardware support, from the altp2m perspective there is no
>>>>>>> difference.
>>>>>>
>>>>>>
>>>>>> Really? I looked at the design document [1] which is Intel focus.
>>>>>> Similar
>>>>>> think to the code (see p2m_flush_altp2m in arch/x86/mm/p2m.c).
>>>>>
>>>>> That design cover letter mentions specifically "Both VMFUNC and #VE
>>>>> are designed such that a VMM can emulate them on legacy CPUs". While
>>>>> they certainly had only Intel hardware in-mind, the software route can
>>>>> also be taken on ARM as well. As our primary use-case is purely
>>>>> external use of altp2m we have not implemented the bits that enable
>>>>> the injection of mem_access faults into the guest (equivalent of #VE).
>>>>> Whether without that the altp2m switching from within the guest make
>>>>> sense or not is beyond the scope of this series but as it could
>>>>> technically be implemented in the future, I don't see a reason to
>>>>> disable that possibility right away.
>>>>
>>>> The question here, is how a guest could take advantage to access to
>>>> altp2m on ARM today? Whilst on x86 a guest could be notified about
>>>> memaccess change, this is not yet the case on ARM.
>>>>
>>>> So, from my understanding, exposing this feature to a guest is like
>>>> exposing a no-op with side effects. We should avoid to expose feature to
>>>> the guest until there is a real usage and the guest could do something
>>>> useful with it.
>>>
>>> It seems like having guest altp2m support without the equivalent of a
>>> #VE does seem pretty useless.  Would you disagree with this assessment,
>>> Tamas?
>>>
>>> Every interface we expose to the guest increases the surface of attack;
>>> so it seems like until there is a usecase for guest altp2m, we should
>>> probably disable it.
>>>
>>
>> Hi George,
>> I disagree. On x86 using VMFUNC EPTP switching is not bound to #VE in
>> any way. The two can certainly benefit from being used together but
>> there is no enforced interdependence between the two. It is certainly
>> possible to derive a use-case for just having the altp2m switch
>> operations available to the guest. For example, I could imagine the
>> gfn remapping be used to protect kernel memory areas against
>> information disclosure by only switching to the accessible mapping
>> when certain conditions are met.
>
> That's true -- I suppose gfn remapping is something that would be useful
> even without #VE.
>
>> As our usecase is purely external implementing the emulated #VE at
>> this time has been deemed out-of-scope but it could be certainly
>> implemented for ARM as well. Now that I'm thinking about it it might
>> actually not be necessary to implement the #VE at all the way x86 does
>> by injecting an interrupt as we might just be able to allow the domain
>> to enable the existing mem_access ring directly.
>
> That would be a possibility, but before that could be considered a
> feature we'd need someone to go through and make sure that this
> self-mem_access funcitonality worked properly.  (And I take it at the
> moment that's not work you're volunteering to do.)

Right, not at this time, it's a bit beyond our scope for now.

Thanks,
Tamas


* Re: [PATCH v2 00/25] arm/altp2m: Introducing altp2m to ARM.
  2016-08-02 16:00           ` Tamas K Lengyel
@ 2016-08-02 16:11             ` Julien Grall
  2016-08-02 16:22               ` Tamas K Lengyel
  0 siblings, 1 reply; 159+ messages in thread
From: Julien Grall @ 2016-08-02 16:11 UTC (permalink / raw)
  To: Tamas K Lengyel
  Cc: Stefano Stabellini, George Dunlap, Sergej Proskurin, Tim Deegan,
	Jan Beulich, Andrew Cooper, Xen-devel



On 02/08/16 17:00, Tamas K Lengyel wrote:
> On Tue, Aug 2, 2016 at 1:38 AM, Julien Grall <julien.grall@arm.com> wrote:
> Hi Julien,
> as I said our use-case is purely external so I don't have an actual
> use-case for anything being accessible from within the guest. However,
> I could imagine the gfn remapping be used to protect kernel memory
> areas against information disclosure by only switching to the
> accessible mapping
> when certain conditions are met. Also, I had been able to use
> mem_access from domUs with the use of XSM so I believe it would be
> possible for a domain to enable mem_access on itself that way and thus
> not having to implement #VE exactly the way x86 does and still have
> feature parity.

I believe that your suggestion does not currently work. memaccess will 
pause the current vCPU whilst the introspection app will handle the 
access (see p2m_mem_access_check). How can the guest handle the event if 
the vCPU has been paused?

Regards,

-- 
Julien Grall


* Re: [PATCH v2 00/25] arm/altp2m: Introducing altp2m to ARM.
  2016-08-02 16:11             ` Julien Grall
@ 2016-08-02 16:22               ` Tamas K Lengyel
  0 siblings, 0 replies; 159+ messages in thread
From: Tamas K Lengyel @ 2016-08-02 16:22 UTC (permalink / raw)
  To: Julien Grall
  Cc: Stefano Stabellini, George Dunlap, Sergej Proskurin, Tim Deegan,
	Jan Beulich, Andrew Cooper, Xen-devel

On Tue, Aug 2, 2016 at 10:11 AM, Julien Grall <julien.grall@arm.com> wrote:
>
>
> On 02/08/16 17:00, Tamas K Lengyel wrote:
>>
>> On Tue, Aug 2, 2016 at 1:38 AM, Julien Grall <julien.grall@arm.com> wrote:
>> Hi Julien,
>> as I said our use-case is purely external so I don't have an actual
>> use-case for anything being accessible from within the guest. However,
>> I could imagine the gfn remapping be used to protect kernel memory
>> areas against information disclosure by only switching to the
>> accessible mapping
>> when certain conditions are met. Also, I had been able to use
>> mem_access from domUs with the use of XSM so I believe it would be
>> possible for a domain to enable mem_access on itself that way and thus
>> not having to implement #VE exactly the way x86 does and still have
>> feature parity.
>
>
> I believe that your suggestion does not currently work. memaccess will pause
> the current vCPU whilst the introspection app will handle the access (see
> p2m_mem_access_check). How can the guest handle the event if the vCPU has
> been paused?
>

True. Not in all cases, though: there are async violations. But yes,
that certainly could be a pain.

Tamas


* Re: [PATCH v2 00/25] arm/altp2m: Introducing altp2m to ARM.
  2016-08-02 16:08       ` Andrew Cooper
@ 2016-08-02 16:30         ` Tamas K Lengyel
  2016-08-03 11:53         ` Julien Grall
  1 sibling, 0 replies; 159+ messages in thread
From: Tamas K Lengyel @ 2016-08-02 16:30 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: Sergej Proskurin, Julien Grall, Stefano Stabellini, Xen-devel

On Tue, Aug 2, 2016 at 10:08 AM, Andrew Cooper
<andrew.cooper3@citrix.com> wrote:
> On 02/08/16 08:34, Julien Grall wrote:
>> Hi Andrew,
>>
>> On 02/08/2016 00:14, Andrew Cooper wrote:
>>> On 01/08/2016 19:15, Julien Grall wrote:
>>>> On 01/08/16 18:10, Sergej Proskurin wrote:
>>>>>
>>>>> Hello all,
>>>>
>>>> Hello Sergej,
>>>>
>>>>> The following patch series can be found on Github[0] and is part of my
>>>>> contribution to this year's Google Summer of Code (GSoC)[1]. My
>>>>> project is
>>>>> managed by the organization The Honeynet Project. As part of GSoC, I
>>>>> am being
>>>>> supervised by the Xen developer Tamas K. Lengyel
>>>>> <tamas@tklengyel.com>, George
>>>>> D. Webster, and Steven Maresca.
>>>>>
>>>>> In this patch series, we provide an implementation of the altp2m
>>>>> subsystem for
>>>>> ARM. Our implementation is based on the altp2m subsystem for x86,
>>>>> providing
>>>>> additional --alternate-- views on the guest's physical memory by
>>>>> means of the
>>>>> ARM 2nd stage translation mechanism. The patches introduce new HVMOPs
>>>>> and
>>>>> extend the p2m subsystem. Also, we extend libxl to support altp2m on
>>>>> ARM and
>>>>> modify xen-access to test the suggested functionality.
>>>>>
>>>>> To be more precise, altp2m allows to create and switch to additional
>>>>> p2m views
>>>>> (i.e. gfn to mfn mappings). These views can be manipulated and
>>>>> activated as
>>>>> will through the provided HVMOPs. In this way, the active guest
>>>>> instance in
>>>>> question can seamlessly proceed execution without noticing that
>>>>> anything has
>>>>> changed. The prime scope of application of altp2m is Virtual Machine
>>>>> Introspection, where guest systems are analyzed from the outside of
>>>>> the VM.
>>>>>
>>>>> Altp2m can be activated by means of the guest control parameter
>>>>> "altp2m" on x86
>>>>> and ARM architectures.  The altp2m functionality by default can also
>>>>> be used
>>>>> from within the guest by design. For use-cases requiring purely
>>>>> external access
>>>>> to altp2m, a custom XSM policy is necessary on both x86 and ARM.
>>>>
>>>> As said on the previous version, altp2m operation *should not* be
>>>> exposed to ARM guest. Any design written for x86 may not fit exactly
>>>> for ARM (and vice versa), you will need to explain why you think we
>>>> should follow the same pattern.
>>>
>>> Sorry, but I am going to step in here and disagree.  All the x86
>>> justifications for altp2m being accessible to guests apply equally to
>>> ARM, as they are hardware independent.
>>>
>>> I realise you are maintainer, but the onus is on you to justify why the
>>> behaviour should be different between x86 and ARM, rather than simply to
>>> complain at it being the same.
>>>
>>> Naturally, technical issues about the details of the implementation, or
>>> the algorithms etc. are of course fine, but I don't see any plausible
>>> reason why ARM should purposefully different from x86 in terms of
>>> available functionality, and several good reasons why it should be the
>>> same (least of all, feature parity across architectures).
>>
>> The question here, is how a guest could take advantage to access to
>> altp2m on ARM today? Whilst on x86 a guest could be notified about
>> memaccess change, this is not yet the case on ARM.
>
> Does ARM have anything like #VE whereby an in-guest entity can receive
> notification of violations?
>
> If not, then I can't see any way that an in-guest entity can usefully
> use altp2m, and by that logic, shouldn't have access to something it
> can't usefully use.

As I mentioned earlier, #VE and VMFUNC are not interdependent, and
there can be use-cases for just having the altp2m switch ops and gfn
remapping, without any use of mem_access or need for a #VE equivalent.
It's not our use-case, so we are fine with restricting the whole
altp2m system from the guest if that's the consensus.

>
> I suppose something could be synthesized with an event channel, if
> in-guest use is wanted/needed.

True, that might be a viable route for a #VE equivalent in the future.

Thanks,
Tamas


* Re: [PATCH v2 00/25] arm/altp2m: Introducing altp2m to ARM.
  2016-08-02 16:05               ` George Dunlap
  2016-08-02 16:09                 ` Tamas K Lengyel
@ 2016-08-02 16:40                 ` Julien Grall
  2016-08-02 17:01                   ` Tamas K Lengyel
  2016-08-02 17:22                   ` Tamas K Lengyel
  1 sibling, 2 replies; 159+ messages in thread
From: Julien Grall @ 2016-08-02 16:40 UTC (permalink / raw)
  To: George Dunlap, Tamas K Lengyel
  Cc: Stefano Stabellini, George Dunlap, Sergej Proskurin, Tim Deegan,
	Jan Beulich, Andrew Cooper, Xen-devel



On 02/08/16 17:05, George Dunlap wrote:
> On 02/08/16 16:48, Tamas K Lengyel wrote:
>> On Tue, Aug 2, 2016 at 5:17 AM, George Dunlap <george.dunlap@citrix.com> wrote:
>>> On 02/08/16 08:38, Julien Grall wrote:
>>>> Hello Tamas,
>>>>
>>>> On 01/08/2016 21:41, Tamas K Lengyel wrote:
>>>>> On Mon, Aug 1, 2016 at 1:55 PM, Julien Grall <julien.grall@arm.com>
>>>>> wrote:
>>>>>>> we did discuss whether altp2m on ARM should be exposed to guests or
>>>>>>> not but we did not agree whether restricting it on ARM is absolutely
>>>>>>> necessary. Altp2m was designed even on the x86 to be accessible from
>>>>>>> within the guest on all systems irrespective of actual hardware
>>>>>>> support for it.  Thus, this design fits ARM as well where there is no
>>>>>>> dedicated hardware support, from the altp2m perspective there is no
>>>>>>> difference.
>>>>>>
>>>>>>
>>>>>> Really? I looked at the design document [1] which is Intel focus.
>>>>>> Similar
>>>>>> think to the code (see p2m_flush_altp2m in arch/x86/mm/p2m.c).
>>>>>
>>>>> That design cover letter mentions specifically "Both VMFUNC and #VE
>>>>> are designed such that a VMM can emulate them on legacy CPUs". While
>>>>> they certainly had only Intel hardware in-mind, the software route can
>>>>> also be taken on ARM as well. As our primary use-case is purely
>>>>> external use of altp2m we have not implemented the bits that enable
>>>>> the injection of mem_access faults into the guest (equivalent of #VE).
>>>>> Whether without that the altp2m switching from within the guest make
>>>>> sense or not is beyond the scope of this series but as it could
>>>>> technically be implemented in the future, I don't see a reason to
>>>>> disable that possibility right away.
>>>>
>>>> The question here, is how a guest could take advantage to access to
>>>> altp2m on ARM today? Whilst on x86 a guest could be notified about
>>>> memaccess change, this is not yet the case on ARM.
>>>>
>>>> So, from my understanding, exposing this feature to a guest is like
>>>> exposing a no-op with side effects. We should avoid to expose feature to
>>>> the guest until there is a real usage and the guest could do something
>>>> useful with it.
>>>
>>> It seems like having guest altp2m support without the equivalent of a
>>> #VE does seem pretty useless.  Would you disagree with this assessment,
>>> Tamas?
>>>
>>> Every interface we expose to the guest increases the surface of attack;
>>> so it seems like until there is a usecase for guest altp2m, we should
>>> probably disable it.
>>>
>>
>> Hi George,
>> I disagree. On x86 using VMFUNC EPTP switching is not bound to #VE in
>> any way. The two can certainly benefit from being used together but
>> there is no enforced interdependence between the two. It is certainly
>> possible to derive a use-case for just having the altp2m switch
>> operations available to the guest. For example, I could imagine the
>> gfn remapping be used to protect kernel memory areas against
>> information disclosure by only switching to the accessible mapping
>> when certain conditions are met.
>
> That's true -- I suppose gfn remapping is something that would be useful
> even without #VE.
>
>> As our usecase is purely external implementing the emulated #VE at
>> this time has been deemed out-of-scope but it could be certainly
>> implemented for ARM as well. Now that I'm thinking about it it might
>> actually not be necessary to implement the #VE at all the way x86 does
>> by injecting an interrupt as we might just be able to allow the domain
>> to enable the existing mem_access ring directly.
>
> That would be a possibility, but before that could be considered a
> feature we'd need someone to go through and make sure that this
> self-mem_access funcitonality worked properly.  (And I take it at the
> moment that's not work you're volunteering to do.)
>
> But the gfn remapping is something that could be used immediately.

I looked at the implementation of gfn remapping and I am a bit confused.
 From my understanding of the code, the same MFN could be mapped twice 
in the altp2m. Is that right? May I ask the purpose of this?

So if the guest is removing the mapping from the host p2m (by calling 
XENMEM_decrease_reservation), only one of the mappings will be removed.

That will leave the other mapping potentially present in one of the 
altp2m views. However, AFAICT, x86 does take a reference on the page 
when mapping. So this may point to memory that no longer belongs to 
the domain. Did I miss anything?

After writing the above paragraphs, I noticed that p2m_change_altp2m_gfn 
seems to take care of this use case.

However, the locking used to protect min_remapped_gfn and 
max_remapped_gfn is not consistent: p2m_altp2m_propagate_change 
protects them by taking the altp2m_list_lock, whilst 
p2m_change_altp2m_gfn uses the altp2m lock.

Maybe George or Tamas could give more insight here.

BTW, the x86 version of p2m_change_altp2m_gfn (I have not yet looked at 
the ARM one) does not seem to lock the hostp2m. Is that normal?

Regards,

-- 
Julien Grall


* Re: [PATCH v2 00/25] arm/altp2m: Introducing altp2m to ARM.
  2016-08-02 16:40                 ` Julien Grall
@ 2016-08-02 17:01                   ` Tamas K Lengyel
  2016-08-02 17:22                   ` Tamas K Lengyel
  1 sibling, 0 replies; 159+ messages in thread
From: Tamas K Lengyel @ 2016-08-02 17:01 UTC (permalink / raw)
  To: Julien Grall
  Cc: Stefano Stabellini, George Dunlap, Sergej Proskurin, Tim Deegan,
	George Dunlap, Jan Beulich, Andrew Cooper, Xen-devel

On Tue, Aug 2, 2016 at 10:40 AM, Julien Grall <julien.grall@arm.com> wrote:
>
>
> On 02/08/16 17:05, George Dunlap wrote:
>>
>> On 02/08/16 16:48, Tamas K Lengyel wrote:
>>>
>>> On Tue, Aug 2, 2016 at 5:17 AM, George Dunlap <george.dunlap@citrix.com>
>>> wrote:
>>>>
>>>> On 02/08/16 08:38, Julien Grall wrote:
>>>>>
>>>>> Hello Tamas,
>>>>>
>>>>> On 01/08/2016 21:41, Tamas K Lengyel wrote:
>>>>>>
>>>>>> On Mon, Aug 1, 2016 at 1:55 PM, Julien Grall <julien.grall@arm.com>
>>>>>> wrote:
>>>>>>>>
>>>>>>>> we did discuss whether altp2m on ARM should be exposed to guests or
>>>>>>>> not but we did not agree whether restricting it on ARM is absolutely
>>>>>>>> necessary. Altp2m was designed even on the x86 to be accessible from
>>>>>>>> within the guest on all systems irrespective of actual hardware
>>>>>>>> support for it.  Thus, this design fits ARM as well where there is
>>>>>>>> no
>>>>>>>> dedicated hardware support, from the altp2m perspective there is no
>>>>>>>> difference.
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> Really? I looked at the design document [1] which is Intel focus.
>>>>>>> Similar
>>>>>>> think to the code (see p2m_flush_altp2m in arch/x86/mm/p2m.c).
>>>>>>
>>>>>>
>>>>>> That design cover letter mentions specifically "Both VMFUNC and #VE
>>>>>> are designed such that a VMM can emulate them on legacy CPUs". While
>>>>>> they certainly had only Intel hardware in-mind, the software route can
>>>>>> also be taken on ARM as well. As our primary use-case is purely
>>>>>> external use of altp2m we have not implemented the bits that enable
>>>>>> the injection of mem_access faults into the guest (equivalent of #VE).
>>>>>> Whether without that the altp2m switching from within the guest make
>>>>>> sense or not is beyond the scope of this series but as it could
>>>>>> technically be implemented in the future, I don't see a reason to
>>>>>> disable that possibility right away.
>>>>>
>>>>>
>>>>> The question here, is how a guest could take advantage to access to
>>>>> altp2m on ARM today? Whilst on x86 a guest could be notified about
>>>>> memaccess change, this is not yet the case on ARM.
>>>>>
>>>>> So, from my understanding, exposing this feature to a guest is like
>>>>> exposing a no-op with side effects. We should avoid to expose feature
>>>>> to
>>>>> the guest until there is a real usage and the guest could do something
>>>>> useful with it.
>>>>
>>>>
>>>> It seems like having guest altp2m support without the equivalent of a
>>>> #VE does seem pretty useless.  Would you disagree with this assessment,
>>>> Tamas?
>>>>
>>>> Every interface we expose to the guest increases the surface of attack;
>>>> so it seems like until there is a usecase for guest altp2m, we should
>>>> probably disable it.
>>>>
>>>
>>> Hi George,
>>> I disagree. On x86 using VMFUNC EPTP switching is not bound to #VE in
>>> any way. The two can certainly benefit from being used together but
>>> there is no enforced interdependence between the two. It is certainly
>>> possible to derive a use-case for just having the altp2m switch
>>> operations available to the guest. For example, I could imagine the
>>> gfn remapping be used to protect kernel memory areas against
>>> information disclosure by only switching to the accessible mapping
>>> when certain conditions are met.
>>
>>
>> That's true -- I suppose gfn remapping is something that would be useful
>> even without #VE.
>>
>>> As our usecase is purely external implementing the emulated #VE at
>>> this time has been deemed out-of-scope but it could be certainly
>>> implemented for ARM as well. Now that I'm thinking about it it might
>>> actually not be necessary to implement the #VE at all the way x86 does
>>> by injecting an interrupt as we might just be able to allow the domain
>>> to enable the existing mem_access ring directly.
>>
>>
>> That would be a possibility, but before that could be considered a
>> feature we'd need someone to go through and make sure that this
>> self-mem_access funcitonality worked properly.  (And I take it at the
>> moment that's not work you're volunteering to do.)
>>
>> But the gfn remapping is something that could be used immediately.
>
>
> I looked at the implementation of gfn remapping and I am a bit confused.
> From my understanding of the code, the same MFN could be mapped twice in the
> altp2m. Is that right? May I ask the purpose of this?

Yes, that's correct. I can tell you my use-case for it. I'm using
breakpoints to trace the execution of the kernel by writing 0xCC to
function entry points and configuring the VMCS to trap to the VMM when
such instructions execute.

Now, the kernel can detect that these instructions are written to its
memory and, for example, Windows would blue-screen. So, to avoid that, I
create a shadow copy of the page and only add breakpoints there. Then I
create an altp2m view and use gfn remapping to point the gfn to this
new mfn. This view is execute-only, thus when the kernel tries to
read, we can switch back to view 0 where it just sees its original
page without breakpoints.

At this point the shadow copy is also in the guest_physmap, so a
malicious kernel could just scan the physical memory and search for
such shadow copies. Thus, to avoid detection, I also have an empty
page added to the physmap. In both the default view and the execute
view all shadow copy pages are marked non-readable using their actual
gfn. By creating another, third view, we remap all shadow copy pages
to the empty page. This way, if the kernel tries to read the
contents of the shadow copy pages through their actual gfns, we can
switch to this third view and the kernel will only see a blank page.

Tamas

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 159+ messages in thread

* Re: [PATCH v2 00/25] arm/altp2m: Introducing altp2m to ARM.
  2016-08-02 16:40                 ` Julien Grall
  2016-08-02 17:01                   ` Tamas K Lengyel
@ 2016-08-02 17:22                   ` Tamas K Lengyel
  1 sibling, 0 replies; 159+ messages in thread
From: Tamas K Lengyel @ 2016-08-02 17:22 UTC (permalink / raw)
  To: Julien Grall
  Cc: Stefano Stabellini, George Dunlap, Sergej Proskurin, Tim Deegan,
	George Dunlap, Jan Beulich, Andrew Cooper, Xen-devel

On Tue, Aug 2, 2016 at 10:40 AM, Julien Grall <julien.grall@arm.com> wrote:
>
>
> On 02/08/16 17:05, George Dunlap wrote:
>>
>> On 02/08/16 16:48, Tamas K Lengyel wrote:
>>>
>>> On Tue, Aug 2, 2016 at 5:17 AM, George Dunlap <george.dunlap@citrix.com>
>>> wrote:
>>>>
>>>> On 02/08/16 08:38, Julien Grall wrote:
>>>>>
>>>>> Hello Tamas,
>>>>>
>>>>> On 01/08/2016 21:41, Tamas K Lengyel wrote:
>>>>>>
>>>>>> On Mon, Aug 1, 2016 at 1:55 PM, Julien Grall <julien.grall@arm.com>
>>>>>> wrote:
>>>>>>>>
>>>>>>>> we did discuss whether altp2m on ARM should be exposed to guests or
>>>>>>>> not but we did not agree whether restricting it on ARM is absolutely
>>>>>>>> necessary. Altp2m was designed even on the x86 to be accessible from
>>>>>>>> within the guest on all systems irrespective of actual hardware
>>>>>>>> support for it.  Thus, this design fits ARM as well where there is
>>>>>>>> no
>>>>>>>> dedicated hardware support, from the altp2m perspective there is no
>>>>>>>> difference.
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> Really? I looked at the design document [1] which is Intel focus.
>>>>>>> Similar
>>>>>>> think to the code (see p2m_flush_altp2m in arch/x86/mm/p2m.c).
>>>>>>
>>>>>>
>>>>>> That design cover letter mentions specifically "Both VMFUNC and #VE
>>>>>> are designed such that a VMM can emulate them on legacy CPUs". While
>>>>>> they certainly had only Intel hardware in-mind, the software route can
>>>>>> also be taken on ARM as well. As our primary use-case is purely
>>>>>> external use of altp2m we have not implemented the bits that enable
>>>>>> the injection of mem_access faults into the guest (equivalent of #VE).
>>>>>> Whether without that the altp2m switching from within the guest make
>>>>>> sense or not is beyond the scope of this series but as it could
>>>>>> technically be implemented in the future, I don't see a reason to
>>>>>> disable that possibility right away.
>>>>>
>>>>>
>>>>> The question here, is how a guest could take advantage to access to
>>>>> altp2m on ARM today? Whilst on x86 a guest could be notified about
>>>>> memaccess change, this is not yet the case on ARM.
>>>>>
>>>>> So, from my understanding, exposing this feature to a guest is like
>>>>> exposing a no-op with side effects. We should avoid to expose feature
>>>>> to
>>>>> the guest until there is a real usage and the guest could do something
>>>>> useful with it.
>>>>
>>>>
>>>> It seems like having guest altp2m support without the equivalent of a
>>>> #VE does seem pretty useless.  Would you disagree with this assessment,
>>>> Tamas?
>>>>
>>>> Every interface we expose to the guest increases the surface of attack;
>>>> so it seems like until there is a usecase for guest altp2m, we should
>>>> probably disable it.
>>>>
>>>
>>> Hi George,
>>> I disagree. On x86 using VMFUNC EPTP switching is not bound to #VE in
>>> any way. The two can certainly benefit from being used together but
>>> there is no enforced interdependence between the two. It is certainly
>>> possible to derive a use-case for just having the altp2m switch
>>> operations available to the guest. For example, I could imagine the
>>> gfn remapping be used to protect kernel memory areas against
>>> information disclosure by only switching to the accessible mapping
>>> when certain conditions are met.
>>
>>
>> That's true -- I suppose gfn remapping is something that would be useful
>> even without #VE.
>>
>>> As our usecase is purely external implementing the emulated #VE at
>>> this time has been deemed out-of-scope but it could be certainly
>>> implemented for ARM as well. Now that I'm thinking about it it might
>>> actually not be necessary to implement the #VE at all the way x86 does
>>> by injecting an interrupt as we might just be able to allow the domain
>>> to enable the existing mem_access ring directly.
>>
>>
>> That would be a possibility, but before that could be considered a
>> feature we'd need someone to go through and make sure that this
>> self-mem_access funcitonality worked properly.  (And I take it at the
>> moment that's not work you're volunteering to do.)
>>
>> But the gfn remapping is something that could be used immediately.
>
>
> I looked at the implementation of gfn remapping and I am a bit confused.
> From my understanding of the code, the same MFN could be mapped twice in the
> altp2m. Is that right? May I ask the purpose of this?
>
> So if the guest is removing the mapping from the host p2m (by calling
> XENMEM_decrease_reservation), only one of the mapping will be removed.
>
> That will leave the other mapping potentially mapped in one of the altp2m.
> However, AFAICT, x86 does take a reference on the page for mapping. So this
> may point to memory that does not belong to the domain anymore. Did I miss
> anything?
>
> After writing the above paragraphs, I noticed that p2m_change_altp2m_gfn
> seem to take care of this use case.

Right, using min/max_remapped_gfn, p2m_altp2m_propagate_change
can check whether the change involves the removal of an mfn affecting a
remap. I'm not a fan of the current implementation, which just resets
the views altogether if this happens, but I can't really think
of a better way of handling it either. In the end it's easier for the
toolstack side to make sane decisions when it's removing memory from
the domain.

Tamas


* Re: [PATCH v2 00/25] arm/altp2m: Introducing altp2m to ARM.
  2016-08-02 16:08       ` Andrew Cooper
  2016-08-02 16:30         ` Tamas K Lengyel
@ 2016-08-03 11:53         ` Julien Grall
  2016-08-03 12:00           ` Andrew Cooper
  1 sibling, 1 reply; 159+ messages in thread
From: Julien Grall @ 2016-08-03 11:53 UTC (permalink / raw)
  To: Andrew Cooper, Sergej Proskurin, xen-devel, Stefano Stabellini



On 02/08/16 17:08, Andrew Cooper wrote:
> On 02/08/16 08:34, Julien Grall wrote:
>> Hi Andrew,
>>
>> On 02/08/2016 00:14, Andrew Cooper wrote:
>>> On 01/08/2016 19:15, Julien Grall wrote:
>>>> On 01/08/16 18:10, Sergej Proskurin wrote:
>>>>>
>>>>> Hello all,
>>>>
>>>> Hello Sergej,
>>>>
>>>>> The following patch series can be found on Github[0] and is part of my
>>>>> contribution to this year's Google Summer of Code (GSoC)[1]. My
>>>>> project is
>>>>> managed by the organization The Honeynet Project. As part of GSoC, I
>>>>> am being
>>>>> supervised by the Xen developer Tamas K. Lengyel
>>>>> <tamas@tklengyel.com>, George
>>>>> D. Webster, and Steven Maresca.
>>>>>
>>>>> In this patch series, we provide an implementation of the altp2m
>>>>> subsystem for
>>>>> ARM. Our implementation is based on the altp2m subsystem for x86,
>>>>> providing
>>>>> additional --alternate-- views on the guest's physical memory by
>>>>> means of the
>>>>> ARM 2nd stage translation mechanism. The patches introduce new HVMOPs
>>>>> and
>>>>> extend the p2m subsystem. Also, we extend libxl to support altp2m on
>>>>> ARM and
>>>>> modify xen-access to test the suggested functionality.
>>>>>
>>>>> To be more precise, altp2m allows to create and switch to additional
>>>>> p2m views
>>>>> (i.e. gfn to mfn mappings). These views can be manipulated and
>>>>> activated as
>>>>> will through the provided HVMOPs. In this way, the active guest
>>>>> instance in
>>>>> question can seamlessly proceed execution without noticing that
>>>>> anything has
>>>>> changed. The prime scope of application of altp2m is Virtual Machine
>>>>> Introspection, where guest systems are analyzed from the outside of
>>>>> the VM.
>>>>>
>>>>> Altp2m can be activated by means of the guest control parameter
>>>>> "altp2m" on x86
>>>>> and ARM architectures.  The altp2m functionality by default can also
>>>>> be used
>>>>> from within the guest by design. For use-cases requiring purely
>>>>> external access
>>>>> to altp2m, a custom XSM policy is necessary on both x86 and ARM.
>>>>
>>>> As said on the previous version, altp2m operation *should not* be
>>>> exposed to ARM guest. Any design written for x86 may not fit exactly
>>>> for ARM (and vice versa), you will need to explain why you think we
>>>> should follow the same pattern.
>>>
>>> Sorry, but I am going to step in here and disagree.  All the x86
>>> justifications for altp2m being accessible to guests apply equally to
>>> ARM, as they are hardware independent.
>>>
>>> I realise you are maintainer, but the onus is on you to justify why the
>>> behaviour should be different between x86 and ARM, rather than simply to
>>> complain at it being the same.
>>>
>>> Naturally, technical issues about the details of the implementation, or
>>> the algorithms etc. are of course fine, but I don't see any plausible
>>> reason why ARM should purposefully different from x86 in terms of
>>> available functionality, and several good reasons why it should be the
>>> same (least of all, feature parity across architectures).
>>
>> The question here, is how a guest could take advantage to access to
>> altp2m on ARM today? Whilst on x86 a guest could be notified about
>> memaccess change, this is not yet the case on ARM.
>
> Does ARM have anything like #VE whereby an in-guest entity can receive
> notification of violations?

I am not entirely sure what exactly #VE is. From my understanding, 
it is used to report stage 2 violations to the guest, right? If so, I am not 
aware of any equivalent on ARM.

Regards,

-- 
Julien Grall


* Re: [PATCH v2 00/25] arm/altp2m: Introducing altp2m to ARM.
  2016-08-03 11:53         ` Julien Grall
@ 2016-08-03 12:00           ` Andrew Cooper
  2016-08-03 12:13             ` Julien Grall
  0 siblings, 1 reply; 159+ messages in thread
From: Andrew Cooper @ 2016-08-03 12:00 UTC (permalink / raw)
  To: Julien Grall, Sergej Proskurin, xen-devel, Stefano Stabellini

On 03/08/16 12:53, Julien Grall wrote:
>
>
> On 02/08/16 17:08, Andrew Cooper wrote:
>> On 02/08/16 08:34, Julien Grall wrote:
>>> Hi Andrew,
>>>
>>> On 02/08/2016 00:14, Andrew Cooper wrote:
>>>> On 01/08/2016 19:15, Julien Grall wrote:
>>>>> On 01/08/16 18:10, Sergej Proskurin wrote:
>>>>>>
>>>>>> Hello all,
>>>>>
>>>>> Hello Sergej,
>>>>>
>>>>>> The following patch series can be found on Github[0] and is part
>>>>>> of my
>>>>>> contribution to this year's Google Summer of Code (GSoC)[1]. My
>>>>>> project is
>>>>>> managed by the organization The Honeynet Project. As part of GSoC, I
>>>>>> am being
>>>>>> supervised by the Xen developer Tamas K. Lengyel
>>>>>> <tamas@tklengyel.com>, George
>>>>>> D. Webster, and Steven Maresca.
>>>>>>
>>>>>> In this patch series, we provide an implementation of the altp2m
>>>>>> subsystem for
>>>>>> ARM. Our implementation is based on the altp2m subsystem for x86,
>>>>>> providing
>>>>>> additional --alternate-- views on the guest's physical memory by
>>>>>> means of the
>>>>>> ARM 2nd stage translation mechanism. The patches introduce new
>>>>>> HVMOPs
>>>>>> and
>>>>>> extend the p2m subsystem. Also, we extend libxl to support altp2m on
>>>>>> ARM and
>>>>>> modify xen-access to test the suggested functionality.
>>>>>>
>>>>>> To be more precise, altp2m allows to create and switch to additional
>>>>>> p2m views
>>>>>> (i.e. gfn to mfn mappings). These views can be manipulated and
>>>>>> activated as
>>>>>> will through the provided HVMOPs. In this way, the active guest
>>>>>> instance in
>>>>>> question can seamlessly proceed execution without noticing that
>>>>>> anything has
>>>>>> changed. The prime scope of application of altp2m is Virtual Machine
>>>>>> Introspection, where guest systems are analyzed from the outside of
>>>>>> the VM.
>>>>>>
>>>>>> Altp2m can be activated by means of the guest control parameter
>>>>>> "altp2m" on x86
>>>>>> and ARM architectures.  The altp2m functionality by default can also
>>>>>> be used
>>>>>> from within the guest by design. For use-cases requiring purely
>>>>>> external access
>>>>>> to altp2m, a custom XSM policy is necessary on both x86 and ARM.
>>>>>
>>>>> As said on the previous version, altp2m operation *should not* be
>>>>> exposed to ARM guest. Any design written for x86 may not fit exactly
>>>>> for ARM (and vice versa), you will need to explain why you think we
>>>>> should follow the same pattern.
>>>>
>>>> Sorry, but I am going to step in here and disagree.  All the x86
>>>> justifications for altp2m being accessible to guests apply equally to
>>>> ARM, as they are hardware independent.
>>>>
>>>> I realise you are maintainer, but the onus is on you to justify why
>>>> the
>>>> behaviour should be different between x86 and ARM, rather than
>>>> simply to
>>>> complain at it being the same.
>>>>
>>>> Naturally, technical issues about the details of the
>>>> implementation, or
>>>> the algorithms etc. are of course fine, but I don't see any plausible
>>>> reason why ARM should purposefully different from x86 in terms of
>>>> available functionality, and several good reasons why it should be the
>>>> same (least of all, feature parity across architectures).
>>>
>>> The question here, is how a guest could take advantage to access to
>>> altp2m on ARM today? Whilst on x86 a guest could be notified about
>>> memaccess change, this is not yet the case on ARM.
>>
>> Does ARM have anything like #VE whereby an in-guest entity can receive
>> notification of violations?
>
> I am not entirely sure what is exactly the #VE. From my understanding,
> it use to report stage 2 violation to the guest, right? If so, I am
> not aware of any.

#VE is a newly specified CPU exception, precisely for reporting stage 2
violations (in ARM terminology).  It works very much like a page fault.

~Andrew


* Re: [PATCH v2 00/25] arm/altp2m: Introducing altp2m to ARM.
  2016-08-03 12:00           ` Andrew Cooper
@ 2016-08-03 12:13             ` Julien Grall
  2016-08-03 12:18               ` Andrew Cooper
  0 siblings, 1 reply; 159+ messages in thread
From: Julien Grall @ 2016-08-03 12:13 UTC (permalink / raw)
  To: Andrew Cooper, Sergej Proskurin, xen-devel, Stefano Stabellini,
	Steve Capper, Andre Przywara



On 03/08/16 13:00, Andrew Cooper wrote:
> On 03/08/16 12:53, Julien Grall wrote:
>> On 02/08/16 17:08, Andrew Cooper wrote:
>>> On 02/08/16 08:34, Julien Grall wrote:
>>>> Hi Andrew,
>>>>
>>>> On 02/08/2016 00:14, Andrew Cooper wrote:
>>>>> On 01/08/2016 19:15, Julien Grall wrote:
>>>>>> On 01/08/16 18:10, Sergej Proskurin wrote:
>>>>>>>
>>>>>>> Hello all,
>>>>>>
>>>>>> Hello Sergej,
>>>>>>
>>>>>>> The following patch series can be found on Github[0] and is part
>>>>>>> of my
>>>>>>> contribution to this year's Google Summer of Code (GSoC)[1]. My
>>>>>>> project is
>>>>>>> managed by the organization The Honeynet Project. As part of GSoC, I
>>>>>>> am being
>>>>>>> supervised by the Xen developer Tamas K. Lengyel
>>>>>>> <tamas@tklengyel.com>, George
>>>>>>> D. Webster, and Steven Maresca.
>>>>>>>
>>>>>>> In this patch series, we provide an implementation of the altp2m
>>>>>>> subsystem for
>>>>>>> ARM. Our implementation is based on the altp2m subsystem for x86,
>>>>>>> providing
>>>>>>> additional --alternate-- views on the guest's physical memory by
>>>>>>> means of the
>>>>>>> ARM 2nd stage translation mechanism. The patches introduce new
>>>>>>> HVMOPs
>>>>>>> and
>>>>>>> extend the p2m subsystem. Also, we extend libxl to support altp2m on
>>>>>>> ARM and
>>>>>>> modify xen-access to test the suggested functionality.
>>>>>>>
>>>>>>> To be more precise, altp2m allows to create and switch to additional
>>>>>>> p2m views
>>>>>>> (i.e. gfn to mfn mappings). These views can be manipulated and
>>>>>>> activated as
>>>>>>> will through the provided HVMOPs. In this way, the active guest
>>>>>>> instance in
>>>>>>> question can seamlessly proceed execution without noticing that
>>>>>>> anything has
>>>>>>> changed. The prime scope of application of altp2m is Virtual Machine
>>>>>>> Introspection, where guest systems are analyzed from the outside of
>>>>>>> the VM.
>>>>>>>
>>>>>>> Altp2m can be activated by means of the guest control parameter
>>>>>>> "altp2m" on x86
>>>>>>> and ARM architectures.  The altp2m functionality by default can also
>>>>>>> be used
>>>>>>> from within the guest by design. For use-cases requiring purely
>>>>>>> external access
>>>>>>> to altp2m, a custom XSM policy is necessary on both x86 and ARM.
>>>>>>
>>>>>> As said on the previous version, altp2m operation *should not* be
>>>>>> exposed to ARM guest. Any design written for x86 may not fit exactly
>>>>>> for ARM (and vice versa), you will need to explain why you think we
>>>>>> should follow the same pattern.
>>>>>
>>>>> Sorry, but I am going to step in here and disagree.  All the x86
>>>>> justifications for altp2m being accessible to guests apply equally to
>>>>> ARM, as they are hardware independent.
>>>>>
>>>>> I realise you are maintainer, but the onus is on you to justify why
>>>>> the
>>>>> behaviour should be different between x86 and ARM, rather than
>>>>> simply to
>>>>> complain at it being the same.
>>>>>
>>>>> Naturally, technical issues about the details of the
>>>>> implementation, or
>>>>> the algorithms etc. are of course fine, but I don't see any plausible
>>>>> reason why ARM should purposefully different from x86 in terms of
>>>>> available functionality, and several good reasons why it should be the
>>>>> same (least of all, feature parity across architectures).
>>>>
>>>> The question here, is how a guest could take advantage to access to
>>>> altp2m on ARM today? Whilst on x86 a guest could be notified about
>>>> memaccess change, this is not yet the case on ARM.
>>>
>>> Does ARM have anything like #VE whereby an in-guest entity can receive
>>> notification of violations?
>>
>> I am not entirely sure what is exactly the #VE. From my understanding,
>> it use to report stage 2 violation to the guest, right? If so, I am
>> not aware of any.
>
> #VE is a newly specified CPU exception, precisely for reporting state 2
> violations (in ARM terminology).  It works very much like a pagefault.

Thank you for the explanation. We don't have any specific exception to 
report stage 2 (I guess EPT for x86 terminology) violations.

If the guest physical address does not belong to an emulated device or 
does not have an associated host address, the hypervisor will inject a 
data/prefetch abort to the guest.

Those aborts contain a fault status. For now it is always the same 
fault: debug fault on AArch32 and address size fault on AArch64. I don't 
think we can re-use one of the faults (see ARM D7-1949 in DDI 0487A.j for 
the list of fault codes) to behave as #VE.

I guess the best would be an event channel for this purpose.

> ~Andrew
>

-- 
Julien Grall


* Re: [PATCH v2 00/25] arm/altp2m: Introducing altp2m to ARM.
  2016-08-03 12:13             ` Julien Grall
@ 2016-08-03 12:18               ` Andrew Cooper
  2016-08-03 12:45                 ` Sergej Proskurin
  0 siblings, 1 reply; 159+ messages in thread
From: Andrew Cooper @ 2016-08-03 12:18 UTC (permalink / raw)
  To: Julien Grall, Sergej Proskurin, xen-devel, Stefano Stabellini,
	Steve Capper, Andre Przywara

On 03/08/16 13:13, Julien Grall wrote:
>
>
> On 03/08/16 13:00, Andrew Cooper wrote:
>> On 03/08/16 12:53, Julien Grall wrote:
>>> On 02/08/16 17:08, Andrew Cooper wrote:
>>>> On 02/08/16 08:34, Julien Grall wrote:
>>>>> Hi Andrew,
>>>>>
>>>>> On 02/08/2016 00:14, Andrew Cooper wrote:
>>>>>> On 01/08/2016 19:15, Julien Grall wrote:
>>>>>>> On 01/08/16 18:10, Sergej Proskurin wrote:
>>>>>>>>
>>>>>>>> Hello all,
>>>>>>>
>>>>>>> Hello Sergej,
>>>>>>>
>>>>>>>> The following patch series can be found on Github[0] and is part
>>>>>>>> of my
>>>>>>>> contribution to this year's Google Summer of Code (GSoC)[1]. My
>>>>>>>> project is
>>>>>>>> managed by the organization The Honeynet Project. As part of
>>>>>>>> GSoC, I
>>>>>>>> am being
>>>>>>>> supervised by the Xen developer Tamas K. Lengyel
>>>>>>>> <tamas@tklengyel.com>, George
>>>>>>>> D. Webster, and Steven Maresca.
>>>>>>>>
>>>>>>>> In this patch series, we provide an implementation of the altp2m
>>>>>>>> subsystem for
>>>>>>>> ARM. Our implementation is based on the altp2m subsystem for x86,
>>>>>>>> providing
>>>>>>>> additional --alternate-- views on the guest's physical memory by
>>>>>>>> means of the
>>>>>>>> ARM 2nd stage translation mechanism. The patches introduce new
>>>>>>>> HVMOPs
>>>>>>>> and
>>>>>>>> extend the p2m subsystem. Also, we extend libxl to support
>>>>>>>> altp2m on
>>>>>>>> ARM and
>>>>>>>> modify xen-access to test the suggested functionality.
>>>>>>>>
>>>>>>>> To be more precise, altp2m allows to create and switch to
>>>>>>>> additional
>>>>>>>> p2m views
>>>>>>>> (i.e. gfn to mfn mappings). These views can be manipulated and
>>>>>>>> activated as
>>>>>>>> will through the provided HVMOPs. In this way, the active guest
>>>>>>>> instance in
>>>>>>>> question can seamlessly proceed execution without noticing that
>>>>>>>> anything has
>>>>>>>> changed. The prime scope of application of altp2m is Virtual
>>>>>>>> Machine
>>>>>>>> Introspection, where guest systems are analyzed from the
>>>>>>>> outside of
>>>>>>>> the VM.
>>>>>>>>
>>>>>>>> Altp2m can be activated by means of the guest control parameter
>>>>>>>> "altp2m" on x86
>>>>>>>> and ARM architectures.  The altp2m functionality by default can
>>>>>>>> also
>>>>>>>> be used
>>>>>>>> from within the guest by design. For use-cases requiring purely
>>>>>>>> external access
>>>>>>>> to altp2m, a custom XSM policy is necessary on both x86 and ARM.
>>>>>>>
>>>>>>> As said on the previous version, altp2m operation *should not* be
>>>>>>> exposed to ARM guest. Any design written for x86 may not fit
>>>>>>> exactly
>>>>>>> for ARM (and vice versa), you will need to explain why you think we
>>>>>>> should follow the same pattern.
>>>>>>
>>>>>> Sorry, but I am going to step in here and disagree.  All the x86
>>>>>> justifications for altp2m being accessible to guests apply
>>>>>> equally to
>>>>>> ARM, as they are hardware independent.
>>>>>>
>>>>>> I realise you are maintainer, but the onus is on you to justify why
>>>>>> the
>>>>>> behaviour should be different between x86 and ARM, rather than
>>>>>> simply to
>>>>>> complain at it being the same.
>>>>>>
>>>>>> Naturally, technical issues about the details of the
>>>>>> implementation, or
>>>>>> the algorithms etc. are of course fine, but I don't see any
>>>>>> plausible
>>>>>> reason why ARM should purposefully different from x86 in terms of
>>>>>> available functionality, and several good reasons why it should
>>>>>> be the
>>>>>> same (least of all, feature parity across architectures).
>>>>>
>>>>> The question here, is how a guest could take advantage to access to
>>>>> altp2m on ARM today? Whilst on x86 a guest could be notified about
>>>>> memaccess change, this is not yet the case on ARM.
>>>>
>>>> Does ARM have anything like #VE whereby an in-guest entity can receive
>>>> notification of violations?
>>>
>>> I am not entirely sure what is exactly the #VE. From my understanding,
>>> it use to report stage 2 violation to the guest, right? If so, I am
>>> not aware of any.
>>
>> #VE is a newly specified CPU exception, precisely for reporting state 2
>> violations (in ARM terminology).  It works very much like a pagefault.
>
> Thank you for the explanation. We don't have any specific exception to
> report stage 2 (I guess EPT for x86 terminology) violations.

It is currently specific to Intel's EPT implementation, but there is
nothing in principle preventing AMD reusing the meaning for their NPT
implementation.

>
> If the guest physical address does not belong to an emulated device or
> does not have an associated host address, the hypervisor will inject a
> data/prefetch abort to the guest.

This is where x86 and ARM differ quite a bit.  For "areas which don't
exist", the default is to discard writes and reads return ~0, rather
than to raise a fault with the software.

>
> Those aborts contains a fault status. For now it is always the same
> fault: debug fault on AArch32 and address size fault on AArch64. I
> don't think we can re-use one of the fault (see ARM D7-1949 in DDI
> 0487A.j for the list of fault code) to behave as #VE.
>
> I guess the best would be an event channel for this purpose.

Agreed.  If there is no hardware way of doing this, a PV way with event
channels should work fine.

~Andrew


* Re: [PATCH v2 00/25] arm/altp2m: Introducing altp2m to ARM.
  2016-08-03 12:18               ` Andrew Cooper
@ 2016-08-03 12:45                 ` Sergej Proskurin
  2016-08-03 14:08                   ` Julien Grall
  0 siblings, 1 reply; 159+ messages in thread
From: Sergej Proskurin @ 2016-08-03 12:45 UTC (permalink / raw)
  To: xen-devel



On 08/03/2016 02:18 PM, Andrew Cooper wrote:
> On 03/08/16 13:13, Julien Grall wrote:
>>
>> On 03/08/16 13:00, Andrew Cooper wrote:
>>> On 03/08/16 12:53, Julien Grall wrote:
>>>> On 02/08/16 17:08, Andrew Cooper wrote:
>>>>> On 02/08/16 08:34, Julien Grall wrote:
>>>>>> Hi Andrew,
>>>>>>
>>>>>> On 02/08/2016 00:14, Andrew Cooper wrote:
>>>>>>> On 01/08/2016 19:15, Julien Grall wrote:
>>>>>>>> On 01/08/16 18:10, Sergej Proskurin wrote:
>>>>>>>>> Hello all,
>>>>>>>> Hello Sergej,
>>>>>>>>
>>>>>>>>> The following patch series can be found on Github[0] and is part
>>>>>>>>> of my
>>>>>>>>> contribution to this year's Google Summer of Code (GSoC)[1]. My
>>>>>>>>> project is
>>>>>>>>> managed by the organization The Honeynet Project. As part of
>>>>>>>>> GSoC, I
>>>>>>>>> am being
>>>>>>>>> supervised by the Xen developer Tamas K. Lengyel
>>>>>>>>> <tamas@tklengyel.com>, George
>>>>>>>>> D. Webster, and Steven Maresca.
>>>>>>>>>
>>>>>>>>> In this patch series, we provide an implementation of the altp2m
>>>>>>>>> subsystem for
>>>>>>>>> ARM. Our implementation is based on the altp2m subsystem for x86,
>>>>>>>>> providing
>>>>>>>>> additional --alternate-- views on the guest's physical memory by
>>>>>>>>> means of the
>>>>>>>>> ARM 2nd stage translation mechanism. The patches introduce new
>>>>>>>>> HVMOPs
>>>>>>>>> and
>>>>>>>>> extend the p2m subsystem. Also, we extend libxl to support
>>>>>>>>> altp2m on
>>>>>>>>> ARM and
>>>>>>>>> modify xen-access to test the suggested functionality.
>>>>>>>>>
>>>>>>>>> To be more precise, altp2m allows to create and switch to
>>>>>>>>> additional
>>>>>>>>> p2m views
>>>>>>>>> (i.e. gfn to mfn mappings). These views can be manipulated and
>>>>>>>>> activated as
>>>>>>>>> will through the provided HVMOPs. In this way, the active guest
>>>>>>>>> instance in
>>>>>>>>> question can seamlessly proceed execution without noticing that
>>>>>>>>> anything has
>>>>>>>>> changed. The prime scope of application of altp2m is Virtual
>>>>>>>>> Machine
>>>>>>>>> Introspection, where guest systems are analyzed from the
>>>>>>>>> outside of
>>>>>>>>> the VM.
>>>>>>>>>
>>>>>>>>> Altp2m can be activated by means of the guest control parameter
>>>>>>>>> "altp2m" on x86
>>>>>>>>> and ARM architectures.  The altp2m functionality by default can
>>>>>>>>> also
>>>>>>>>> be used
>>>>>>>>> from within the guest by design. For use-cases requiring purely
>>>>>>>>> external access
>>>>>>>>> to altp2m, a custom XSM policy is necessary on both x86 and ARM.
>>>>>>>> As said on the previous version, altp2m operation *should not* be
>>>>>>>> exposed to ARM guest. Any design written for x86 may not fit
>>>>>>>> exactly
>>>>>>>> for ARM (and vice versa), you will need to explain why you think we
>>>>>>>> should follow the same pattern.
>>>>>>> Sorry, but I am going to step in here and disagree.  All the x86
>>>>>>> justifications for altp2m being accessible to guests apply
>>>>>>> equally to
>>>>>>> ARM, as they are hardware independent.
>>>>>>>
>>>>>>> I realise you are maintainer, but the onus is on you to justify why
>>>>>>> the
>>>>>>> behaviour should be different between x86 and ARM, rather than
>>>>>>> simply to
>>>>>>> complain at it being the same.
>>>>>>>
>>>>>>> Naturally, technical issues about the details of the
>>>>>>> implementation, or
>>>>>>> the algorithms etc. are of course fine, but I don't see any
>>>>>>> plausible
>>>>>>> reason why ARM should purposefully different from x86 in terms of
>>>>>>> available functionality, and several good reasons why it should
>>>>>>> be the
>>>>>>> same (least of all, feature parity across architectures).
>>>>>> The question here, is how a guest could take advantage to access to
>>>>>> altp2m on ARM today? Whilst on x86 a guest could be notified about
>>>>>> memaccess change, this is not yet the case on ARM.
>>>>> Does ARM have anything like #VE whereby an in-guest entity can receive
>>>>> notification of violations?
>>>> I am not entirely sure what is exactly the #VE. From my understanding,
>>>> it use to report stage 2 violation to the guest, right? If so, I am
>>>> not aware of any.
>>> #VE is a newly specified CPU exception, precisely for reporting state 2
>>> violations (in ARM terminology).  It works very much like a pagefault.
>> Thank you for the explanation. We don't have any specific exception to
>> report stage 2 (I guess EPT for x86 terminology) violations.
> It is currently specific to Intel's EPT implementation, but there is
> nothing in principle preventing AMD reusing the meaning for their NPT
> implementation.
>
>> If the guest physical address does not belong to an emulated device or
>> does not have an associated host address, the hypervisor will inject a
>> data/prefetch abort to the guest.
> This is where x86 and ARM differ quite a bit.  For "areas which don't
> exist", the default is to discard writes and reads return ~0, rather
> than to raise a fault with the software.
>
>> Those aborts contain a fault status. For now it is always the same
>> fault: debug fault on AArch32 and address size fault on AArch64. I
>> don't think we can re-use one of the faults (see ARM D7-1949 in DDI
>> 0487A.j for the list of fault codes) to behave as #VE.
>>
>> I guess the best would be an event channel for this purpose.
> Agreed.  If there is no hardware way of doing this, a PV way with event
> channels should work fine.
>
> ~Andrew
>

The interesting part about #VE is that it allows certain violations
(currently limited to EPT violations -- future implementations might
also cover further violation types) to be handled inside the guest,
without the need to explicitly trap into the VMM. Thus, #VE allows
switching between different memory views in-guest. Because of this, I
also agree that event channels would suffice in our case, since we do
not have comparable hardware support on ARM and would need to trap into
the VMM anyway.

Best regards,
~Sergej


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 159+ messages in thread

* Re: [PATCH v2 00/25] arm/altp2m: Introducing altp2m to ARM.
  2016-08-03 12:45                 ` Sergej Proskurin
@ 2016-08-03 14:08                   ` Julien Grall
  2016-08-03 14:17                     ` Sergej Proskurin
  2016-08-03 16:01                     ` Tamas K Lengyel
  0 siblings, 2 replies; 159+ messages in thread
From: Julien Grall @ 2016-08-03 14:08 UTC (permalink / raw)
  To: Sergej Proskurin, xen-devel, Andrew Cooper, Stefano Stabellini

Hello Sergej,

Please try to reply to all when answering on the ML. Otherwise the 
answer may be delayed/lost.

On 03/08/16 13:45, Sergej Proskurin wrote:
> The interesting part about #VE is that it allows to handle certain
> violations (currently limited to EPT violations -- future
> implementations might introduce also further violations) inside of the
> guest, without the need to explicitly trap into the VMM. Thus, #VE allow
> switching of different memory views in-guest. Because of this, I also
> agree that event channels would suffice in our case, since we do not
> have sufficient hardware support on ARM and would need to trap into the
> VMM anyway.

The cost of doing a hypercall on ARM is very small compared to x86 
(~1/3 of the number of x86 cycles) because we don't have to save all 
the state every time. So I am not convinced by the argument that 
limiting the number of traps to the hypervisor justifies allowing a 
guest to play with altp2m on ARM.

I will have to see a concrete example before going forward with the 
event channel.

Regards,

-- 
Julien Grall


* Re: [PATCH v2 00/25] arm/altp2m: Introducing altp2m to ARM.
  2016-08-03 14:08                   ` Julien Grall
@ 2016-08-03 14:17                     ` Sergej Proskurin
  2016-08-03 16:01                     ` Tamas K Lengyel
  1 sibling, 0 replies; 159+ messages in thread
From: Sergej Proskurin @ 2016-08-03 14:17 UTC (permalink / raw)
  To: Julien Grall, xen-devel, Andrew Cooper, Stefano Stabellini

Hi Julien,


On 08/03/2016 04:08 PM, Julien Grall wrote:
> Hello Sergej,
>
> Please try to reply to all when answering on the ML. Otherwise the
> answer may be delayed/lost.
>

Right, sorry.

> On 03/08/16 13:45, Sergej Proskurin wrote:
>> The interesting part about #VE is that it allows to handle certain
>> violations (currently limited to EPT violations -- future
>> implementations might introduce also further violations) inside of the
>> guest, without the need to explicitly trap into the VMM. Thus, #VE allow
>> switching of different memory views in-guest. Because of this, I also
>> agree that event channels would suffice in our case, since we do not
>> have sufficient hardware support on ARM and would need to trap into the
>> VMM anyway.
>
> The cost of doing an hypercall on ARM is very small compare to x86
> (~1/3 of the number of x86 cycles) because we don't have to save all
> the state every time. So I am not convinced by the argument of
> limiting the number of trap to the hypervisor and allow a guest to
> play with altp2m on ARM.
>

I am not saying that we will be able to handle traps inside the guest
-- simply because we don't have sufficient hardware support on ARM.
What I am trying to say is that an emulation of #VE is not required at
this point, but could be implemented in the future by means of event
channels, should the need arise.

Best regards,
~Sergej



* Re: [PATCH v2 00/25] arm/altp2m: Introducing altp2m to ARM.
  2016-08-03 14:08                   ` Julien Grall
  2016-08-03 14:17                     ` Sergej Proskurin
@ 2016-08-03 16:01                     ` Tamas K Lengyel
  2016-08-03 16:24                       ` Julien Grall
  1 sibling, 1 reply; 159+ messages in thread
From: Tamas K Lengyel @ 2016-08-03 16:01 UTC (permalink / raw)
  To: Julien Grall
  Cc: George Dunlap, Sergej Proskurin, Andrew Cooper,
	Stefano Stabellini, Xen-devel

On Wed, Aug 3, 2016 at 8:08 AM, Julien Grall <julien.grall@arm.com> wrote:
> Hello Sergej,
>
> Please try to reply to all when answering on the ML. Otherwise the answer
> may be delayed/lost.
>
> On 03/08/16 13:45, Sergej Proskurin wrote:
>>
>> The interesting part about #VE is that it allows to handle certain
>> violations (currently limited to EPT violations -- future
>> implementations might introduce also further violations) inside of the
>> guest, without the need to explicitly trap into the VMM. Thus, #VE allow
>> switching of different memory views in-guest. Because of this, I also
>> agree that event channels would suffice in our case, since we do not
>> have sufficient hardware support on ARM and would need to trap into the
>> VMM anyway.
>
>
> The cost of doing an hypercall on ARM is very small compare to x86 (~1/3 of
> the number of x86 cycles) because we don't have to save all the state every
> time. So I am not convinced by the argument of limiting the number of trap
> to the hypervisor and allow a guest to play with altp2m on ARM.
>
> I will have to see a concrete example before going forward with the event
> channel.

That is out of scope for what we are trying to achieve with this
series at this point. The question at hand is really whether the
altp2m switch and gfn remapping ops should be exposed to the guest.
Without #VE - which we are not implementing - setting the mem_access
settings from within the guest doesn't make sense, so restricting
access there is reasonable.

As I outlined, the switch and gfn remapping can have legitimate
use-cases by themselves, without any mem_access bits involved.
However, it is not our use-case, so we have no problem restricting
access there either. So the question is whether that's the right path
to take here. At this point I'm not sure whether there is agreement
about it or not.

Thanks,
Tamas


* Re: [PATCH v2 00/25] arm/altp2m: Introducing altp2m to ARM.
  2016-08-03 16:01                     ` Tamas K Lengyel
@ 2016-08-03 16:24                       ` Julien Grall
  2016-08-03 16:42                         ` Tamas K Lengyel
  0 siblings, 1 reply; 159+ messages in thread
From: Julien Grall @ 2016-08-03 16:24 UTC (permalink / raw)
  To: Tamas K Lengyel
  Cc: George Dunlap, Sergej Proskurin, Andrew Cooper,
	Stefano Stabellini, Xen-devel

Hi Tamas,

On 03/08/16 17:01, Tamas K Lengyel wrote:
> On Wed, Aug 3, 2016 at 8:08 AM, Julien Grall <julien.grall@arm.com> wrote:
>> Hello Sergej,
>>
>> Please try to reply to all when answering on the ML. Otherwise the answer
>> may be delayed/lost.
>>
>> On 03/08/16 13:45, Sergej Proskurin wrote:
>>>
>>> The interesting part about #VE is that it allows to handle certain
>>> violations (currently limited to EPT violations -- future
>>> implementations might introduce also further violations) inside of the
>>> guest, without the need to explicitly trap into the VMM. Thus, #VE allow
>>> switching of different memory views in-guest. Because of this, I also
>>> agree that event channels would suffice in our case, since we do not
>>> have sufficient hardware support on ARM and would need to trap into the
>>> VMM anyway.
>>
>>
>> The cost of doing an hypercall on ARM is very small compare to x86 (~1/3 of
>> the number of x86 cycles) because we don't have to save all the state every
>> time. So I am not convinced by the argument of limiting the number of trap
>> to the hypervisor and allow a guest to play with altp2m on ARM.
>>
>> I will have to see a concrete example before going forward with the event
>> channel.
>
> It is out-of-scope for what we are trying to achieve with this series
> at this point. The question at hand is really whether the atp2m switch
> and gfn remapping ops should be exposed to the guest. Without #VE -
> which we are not implementing - setting the mem_access settings from
> within the guest doesn't make sense so restricting access there is
> reasonable.
>
> As I outlined, the switch and gfn remapping can have legitimate
> use-cases by themselves without any mem_access bits involved. However,
> it is not our use-case so we have no problem restricting access there
> either. So the question is whether that's the right path to take here.
> At this point I'm not sure there is agreement about it or not.

Could you give a legitimate use case of gfn remapping from the guest, 
and explain how it would work with only this patch series?

From my perspective, and after the numerous exchanges in this thread, I 
do not think it is wise to expose this interface to the guest on ARM. 
The usage is very limited but increases the attack surface. So I will 
not ack such a choice; however, I will not nack it either.

Regards,

-- 
Julien Grall


* Re: [PATCH v2 00/25] arm/altp2m: Introducing altp2m to ARM.
  2016-08-03 16:24                       ` Julien Grall
@ 2016-08-03 16:42                         ` Tamas K Lengyel
  2016-08-03 16:51                           ` Julien Grall
  0 siblings, 1 reply; 159+ messages in thread
From: Tamas K Lengyel @ 2016-08-03 16:42 UTC (permalink / raw)
  To: Julien Grall
  Cc: George Dunlap, Sergej Proskurin, Andrew Cooper,
	Stefano Stabellini, Xen-devel

On Wed, Aug 3, 2016 at 10:24 AM, Julien Grall <julien.grall@arm.com> wrote:
> Hi Tamas,
>
>
> On 03/08/16 17:01, Tamas K Lengyel wrote:
>>
>> On Wed, Aug 3, 2016 at 8:08 AM, Julien Grall <julien.grall@arm.com> wrote:
>>>
>>> Hello Sergej,
>>>
>>> Please try to reply to all when answering on the ML. Otherwise the answer
>>> may be delayed/lost.
>>>
>>> On 03/08/16 13:45, Sergej Proskurin wrote:
>>>>
>>>>
>>>> The interesting part about #VE is that it allows to handle certain
>>>> violations (currently limited to EPT violations -- future
>>>> implementations might introduce also further violations) inside of the
>>>> guest, without the need to explicitly trap into the VMM. Thus, #VE allow
>>>> switching of different memory views in-guest. Because of this, I also
>>>> agree that event channels would suffice in our case, since we do not
>>>> have sufficient hardware support on ARM and would need to trap into the
>>>> VMM anyway.
>>>
>>>
>>>
>>> The cost of doing an hypercall on ARM is very small compare to x86 (~1/3
>>> of
>>> the number of x86 cycles) because we don't have to save all the state
>>> every
>>> time. So I am not convinced by the argument of limiting the number of
>>> trap
>>> to the hypervisor and allow a guest to play with altp2m on ARM.
>>>
>>> I will have to see a concrete example before going forward with the event
>>> channel.
>>
>>
>> It is out-of-scope for what we are trying to achieve with this series
>> at this point. The question at hand is really whether the atp2m switch
>> and gfn remapping ops should be exposed to the guest. Without #VE -
>> which we are not implementing - setting the mem_access settings from
>> within the guest doesn't make sense so restricting access there is
>> reasonable.
>>
>> As I outlined, the switch and gfn remapping can have legitimate
>> use-cases by themselves without any mem_access bits involved. However,
>> it is not our use-case so we have no problem restricting access there
>> either. So the question is whether that's the right path to take here.
>> At this point I'm not sure there is agreement about it or not.
>
>
> Could you give a legitimate use case of gfn remapping from the guest? And
> explain how it would work with only this patch series.
>
> From my perspective, and after the numerous exchange in this thread, I do
> not think it is wise to expose this interface to the guest on ARM. The usage
> is very limited but increase the surface attack. So I will not ack a such
> choice, however I will not nack it.
>

Since the interface would be available only for domains explicitly
created with the altp2m=1 flag set, I think the exposure is minimal.

As for a use-case, I don't have a real-world example, as it's not how
we use the system. But as I pointed out earlier, I could imagine gfn
remapping being used to protect kernel memory areas against
information disclosure by only switching to the accessible altp2m view
when certain conditions are met. What I mean is that a certain gfn
could be remapped to a dummy mfn by default and only switched to the
accessible view when necessary. How much extra protection that would
add, and under what conditions, is up for debate, but IMHO it is a
legitimate experimental use - and altp2m is an experimental system.

Whether it's worth having such an interface or not I'm not sure; I'm
OK with going either way on this, but since it's available on x86 I
think it would make sense to have feature parity - even if only
partially for now.

Thanks,
Tamas


* Re: [PATCH v2 00/25] arm/altp2m: Introducing altp2m to ARM.
  2016-08-03 16:42                         ` Tamas K Lengyel
@ 2016-08-03 16:51                           ` Julien Grall
  2016-08-03 17:30                             ` Andrew Cooper
  0 siblings, 1 reply; 159+ messages in thread
From: Julien Grall @ 2016-08-03 16:51 UTC (permalink / raw)
  To: Tamas K Lengyel
  Cc: George Dunlap, Sergej Proskurin, Andrew Cooper,
	Stefano Stabellini, Xen-devel



On 03/08/16 17:42, Tamas K Lengyel wrote:
> On Wed, Aug 3, 2016 at 10:24 AM, Julien Grall <julien.grall@arm.com> wrote:
>> Hi Tamas,
>>
>>
>> On 03/08/16 17:01, Tamas K Lengyel wrote:
>>>
>>> On Wed, Aug 3, 2016 at 8:08 AM, Julien Grall <julien.grall@arm.com> wrote:
>>>>
>>>> Hello Sergej,
>>>>
>>>> Please try to reply to all when answering on the ML. Otherwise the answer
>>>> may be delayed/lost.
>>>>
>>>> On 03/08/16 13:45, Sergej Proskurin wrote:
>>>>>
>>>>>
>>>>> The interesting part about #VE is that it allows to handle certain
>>>>> violations (currently limited to EPT violations -- future
>>>>> implementations might introduce also further violations) inside of the
>>>>> guest, without the need to explicitly trap into the VMM. Thus, #VE allow
>>>>> switching of different memory views in-guest. Because of this, I also
>>>>> agree that event channels would suffice in our case, since we do not
>>>>> have sufficient hardware support on ARM and would need to trap into the
>>>>> VMM anyway.
>>>>
>>>>
>>>>
>>>> The cost of doing an hypercall on ARM is very small compare to x86 (~1/3
>>>> of
>>>> the number of x86 cycles) because we don't have to save all the state
>>>> every
>>>> time. So I am not convinced by the argument of limiting the number of
>>>> trap
>>>> to the hypervisor and allow a guest to play with altp2m on ARM.
>>>>
>>>> I will have to see a concrete example before going forward with the event
>>>> channel.
>>>
>>>
>>> It is out-of-scope for what we are trying to achieve with this series
>>> at this point. The question at hand is really whether the atp2m switch
>>> and gfn remapping ops should be exposed to the guest. Without #VE -
>>> which we are not implementing - setting the mem_access settings from
>>> within the guest doesn't make sense so restricting access there is
>>> reasonable.
>>>
>>> As I outlined, the switch and gfn remapping can have legitimate
>>> use-cases by themselves without any mem_access bits involved. However,
>>> it is not our use-case so we have no problem restricting access there
>>> either. So the question is whether that's the right path to take here.
>>> At this point I'm not sure there is agreement about it or not.
>>
>>
>> Could you give a legitimate use case of gfn remapping from the guest? And
>> explain how it would work with only this patch series.
>>
>> From my perspective, and after the numerous exchange in this thread, I do
>> not think it is wise to expose this interface to the guest on ARM. The usage
>> is very limited but increase the surface attack. So I will not ack a such
>> choice, however I will not nack it.
>>
>
> Since the interface would be available only for domains where they
> were explicitly created with altp2m=1 flag set I think the exposure is
> minimal.
>
> As for a use-case, I don't have a real world example as it's not how
> we use the system. But as I pointed out eairlier I could imagine the
> gfn remapping be used to protect kernel memory areas against
> information disclosure by only switching to the accessible altp2m view
> when certain conditions are met. What I mean is that a certain gfn
> could be remapped to a dummy mfn by default and only switched to the
> accessible view when necessary. How much extra protection that would
> add and under what condition is up for debate but IMHO it is a
> legitimate experimental use - and altp2m is an experimental system.

Such a solution may give you a lot of headaches with the cache.

>
> Whether it's worth to have such an interface or not I'm not sure, I'm
> OK with going either way on this, but since it's available on x86 I
> think it would make sense to have feature parity - even if only
> partially for now.

As I mentioned a couple of times, we do not introduce features on ARM 
just because they exist on x86. We introduce them after thinking 
carefully about how they could benefit ARM and how they would be used.

Nothing prevents a follow-up series from allowing the guest to access 
altp2m operations by default, because the interface is already there.

Stefano, do you have any opinions on this?

Regards,

-- 
Julien Grall


* Re: [PATCH v2 01/25] arm/altp2m: Add first altp2m HVMOP stubs.
  2016-08-01 17:10 ` [PATCH v2 01/25] arm/altp2m: Add first altp2m HVMOP stubs Sergej Proskurin
@ 2016-08-03 16:54   ` Julien Grall
  2016-08-04 16:01     ` Sergej Proskurin
  2016-08-09 19:16     ` Tamas K Lengyel
  0 siblings, 2 replies; 159+ messages in thread
From: Julien Grall @ 2016-08-03 16:54 UTC (permalink / raw)
  To: Sergej Proskurin, xen-devel; +Cc: Stefano Stabellini

Hello Sergej,

On 01/08/16 18:10, Sergej Proskurin wrote:
> This commit moves the altp2m-related code from x86 to ARM. Functions
> that are not yet supported notify the caller or print a BUG message
> stating their absence.
>
> Also, the struct arch_domain is extended with the altp2m_active
> attribute, representing the current altp2m activity configuration of the
> domain.
>
> Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
> ---
> Cc: Stefano Stabellini <sstabellini@kernel.org>
> Cc: Julien Grall <julien.grall@arm.com>
> ---
> v2: Removed altp2m command-line option: Guard through HVM_PARAM_ALTP2M.
>     Removed not used altp2m helper stubs in altp2m.h.
> ---
>  xen/arch/arm/hvm.c           | 79 ++++++++++++++++++++++++++++++++++++++++++++
>  xen/include/asm-arm/altp2m.h |  4 +--
>  xen/include/asm-arm/domain.h |  3 ++
>  3 files changed, 84 insertions(+), 2 deletions(-)
>
> diff --git a/xen/arch/arm/hvm.c b/xen/arch/arm/hvm.c
> index d999bde..eb524ae 100644
> --- a/xen/arch/arm/hvm.c
> +++ b/xen/arch/arm/hvm.c
> @@ -32,6 +32,81 @@
>
>  #include <asm/hypercall.h>
>
> +#include <asm/altp2m.h>
> +
> +static int do_altp2m_op(XEN_GUEST_HANDLE_PARAM(void) arg)
> +{
> +    struct xen_hvm_altp2m_op a;
> +    struct domain *d = NULL;
> +    int rc = 0;
> +
> +    if ( copy_from_guest(&a, arg, 1) )
> +        return -EFAULT;
> +
> +    if ( a.pad1 || a.pad2 ||
> +         (a.version != HVMOP_ALTP2M_INTERFACE_VERSION) ||
> +         (a.cmd < HVMOP_altp2m_get_domain_state) ||
> +         (a.cmd > HVMOP_altp2m_change_gfn) )
> +        return -EINVAL;
> +
> +    d = (a.cmd != HVMOP_altp2m_vcpu_enable_notify) ?
> +        rcu_lock_domain_by_any_id(a.domain) : rcu_lock_current_domain();
> +
> +    if ( d == NULL )
> +        return -ESRCH;
> +
> +    if ( (a.cmd != HVMOP_altp2m_get_domain_state) &&
> +         (a.cmd != HVMOP_altp2m_set_domain_state) &&
> +         !d->arch.altp2m_active )

Why not use altp2m_active(d) here?

Also, this check looks quite racy. What prevents another CPU from 
disabling altp2m at the same time? How would the code behave?

Regards,

-- 
Julien Grall


* Re: [PATCH v2 03/25] arm/altp2m: Add struct vttbr.
  2016-08-01 17:10 ` [PATCH v2 03/25] arm/altp2m: Add struct vttbr Sergej Proskurin
@ 2016-08-03 17:04   ` Julien Grall
  2016-08-03 17:05     ` Julien Grall
  0 siblings, 1 reply; 159+ messages in thread
From: Julien Grall @ 2016-08-03 17:04 UTC (permalink / raw)
  To: Sergej Proskurin, xen-devel

Hello Sergej,

Title: s/altp2m/p2m/

On 01/08/16 18:10, Sergej Proskurin wrote:
> The struct vttbr introduces a simple way to precisely access the
> individual fields of the vttbr.

I am not sure whether this is really helpful. You don't seem to take 
advantage of those fields often, and the actual accesses don't seem 
necessary (I will comment on the usage).

> ---
>  xen/arch/arm/p2m.c              |  8 ++++----
>  xen/arch/arm/traps.c            |  2 +-
>  xen/include/asm-arm/p2m.h       |  2 +-
>  xen/include/asm-arm/processor.h | 16 ++++++++++++++++
>  4 files changed, 22 insertions(+), 6 deletions(-)
>
> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> index 40a0b80..cbc64a1 100644
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -122,7 +122,7 @@ void p2m_restore_state(struct vcpu *n)
>      WRITE_SYSREG(hcr & ~HCR_VM, HCR_EL2);
>      isb();
>
> -    WRITE_SYSREG64(p2m->vttbr, VTTBR_EL2);
> +    WRITE_SYSREG64(p2m->vttbr.vttbr, VTTBR_EL2);
>      isb();
>
>      if ( is_32bit_domain(n->domain) )
> @@ -147,10 +147,10 @@ static void p2m_flush_tlb(struct p2m_domain *p2m)
>       * VMID. So switch to the VTTBR of a given P2M if different.
>       */
>      ovttbr = READ_SYSREG64(VTTBR_EL2);
> -    if ( ovttbr != p2m->vttbr )
> +    if ( ovttbr != p2m->vttbr.vttbr )
>      {
>          local_irq_save(flags);
> -        WRITE_SYSREG64(p2m->vttbr, VTTBR_EL2);
> +        WRITE_SYSREG64(p2m->vttbr.vttbr, VTTBR_EL2);
>          isb();
>      }
>
> @@ -1293,7 +1293,7 @@ static int p2m_alloc_table(struct domain *d)
>
>      p2m->root = page;
>
> -    p2m->vttbr = page_to_maddr(p2m->root) | ((uint64_t)p2m->vmid & 0xff) << 48;
> +    p2m->vttbr.vttbr = page_to_maddr(p2m->root) | ((uint64_t)p2m->vmid & 0xff) << 48;
>
>      /*
>       * Make sure that all TLBs corresponding to the new VMID are flushed
> diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
> index 06f06e3..12be7c9 100644
> --- a/xen/arch/arm/traps.c
> +++ b/xen/arch/arm/traps.c
> @@ -881,7 +881,7 @@ void vcpu_show_registers(const struct vcpu *v)
>      ctxt.ifsr32_el2 = v->arch.ifsr;
>  #endif
>
> -    ctxt.vttbr_el2 = v->domain->arch.p2m.vttbr;
> +    ctxt.vttbr_el2 = v->domain->arch.p2m.vttbr.vttbr;
>
>      _show_registers(&v->arch.cpu_info->guest_cpu_user_regs, &ctxt, 1, v);
>  }
> diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
> index 53c4d78..5c7cd1a 100644
> --- a/xen/include/asm-arm/p2m.h
> +++ b/xen/include/asm-arm/p2m.h
> @@ -33,7 +33,7 @@ struct p2m_domain {
>      uint8_t vmid;
>
>      /* Current Translation Table Base Register for the p2m */
> -    uint64_t vttbr;
> +    struct vttbr vttbr;
>
>      /*
>       * Highest guest frame that's ever been mapped in the p2m
> diff --git a/xen/include/asm-arm/processor.h b/xen/include/asm-arm/processor.h
> index 15bf890..f8ca18c 100644
> --- a/xen/include/asm-arm/processor.h
> +++ b/xen/include/asm-arm/processor.h
> @@ -529,6 +529,22 @@ union hsr {
>
>
>  };
> +
> +/* VTTBR: Virtualization Translation Table Base Register */
> +struct vttbr {
> +    union {
> +        struct {
> +            u64 baddr :40, /* variable res0: from 0-(x-1) bit */

As mentioned on the previous series, this field is 48 bits for ARMv8 
(see ARM D7.2.102 in DDI 0487A.j).

> +                res1  :8,
> +                vmid  :8,
> +                res2  :8;
> +        };
> +        u64 vttbr;
> +    };
> +};
> +
> +#define INVALID_VTTBR (0UL)
> +
>  #endif
>
>  /* HSR.EC == HSR_CP{15,14,10}_32 */
>

Regards,

-- 
Julien Grall


* Re: [PATCH v2 03/25] arm/altp2m: Add struct vttbr.
  2016-08-03 17:04   ` Julien Grall
@ 2016-08-03 17:05     ` Julien Grall
  2016-08-04 16:11       ` Sergej Proskurin
  0 siblings, 1 reply; 159+ messages in thread
From: Julien Grall @ 2016-08-03 17:05 UTC (permalink / raw)
  To: Sergej Proskurin, xen-devel; +Cc: Stefano Stabellini

(CC Stefano)

On 03/08/16 18:04, Julien Grall wrote:
> Hello Sergej,
>
> Title: s/altp2m/p2m/
>
> On 01/08/16 18:10, Sergej Proskurin wrote:
>> The struct vttbr introduces a simple way to precisely access the
>> individual fields of the vttbr.
>
> I am not sure whether this is really helpful. You don't seem to take
> often advantage of those fields and the actual accesses don't seem
> necessary (I will comment on the usage).
>
>> ---
>>  xen/arch/arm/p2m.c              |  8 ++++----
>>  xen/arch/arm/traps.c            |  2 +-
>>  xen/include/asm-arm/p2m.h       |  2 +-
>>  xen/include/asm-arm/processor.h | 16 ++++++++++++++++
>>  4 files changed, 22 insertions(+), 6 deletions(-)
>>
>> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
>> index 40a0b80..cbc64a1 100644
>> --- a/xen/arch/arm/p2m.c
>> +++ b/xen/arch/arm/p2m.c
>> @@ -122,7 +122,7 @@ void p2m_restore_state(struct vcpu *n)
>>      WRITE_SYSREG(hcr & ~HCR_VM, HCR_EL2);
>>      isb();
>>
>> -    WRITE_SYSREG64(p2m->vttbr, VTTBR_EL2);
>> +    WRITE_SYSREG64(p2m->vttbr.vttbr, VTTBR_EL2);
>>      isb();
>>
>>      if ( is_32bit_domain(n->domain) )
>> @@ -147,10 +147,10 @@ static void p2m_flush_tlb(struct p2m_domain *p2m)
>>       * VMID. So switch to the VTTBR of a given P2M if different.
>>       */
>>      ovttbr = READ_SYSREG64(VTTBR_EL2);
>> -    if ( ovttbr != p2m->vttbr )
>> +    if ( ovttbr != p2m->vttbr.vttbr )
>>      {
>>          local_irq_save(flags);
>> -        WRITE_SYSREG64(p2m->vttbr, VTTBR_EL2);
>> +        WRITE_SYSREG64(p2m->vttbr.vttbr, VTTBR_EL2);
>>          isb();
>>      }
>>
>> @@ -1293,7 +1293,7 @@ static int p2m_alloc_table(struct domain *d)
>>
>>      p2m->root = page;
>>
>> -    p2m->vttbr = page_to_maddr(p2m->root) | ((uint64_t)p2m->vmid &
>> 0xff) << 48;
>> +    p2m->vttbr.vttbr = page_to_maddr(p2m->root) |
>> ((uint64_t)p2m->vmid & 0xff) << 48;
>>
>>      /*
>>       * Make sure that all TLBs corresponding to the new VMID are flushed
>> diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
>> index 06f06e3..12be7c9 100644
>> --- a/xen/arch/arm/traps.c
>> +++ b/xen/arch/arm/traps.c
>> @@ -881,7 +881,7 @@ void vcpu_show_registers(const struct vcpu *v)
>>      ctxt.ifsr32_el2 = v->arch.ifsr;
>>  #endif
>>
>> -    ctxt.vttbr_el2 = v->domain->arch.p2m.vttbr;
>> +    ctxt.vttbr_el2 = v->domain->arch.p2m.vttbr.vttbr;
>>
>>      _show_registers(&v->arch.cpu_info->guest_cpu_user_regs, &ctxt, 1,
>> v);
>>  }
>> diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
>> index 53c4d78..5c7cd1a 100644
>> --- a/xen/include/asm-arm/p2m.h
>> +++ b/xen/include/asm-arm/p2m.h
>> @@ -33,7 +33,7 @@ struct p2m_domain {
>>      uint8_t vmid;
>>
>>      /* Current Translation Table Base Register for the p2m */
>> -    uint64_t vttbr;
>> +    struct vttbr vttbr;
>>
>>      /*
>>       * Highest guest frame that's ever been mapped in the p2m
>> diff --git a/xen/include/asm-arm/processor.h
>> b/xen/include/asm-arm/processor.h
>> index 15bf890..f8ca18c 100644
>> --- a/xen/include/asm-arm/processor.h
>> +++ b/xen/include/asm-arm/processor.h
>> @@ -529,6 +529,22 @@ union hsr {
>>
>>
>>  };
>> +
>> +/* VTTBR: Virtualization Translation Table Base Register */
>> +struct vttbr {
>> +    union {
>> +        struct {
>> +            u64 baddr :40, /* variable res0: from 0-(x-1) bit */
>
> As mentioned on the previous series, this field is 48 bits for ARMv8
> (see ARM D7.2.102 in DDI 0487A.j).
>
>> +                res1  :8,
>> +                vmid  :8,
>> +                res2  :8;
>> +        };
>> +        u64 vttbr;
>> +    };
>> +};
>> +
>> +#define INVALID_VTTBR (0UL)
>> +
>>  #endif
>>
>>  /* HSR.EC == HSR_CP{15,14,10}_32 */
>>
>
> Regards,
>

-- 
Julien Grall


* Re: [PATCH v2 00/25] arm/altp2m: Introducing altp2m to ARM.
  2016-08-03 16:51                           ` Julien Grall
@ 2016-08-03 17:30                             ` Andrew Cooper
  2016-08-03 17:43                               ` Tamas K Lengyel
  0 siblings, 1 reply; 159+ messages in thread
From: Andrew Cooper @ 2016-08-03 17:30 UTC (permalink / raw)
  To: Julien Grall, Tamas K Lengyel
  Cc: George Dunlap, Sergej Proskurin, Stefano Stabellini, Xen-devel

On 03/08/16 17:51, Julien Grall wrote:
>
>
> On 03/08/16 17:42, Tamas K Lengyel wrote:
>> On Wed, Aug 3, 2016 at 10:24 AM, Julien Grall <julien.grall@arm.com>
>> wrote:
>>> Hi Tamas,
>>>
>>>
>>> On 03/08/16 17:01, Tamas K Lengyel wrote:
>>>>
>>>> On Wed, Aug 3, 2016 at 8:08 AM, Julien Grall <julien.grall@arm.com>
>>>> wrote:
>>>>>
>>>>> Hello Sergej,
>>>>>
>>>>> Please try to reply to all when answering on the ML. Otherwise the
>>>>> answer
>>>>> may be delayed/lost.
>>>>>
>>>>> On 03/08/16 13:45, Sergej Proskurin wrote:
>>>>>>
>>>>>>
>>>>>> The interesting part about #VE is that it allows handling certain
>>>>>> violations (currently limited to EPT violations -- future
>>>>>> implementations might also introduce further violations) inside of
>>>>>> the guest, without the need to explicitly trap into the VMM. Thus,
>>>>>> #VE allows switching between different memory views in-guest.
>>>>>> Because of this, I also agree that event channels would suffice in
>>>>>> our case, since we do not have sufficient hardware support on ARM
>>>>>> and would need to trap into the VMM anyway.
>>>>>
>>>>>
>>>>>
>>>>> The cost of doing a hypercall on ARM is very small compared to x86
>>>>> (~1/3 of the number of x86 cycles) because we don't have to save all
>>>>> the state every time. So I am not convinced by the argument of
>>>>> limiting the number of traps to the hypervisor in order to allow a
>>>>> guest to play with altp2m on ARM.
>>>>>
>>>>> I will have to see a concrete example before going forward with
>>>>> the event
>>>>> channel.
>>>>
>>>>
>>>> It is out-of-scope for what we are trying to achieve with this series
>>>> at this point. The question at hand is really whether the altp2m switch
>>>> and gfn remapping ops should be exposed to the guest. Without #VE -
>>>> which we are not implementing - setting the mem_access settings from
>>>> within the guest doesn't make sense so restricting access there is
>>>> reasonable.
>>>>
>>>> As I outlined, the switch and gfn remapping can have legitimate
>>>> use-cases by themselves without any mem_access bits involved. However,
>>>> it is not our use-case so we have no problem restricting access there
>>>> either. So the question is whether that's the right path to take here.
>>>> At this point I'm not sure there is agreement about it or not.
>>>
>>>
>>> Could you give a legitimate use case of gfn remapping from the
>>> guest? And
>>> explain how it would work with only this patch series.
>>>
>>> From my perspective, and after the numerous exchange in this thread,
>>> I do
>>> not think it is wise to expose this interface to the guest on ARM.
>>> The usage is very limited but it increases the attack surface. So I
>>> will not ack such a choice; however, I will not nack it.
>>>
>>
>> Since the interface would be available only for domains where they
>> were explicitly created with altp2m=1 flag set I think the exposure is
>> minimal.
>>
>> As for a use-case, I don't have a real world example as it's not how
>> we use the system. But as I pointed out earlier I could imagine the
>> gfn remapping be used to protect kernel memory areas against
>> information disclosure by only switching to the accessible altp2m view
>> when certain conditions are met. What I mean is that a certain gfn
>> could be remapped to a dummy mfn by default and only switched to the
>> accessible view when necessary. How much extra protection that would
>> add and under what condition is up for debate but IMHO it is a
>> legitimate experimental use - and altp2m is an experimental system.
>
> Such a solution may give you a lot of headaches with the cache.
>
>>
>> Whether it's worth to have such an interface or not I'm not sure, I'm
>> OK with going either way on this, but since it's available on x86 I
>> think it would make sense to have feature parity - even if only
>> partially for now.
>
> As I mentioned a couple of times, we do not introduce features on ARM
> just because they exist on x86. We introduce them after careful thought
> about how they could benefit ARM and how they would be used.
>
> Nothing prevents a follow-up series from allowing the guest to access
> altp2m operations by default, because the interface is already there.
>
> Stefano, do you have any opinions on this?

From my point of view, feature parity with x86 is only important if the
feature is equally capable, and this thread has shown that this is not
the case.

IMO, the choice is between:

1) Don't expose altp2m to guests, or
2) Make a para-virtual version of #VE for ARM guests, and expose the
full guest interface.

I am not fussed either way, but with my Security Team hat on, exposing
half an interface which can't usefully be used and has no current
use case is a recipe for bugs with an XSA/CVE attached to them, and
therefore extra paperwork for me or someone else to do.

~Andrew


* Re: [PATCH v2 04/25] arm/altp2m: Move hostp2m init/teardown to individual functions.
  2016-08-01 17:10 ` [PATCH v2 04/25] arm/altp2m: Move hostp2m init/teardown to individual functions Sergej Proskurin
@ 2016-08-03 17:40   ` Julien Grall
  2016-08-05  7:26     ` Sergej Proskurin
  0 siblings, 1 reply; 159+ messages in thread
From: Julien Grall @ 2016-08-03 17:40 UTC (permalink / raw)
  To: Sergej Proskurin, xen-devel; +Cc: Stefano Stabellini

Hello Sergej,

Title: s/altp2m/p2m/ and please drop the full stop.

On 01/08/16 18:10, Sergej Proskurin wrote:
> This commit pulls generic init/teardown functionality out of
> p2m_init and p2m_teardown into p2m_init_one, p2m_free_one, and
> p2m_flush_table functions.  This allows our future implementation to
> reuse existing code for the initialization/teardown of altp2m views.

Please avoid mixing up code movement and new additions. This makes the 
code more difficult to review.

Also, you don't mention the new changes in the commit message.

After reading the patch, I think it should really be split up, with an 
explanation of why you split it like that.

>
> Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
> ---
> Cc: Stefano Stabellini <sstabellini@kernel.org>
> Cc: Julien Grall <julien.grall@arm.com>
> ---
> v2: Added the function p2m_flush_table to the previous version.
> ---
>  xen/arch/arm/p2m.c        | 74 +++++++++++++++++++++++++++++++++++++----------
>  xen/include/asm-arm/p2m.h | 11 +++++++
>  2 files changed, 70 insertions(+), 15 deletions(-)
>
> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> index cbc64a1..17f3299 100644
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -1360,50 +1360,94 @@ static void p2m_free_vmid(struct domain *d)
>      spin_unlock(&vmid_alloc_lock);
>  }
>
> -void p2m_teardown(struct domain *d)
> +/* Reset this p2m table to be empty */
> +void p2m_flush_table(struct p2m_domain *p2m)

Any function exported should have its prototype in a header within the 
same patch.

>  {
> -    struct p2m_domain *p2m = &d->arch.p2m;
> -    struct page_info *pg;
> +    struct page_info *page, *pg;
> +    unsigned int i;
> +
> +    page = p2m->root;


This function can be called with p2m->root equal to NULL (see the check 
in p2m_free_one).

> +
> +    /* Clear all concatenated first level pages */
> +    for ( i = 0; i < P2M_ROOT_PAGES; i++ )
> +        clear_and_clean_page(page + i);
>
> +    /* Free the rest of the trie pages back to the paging pool */
>      while ( (pg = page_list_remove_head(&p2m->pages)) )
>          free_domheap_page(pg);
> +}
> +
> +static inline void p2m_free_one(struct p2m_domain *p2m)

Why inline here? Also, it seems that you export the function later. Why 
don't you do it here?

Finally, I think this function should be renamed p2m_teardown_one to 
match the callers' name.

> +{
> +    p2m_flush_table(p2m);
> +
> +    /* Free VMID and reset VTTBR */
> +    p2m_free_vmid(p2m->domain);

Why do you move the call to p2m_free_vmid?

> +    p2m->vttbr.vttbr = INVALID_VTTBR;

Why do you reset vttbr? The p2m will never be used afterwards.

>
>      if ( p2m->root )
>          free_domheap_pages(p2m->root, P2M_ROOT_ORDER);
>
>      p2m->root = NULL;
>
> -    p2m_free_vmid(d);
> -
>      radix_tree_destroy(&p2m->mem_access_settings, NULL);
>  }
>
> -int p2m_init(struct domain *d)
> +int p2m_init_one(struct domain *d, struct p2m_domain *p2m)

Any function exported should have its prototype in a header within the 
same patch.

>  {
> -    struct p2m_domain *p2m = &d->arch.p2m;
>      int rc = 0;
>
>      rwlock_init(&p2m->lock);
>      INIT_PAGE_LIST_HEAD(&p2m->pages);
>
> -    p2m->vmid = INVALID_VMID;
> -

Why this is dropped?

>      rc = p2m_alloc_vmid(d);
>      if ( rc != 0 )
>          return rc;
>
> -    p2m->max_mapped_gfn = _gfn(0);
> -    p2m->lowest_mapped_gfn = _gfn(ULONG_MAX);
> -
> -    p2m->default_access = p2m_access_rwx;
> +    p2m->domain = d;
> +    p2m->access_required = false;
>      p2m->mem_access_enabled = false;
> +    p2m->default_access = p2m_access_rwx;
> +    p2m->root = NULL;
> +    p2m->max_mapped_gfn = _gfn(0);
> +    p2m->lowest_mapped_gfn = INVALID_GFN;

Please don't move code when it is not necessary. This makes the code 
more difficult to review.

> +    p2m->vttbr.vttbr = INVALID_VTTBR;
>      radix_tree_init(&p2m->mem_access_settings);
>
> -    rc = p2m_alloc_table(d);
> -

The function p2m_init_one should fully initialize the p2m (i.e. allocate 
the table).

Why doesn't altp2m_destroy_by_id free the p2m entirely?
This would simplify this series a lot and avoid spreading p2m 
initialization everywhere.

>      return rc;
>  }
>
> +static void p2m_teardown_hostp2m(struct domain *d)
> +{
> +    struct p2m_domain *p2m = p2m_get_hostp2m(d);
> +
> +    p2m_free_one(p2m);
> +}
> +
> +void p2m_teardown(struct domain *d)
> +{
> +    p2m_teardown_hostp2m(d);
> +}
> +
> +static int p2m_init_hostp2m(struct domain *d)
> +{
> +    int rc;
> +    struct p2m_domain *p2m = p2m_get_hostp2m(d);
> +
> +    p2m->p2m_class = p2m_host;
> +
> +    rc = p2m_init_one(d, p2m);
> +    if ( rc )
> +        return rc;
> +
> +    return p2m_alloc_table(d);
> +}
> +
> +int p2m_init(struct domain *d)
> +{
> +    return p2m_init_hostp2m(d);
> +}
> +
>  int relinquish_p2m_mapping(struct domain *d)
>  {
>      struct p2m_domain *p2m = &d->arch.p2m;
> diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
> index 5c7cd1a..1f9c370 100644
> --- a/xen/include/asm-arm/p2m.h
> +++ b/xen/include/asm-arm/p2m.h
> @@ -18,6 +18,11 @@ struct domain;
>
>  extern void memory_type_changed(struct domain *);
>
> +typedef enum {
> +    p2m_host,
> +    p2m_alternate,
> +} p2m_class_t;
> +

This addition should really be in a separate patch.

>  /* Per-p2m-table state */
>  struct p2m_domain {
>      /* Lock that protects updates to the p2m */
> @@ -78,6 +83,12 @@ struct p2m_domain {
>       * enough available bits to store this information.
>       */
>      struct radix_tree_root mem_access_settings;
> +
> +    /* Choose between: host/alternate */
> +    p2m_class_t p2m_class;
> +
> +    /* Back pointer to domain */
> +    struct domain *domain;

Same here, with a justification of why we want it.

>  };
>
>  /*
>

Regards,

-- 
Julien Grall


* Re: [PATCH v2 00/25] arm/altp2m: Introducing altp2m to ARM.
  2016-08-03 17:30                             ` Andrew Cooper
@ 2016-08-03 17:43                               ` Tamas K Lengyel
  2016-08-03 17:45                                 ` Julien Grall
  0 siblings, 1 reply; 159+ messages in thread
From: Tamas K Lengyel @ 2016-08-03 17:43 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: George Dunlap, Sergej Proskurin, Julien Grall,
	Stefano Stabellini, Xen-devel

On Wed, Aug 3, 2016 at 11:30 AM, Andrew Cooper
<andrew.cooper3@citrix.com> wrote:
> On 03/08/16 17:51, Julien Grall wrote:
>> [...]
>
> From my point of view, feature parity with x86 is only important if the
> feature is equally capable, and this thread has shown that this is not
> the case.
>
> IMO, the choice is between:
>
> 1) Don't expose altp2m to guests, or
> 2) Make a para-virtual version of #VE for ARM guests, and expose the
> full guest interface.
>
> I am not fussed either way, but with my Security Team hat on, exposing
> half an interface which can't usefully be used and has no current
> use case is a recipe for bugs with an XSA/CVE attached to them, and
> therefore extra paperwork for me or someone else to do.
>

Well, if there are latent XSA/CVE issues with just half the interface
I don't see how doing the full interface would avoid that. But fair
enough, if Stefano agrees we can close this issue and just introduce a
new set of domctl's.

Tamas


* Re: [PATCH v2 00/25] arm/altp2m: Introducing altp2m to ARM.
  2016-08-03 17:43                               ` Tamas K Lengyel
@ 2016-08-03 17:45                                 ` Julien Grall
  2016-08-03 17:51                                   ` Tamas K Lengyel
  0 siblings, 1 reply; 159+ messages in thread
From: Julien Grall @ 2016-08-03 17:45 UTC (permalink / raw)
  To: Tamas K Lengyel, Andrew Cooper
  Cc: George Dunlap, Sergej Proskurin, Stefano Stabellini, Xen-devel



On 03/08/16 18:43, Tamas K Lengyel wrote:
> On Wed, Aug 3, 2016 at 11:30 AM, Andrew Cooper
> <andrew.cooper3@citrix.com> wrote:
>> On 03/08/16 17:51, Julien Grall wrote:
>>> [...]
>>
>> From my point of view, feature parity with x86 is only important if the
>> feature is equally capable, and this thread has shown that this is not
>> the case.
>>
>> IMO, the choice is between:
>>
>> 1) Don't expose altp2m to guests, or
>> 2) Make a para-virtual version of #VE for ARM guests, and expose the
>> full guest interface.
>>
>> I am not fussed either way, but with my Security Team hat on, exposing
>> half an interface which can't usefully be used and has no current
>> use case is a recipe for bugs with an XSA/CVE attached to them, and
>> therefore extra paperwork for me or someone else to do.
>>
>
> Well, if there are latent XSA/CVE issues with just half the interface
> I don't see how doing the full interface would avoid that. But fair
> enough, if Stefano agrees we can close this issue and just introduce a
> new set of domctl's.

The whole discussion of this series was about deferring the exposure of 
the altp2m HVMOPs to the guest until we find a usage, i.e. a simple:

xsm_hvm_altp2m_op(XSM_PRIV/XSM_DM_PRIV, d);

So why do you want to re-invent a new interface here?

Regards,

-- 
Julien Grall


* Re: [PATCH v2 00/25] arm/altp2m: Introducing altp2m to ARM.
  2016-08-03 17:45                                 ` Julien Grall
@ 2016-08-03 17:51                                   ` Tamas K Lengyel
  2016-08-03 17:56                                     ` Julien Grall
  0 siblings, 1 reply; 159+ messages in thread
From: Tamas K Lengyel @ 2016-08-03 17:51 UTC (permalink / raw)
  To: Julien Grall
  Cc: George Dunlap, Andrew Cooper, Stefano Stabellini, Xen-devel,
	Sergej Proskurin

On Wed, Aug 3, 2016 at 11:45 AM, Julien Grall <julien.grall@arm.com> wrote:
>
> [...]
>
> The whole discussion of this series was to defer the exposition of altp2m
> HVMOP to the guest until we find a usage. I.e a simple:
>
> xsm_hvm_altp2m_op(XSM_PRIV/XSM_DM_PRIV, d);
>
> So why do you want to re-invent a new interface here?

I guess I misinterpreted your request of not having this interface
exposed to the guest. If we are fine with exposing the interface to
the guest but having XSM manage whether it's allowed by default I'm
certainly OK with that.

Tamas


* Re: [PATCH v2 00/25] arm/altp2m: Introducing altp2m to ARM.
  2016-08-03 17:51                                   ` Tamas K Lengyel
@ 2016-08-03 17:56                                     ` Julien Grall
  2016-08-03 18:11                                       ` Tamas K Lengyel
  0 siblings, 1 reply; 159+ messages in thread
From: Julien Grall @ 2016-08-03 17:56 UTC (permalink / raw)
  To: Tamas K Lengyel
  Cc: George Dunlap, Andrew Cooper, Stefano Stabellini, Xen-devel,
	Sergej Proskurin



On 03/08/16 18:51, Tamas K Lengyel wrote:
> On Wed, Aug 3, 2016 at 11:45 AM, Julien Grall <julien.grall@arm.com> wrote:
>> The whole discussion of this series was to defer the exposition of altp2m
>> HVMOP to the guest until we find a usage. I.e a simple:
>>
>> xsm_hvm_altp2m_op(XSM_PRIV/XSM_DM_PRIV, d);
>>
>> So why do you want to re-invent a new interface here?
>
> I guess I misinterpreted your request of not having this interface
> exposed to the guest. If we are fine with exposing the interface to
> the guest but having XSM manage whether it's allowed by default I'm
> certainly OK with that.

By default the interface will not be exposed to the guest. 
XSM_PRIV/XSM_DM_PRIV only allow a privileged domain or a device model 
domain to use the interface. The guest will not be able to access it.

If the user decides to allow a guest to access altp2m ops with XSM, then 
I don't think it is our business if a security issue is exposed.

Regards,

-- 
Julien Grall


* Re: [PATCH v2 05/25] arm/altp2m: Rename and extend p2m_alloc_table.
  2016-08-01 17:10 ` [PATCH v2 05/25] arm/altp2m: Rename and extend p2m_alloc_table Sergej Proskurin
@ 2016-08-03 17:57   ` Julien Grall
  2016-08-06  8:57     ` Sergej Proskurin
  0 siblings, 1 reply; 159+ messages in thread
From: Julien Grall @ 2016-08-03 17:57 UTC (permalink / raw)
  To: Sergej Proskurin, xen-devel; +Cc: Stefano Stabellini

Hello Sergej,

Title: s/altp2m/p2m/

On 01/08/16 18:10, Sergej Proskurin wrote:
> The initially named function "p2m_alloc_table" allocated pages solely
> required for the host's p2m. The new implementation leaves p2m
> allocation related parts inside of this function to generally initialize
> p2m/altp2m tables. This is done generically, as the host's p2m and
> altp2m tables are allocated similarly. Since this function will be used
> by the altp2m initialization routines, it is not made static. In
> addition, this commit provides the overlay function "p2m_table_init"
> that is used for the host's p2m allocation/initialization.
>
> Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
> ---
> Cc: Stefano Stabellini <sstabellini@kernel.org>
> Cc: Julien Grall <julien.grall@arm.com>
> ---
> v2: Removed altp2m table initialization from "p2m_table_init".
> ---
>  xen/arch/arm/p2m.c | 29 +++++++++++++++++++++++------
>  1 file changed, 23 insertions(+), 6 deletions(-)
>
> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> index 17f3299..63fe3d9 100644
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -1277,11 +1277,11 @@ void guest_physmap_remove_page(struct domain *d,
>      p2m_remove_mapping(d, gfn, (1 << page_order), mfn);
>  }
>
> -static int p2m_alloc_table(struct domain *d)
> +int p2m_alloc_table(struct p2m_domain *p2m)

The function p2m_alloc_table should not be exposed outside of the p2m 
code. I explained why in the previous patch.

>  {
> -    struct p2m_domain *p2m = &d->arch.p2m;
> -    struct page_info *page;
>      unsigned int i;
> +    struct page_info *page;
> +    struct vttbr *vttbr = &p2m->vttbr;
>
>      page = alloc_domheap_pages(NULL, P2M_ROOT_ORDER, 0);
>      if ( page == NULL )
> @@ -1293,11 +1293,28 @@ static int p2m_alloc_table(struct domain *d)
>
>      p2m->root = page;
>
> -    p2m->vttbr.vttbr = page_to_maddr(p2m->root) | ((uint64_t)p2m->vmid & 0xff) << 48;
> +    /* Initialize the VTTBR associated with the allocated p2m table. */
> +    vttbr->vttbr = 0;
> +    vttbr->vmid = p2m->vmid & 0xff;
> +    vttbr->baddr = page_to_maddr(p2m->root);

This change does not belong in this patch. If we want to use VTTBR, it 
should be in patch #3.

> +
> +    return 0;
> +}
> +
> +static int p2m_table_init(struct domain *d)
> +{
> +    int rc;
> +    struct p2m_domain *p2m = p2m_get_hostp2m(d);
> +
> +    rc = p2m_alloc_table(p2m);
> +    if ( rc != 0 )
> +        return -ENOMEM;
> +
> +    d->arch.altp2m_active = false;

Please avoid spreading the altp2m changes everywhere. The addition of 
altp2m to the p2m code should really be a single patch.

>
>      /*
>       * Make sure that all TLBs corresponding to the new VMID are flushed
> -     * before using it
> +     * before using it.
>       */
>      p2m_flush_tlb(p2m);
>
> @@ -1440,7 +1457,7 @@ static int p2m_init_hostp2m(struct domain *d)
>      if ( rc )
>          return rc;
>
> -    return p2m_alloc_table(d);
> +    return p2m_table_init(d);
>  }
>
>  int p2m_init(struct domain *d)
>

Regards,

-- 
Julien Grall


^ permalink raw reply	[flat|nested] 159+ messages in thread

* Re: [PATCH v2 06/25] arm/altp2m: Cosmetic fixes - function prototypes.
  2016-08-01 17:10 ` [PATCH v2 06/25] arm/altp2m: Cosmetic fixes - function prototypes Sergej Proskurin
@ 2016-08-03 18:02   ` Julien Grall
  2016-08-06  9:00     ` Sergej Proskurin
  0 siblings, 1 reply; 159+ messages in thread
From: Julien Grall @ 2016-08-03 18:02 UTC (permalink / raw)
  To: Sergej Proskurin, xen-devel; +Cc: Stefano Stabellini

Hello Sergej,

Title: s/altp2m/p2m/ and please remove the full stop.

Also this is not really a cosmetic change.

On 01/08/16 18:10, Sergej Proskurin wrote:
> This commit changes the prototype of the following functions:
> - p2m_alloc_vmid
> - p2m_free_vmid
>
> Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
> ---
> Cc: Stefano Stabellini <sstabellini@kernel.org>
> Cc: Julien Grall <julien.grall@arm.com>
> ---
>  xen/arch/arm/p2m.c | 12 +++++-------
>  1 file changed, 5 insertions(+), 7 deletions(-)
>
> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> index 63fe3d9..ff9c0d1 100644
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -1337,11 +1337,10 @@ void p2m_vmid_allocator_init(void)
>      set_bit(INVALID_VMID, vmid_mask);
>  }
>
> -static int p2m_alloc_vmid(struct domain *d)
> +static int p2m_alloc_vmid(struct p2m_domain *p2m)

I am not a big fan of this interface change. I would much prefer if the 
VMID was returned and set to p2m->vmid by the caller.

So this would return either the VMID or INVALID_VMID.

>  {
> -    struct p2m_domain *p2m = &d->arch.p2m;
> -
>      int rc, nr;
> +    struct domain *d = p2m->domain;
>
>      spin_lock(&vmid_alloc_lock);
>
> @@ -1367,9 +1366,8 @@ out:
>      return rc;
>  }
>
> -static void p2m_free_vmid(struct domain *d)
> +static void p2m_free_vmid(struct p2m_domain *p2m)

Likewise here. It would be better if the function took a VMID as a 
parameter.
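The interface shape suggested in the two comments above could be sketched outside Xen roughly as follows. Everything here — the bitmap, the constants, the 16-bit return type — is a hypothetical stand-in for the real Xen definitions, not code from the series:

```c
#include <assert.h>
#include <stdint.h>

/* Stand-ins for Xen's real definitions (hypothetical values). */
#define MAX_VMID     256
#define INVALID_VMID 0        /* VMID 0 reserved as the invalid marker */

static uint8_t vmid_in_use[MAX_VMID];

/* Return a freshly allocated VMID, or INVALID_VMID on exhaustion.
 * The caller then assigns it: p2m->vmid = p2m_alloc_vmid(); */
static uint16_t p2m_alloc_vmid(void)
{
    uint16_t nr;

    for ( nr = 1; nr < MAX_VMID; nr++ )
    {
        if ( !vmid_in_use[nr] )
        {
            vmid_in_use[nr] = 1;
            return nr;
        }
    }

    return INVALID_VMID;
}

/* Release a VMID previously handed out by p2m_alloc_vmid(). */
static void p2m_free_vmid(uint16_t vmid)
{
    if ( vmid != INVALID_VMID )
        vmid_in_use[vmid] = 0;
}
```

With this shape neither function needs the struct p2m_domain (the locking is omitted here for brevity), and an altp2m view that keeps its VMID across re-initialization simply never calls p2m_free_vmid.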

>  {
> -    struct p2m_domain *p2m = &d->arch.p2m;
>      spin_lock(&vmid_alloc_lock);
>      if ( p2m->vmid != INVALID_VMID )
>          clear_bit(p2m->vmid, vmid_mask);
> @@ -1399,7 +1397,7 @@ static inline void p2m_free_one(struct p2m_domain *p2m)
>      p2m_flush_table(p2m);
>
>      /* Free VMID and reset VTTBR */
> -    p2m_free_vmid(p2m->domain);
> +    p2m_free_vmid(p2m);
>      p2m->vttbr.vttbr = INVALID_VTTBR;
>
>      if ( p2m->root )
> @@ -1417,7 +1415,7 @@ int p2m_init_one(struct domain *d, struct p2m_domain *p2m)
>      rwlock_init(&p2m->lock);
>      INIT_PAGE_LIST_HEAD(&p2m->pages);
>
> -    rc = p2m_alloc_vmid(d);
> +    rc = p2m_alloc_vmid(p2m);
>      if ( rc != 0 )
>          return rc;
>
>

Regards,

-- 
Julien Grall


^ permalink raw reply	[flat|nested] 159+ messages in thread

* Re: [PATCH v2 00/25] arm/altp2m: Introducing altp2m to ARM.
  2016-08-03 17:56                                     ` Julien Grall
@ 2016-08-03 18:11                                       ` Tamas K Lengyel
  2016-08-03 18:16                                         ` Julien Grall
  0 siblings, 1 reply; 159+ messages in thread
From: Tamas K Lengyel @ 2016-08-03 18:11 UTC (permalink / raw)
  To: Julien Grall
  Cc: George Dunlap, Andrew Cooper, Stefano Stabellini, Xen-devel,
	Sergej Proskurin

On Wed, Aug 3, 2016 at 11:56 AM, Julien Grall <julien.grall@arm.com> wrote:
>
>
> On 03/08/16 18:51, Tamas K Lengyel wrote:
>>
>> On Wed, Aug 3, 2016 at 11:45 AM, Julien Grall <julien.grall@arm.com>
>> wrote:
>>>
>>> The whole discussion of this series was to defer the exposition of altp2m
>>> HVMOP to the guest until we find a usage. I.e a simple:
>>>
>>> xsm_hvm_altp2m_op(XSM_PRIV/XSM_DM_PRIV, d);
>>>
>>> So why do you want to re-invent a new interface here?
>>
>>
>> I guess I misinterpreted your request of not having this interface
>> exposed to the guest. If we are fine with exposing the interface to
>> the guest but having XSM manage whether it's allowed by default I'm
>> certainly OK with that.
>
>
> By default the interface will not be exposed to the guest.
> XSM_PRIV/XSM_DM_PRIV only allow a privileged domain or a device model domain
> to use the interface. The guest will not be enabled to access it.

Yes. I guess our terminology differs about what we mean by "exposed".
In my book, if the interface is available to the guest but access
attempts are denied by XSM, that means the interface is exposed but
restricted.

>
> If the user decide to allow a guest accessing altp2m op with XSM, then I
> don't think it our business if a security issue is exposed.
>

I agree.

Tamas


^ permalink raw reply	[flat|nested] 159+ messages in thread

* Re: [PATCH v2 07/25] arm/altp2m: Add altp2m init/teardown routines.
  2016-08-01 17:10 ` [PATCH v2 07/25] arm/altp2m: Add altp2m init/teardown routines Sergej Proskurin
@ 2016-08-03 18:12   ` Julien Grall
  2016-08-05  6:53     ` Sergej Proskurin
  0 siblings, 1 reply; 159+ messages in thread
From: Julien Grall @ 2016-08-03 18:12 UTC (permalink / raw)
  To: Sergej Proskurin, xen-devel; +Cc: Stefano Stabellini

Hello Sergej,

On 01/08/16 18:10, Sergej Proskurin wrote:
> The p2m initialization now invokes initialization routines responsible
> for the allocation and initialization of altp2m structures. The same
> applies to teardown routines. The functionality has been adopted from
> the x86 altp2m implementation.
>
> Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
> ---
> Cc: Stefano Stabellini <sstabellini@kernel.org>
> Cc: Julien Grall <julien.grall@arm.com>
> ---
> v2: Shared code between host/altp2m init/teardown functions.
>     Added conditional init/teardown of altp2m.
>     Altp2m related functions are moved to altp2m.c
> ---
>  xen/arch/arm/Makefile        |  1 +
>  xen/arch/arm/altp2m.c        | 71 ++++++++++++++++++++++++++++++++++++++++++++
>  xen/arch/arm/p2m.c           | 28 +++++++++++++----
>  xen/include/asm-arm/altp2m.h |  6 ++++
>  xen/include/asm-arm/domain.h |  4 +++
>  xen/include/asm-arm/p2m.h    |  5 ++++
>  6 files changed, 110 insertions(+), 5 deletions(-)
>  create mode 100644 xen/arch/arm/altp2m.c
>
> diff --git a/xen/arch/arm/Makefile b/xen/arch/arm/Makefile
> index 23aaf52..4a7f660 100644
> --- a/xen/arch/arm/Makefile
> +++ b/xen/arch/arm/Makefile
> @@ -5,6 +5,7 @@ subdir-$(CONFIG_ARM_64) += efi
>  subdir-$(CONFIG_ACPI) += acpi
>
>  obj-$(CONFIG_ALTERNATIVE) += alternative.o
> +obj-y += altp2m.o
>  obj-y += bootfdt.o
>  obj-y += cpu.o
>  obj-y += cpuerrata.o
> diff --git a/xen/arch/arm/altp2m.c b/xen/arch/arm/altp2m.c
> new file mode 100644
> index 0000000..abbd39a
> --- /dev/null
> +++ b/xen/arch/arm/altp2m.c
> @@ -0,0 +1,71 @@
> +/*
> + * arch/arm/altp2m.c
> + *
> + * Alternate p2m
> + * Copyright (c) 2016 Sergej Proskurin <proskurin@sec.in.tum.de>
> + *
> + * This program is free software; you can redistribute it and/or modify it
> + * under the terms and conditions of the GNU General Public License, version 2,
> + * as published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope it will be useful, but WITHOUT ANY
> + * WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
> + * FOR A PARTICULAR PURPOSE.  See the GNU General Public License for more
> + * details.
> + *
> + * You should have received a copy of the GNU General Public License along with
> + * this program; If not, see <http://www.gnu.org/licenses/>.
> + */
> +
> +#include <asm/p2m.h>
> +#include <asm/altp2m.h>
> +
> +int altp2m_init(struct domain *d)
> +{
> +    unsigned int i;
> +
> +    spin_lock_init(&d->arch.altp2m_lock);
> +
> +    for ( i = 0; i < MAX_ALTP2M; i++ )
> +    {
> +        d->arch.altp2m_p2m[i] = NULL;
> +        d->arch.altp2m_vttbr[i] = INVALID_VTTBR;

I don't think altp2m_vttbr is useful. There is no real performance 
impact in freeing the whole altp2m when it is destroyed (see 
altp2m_destroy_by_id) and re-allocating it afterwards.

The code will actually be much simpler. With this solution you will be 
able to detect whether an altp2m is available by testing whether 
altp2m_p2m[i] is NULL.
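The simplification described above — drop the shadow altp2m_vttbr[] array and treat a NULL entry in altp2m_p2m[] as "no view here" — could look like this sketch. The types and helper name are stand-ins, not the real Xen structures:

```c
#include <assert.h>
#include <stddef.h>

#define MAX_ALTP2M 10

/* Minimal stand-in for Xen's struct p2m_domain. */
struct p2m_domain { unsigned int vmid; };

static struct p2m_domain *altp2m_p2m[MAX_ALTP2M];

/* An altp2m view exists iff its slot is non-NULL: there is no
 * separate VTTBR array that must be kept in sync. */
static int altp2m_is_valid(unsigned int idx)
{
    return idx < MAX_ALTP2M && altp2m_p2m[idx] != NULL;
}
```

Destroying a view then reduces to freeing the structure and storing NULL back into the slot.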

> +    }
> +
> +    return 0;
> +}
> +
> +void altp2m_teardown(struct domain *d)
> +{
> +    unsigned int i;
> +    struct p2m_domain *p2m;
> +
> +    altp2m_lock(d);

The lock is not necessary here.

> +
> +    for ( i = 0; i < MAX_ALTP2M; i++ )
> +    {
> +        if ( !d->arch.altp2m_p2m[i] )
> +            continue;
> +
> +        p2m = d->arch.altp2m_p2m[i];
> +        p2m_free_one(p2m);
> +        xfree(p2m);
> +
> +        d->arch.altp2m_vttbr[i] = INVALID_VTTBR;
> +        d->arch.altp2m_p2m[i] = NULL;

The domain will never be used afterwards, so there is no point in 
setting altp2m_vttbr and altp2m_p2m.

> +    }
> +
> +    d->arch.altp2m_active = false;

Ditto.

> +
> +    altp2m_unlock(d);
> +}
> +
> +/*
> + * Local variables:
> + * mode: C
> + * c-file-style: "BSD"
> + * c-basic-offset: 4
> + * tab-width: 4
> + * indent-tabs-mode: nil
> + * End:
> + */
> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> index ff9c0d1..29ec5e5 100644
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -14,6 +14,8 @@
>  #include <asm/hardirq.h>
>  #include <asm/page.h>
>
> +#include <asm/altp2m.h>
> +
>  #ifdef CONFIG_ARM_64
>  static unsigned int __read_mostly p2m_root_order;
>  static unsigned int __read_mostly p2m_root_level;
> @@ -1392,7 +1394,7 @@ void p2m_flush_table(struct p2m_domain *p2m)
>          free_domheap_page(pg);
>  }
>
> -static inline void p2m_free_one(struct p2m_domain *p2m)
> +void p2m_free_one(struct p2m_domain *p2m)

Please expose p2m_free_one in patch #4.

>  {
>      p2m_flush_table(p2m);
>
> @@ -1415,9 +1417,13 @@ int p2m_init_one(struct domain *d, struct p2m_domain *p2m)
>      rwlock_init(&p2m->lock);
>      INIT_PAGE_LIST_HEAD(&p2m->pages);
>
> -    rc = p2m_alloc_vmid(p2m);
> -    if ( rc != 0 )
> -        return rc;
> +    /* Reused altp2m views keep their VMID. */
> +    if ( p2m->vmid == INVALID_VMID )
> +    {
> +        rc = p2m_alloc_vmid(p2m);
> +        if ( rc != 0 )
> +            return rc;
> +    }

My suggestion above will avoid this kind of hack.

>
>      p2m->domain = d;
>      p2m->access_required = false;
> @@ -1441,6 +1447,9 @@ static void p2m_teardown_hostp2m(struct domain *d)
>
>  void p2m_teardown(struct domain *d)
>  {
> +    if ( altp2m_enabled(d) )
> +        altp2m_teardown(d);
> +
>      p2m_teardown_hostp2m(d);
>  }
>
> @@ -1460,7 +1469,16 @@ static int p2m_init_hostp2m(struct domain *d)
>
>  int p2m_init(struct domain *d)
>  {
> -    return p2m_init_hostp2m(d);
> +    int rc;
> +
> +    rc = p2m_init_hostp2m(d);
> +    if ( rc )
> +        return rc;
> +
> +    if ( altp2m_enabled(d) )

I am a bit skeptical that you can fully use altp2m with this check. 
p2m_init is called at the very beginning, when the domain is allocated, 
so HVM_PARAM_ALTP2M will not be set yet.

> +        rc = altp2m_init(d);
> +
> +    return rc;
>  }
>
>  int relinquish_p2m_mapping(struct domain *d)
> diff --git a/xen/include/asm-arm/altp2m.h b/xen/include/asm-arm/altp2m.h
> index d47b249..79ea66b 100644
> --- a/xen/include/asm-arm/altp2m.h
> +++ b/xen/include/asm-arm/altp2m.h
> @@ -22,6 +22,9 @@
>
>  #include <xen/sched.h>
>
> +#define altp2m_lock(d)    spin_lock(&(d)->arch.altp2m_lock)
> +#define altp2m_unlock(d)  spin_unlock(&(d)->arch.altp2m_lock)
> +
>  #define altp2m_enabled(d) ((d)->arch.hvm_domain.params[HVM_PARAM_ALTP2M])
>
>  /* Alternate p2m on/off per domain */
> @@ -38,4 +41,7 @@ static inline uint16_t altp2m_vcpu_idx(const struct vcpu *v)
>      return 0;
>  }
>
> +int altp2m_init(struct domain *d);
> +void altp2m_teardown(struct domain *d);
> +
>  #endif /* __ASM_ARM_ALTP2M_H */
> diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
> index cc4bda0..3c25ea5 100644
> --- a/xen/include/asm-arm/domain.h
> +++ b/xen/include/asm-arm/domain.h
> @@ -129,6 +129,10 @@ struct arch_domain
>
>      /* altp2m: allow multiple copies of host p2m */
>      bool_t altp2m_active;
> +    struct p2m_domain *altp2m_p2m[MAX_ALTP2M];
> +    uint64_t altp2m_vttbr[MAX_ALTP2M];
> +    /* Covers access to members of the struct altp2m. */

I cannot find any "struct altp2m" in the code.

> +    spinlock_t altp2m_lock;
>  }  __cacheline_aligned;
>
>  struct arch_vcpu
> diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
> index 1f9c370..24a1f61 100644
> --- a/xen/include/asm-arm/p2m.h
> +++ b/xen/include/asm-arm/p2m.h
> @@ -9,6 +9,8 @@
>  #include <xen/p2m-common.h>
>  #include <public/memory.h>
>
> +#define MAX_ALTP2M 10           /* ARM might contain an arbitrary number of
> +                                   altp2m views. */
>  #define paddr_bits PADDR_BITS
>
>  /* Holds the bit size of IPAs in p2m tables.  */
> @@ -212,6 +214,9 @@ void guest_physmap_remove_page(struct domain *d,
>
>  mfn_t gfn_to_mfn(struct domain *d, gfn_t gfn);
>
> +/* Release resources held by the p2m structure. */
> +void p2m_free_one(struct p2m_domain *p2m);
> +
>  /*
>   * Populate-on-demand
>   */
>

Regards,

-- 
Julien Grall


^ permalink raw reply	[flat|nested] 159+ messages in thread

* Re: [PATCH v2 00/25] arm/altp2m: Introducing altp2m to ARM.
  2016-08-03 18:11                                       ` Tamas K Lengyel
@ 2016-08-03 18:16                                         ` Julien Grall
  2016-08-03 18:21                                           ` Tamas K Lengyel
  0 siblings, 1 reply; 159+ messages in thread
From: Julien Grall @ 2016-08-03 18:16 UTC (permalink / raw)
  To: Tamas K Lengyel
  Cc: George Dunlap, Andrew Cooper, Stefano Stabellini, Xen-devel,
	Sergej Proskurin



On 03/08/16 19:11, Tamas K Lengyel wrote:
> On Wed, Aug 3, 2016 at 11:56 AM, Julien Grall <julien.grall@arm.com> wrote:
>>
>>
>> On 03/08/16 18:51, Tamas K Lengyel wrote:
>>>
>>> On Wed, Aug 3, 2016 at 11:45 AM, Julien Grall <julien.grall@arm.com>
>>> wrote:
>>>>
>>>> The whole discussion of this series was to defer the exposition of altp2m
>>>> HVMOP to the guest until we find a usage. I.e a simple:
>>>>
>>>> xsm_hvm_altp2m_op(XSM_PRIV/XSM_DM_PRIV, d);
>>>>
>>>> So why do you want to re-invent a new interface here?
>>>
>>>
>>> I guess I misinterpreted your request of not having this interface
>>> exposed to the guest. If we are fine with exposing the interface to
>>> the guest but having XSM manage whether it's allowed by default I'm
>>> certainly OK with that.
>>
>>
>> By default the interface will not be exposed to the guest.
>> XSM_PRIV/XSM_DM_PRIV only allow a privileged domain or a device model domain
>> to use the interface. The guest will not be enabled to access it.
>
> Yes. I guess our terminology differs about what we mean by "exposed".
> In my book if the interface is available to the guest but access
> attempts are denied by XSM that means the interface is exposed but
> restricted.

Although the behavior is very different compared to what x86 does. By 
default the guest will be able to play with altp2m.

That was always my point, which is why I said there would be no issue 
in allowing a guest to access altp2m later on.

Andrew, is that what you had in mind?

Regards,

-- 
Julien Grall


^ permalink raw reply	[flat|nested] 159+ messages in thread

* Re: [PATCH v2 00/25] arm/altp2m: Introducing altp2m to ARM.
  2016-08-03 18:16                                         ` Julien Grall
@ 2016-08-03 18:21                                           ` Tamas K Lengyel
  2016-08-04 11:13                                             ` George Dunlap
  0 siblings, 1 reply; 159+ messages in thread
From: Tamas K Lengyel @ 2016-08-03 18:21 UTC (permalink / raw)
  To: Julien Grall
  Cc: George Dunlap, Andrew Cooper, Stefano Stabellini, Xen-devel,
	Sergej Proskurin

On Wed, Aug 3, 2016 at 12:16 PM, Julien Grall <julien.grall@arm.com> wrote:
>
>
> On 03/08/16 19:11, Tamas K Lengyel wrote:
>>
>> On Wed, Aug 3, 2016 at 11:56 AM, Julien Grall <julien.grall@arm.com>
>> wrote:
>>>
>>>
>>>
>>> On 03/08/16 18:51, Tamas K Lengyel wrote:
>>>>
>>>>
>>>> On Wed, Aug 3, 2016 at 11:45 AM, Julien Grall <julien.grall@arm.com>
>>>> wrote:
>>>>>
>>>>>
>>>>> The whole discussion of this series was to defer the exposition of
>>>>> altp2m
>>>>> HVMOP to the guest until we find a usage. I.e a simple:
>>>>>
>>>>> xsm_hvm_altp2m_op(XSM_PRIV/XSM_DM_PRIV, d);
>>>>>
>>>>> So why do you want to re-invent a new interface here?
>>>>
>>>>
>>>>
>>>> I guess I misinterpreted your request of not having this interface
>>>> exposed to the guest. If we are fine with exposing the interface to
>>>> the guest but having XSM manage whether it's allowed by default I'm
>>>> certainly OK with that.
>>>
>>>
>>>
>>> By default the interface will not be exposed to the guest.
>>> XSM_PRIV/XSM_DM_PRIV only allow a privileged domain or a device model
>>> domain
>>> to use the interface. The guest will not be enabled to access it.
>>
>>
>> Yes. I guess our terminology differs about what we mean by "exposed".
>> In my book if the interface is available to the guest but access
>> attempts are denied by XSM that means the interface is exposed but
>> restricted.
>
>
> Although the behavior is very different compare to what x86 does. By default
> the guest will be able to play with altp2m.

Personally, my life would be a lot easier on x86 too if the default XSM 
behavior was external-use only for altp2m. Or if at least XSM was 
turned on by default.

Tamas


^ permalink raw reply	[flat|nested] 159+ messages in thread

* Re: [PATCH v2 08/25] arm/altp2m: Add HVMOP_altp2m_set_domain_state.
  2016-08-01 17:10 ` [PATCH v2 08/25] arm/altp2m: Add HVMOP_altp2m_set_domain_state Sergej Proskurin
@ 2016-08-03 18:41   ` Julien Grall
  2016-08-06  9:03     ` Sergej Proskurin
  2016-08-06  9:36     ` Sergej Proskurin
  0 siblings, 2 replies; 159+ messages in thread
From: Julien Grall @ 2016-08-03 18:41 UTC (permalink / raw)
  To: Sergej Proskurin, xen-devel; +Cc: Stefano Stabellini

Hello Sergej,

On 01/08/16 18:10, Sergej Proskurin wrote:
> The HVMOP_altp2m_set_domain_state allows to activate altp2m on a
> specific domain. This commit adopts the x86
> HVMOP_altp2m_set_domain_state implementation. Note that the function
> altp2m_flush is currently implemented in form of a stub.
>
> Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
> ---
> Cc: Stefano Stabellini <sstabellini@kernel.org>
> Cc: Julien Grall <julien.grall@arm.com>
> ---
> v2: Dynamically allocate memory for altp2m views only when needed.
>     Move altp2m related helpers to altp2m.c.
>     p2m_flush_tlb is made publicly accessible.
> ---
>  xen/arch/arm/altp2m.c          | 116 +++++++++++++++++++++++++++++++++++++++++
>  xen/arch/arm/hvm.c             |  34 +++++++++++-
>  xen/arch/arm/p2m.c             |   2 +-
>  xen/include/asm-arm/altp2m.h   |  15 ++++++
>  xen/include/asm-arm/domain.h   |   9 ++++
>  xen/include/asm-arm/flushtlb.h |   4 ++
>  xen/include/asm-arm/p2m.h      |  11 ++++
>  7 files changed, 189 insertions(+), 2 deletions(-)
>
> diff --git a/xen/arch/arm/altp2m.c b/xen/arch/arm/altp2m.c
> index abbd39a..767f233 100644
> --- a/xen/arch/arm/altp2m.c
> +++ b/xen/arch/arm/altp2m.c
> @@ -19,6 +19,122 @@
>
>  #include <asm/p2m.h>
>  #include <asm/altp2m.h>
> +#include <asm/flushtlb.h>
> +
> +struct p2m_domain *altp2m_get_altp2m(struct vcpu *v)
> +{
> +    unsigned int index = vcpu_altp2m(v).p2midx;
> +
> +    if ( index == INVALID_ALTP2M )
> +        return NULL;
> +
> +    BUG_ON(index >= MAX_ALTP2M);
> +
> +    return v->domain->arch.altp2m_p2m[index];
> +}
> +
> +static void altp2m_vcpu_reset(struct vcpu *v)
> +{
> +    struct altp2mvcpu *av = &vcpu_altp2m(v);
> +
> +    av->p2midx = INVALID_ALTP2M;
> +}
> +
> +void altp2m_vcpu_initialise(struct vcpu *v)
> +{
> +    if ( v != current )
> +        vcpu_pause(v);
> +
> +    altp2m_vcpu_reset(v);

I don't understand why you call altp2m_vcpu_reset, which will set 
p2midx to INVALID_ALTP2M, when one line later you set it to 0.

> +    vcpu_altp2m(v).p2midx = 0;
> +    atomic_inc(&altp2m_get_altp2m(v)->active_vcpus);
> +
> +    if ( v != current )
> +        vcpu_unpause(v);
> +}
> +
> +void altp2m_vcpu_destroy(struct vcpu *v)
> +{
> +    struct p2m_domain *p2m;
> +
> +    if ( v != current )
> +        vcpu_pause(v);
> +
> +    if ( (p2m = altp2m_get_altp2m(v)) )
> +        atomic_dec(&p2m->active_vcpus);
> +
> +    altp2m_vcpu_reset(v);
> +
> +    if ( v != current )
> +        vcpu_unpause(v);
> +}
> +
> +static int altp2m_init_helper(struct domain *d, unsigned int idx)
> +{
> +    int rc;
> +    struct p2m_domain *p2m = d->arch.altp2m_p2m[idx];
> +
> +    if ( p2m == NULL )
> +    {
> +        /* Allocate a new, zeroed altp2m view. */
> +        p2m = xzalloc(struct p2m_domain);
> +        if ( p2m == NULL)
> +        {
> +            rc = -ENOMEM;
> +            goto err;
> +        }
> +    }

Why don't you re-allocate the p2m from scratch?

> +
> +    /* Initialize the new altp2m view. */
> +    rc = p2m_init_one(d, p2m);
> +    if ( rc )
> +        goto err;
> +
> +    /* Allocate a root table for the altp2m view. */
> +    rc = p2m_alloc_table(p2m);
> +    if ( rc )
> +        goto err;
> +
> +    p2m->p2m_class = p2m_alternate;
> +    p2m->access_required = 1;

Please use true here. That said, I am not sure why you want to enable 
access_required by default.

> +    _atomic_set(&p2m->active_vcpus, 0);
> +
> +    d->arch.altp2m_p2m[idx] = p2m;
> +    d->arch.altp2m_vttbr[idx] = p2m->vttbr.vttbr;
> +
> +    /*
> +     * Make sure that all TLBs corresponding to the current VMID are flushed
> +     * before using it.
> +     */
> +    p2m_flush_tlb(p2m);
> +
> +    return rc;
> +
> +err:
> +    if ( p2m )
> +        xfree(p2m);
> +
> +    d->arch.altp2m_p2m[idx] = NULL;
> +
> +    return rc;
> +}
> +
> +int altp2m_init_by_id(struct domain *d, unsigned int idx)
> +{
> +    int rc = -EINVAL;
> +
> +    if ( idx >= MAX_ALTP2M )
> +        return rc;
> +
> +    altp2m_lock(d);
> +
> +    if ( d->arch.altp2m_vttbr[idx] == INVALID_VTTBR )
> +        rc = altp2m_init_helper(d, idx);
> +
> +    altp2m_unlock(d);
> +
> +    return rc;
> +}
>
>  int altp2m_init(struct domain *d)
>  {
> diff --git a/xen/arch/arm/hvm.c b/xen/arch/arm/hvm.c
> index 01a3243..78370c6 100644
> --- a/xen/arch/arm/hvm.c
> +++ b/xen/arch/arm/hvm.c
> @@ -80,8 +80,40 @@ static int do_altp2m_op(XEN_GUEST_HANDLE_PARAM(void) arg)
>          break;
>
>      case HVMOP_altp2m_set_domain_state:
> -        rc = -EOPNOTSUPP;
> +    {
> +        struct vcpu *v;
> +        bool_t ostate;
> +
> +        if ( !altp2m_enabled(d) )
> +        {
> +            rc = -EINVAL;
> +            break;
> +        }
> +
> +        ostate = d->arch.altp2m_active;
> +        d->arch.altp2m_active = !!a.u.domain_state.state;
> +
> +        /* If the alternate p2m state has changed, handle appropriately */
> +        if ( (d->arch.altp2m_active != ostate) &&
> +             (ostate || !(rc = altp2m_init_by_id(d, 0))) )
> +        {
> +            for_each_vcpu( d, v )
> +            {
> +                if ( !ostate )
> +                    altp2m_vcpu_initialise(v);
> +                else
> +                    altp2m_vcpu_destroy(v);
> +            }

The implementation of this HVMOP looks racy to me. What prevents two 
CPUs from running this function at the same time? One could destroy, 
whilst the other one initializes.

It might even be possible to have both doing the initialization, because 
there is no synchronization barrier for altp2m_active.
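One way to close that race — an assumed fix, not code from the series — is to perform the whole read-modify-write of altp2m_active, plus the per-vCPU init/teardown loop, under a single lock, so concurrent HVMOP_altp2m_set_domain_state calls serialize. A user-space sketch, with plain counters standing in for the vCPU loops:

```c
#include <assert.h>
#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t altp2m_lock = PTHREAD_MUTEX_INITIALIZER;
static bool altp2m_active;
static int init_calls, teardown_calls;  /* stand in for the vCPU loops */

static void set_domain_state(bool new_state)
{
    bool ostate;

    pthread_mutex_lock(&altp2m_lock);

    /* The old state is read and the new one written under the same
     * lock, so no two callers can both observe "inactive" and both
     * run the initialization path. */
    ostate = altp2m_active;
    altp2m_active = new_state;

    if ( new_state != ostate )
    {
        if ( new_state )
            init_calls++;      /* altp2m_vcpu_initialise() per vCPU */
        else
            teardown_calls++;  /* altp2m_vcpu_destroy() + flush */
    }

    pthread_mutex_unlock(&altp2m_lock);
}
```

A redundant call with the same state then becomes a harmless no-op instead of a double init or double teardown.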

> +
> +            /*
> +             * The altp2m_active state has been deactivated. It is now safe to
> +             * flush all altp2m views -- including altp2m[0].
> +             */
> +            if ( ostate )
> +                altp2m_flush(d);

The function altp2m_flush is defined afterwards (in patch #9). Please 
make sure that all the patches compile one by one.

> +        }
>          break;
> +    }
>
>      case HVMOP_altp2m_vcpu_enable_notify:
>          rc = -EOPNOTSUPP;
> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> index 29ec5e5..8afea11 100644
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -139,7 +139,7 @@ void p2m_restore_state(struct vcpu *n)
>      isb();
>  }
>
> -static void p2m_flush_tlb(struct p2m_domain *p2m)
> +void p2m_flush_tlb(struct p2m_domain *p2m)

This should ideally be in a separate patch.

>  {
>      unsigned long flags = 0;
>      uint64_t ovttbr;
> diff --git a/xen/include/asm-arm/altp2m.h b/xen/include/asm-arm/altp2m.h
> index 79ea66b..a33c740 100644
> --- a/xen/include/asm-arm/altp2m.h
> +++ b/xen/include/asm-arm/altp2m.h
> @@ -22,6 +22,8 @@
>
>  #include <xen/sched.h>
>
> +#define INVALID_ALTP2M    0xffff
> +
>  #define altp2m_lock(d)    spin_lock(&(d)->arch.altp2m_lock)
>  #define altp2m_unlock(d)  spin_unlock(&(d)->arch.altp2m_lock)
>
> @@ -44,4 +46,17 @@ static inline uint16_t altp2m_vcpu_idx(const struct vcpu *v)
>  int altp2m_init(struct domain *d);
>  void altp2m_teardown(struct domain *d);
>
> +void altp2m_vcpu_initialise(struct vcpu *v);
> +void altp2m_vcpu_destroy(struct vcpu *v);
> +
> +/* Make a specific alternate p2m valid. */
> +int altp2m_init_by_id(struct domain *d,
> +                      unsigned int idx);
> +
> +/* Flush all the alternate p2m's for a domain */
> +static inline void altp2m_flush(struct domain *d)
> +{
> +    /* Not yet implemented. */
> +}
> +
>  #endif /* __ASM_ARM_ALTP2M_H */
> diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
> index 3c25ea5..63a9650 100644
> --- a/xen/include/asm-arm/domain.h
> +++ b/xen/include/asm-arm/domain.h
> @@ -135,6 +135,12 @@ struct arch_domain
>      spinlock_t altp2m_lock;
>  }  __cacheline_aligned;
>
> +struct altp2mvcpu {
> +    uint16_t p2midx; /* alternate p2m index */
> +};
> +
> +#define vcpu_altp2m(v) ((v)->arch.avcpu)


Please avoid having half of altp2m defined in altp2m.h and the other 
half in domain.h.

> +
>  struct arch_vcpu
>  {
>      struct {
> @@ -264,6 +270,9 @@ struct arch_vcpu
>      struct vtimer phys_timer;
>      struct vtimer virt_timer;
>      bool_t vtimer_initialized;
> +
> +    /* Alternate p2m context */
> +    struct altp2mvcpu avcpu;
>  }  __cacheline_aligned;
>
>  void vcpu_show_execution_state(struct vcpu *);
> diff --git a/xen/include/asm-arm/flushtlb.h b/xen/include/asm-arm/flushtlb.h
> index 329fbb4..57c3c34 100644
> --- a/xen/include/asm-arm/flushtlb.h
> +++ b/xen/include/asm-arm/flushtlb.h
> @@ -2,6 +2,7 @@
>  #define __ASM_ARM_FLUSHTLB_H__
>
>  #include <xen/cpumask.h>
> +#include <asm/p2m.h>
>
>  /*
>   * Filter the given set of CPUs, removing those that definitely flushed their
> @@ -25,6 +26,9 @@ do {                                                                    \
>  /* Flush specified CPUs' TLBs */
>  void flush_tlb_mask(const cpumask_t *mask);
>
> +/* Flush CPU's TLBs for the specified domain */
> +void p2m_flush_tlb(struct p2m_domain *p2m);
> +

This function should be declared in p2m.h and not flushtlb.h.

>  #endif /* __ASM_ARM_FLUSHTLB_H__ */
>  /*
>   * Local variables:
> diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
> index 24a1f61..f13f285 100644
> --- a/xen/include/asm-arm/p2m.h
> +++ b/xen/include/asm-arm/p2m.h
> @@ -9,6 +9,8 @@
>  #include <xen/p2m-common.h>
>  #include <public/memory.h>
>
> +#include <asm/atomic.h>
> +
>  #define MAX_ALTP2M 10           /* ARM might contain an arbitrary number of
>                                     altp2m views. */
>  #define paddr_bits PADDR_BITS
> @@ -86,6 +88,9 @@ struct p2m_domain {
>       */
>      struct radix_tree_root mem_access_settings;
>
> +    /* Alternate p2m: count of vcpu's currently using this p2m. */
> +    atomic_t active_vcpus;
> +
>      /* Choose between: host/alternate */
>      p2m_class_t p2m_class;
>
> @@ -214,6 +219,12 @@ void guest_physmap_remove_page(struct domain *d,
>
>  mfn_t gfn_to_mfn(struct domain *d, gfn_t gfn);
>
> +/* Allocates page table for a p2m. */
> +int p2m_alloc_table(struct p2m_domain *p2m);
> +
> +/* Initialize the p2m structure. */
> +int p2m_init_one(struct domain *d, struct p2m_domain *p2m);

These declarations belong in the patch that exported the functions, not 
here.

> +
>  /* Release resources held by the p2m structure. */
>  void p2m_free_one(struct p2m_domain *p2m);
>
>

Regards,

-- 
Julien Grall


^ permalink raw reply	[flat|nested] 159+ messages in thread

* Re: [PATCH v2 09/25] arm/altp2m: Add altp2m table flushing routine.
  2016-08-01 17:10 ` [PATCH v2 09/25] arm/altp2m: Add altp2m table flushing routine Sergej Proskurin
@ 2016-08-03 18:44   ` Julien Grall
  2016-08-06  9:45     ` Sergej Proskurin
  0 siblings, 1 reply; 159+ messages in thread
From: Julien Grall @ 2016-08-03 18:44 UTC (permalink / raw)
  To: Sergej Proskurin, xen-devel; +Cc: Stefano Stabellini

Hello Sergej,

On 01/08/16 18:10, Sergej Proskurin wrote:
> The current implementation differentiates between flushing and
> destroying altp2m views. This commit adds the function altp2m_flush,
> which allows flushing all or individual altp2m views without destroying
> the entire table. In this way, altp2m views can be reused at a later
> point in time.
>
> Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
> ---
> Cc: Stefano Stabellini <sstabellini@kernel.org>
> Cc: Julien Grall <julien.grall@arm.com>
> ---
> v2: Pages in p2m->pages are not cleared in p2m_flush_table anymore.
>     VMID is freed in p2m_free_one.
>     Cosmetic fixes.
> ---
>  xen/arch/arm/altp2m.c        | 38 ++++++++++++++++++++++++++++++++++++++
>  xen/include/asm-arm/altp2m.h |  5 +----
>  xen/include/asm-arm/p2m.h    |  3 +++
>  3 files changed, 42 insertions(+), 4 deletions(-)
>
> diff --git a/xen/arch/arm/altp2m.c b/xen/arch/arm/altp2m.c
> index 767f233..e73424c 100644
> --- a/xen/arch/arm/altp2m.c
> +++ b/xen/arch/arm/altp2m.c
> @@ -151,6 +151,44 @@ int altp2m_init(struct domain *d)
>      return 0;
>  }
>
> +void altp2m_flush(struct domain *d)
> +{
> +    unsigned int i;
> +    struct p2m_domain *p2m;
> +
> +    /*
> +     * If altp2m is active, we are not allowed to flush altp2m[0]. This special
> +     * view is considered as the hostp2m as long as altp2m is active.
> +     */
> +    ASSERT(!altp2m_active(d));

Because of the race condition I mentioned in the previous patch (#8), 
this ASSERT may be hit randomly.

> +
> +    altp2m_lock(d);
> +
> +    for ( i = 0; i < MAX_ALTP2M; i++ )
> +    {
> +        if ( d->arch.altp2m_vttbr[i] == INVALID_VTTBR )
> +            continue;
> +
> +        p2m = d->arch.altp2m_p2m[i];
> +
> +        read_lock(&p2m->lock);

You should use p2m_lock rather than re-inventing your own. If p2m_*lock 
helpers need to be exposed, then expose them.

Also, this should be a write_lock; otherwise, someone else could access the 
p2m at the same time.

> +
> +        p2m_flush_table(p2m);
> +
> +        /*
> +         * Reset VTTBR.
> +         *
> +         * Note that VMID is not freed so that it can be reused later.
> +         */
> +        p2m->vttbr.vttbr = INVALID_VTTBR;
> +        d->arch.altp2m_vttbr[i] = INVALID_VTTBR;
> +
> +        read_unlock(&p2m->lock);

I would much prefer if the p2m is fully destroyed rather than 
re-initialized.

> +    }
> +
> +    altp2m_unlock(d);
> +}
> +
>  void altp2m_teardown(struct domain *d)
>  {
>      unsigned int i;
> diff --git a/xen/include/asm-arm/altp2m.h b/xen/include/asm-arm/altp2m.h
> index a33c740..3ba82a8 100644
> --- a/xen/include/asm-arm/altp2m.h
> +++ b/xen/include/asm-arm/altp2m.h
> @@ -54,9 +54,6 @@ int altp2m_init_by_id(struct domain *d,
>                        unsigned int idx);
>
>  /* Flush all the alternate p2m's for a domain */
> -static inline void altp2m_flush(struct domain *d)
> -{
> -    /* Not yet implemented. */
> -}
> +void altp2m_flush(struct domain *d);
>
>  #endif /* __ASM_ARM_ALTP2M_H */
> diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
> index f13f285..32326cb 100644
> --- a/xen/include/asm-arm/p2m.h
> +++ b/xen/include/asm-arm/p2m.h
> @@ -222,6 +222,9 @@ mfn_t gfn_to_mfn(struct domain *d, gfn_t gfn);
>  /* Allocates page table for a p2m. */
>  int p2m_alloc_table(struct p2m_domain *p2m);
>
> +/* Flushes the page table held by the p2m. */
> +void p2m_flush_table(struct p2m_domain *p2m);
> +
>  /* Initialize the p2m structure. */
>  int p2m_init_one(struct domain *d, struct p2m_domain *p2m);
>
>

Regards,

-- 
Julien Grall


* Re: [PATCH v2 10/25] arm/altp2m: Add HVMOP_altp2m_create_p2m.
  2016-08-01 17:10 ` [PATCH v2 10/25] arm/altp2m: Add HVMOP_altp2m_create_p2m Sergej Proskurin
@ 2016-08-03 18:48   ` Julien Grall
  2016-08-06  9:46     ` Sergej Proskurin
  0 siblings, 1 reply; 159+ messages in thread
From: Julien Grall @ 2016-08-03 18:48 UTC (permalink / raw)
  To: Sergej Proskurin, xen-devel; +Cc: Stefano Stabellini

Hello Sergej,

On 01/08/16 18:10, Sergej Proskurin wrote:
> Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
> ---
> Cc: Stefano Stabellini <sstabellini@kernel.org>
> Cc: Julien Grall <julien.grall@arm.com>
> ---
> v2: Cosmetic fixes.
> ---
>  xen/arch/arm/altp2m.c        | 23 +++++++++++++++++++++++
>  xen/arch/arm/hvm.c           |  3 ++-
>  xen/include/asm-arm/altp2m.h |  4 ++++
>  3 files changed, 29 insertions(+), 1 deletion(-)
>
> diff --git a/xen/arch/arm/altp2m.c b/xen/arch/arm/altp2m.c
> index e73424c..c22d2e4 100644
> --- a/xen/arch/arm/altp2m.c
> +++ b/xen/arch/arm/altp2m.c
> @@ -136,6 +136,29 @@ int altp2m_init_by_id(struct domain *d, unsigned int idx)
>      return rc;
>  }
>
> +int altp2m_init_next(struct domain *d, uint16_t *idx)
> +{
> +    int rc = -EINVAL;
> +    unsigned int i;
> +
> +    altp2m_lock(d);
> +
> +    for ( i = 0; i < MAX_ALTP2M; i++ )
> +    {
> +        if ( d->arch.altp2m_vttbr[i] != INVALID_VTTBR )
> +            continue;
> +
> +        rc = altp2m_init_helper(d, i);
> +        *idx = (uint16_t) i;

The cast is not necessary. You could make i uint16_t.

> +
> +        break;
> +    }
> +
> +    altp2m_unlock(d);
> +
> +    return rc;
> +}
> +
>  int altp2m_init(struct domain *d)
>  {
>      unsigned int i;
> diff --git a/xen/arch/arm/hvm.c b/xen/arch/arm/hvm.c
> index 78370c6..063a06b 100644
> --- a/xen/arch/arm/hvm.c
> +++ b/xen/arch/arm/hvm.c
> @@ -120,7 +120,8 @@ static int do_altp2m_op(XEN_GUEST_HANDLE_PARAM(void) arg)
>          break;
>
>      case HVMOP_altp2m_create_p2m:
> -        rc = -EOPNOTSUPP;
> +        if ( !(rc = altp2m_init_next(d, &a.u.view.view)) )
> +            rc = __copy_to_guest(arg, &a, 1) ? -EFAULT : 0;
>          break;
>
>      case HVMOP_altp2m_destroy_p2m:
> diff --git a/xen/include/asm-arm/altp2m.h b/xen/include/asm-arm/altp2m.h
> index 3ba82a8..3ecae27 100644
> --- a/xen/include/asm-arm/altp2m.h
> +++ b/xen/include/asm-arm/altp2m.h
> @@ -53,6 +53,10 @@ void altp2m_vcpu_destroy(struct vcpu *v);
>  int altp2m_init_by_id(struct domain *d,
>                        unsigned int idx);
>
> +/* Find an available alternate p2m and make it valid */

The comment and the implementation don't match the name of the function. 
I would rename the function altp2m_find_available or something similar.

> +int altp2m_init_next(struct domain *d,
> +                     uint16_t *idx);
> +
>  /* Flush all the alternate p2m's for a domain */
>  void altp2m_flush(struct domain *d);
>
>

Regards,

-- 
Julien Grall


* Re: [PATCH v2 00/25] arm/altp2m: Introducing altp2m to ARM.
  2016-08-03 18:21                                           ` Tamas K Lengyel
@ 2016-08-04 11:13                                             ` George Dunlap
  2016-08-08  4:44                                               ` Tamas K Lengyel
  0 siblings, 1 reply; 159+ messages in thread
From: George Dunlap @ 2016-08-04 11:13 UTC (permalink / raw)
  To: Tamas K Lengyel, Julien Grall
  Cc: George Dunlap, Andrew Cooper, Stefano Stabellini, Xen-devel,
	Sergej Proskurin

On 03/08/16 19:21, Tamas K Lengyel wrote:
>> Although the behavior is very different compared to what x86 does. By default
>> the guest will be able to play with altp2m.
> 
> Personally my life would be a lot easier on x86 too if the default XSM
> behavior was external-use only for altp2m. 

I think a patch that allows altp2m to be exposed only to external tools
sounds like something that would be a useful improvement.

 -George


* Re: [PATCH v2 11/25] arm/altp2m: Add HVMOP_altp2m_destroy_p2m.
  2016-08-01 17:10 ` [PATCH v2 11/25] arm/altp2m: Add HVMOP_altp2m_destroy_p2m Sergej Proskurin
@ 2016-08-04 11:46   ` Julien Grall
  2016-08-06  9:54     ` Sergej Proskurin
  0 siblings, 1 reply; 159+ messages in thread
From: Julien Grall @ 2016-08-04 11:46 UTC (permalink / raw)
  To: Sergej Proskurin, xen-devel; +Cc: Stefano Stabellini

Hello Sergej,

On 01/08/16 18:10, Sergej Proskurin wrote:
> Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
> ---
> Cc: Stefano Stabellini <sstabellini@kernel.org>
> Cc: Julien Grall <julien.grall@arm.com>
> ---
> v2: Substituted the call to tlb_flush for p2m_flush_table.
>     Added comments.
>     Cosmetic fixes.
> ---
>  xen/arch/arm/altp2m.c        | 50 ++++++++++++++++++++++++++++++++++++++++++++
>  xen/arch/arm/hvm.c           |  2 +-
>  xen/include/asm-arm/altp2m.h |  4 ++++
>  3 files changed, 55 insertions(+), 1 deletion(-)
>
> diff --git a/xen/arch/arm/altp2m.c b/xen/arch/arm/altp2m.c
> index c22d2e4..80ed553 100644
> --- a/xen/arch/arm/altp2m.c
> +++ b/xen/arch/arm/altp2m.c
> @@ -212,6 +212,56 @@ void altp2m_flush(struct domain *d)
>      altp2m_unlock(d);
>  }
>
> +int altp2m_destroy_by_id(struct domain *d, unsigned int idx)
> +{
> +    struct p2m_domain *p2m;
> +    int rc = -EBUSY;
> +
> +    /*
> +     * The altp2m[0] is considered as the hostp2m and is used as a safe harbor
> +     * to which you can switch as long as altp2m is active. After deactivating
> +     * altp2m, the system switches back to the original hostp2m view. That is,
> +     * altp2m[0] should only be destroyed/flushed/freed, when altp2m is
> +     * deactivated.
> +     */
> +    if ( !idx || idx >= MAX_ALTP2M )
> +        return rc;
> +
> +    domain_pause_except_self(d);
> +
> +    altp2m_lock(d);
> +
> +    if ( d->arch.altp2m_vttbr[idx] != INVALID_VTTBR )
> +    {
> +        p2m = d->arch.altp2m_p2m[idx];
> +
> +        if ( !_atomic_read(p2m->active_vcpus) )
> +        {
> +            read_lock(&p2m->lock);

Please avoid open-coding read_lock and use the p2m_*lock helpers instead. 
Also, a read lock does not prevent multiple threads from accessing the p2m 
at the same time.

> +
> +            p2m_flush_table(p2m);
> +
> +            /*
> +             * Reset VTTBR.
> +             *
> +             * Note that VMID is not freed so that it can be reused later.
> +             */
> +            p2m->vttbr.vttbr = INVALID_VTTBR;
> +            d->arch.altp2m_vttbr[idx] = INVALID_VTTBR;
> +
> +            read_unlock(&p2m->lock);

Why did you decide to reset the p2m rather than free it? The code would 
be simpler with the latter.

> +
> +            rc = 0;
> +        }
> +    }
> +
> +    altp2m_unlock(d);
> +
> +    domain_unpause_except_self(d);
> +
> +    return rc;
> +}
> +
>  void altp2m_teardown(struct domain *d)
>  {
>      unsigned int i;
> diff --git a/xen/arch/arm/hvm.c b/xen/arch/arm/hvm.c
> index 063a06b..df29cdc 100644
> --- a/xen/arch/arm/hvm.c
> +++ b/xen/arch/arm/hvm.c
> @@ -125,7 +125,7 @@ static int do_altp2m_op(XEN_GUEST_HANDLE_PARAM(void) arg)
>          break;
>
>      case HVMOP_altp2m_destroy_p2m:
> -        rc = -EOPNOTSUPP;
> +        rc = altp2m_destroy_by_id(d, a.u.view.view);
>          break;
>
>      case HVMOP_altp2m_switch_p2m:
> diff --git a/xen/include/asm-arm/altp2m.h b/xen/include/asm-arm/altp2m.h
> index 3ecae27..afa1580 100644
> --- a/xen/include/asm-arm/altp2m.h
> +++ b/xen/include/asm-arm/altp2m.h
> @@ -60,4 +60,8 @@ int altp2m_init_next(struct domain *d,
>  /* Flush all the alternate p2m's for a domain */
>  void altp2m_flush(struct domain *d);
>
> +/* Make a specific alternate p2m invalid */
> +int altp2m_destroy_by_id(struct domain *d,
> +                         unsigned int idx);
> +
>  #endif /* __ASM_ARM_ALTP2M_H */
>

Regards,

-- 
Julien Grall


* Re: [PATCH v2 12/25] arm/altp2m: Add HVMOP_altp2m_switch_p2m.
  2016-08-01 17:10 ` [PATCH v2 12/25] arm/altp2m: Add HVMOP_altp2m_switch_p2m Sergej Proskurin
@ 2016-08-04 11:51   ` Julien Grall
  2016-08-06 10:13     ` Sergej Proskurin
  0 siblings, 1 reply; 159+ messages in thread
From: Julien Grall @ 2016-08-04 11:51 UTC (permalink / raw)
  To: Sergej Proskurin, xen-devel; +Cc: Stefano Stabellini

Hello Sergej,

On 01/08/16 18:10, Sergej Proskurin wrote:
> Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
> ---
> Cc: Stefano Stabellini <sstabellini@kernel.org>
> Cc: Julien Grall <julien.grall@arm.com>
> ---
>  xen/arch/arm/altp2m.c        | 32 ++++++++++++++++++++++++++++++++
>  xen/arch/arm/hvm.c           |  2 +-
>  xen/include/asm-arm/altp2m.h |  4 ++++
>  3 files changed, 37 insertions(+), 1 deletion(-)
>
> diff --git a/xen/arch/arm/altp2m.c b/xen/arch/arm/altp2m.c
> index 80ed553..7404f42 100644
> --- a/xen/arch/arm/altp2m.c
> +++ b/xen/arch/arm/altp2m.c
> @@ -33,6 +33,38 @@ struct p2m_domain *altp2m_get_altp2m(struct vcpu *v)
>      return v->domain->arch.altp2m_p2m[index];
>  }
>
> +int altp2m_switch_domain_altp2m_by_id(struct domain *d, unsigned int idx)
> +{
> +    struct vcpu *v;
> +    int rc = -EINVAL;
> +
> +    if ( idx >= MAX_ALTP2M )
> +        return rc;
> +
> +    domain_pause_except_self(d);
> +
> +    altp2m_lock(d);
> +
> +    if ( d->arch.altp2m_vttbr[idx] != INVALID_VTTBR )
> +    {
> +        for_each_vcpu( d, v )
> +            if ( idx != vcpu_altp2m(v).p2midx )
> +            {
> +                atomic_dec(&altp2m_get_altp2m(v)->active_vcpus);
> +                vcpu_altp2m(v).p2midx = idx;
> +                atomic_inc(&altp2m_get_altp2m(v)->active_vcpus);
> +            }
> +
> +        rc = 0;
> +    }

If a domain is calling the function on itself, the current vCPU will not 
switch to the new altp2m because the VTTBR is only restored during 
context switch.

However, I am not sure why you store a p2midx rather than directly a 
pointer to the p2m.

> +
> +    altp2m_unlock(d);
> +
> +    domain_unpause_except_self(d);
> +
> +    return rc;
> +}
> +
>  static void altp2m_vcpu_reset(struct vcpu *v)
>  {
>      struct altp2mvcpu *av = &vcpu_altp2m(v);
> diff --git a/xen/arch/arm/hvm.c b/xen/arch/arm/hvm.c
> index df29cdc..3b508df 100644
> --- a/xen/arch/arm/hvm.c
> +++ b/xen/arch/arm/hvm.c
> @@ -129,7 +129,7 @@ static int do_altp2m_op(XEN_GUEST_HANDLE_PARAM(void) arg)
>          break;
>
>      case HVMOP_altp2m_switch_p2m:
> -        rc = -EOPNOTSUPP;
> +        rc = altp2m_switch_domain_altp2m_by_id(d, a.u.view.view);
>          break;
>
>      case HVMOP_altp2m_set_mem_access:
> diff --git a/xen/include/asm-arm/altp2m.h b/xen/include/asm-arm/altp2m.h
> index afa1580..790bb33 100644
> --- a/xen/include/asm-arm/altp2m.h
> +++ b/xen/include/asm-arm/altp2m.h
> @@ -49,6 +49,10 @@ void altp2m_teardown(struct domain *d);
>  void altp2m_vcpu_initialise(struct vcpu *v);
>  void altp2m_vcpu_destroy(struct vcpu *v);
>
> +/* Switch alternate p2m for entire domain */
> +int altp2m_switch_domain_altp2m_by_id(struct domain *d,
> +                                      unsigned int idx);
> +
>  /* Make a specific alternate p2m valid. */
>  int altp2m_init_by_id(struct domain *d,
>                        unsigned int idx);
>

Regards,

-- 
Julien Grall


* Re: [PATCH v2 13/25] arm/altp2m: Make p2m_restore_state ready for altp2m.
  2016-08-01 17:10 ` [PATCH v2 13/25] arm/altp2m: Make p2m_restore_state ready for altp2m Sergej Proskurin
@ 2016-08-04 11:55   ` Julien Grall
  2016-08-06 10:20     ` Sergej Proskurin
  0 siblings, 1 reply; 159+ messages in thread
From: Julien Grall @ 2016-08-04 11:55 UTC (permalink / raw)
  To: Sergej Proskurin, xen-devel; +Cc: Stefano Stabellini

Hello,

On 01/08/16 18:10, Sergej Proskurin wrote:
> This commit adapts the function "p2m_restore_state" so that the
> currently active altp2m table is considered during state restoration.
>
> Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
> ---
> Cc: Stefano Stabellini <sstabellini@kernel.org>
> Cc: Julien Grall <julien.grall@arm.com>
> ---
>  xen/arch/arm/p2m.c           | 4 +++-
>  xen/include/asm-arm/altp2m.h | 3 +++
>  2 files changed, 6 insertions(+), 1 deletion(-)
>
> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> index 8afea11..bcad51f 100644
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -115,7 +115,9 @@ void p2m_save_state(struct vcpu *p)
>  void p2m_restore_state(struct vcpu *n)
>  {
>      register_t hcr;
> -    struct p2m_domain *p2m = &n->domain->arch.p2m;
> +    struct domain *d = n->domain;
> +    struct p2m_domain *p2m = unlikely(altp2m_active(d)) ?
> +                             altp2m_get_altp2m(n) : p2m_get_hostp2m(d);

This seems to be a common idiom in multiple places. I would like to see a 
helper for this.

However, I think this could be avoided if you store a pointer to the p2m 
directly in arch_vcpu.

>
>      if ( is_idle_vcpu(n) )
>          return;
> diff --git a/xen/include/asm-arm/altp2m.h b/xen/include/asm-arm/altp2m.h
> index 790bb33..a6496b7 100644
> --- a/xen/include/asm-arm/altp2m.h
> +++ b/xen/include/asm-arm/altp2m.h
> @@ -49,6 +49,9 @@ void altp2m_teardown(struct domain *d);
>  void altp2m_vcpu_initialise(struct vcpu *v);
>  void altp2m_vcpu_destroy(struct vcpu *v);
>
> +/* Get current alternate p2m table. */
> +struct p2m_domain *altp2m_get_altp2m(struct vcpu *v);
> +

This should be added in the patch where the function was added (i.e 
patch #8).

>  /* Switch alternate p2m for entire domain */
>  int altp2m_switch_domain_altp2m_by_id(struct domain *d,
>                                        unsigned int idx);
>

Regards,

-- 
Julien Grall


* Re: [PATCH v2 14/25] arm/altp2m: Make get_page_from_gva ready for altp2m.
  2016-08-01 17:10 ` [PATCH v2 14/25] arm/altp2m: Make get_page_from_gva " Sergej Proskurin
@ 2016-08-04 11:59   ` Julien Grall
  2016-08-06 10:38     ` Sergej Proskurin
  0 siblings, 1 reply; 159+ messages in thread
From: Julien Grall @ 2016-08-04 11:59 UTC (permalink / raw)
  To: Sergej Proskurin, xen-devel; +Cc: Stefano Stabellini

Hello Sergej,

On 01/08/16 18:10, Sergej Proskurin wrote:
> The function get_page_from_gva uses ARM's hardware support to translate
> gva's to machine addresses. This function is used, among others, for
> memory regulation purposes, e.g., within the context of memory ballooning.
> To ensure correct behavior while altp2m is in use, we use the host's p2m
> table for the associated gva to ma translation. This is required at this
> point, as altp2m lazily copies pages from the host's p2m and might even
> be flushed because of changes to the host's p2m (as it is done within
> the context of memory ballooning).

I was expecting to see some change in p2m_mem_access_check_and_get_page. 
Is there any reason not to fix it?


> Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
> ---
> Cc: Stefano Stabellini <sstabellini@kernel.org>
> Cc: Julien Grall <julien.grall@arm.com>
> ---
>  xen/arch/arm/p2m.c | 31 +++++++++++++++++++++++++++++--
>  1 file changed, 29 insertions(+), 2 deletions(-)
>
> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> index bcad51f..784f8da 100644
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -1614,7 +1614,7 @@ struct page_info *get_page_from_gva(struct vcpu *v, vaddr_t va,
>                                      unsigned long flags)
>  {
>      struct domain *d = v->domain;
> -    struct p2m_domain *p2m = &d->arch.p2m;
> +    struct p2m_domain *p2m = p2m_get_hostp2m(d);

This is more a clean-up than necessary. I would prefer to see a patch 
modifying all "&d->arch.p2m" by p2m_get_hostp2m in one go.

>      struct page_info *page = NULL;
>      paddr_t maddr = 0;
>      int rc;
> @@ -1628,7 +1628,34 @@ struct page_info *get_page_from_gva(struct vcpu *v, vaddr_t va,
>
>      p2m_read_lock(p2m);
>
> -    rc = gvirt_to_maddr(va, &maddr, flags);
> +    /*
> +     * If altp2m is active, we still read the gva from the hostp2m, as it

s/still/need to/

> +     * contains all valid mappings while the currently active altp2m view might
> +     * not have the required gva mapping yet.
> +     */
> +    if ( unlikely(altp2m_active(d)) )
> +    {
> +        unsigned long irq_flags = 0;
> +        uint64_t ovttbr = READ_SYSREG64(VTTBR_EL2);
> +
> +        if ( ovttbr != p2m->vttbr.vttbr )
> +        {
> +            local_irq_save(irq_flags);
> +            WRITE_SYSREG64(p2m->vttbr.vttbr, VTTBR_EL2);
> +            isb();
> +        }
> +
> +        rc = gvirt_to_maddr(va, &maddr, flags);
> +
> +        if ( ovttbr != p2m->vttbr.vttbr )
> +        {
> +            WRITE_SYSREG64(ovttbr, VTTBR_EL2);
> +            isb();
> +            local_irq_restore(irq_flags);
> +        }

The pattern is very similar to what p2m_flush_tlb does. Can we get 
macro/helpers to avoid duplicate code?

> +    }
> +    else
> +        rc = gvirt_to_maddr(va, &maddr, flags);
>
>      if ( rc )
>          goto err;
>

Regards,

-- 
Julien Grall


* Re: [PATCH v2 15/25] arm/altp2m: Extend __p2m_lookup.
  2016-08-01 17:10 ` [PATCH v2 15/25] arm/altp2m: Extend __p2m_lookup Sergej Proskurin
@ 2016-08-04 12:04   ` Julien Grall
  2016-08-06 10:44     ` Sergej Proskurin
  0 siblings, 1 reply; 159+ messages in thread
From: Julien Grall @ 2016-08-04 12:04 UTC (permalink / raw)
  To: Sergej Proskurin, xen-devel; +Cc: Stefano Stabellini

Hello Sergej,

On 01/08/16 18:10, Sergej Proskurin wrote:
> This commit extends the functionality of the function "__p2m_lookup".
> The function "__p2m_lookup" performs the necessary steps gathering
> information concerning memory attributes and the p2m table level a
> specific gfn is mapped to. Thus, we extend the function's prototype so
> that the caller can optionally get these information for further
> processing.
>
> Also, we extend the function prototype of "__p2m_lookup" to hold an
> argument of type "struct p2m_domain*", as we need to distinguish between
> the host's p2m and different altp2m views. While doing so, we needed to
> extend the prototypes of the following functions:
>
> * __p2m_get_mem_access
> * p2m_mem_access_and_get_page
>
> Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
> ---
> Cc: Stefano Stabellini <sstabellini@kernel.org>
> Cc: Julien Grall <julien.grall@arm.com>
> ---
>  xen/arch/arm/p2m.c | 59 ++++++++++++++++++++++++++++++++++++------------------
>  1 file changed, 39 insertions(+), 20 deletions(-)
>
> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> index 784f8da..326e343 100644
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -168,15 +168,22 @@ void p2m_flush_tlb(struct p2m_domain *p2m)
>      }
>  }
>
> +static int __p2m_get_mem_access(struct p2m_domain*, gfn_t, xenmem_access_t*);
> +
>  /*
>   * Lookup the MFN corresponding to a domain's GFN.
>   *
>   * There are no processor functions to do a stage 2 only lookup therefore we
>  * do a software walk.
> + *
> + * Optionally, __p2m_lookup takes arguments to provide information about the
> + * p2m type, the p2m table level the paddr is mapped to, associated mem
> + * attributes, and memory access rights.
>   */
> -static mfn_t __p2m_lookup(struct domain *d, gfn_t gfn, p2m_type_t *t)
> +static mfn_t __p2m_lookup(struct p2m_domain *p2m, gfn_t gfn, p2m_type_t *t,
> +                          unsigned int *level, unsigned int *mattr,
> +                          xenmem_access_t *xma)

Please have a look at my series "xen/arm: Rework the P2M code to follow 
break-before-make sequence" [1]; it will expose a clean interface to 
retrieve all the necessary information.

>  mfn_t p2m_lookup(struct domain *d, gfn_t gfn, p2m_type_t *t)
>  {
>      mfn_t ret;
> -    struct p2m_domain *p2m = &d->arch.p2m;
> +    struct p2m_domain *p2m = p2m_get_hostp2m(d);
>
>      p2m_read_lock(p2m);
> -    ret = __p2m_lookup(d, gfn, t);
> +    ret = __p2m_lookup(p2m, gfn, t, NULL, NULL, NULL);
>      p2m_read_unlock(p2m);
>
>      return ret;
> @@ -479,10 +499,9 @@ static int p2m_create_table(struct p2m_domain *p2m, lpae_t *entry,
>      return 0;
>  }
>
> -static int __p2m_get_mem_access(struct domain *d, gfn_t gfn,
> +static int __p2m_get_mem_access(struct p2m_domain *p2m, gfn_t gfn,
>                                  xenmem_access_t *access)
>  {
> -    struct p2m_domain *p2m = p2m_get_hostp2m(d);
>      void *i;
>      unsigned int index;
>
> @@ -525,7 +544,7 @@ static int __p2m_get_mem_access(struct domain *d, gfn_t gfn,
>           * No setting was found in the Radix tree. Check if the
>           * entry exists in the page-tables.
>           */
> -        mfn_t mfn = __p2m_lookup(d, gfn, NULL);
> +        mfn_t mfn = __p2m_lookup(p2m, gfn, NULL, NULL, NULL, NULL);
>
>          if ( mfn_eq(mfn, INVALID_MFN) )
>              return -ESRCH;
> @@ -1519,7 +1538,7 @@ mfn_t gfn_to_mfn(struct domain *d, gfn_t gfn)
>   * we indeed found a conflicting mem_access setting.
>   */
>  static struct page_info*
> -p2m_mem_access_check_and_get_page(vaddr_t gva, unsigned long flag)
> +p2m_mem_access_check_and_get_page(struct p2m_domain *p2m, vaddr_t gva, unsigned long flag)
>  {
>      long rc;
>      paddr_t ipa;
> @@ -1539,7 +1558,7 @@ p2m_mem_access_check_and_get_page(vaddr_t gva, unsigned long flag)
>       * We do this first as this is faster in the default case when no
>       * permission is set on the page.
>       */
> -    rc = __p2m_get_mem_access(current->domain, gfn, &xma);
> +    rc = __p2m_get_mem_access(p2m, gfn, &xma);
>      if ( rc < 0 )
>          goto err;
>
> @@ -1588,7 +1607,7 @@ p2m_mem_access_check_and_get_page(vaddr_t gva, unsigned long flag)
>       * We had a mem_access permission limiting the access, but the page type
>       * could also be limiting, so we need to check that as well.
>       */
> -    mfn = __p2m_lookup(current->domain, gfn, &t);
> +    mfn = __p2m_lookup(p2m, gfn, &t, NULL, NULL, NULL);
>      if ( mfn_eq(mfn, INVALID_MFN) )
>          goto err;
>
> @@ -1671,7 +1690,7 @@ struct page_info *get_page_from_gva(struct vcpu *v, vaddr_t va,
>
>  err:
>      if ( !page && p2m->mem_access_enabled )
> -        page = p2m_mem_access_check_and_get_page(va, flags);
> +        page = p2m_mem_access_check_and_get_page(p2m, va, flags);

p2m_mem_access_check_and_get_page should take a vCPU in parameter and 
not a p2m.

I know the function is already buggy because it assumes that the vCPU == 
current. But let's not break this function further.

>
>      p2m_read_unlock(p2m);
>
> @@ -1948,7 +1967,7 @@ int p2m_get_mem_access(struct domain *d, gfn_t gfn,
>      struct p2m_domain *p2m = p2m_get_hostp2m(d);
>
>      p2m_read_lock(p2m);
> -    ret = __p2m_get_mem_access(d, gfn, access);
> +    ret = __p2m_get_mem_access(p2m, gfn, access);
>      p2m_read_unlock(p2m);
>
>      return ret;
>

Regards,

[1] 
https://lists.xenproject.org/archives/html/xen-devel/2016-07/msg02952.html

-- 
Julien Grall


* Re: [PATCH v2 17/25] arm/altp2m: Cosmetic fixes - function prototypes.
  2016-08-01 17:10 ` [PATCH v2 17/25] arm/altp2m: Cosmetic fixes - function prototypes Sergej Proskurin
@ 2016-08-04 12:06   ` Julien Grall
  2016-08-06 10:46     ` Sergej Proskurin
  0 siblings, 1 reply; 159+ messages in thread
From: Julien Grall @ 2016-08-04 12:06 UTC (permalink / raw)
  To: Sergej Proskurin, xen-devel; +Cc: Stefano Stabellini

Hello Sergej,

On 01/08/16 18:10, Sergej Proskurin wrote:
> This commit changes the prototype of the following functions:
> - apply_p2m_changes
> - apply_one_level
> - p2m_insert_mapping
> - p2m_remove_mapping
>
> These changes are required as our implementation reuses most of the
> existing ARM p2m implementation to set page table attributes of the
> individual altp2m views. Therefore, existing function prototypes have
> been extended to hold another argument (of type struct p2m_domain *).
> This allows to specify the p2m/altp2m domain that should be processed by
> the individual function -- instead of accessing the host's default p2m
> domain.

I would prefer if you rebase this series on top of "xen/arm: Rework the 
P2M code to follow break-before-make sequence" [1]. This will offer a 
cleaner interface for altp2m.

>
> Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
> ---
> Cc: Stefano Stabellini <sstabellini@kernel.org>
> Cc: Julien Grall <julien.grall@arm.com>
> ---
> v2: Adoption of the functions "__p2m_lookup" and "__p2m_get_mem_access"
>     have been moved out of this commit.
> ---
>  xen/arch/arm/p2m.c | 49 +++++++++++++++++++++++++------------------------
>  1 file changed, 25 insertions(+), 24 deletions(-)
>
> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> index 53258e1..d4b7c92 100644
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -702,6 +702,7 @@ static int p2m_shatter_page(struct p2m_domain *p2m,
>   * -ve == (-Exxx) error.
>   */
>  static int apply_one_level(struct domain *d,
> +                           struct p2m_domain *p2m,
>                             lpae_t *entry,
>                             unsigned int level,
>                             bool_t flush_cache,
> @@ -717,7 +718,6 @@ static int apply_one_level(struct domain *d,
>      const paddr_t level_size = level_sizes[level];
>      const paddr_t level_mask = level_masks[level];
>
> -    struct p2m_domain *p2m = &d->arch.p2m;
>      lpae_t pte;
>      const lpae_t orig_pte = *entry;
>      int rc;
> @@ -955,6 +955,7 @@ static void update_reference_mapping(struct page_info *page,
>  }
>
>  static int apply_p2m_changes(struct domain *d,
> +                     struct p2m_domain *p2m,
>                       enum p2m_operation op,
>                       gfn_t sgfn,
>                       unsigned long nr,
> @@ -967,7 +968,6 @@ static int apply_p2m_changes(struct domain *d,
>      paddr_t end_gpaddr = pfn_to_paddr(gfn_x(sgfn) + nr);
>      paddr_t maddr = pfn_to_paddr(mfn_x(smfn));
>      int rc, ret;
> -    struct p2m_domain *p2m = &d->arch.p2m;
>      lpae_t *mappings[4] = { NULL, NULL, NULL, NULL };
>      struct page_info *pages[4] = { NULL, NULL, NULL, NULL };
>      paddr_t addr;
> @@ -1093,7 +1093,7 @@ static int apply_p2m_changes(struct domain *d,
>              lpae_t *entry = &mappings[level][offset];
>              lpae_t old_entry = *entry;
>
> -            ret = apply_one_level(d, entry,
> +            ret = apply_one_level(d, p2m, entry,
>                                    level, flush_pt, op,
>                                    start_gpaddr, end_gpaddr,
>                                    &addr, &maddr, &flush,
> @@ -1178,7 +1178,7 @@ static int apply_p2m_changes(struct domain *d,
>  out:
>      if ( flush )
>      {
> -        p2m_flush_tlb(&d->arch.p2m);
> +        p2m_flush_tlb(p2m);
>          ret = iommu_iotlb_flush(d, gfn_x(sgfn), nr);
>          if ( !rc )
>              rc = ret;
> @@ -1205,31 +1205,33 @@ out:
>           * addr keeps the address of the end of the last successfully-inserted
>           * mapping.
>           */
> -        apply_p2m_changes(d, REMOVE, sgfn, gfn - gfn_x(sgfn), smfn,
> -                          0, p2m_invalid, d->arch.p2m.default_access);
> +        apply_p2m_changes(d, p2m, REMOVE, sgfn, gfn - gfn_x(sgfn), smfn,
> +                          0, p2m_invalid, p2m->default_access);
>      }
>
>      return rc;
>  }
>
>  static inline int p2m_insert_mapping(struct domain *d,
> +                                     struct p2m_domain *p2m,
>                                       gfn_t start_gfn,
>                                       unsigned long nr,
>                                       mfn_t mfn,
>                                       p2m_type_t t)
>  {
> -    return apply_p2m_changes(d, INSERT, start_gfn, nr, mfn,
> -                             0, t, d->arch.p2m.default_access);
> +    return apply_p2m_changes(d, p2m, INSERT, start_gfn, nr, mfn,
> +                             0, t, p2m->default_access);
>  }
>
>  static inline int p2m_remove_mapping(struct domain *d,
> +                                     struct p2m_domain *p2m,
>                                       gfn_t start_gfn,
>                                       unsigned long nr,
>                                       mfn_t mfn)
>  {
> -    return apply_p2m_changes(d, REMOVE, start_gfn, nr, mfn,
> +    return apply_p2m_changes(d, p2m, REMOVE, start_gfn, nr, mfn,
>                               /* arguments below not used when removing mapping */
> -                             0, p2m_invalid, d->arch.p2m.default_access);
> +                             0, p2m_invalid, p2m->default_access);
>  }
>
>  int map_regions_rw_cache(struct domain *d,
> @@ -1237,7 +1239,7 @@ int map_regions_rw_cache(struct domain *d,
>                           unsigned long nr,
>                           mfn_t mfn)
>  {
> -    return p2m_insert_mapping(d, gfn, nr, mfn, p2m_mmio_direct_c);
> +    return p2m_insert_mapping(d, p2m_get_hostp2m(d), gfn, nr, mfn, p2m_mmio_direct_c);
>  }
>
>  int unmap_regions_rw_cache(struct domain *d,
> @@ -1245,7 +1247,7 @@ int unmap_regions_rw_cache(struct domain *d,
>                             unsigned long nr,
>                             mfn_t mfn)
>  {
> -    return p2m_remove_mapping(d, gfn, nr, mfn);
> +    return p2m_remove_mapping(d, p2m_get_hostp2m(d), gfn, nr, mfn);
>  }
>
>  int map_mmio_regions(struct domain *d,
> @@ -1253,7 +1255,7 @@ int map_mmio_regions(struct domain *d,
>                       unsigned long nr,
>                       mfn_t mfn)
>  {
> -    return p2m_insert_mapping(d, start_gfn, nr, mfn, p2m_mmio_direct_nc);
> +    return p2m_insert_mapping(d, p2m_get_hostp2m(d), start_gfn, nr, mfn, p2m_mmio_direct_nc);
>  }
>
>  int unmap_mmio_regions(struct domain *d,
> @@ -1261,7 +1263,7 @@ int unmap_mmio_regions(struct domain *d,
>                         unsigned long nr,
>                         mfn_t mfn)
>  {
> -    return p2m_remove_mapping(d, start_gfn, nr, mfn);
> +    return p2m_remove_mapping(d, p2m_get_hostp2m(d), start_gfn, nr, mfn);
>  }
>
>  int map_dev_mmio_region(struct domain *d,
> @@ -1291,14 +1293,14 @@ int guest_physmap_add_entry(struct domain *d,
>                              unsigned long page_order,
>                              p2m_type_t t)
>  {
> -    return p2m_insert_mapping(d, gfn, (1 << page_order), mfn, t);
> +    return p2m_insert_mapping(d, p2m_get_hostp2m(d), gfn, (1 << page_order), mfn, t);
>  }
>
>  void guest_physmap_remove_page(struct domain *d,
>                                 gfn_t gfn,
>                                 mfn_t mfn, unsigned int page_order)
>  {
> -    p2m_remove_mapping(d, gfn, (1 << page_order), mfn);
> +    p2m_remove_mapping(d, p2m_get_hostp2m(d), gfn, (1 << page_order), mfn);
>  }
>
>  int p2m_alloc_table(struct p2m_domain *p2m)
> @@ -1505,26 +1507,25 @@ int p2m_init(struct domain *d)
>
>  int relinquish_p2m_mapping(struct domain *d)
>  {
> -    struct p2m_domain *p2m = &d->arch.p2m;
> +    struct p2m_domain *p2m = p2m_get_hostp2m(d);
>      unsigned long nr;
>
>      nr = gfn_x(p2m->max_mapped_gfn) - gfn_x(p2m->lowest_mapped_gfn);
>
> -    return apply_p2m_changes(d, RELINQUISH, p2m->lowest_mapped_gfn, nr,
> -                             INVALID_MFN, 0, p2m_invalid,
> -                             d->arch.p2m.default_access);
> +    return apply_p2m_changes(d, p2m, RELINQUISH, p2m->lowest_mapped_gfn, nr,
> +                             INVALID_MFN, 0, p2m_invalid, p2m->default_access);
>  }
>
>  int p2m_cache_flush(struct domain *d, gfn_t start, unsigned long nr)
>  {
> -    struct p2m_domain *p2m = &d->arch.p2m;
> +    struct p2m_domain *p2m = p2m_get_hostp2m(d);
>      gfn_t end = gfn_add(start, nr);
>
>      start = gfn_max(start, p2m->lowest_mapped_gfn);
>      end = gfn_min(end, p2m->max_mapped_gfn);
>
> -    return apply_p2m_changes(d, CACHEFLUSH, start, nr, INVALID_MFN,
> -                             0, p2m_invalid, d->arch.p2m.default_access);
> +    return apply_p2m_changes(d, p2m, CACHEFLUSH, start, nr, INVALID_MFN,
> +                             0, p2m_invalid, p2m->default_access);
>  }
>
>  mfn_t gfn_to_mfn(struct domain *d, gfn_t gfn)
> @@ -1963,7 +1964,7 @@ long p2m_set_mem_access(struct domain *d, gfn_t gfn, uint32_t nr,
>          return 0;
>      }
>
> -    rc = apply_p2m_changes(d, MEMACCESS, gfn_add(gfn, start),
> +    rc = apply_p2m_changes(d, p2m, MEMACCESS, gfn_add(gfn, start),
>                             (nr - start), INVALID_MFN, mask, 0, a);
>      if ( rc < 0 )
>          return rc;
>

[1] 
https://lists.xenproject.org/archives/html/xen-devel/2016-07/msg02952.html

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 159+ messages in thread

* Re: [PATCH v2 20/25] arm/altp2m: Add altp2m paging mechanism.
  2016-08-01 17:10 ` [PATCH v2 20/25] arm/altp2m: Add altp2m paging mechanism Sergej Proskurin
@ 2016-08-04 13:50   ` Julien Grall
  2016-08-06 12:51     ` Sergej Proskurin
  2016-08-04 16:59   ` Julien Grall
  1 sibling, 1 reply; 159+ messages in thread
From: Julien Grall @ 2016-08-04 13:50 UTC (permalink / raw)
  To: Sergej Proskurin, xen-devel; +Cc: Stefano Stabellini

Hello Sergej,

On 01/08/16 18:10, Sergej Proskurin wrote:
> This commit adds the function "altp2m_lazy_copy" implementing the altp2m
> paging mechanism. The function "altp2m_lazy_copy" lazily copies the
> hostp2m's mapping into the currently active altp2m view on 2nd stage
> translation violations on instruction or data access. Every altp2m
> violation generates a vm_event.

I think you mean "translation fault" and not "violation". The latter 
sounds more like a permission issue, which is not the case here.

However, I am not sure what you mean by "every altp2m violation generates 
a vm_event". Do you mean that userspace will be aware of it?

>
> Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
> ---
> Cc: Stefano Stabellini <sstabellini@kernel.org>
> Cc: Julien Grall <julien.grall@arm.com>
> ---
>  xen/arch/arm/altp2m.c        |  86 ++++++++++++++++++++++++++++++
>  xen/arch/arm/p2m.c           |   6 +++
>  xen/arch/arm/traps.c         | 124 +++++++++++++++++++++++++++++++------------
>  xen/include/asm-arm/altp2m.h |  15 ++++--
>  xen/include/asm-arm/p2m.h    |   6 +--
>  5 files changed, 196 insertions(+), 41 deletions(-)
>
> diff --git a/xen/arch/arm/altp2m.c b/xen/arch/arm/altp2m.c
> index f3c1cff..78fc1d5 100644
> --- a/xen/arch/arm/altp2m.c
> +++ b/xen/arch/arm/altp2m.c
> @@ -33,6 +33,32 @@ struct p2m_domain *altp2m_get_altp2m(struct vcpu *v)
>      return v->domain->arch.altp2m_p2m[index];
>  }
>
> +bool_t altp2m_switch_vcpu_altp2m_by_id(struct vcpu *v, unsigned int idx)
> +{
> +    struct domain *d = v->domain;
> +    bool_t rc = 0;

Please use true/false rather than 0/1.

> +
> +    if ( idx >= MAX_ALTP2M )
> +        return rc;
> +
> +    altp2m_lock(d);
> +
> +    if ( d->arch.altp2m_vttbr[idx] != INVALID_VTTBR )
> +    {
> +        if ( idx != vcpu_altp2m(v).p2midx )
> +        {
> +            atomic_dec(&altp2m_get_altp2m(v)->active_vcpus);
> +            vcpu_altp2m(v).p2midx = idx;
> +            atomic_inc(&altp2m_get_altp2m(v)->active_vcpus);
> +        }
> +        rc = 1;
> +    }
> +
> +    altp2m_unlock(d);
> +
> +    return rc;
> +}
> +

You implement two distinct features in this patch, which makes it really 
difficult to read, and they are not all described in the commit message:

  * Implementation of altp2m_switch_vcpu_altp2m_by_id and p2m_altp2m_check
  * Implementation of lazy copy when receiving a data abort

So please split it.

>  int altp2m_switch_domain_altp2m_by_id(struct domain *d, unsigned int idx)
>  {
>      struct vcpu *v;
> @@ -133,6 +159,66 @@ out:
>      return rc;
>  }
>
> +bool_t altp2m_lazy_copy(struct vcpu *v,
> +                        paddr_t gpa,
> +                        unsigned long gva,

this should be vaddr_t

> +                        struct npfec npfec,
> +                        struct p2m_domain **ap2m)

Why do you need the ap2m parameter? None of the callers makes use of it 
other than setting it.

> +{
> +    struct domain *d = v->domain;
> +    struct p2m_domain *hp2m = p2m_get_hostp2m(v->domain);

p2m_get_hostp2m(d);

> +    p2m_type_t p2mt;
> +    xenmem_access_t xma;
> +    gfn_t gfn = _gfn(paddr_to_pfn(gpa));
> +    mfn_t mfn;
> +    unsigned int level;
> +    int rc = 0;

Please use true/false rather than 0/1. Also this should be bool_t.

> +
> +    static const p2m_access_t memaccess[] = {
> +#define ACCESS(ac) [XENMEM_access_##ac] = p2m_access_##ac
> +        ACCESS(n),
> +        ACCESS(r),
> +        ACCESS(w),
> +        ACCESS(rw),
> +        ACCESS(x),
> +        ACCESS(rx),
> +        ACCESS(wx),
> +        ACCESS(rwx),
> +        ACCESS(rx2rw),
> +        ACCESS(n2rwx),
> +#undef ACCESS
> +    };
> +
> +    *ap2m = altp2m_get_altp2m(v);
> +    if ( *ap2m == NULL)
> +        return 0;
> +
> +    /* Check if entry is part of the altp2m view */
> +    mfn = p2m_lookup_attr(*ap2m, gfn, NULL, NULL, NULL, NULL);
> +    if ( !mfn_eq(mfn, INVALID_MFN) )
> +        goto out;
> +
> +    /* Check if entry is part of the host p2m view */
> +    mfn = p2m_lookup_attr(hp2m, gfn, &p2mt, &level, NULL, &xma);
> +    if ( mfn_eq(mfn, INVALID_MFN) )
> +        goto out;

This is quite racy. The page could be removed from the host p2m by the 
time you add it to the altp2m, because you hold no lock.

> +
> +    rc = modify_altp2m_entry(d, *ap2m, gpa, pfn_to_paddr(mfn_x(mfn)), level,
> +                             p2mt, memaccess[xma]);

Please avoid mixing bool and int, even though today we have implicit 
conversion.

> +    if ( rc )
> +    {
> +        gdprintk(XENLOG_ERR, "failed to set entry for %lx -> %lx p2m %lx\n",
> +                (unsigned long)gpa, (unsigned long)(paddr_to_pfn(mfn_x(mfn))),

By using (unsigned long) you will truncate the address on ARM32, because 
we are able to support addresses of up to 40 bits there.

Also, why do you print the full address? The guest physical address may 
not be page-aligned, so this will confuse the user.

> +                (unsigned long)*ap2m);

It does not seem really helpful to print the pointer here; you will not 
be able to make use of it when reading the log. Also, this should be 
printed with "%p" rather than using a cast.

> +        domain_crash(hp2m->domain);
> +    }
> +
> +    rc = 1;
> +
> +out:
> +    return rc;
> +}
> +
>  static inline void altp2m_reset(struct p2m_domain *p2m)
>  {
>      read_lock(&p2m->lock);
> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> index 31810e6..bee8be7 100644
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -1812,6 +1812,12 @@ void __init setup_virt_paging(void)
>      smp_call_function(setup_virt_paging_one, (void *)val, 1);
>  }
>
> +void p2m_altp2m_check(struct vcpu *v, uint16_t idx)
> +{
> +    if ( altp2m_active(v->domain) )
> +        altp2m_switch_vcpu_altp2m_by_id(v, idx);
> +}
> +

I am not sure why this function lives here.

>  bool_t p2m_mem_access_check(paddr_t gpa, vaddr_t gla, const struct npfec npfec)
>  {
>      int rc;
> diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
> index 12be7c9..628abd7 100644
> --- a/xen/arch/arm/traps.c
> +++ b/xen/arch/arm/traps.c
> @@ -48,6 +48,8 @@
>  #include <asm/vgic.h>
>  #include <asm/cpuerrata.h>
>
> +#include <asm/altp2m.h>
> +
>  /* The base of the stack must always be double-word aligned, which means
>   * that both the kernel half of struct cpu_user_regs (which is pushed in
>   * entry.S) and struct cpu_info (which lives at the bottom of a Xen
> @@ -2403,35 +2405,64 @@ static void do_trap_instr_abort_guest(struct cpu_user_regs *regs,
>      int rc;
>      register_t gva = READ_SYSREG(FAR_EL2);
>      uint8_t fsc = hsr.iabt.ifsc & ~FSC_LL_MASK;
> +    struct vcpu *v = current;
> +    struct domain *d = v->domain;
> +    struct p2m_domain *p2m = NULL;
> +    paddr_t gpa;
> +
> +    if ( hpfar_is_valid(hsr.iabt.s1ptw, fsc) )
> +        gpa = get_faulting_ipa(gva);
> +    else
> +    {
> +        /*
> +         * Flush the TLB to make sure the DTLB is clear before
> +         * doing GVA->IPA translation. If we got here because of
> +         * an entry only present in the ITLB, this translation may
> +         * still be inaccurate.
> +         */
> +        flush_tlb_local();
> +
> +        rc = gva_to_ipa(gva, &gpa, GV2M_READ);
> +        if ( rc == -EFAULT )
> +            return; /* Try again */
> +    }

This code movement should really be a separate patch.

>
>      switch ( fsc )
>      {
> +    case FSC_FLT_TRANS:
> +    {
> +        if ( altp2m_active(d) )
> +        {
> +            const struct npfec npfec = {
> +                .insn_fetch = 1,
> +                .gla_valid = 1,
> +                .kind = hsr.iabt.s1ptw ? npfec_kind_in_gpt : npfec_kind_with_gla
> +            };
> +
> +            /*
> +             * Copy the entire page of the failing instruction into the

I think "page" is misleading here. altp2m_lazy_copy is also able to copy 
a superpage mapping.

> +             * currently active altp2m view.
> +             */
> +            if ( altp2m_lazy_copy(v, gpa, gva, npfec, &p2m) )
> +                return;
> +
> +            rc = p2m_mem_access_check(gpa, gva, npfec);

Why do you call p2m_mem_access_check here? If you got here, it is because 
of a translation fault, which you handle via altp2m_lazy_copy.

> +
> +            /* Trap was triggered by mem_access, work here is done */
> +            if ( !rc )
> +                return;
> +        }
> +
> +        break;
> +    }
>      case FSC_FLT_PERM:
>      {
> -        paddr_t gpa;
>          const struct npfec npfec = {
>              .insn_fetch = 1,
>              .gla_valid = 1,
>              .kind = hsr.iabt.s1ptw ? npfec_kind_in_gpt : npfec_kind_with_gla
>          };
>
> -        if ( hpfar_is_valid(hsr.iabt.s1ptw, fsc) )
> -            gpa = get_faulting_ipa(gva);
> -        else
> -        {
> -            /*
> -             * Flush the TLB to make sure the DTLB is clear before
> -             * doing GVA->IPA translation. If we got here because of
> -             * an entry only present in the ITLB, this translation may
> -             * still be inaccurate.
> -             */
> -            flush_tlb_local();
> -
> -            rc = gva_to_ipa(gva, &gpa, GV2M_READ);
> -            if ( rc == -EFAULT )
> -                return; /* Try again */
> -        }
> -
>          rc = p2m_mem_access_check(gpa, gva, npfec);
>
>          /* Trap was triggered by mem_access, work here is done */
> @@ -2451,6 +2482,8 @@ static void do_trap_data_abort_guest(struct cpu_user_regs *regs,
>      int rc;
>      mmio_info_t info;
>      uint8_t fsc = hsr.dabt.dfsc & ~FSC_LL_MASK;
> +    struct vcpu *v = current;
> +    struct p2m_domain *p2m = NULL;
>
>      info.dabt = dabt;
>  #ifdef CONFIG_ARM_32
> @@ -2459,7 +2492,7 @@ static void do_trap_data_abort_guest(struct cpu_user_regs *regs,
>      info.gva = READ_SYSREG64(FAR_EL2);
>  #endif
>
> -    if ( hpfar_is_valid(hsr.iabt.s1ptw, fsc) )
> +    if ( hpfar_is_valid(hsr.dabt.s1ptw, fsc) )
>          info.gpa = get_faulting_ipa(info.gva);
>      else
>      {
> @@ -2470,23 +2503,31 @@ static void do_trap_data_abort_guest(struct cpu_user_regs *regs,
>
>      switch ( fsc )
>      {
> -    case FSC_FLT_PERM:
> +    case FSC_FLT_TRANS:
>      {
> -        const struct npfec npfec = {
> -            .read_access = !dabt.write,
> -            .write_access = dabt.write,
> -            .gla_valid = 1,
> -            .kind = dabt.s1ptw ? npfec_kind_in_gpt : npfec_kind_with_gla
> -        };
> +        if ( altp2m_active(current->domain) )

I would much prefer to check altp2m only if the MMIO was not emulated 
(i.e. move the code afterwards). This will avoid adding overhead when 
accessing the virtual interrupt controller.

Also, how would that fit with break-before-make sequence introduced in [1]?

> +        {
> +            const struct npfec npfec = {
> +                .read_access = !dabt.write,
> +                .write_access = dabt.write,
> +                .gla_valid = 1,
> +                .kind = dabt.s1ptw ? npfec_kind_in_gpt : npfec_kind_with_gla
> +            };
>
> -        rc = p2m_mem_access_check(info.gpa, info.gva, npfec);
> +            /*
> +             * Copy the entire page of the failing data access into the
> +             * currently active altp2m view.
> +             */
> +            if ( altp2m_lazy_copy(v, info.gpa, info.gva, npfec, &p2m) )
> +                return;
> +
> +            rc = p2m_mem_access_check(info.gpa, info.gva, npfec);

Similar question here.

> +
> +            /* Trap was triggered by mem_access, work here is done */
> +            if ( !rc )
> +                return;
> +        }
>
> -        /* Trap was triggered by mem_access, work here is done */
> -        if ( !rc )
> -            return;
> -        break;
> -    }
> -    case FSC_FLT_TRANS:
>          if ( dabt.s1ptw )
>              goto bad_data_abort;
>
> @@ -2515,6 +2556,23 @@ static void do_trap_data_abort_guest(struct cpu_user_regs *regs,
>              return;
>          }
>          break;
> +    }
> +    case FSC_FLT_PERM:
> +    {
> +        const struct npfec npfec = {
> +            .read_access = !dabt.write,
> +            .write_access = dabt.write,
> +            .gla_valid = 1,
> +            .kind = dabt.s1ptw ? npfec_kind_in_gpt : npfec_kind_with_gla
> +        };
> +
> +        rc = p2m_mem_access_check(info.gpa, info.gva, npfec);
> +
> +        /* Trap was triggered by mem_access, work here is done */
> +        if ( !rc )
> +            return;
> +        break;
> +    }

Why did you move the case handling FSC_FLT_PERM?

>      default:
>          gprintk(XENLOG_WARNING, "Unsupported DFSC: HSR=%#x DFSC=%#x\n",
>                  hsr.bits, dabt.dfsc);
> diff --git a/xen/include/asm-arm/altp2m.h b/xen/include/asm-arm/altp2m.h
> index 9aeb7d6..8bfbc6a 100644
> --- a/xen/include/asm-arm/altp2m.h
> +++ b/xen/include/asm-arm/altp2m.h
> @@ -38,9 +38,7 @@ static inline bool_t altp2m_active(const struct domain *d)
>  /* Alternate p2m VCPU */
>  static inline uint16_t altp2m_vcpu_idx(const struct vcpu *v)
>  {
> -    /* Not implemented on ARM, should not be reached. */
> -    BUG();
> -    return 0;
> +    return vcpu_altp2m(v).p2midx;
>  }
>
>  int altp2m_init(struct domain *d);
> @@ -52,6 +50,10 @@ void altp2m_vcpu_destroy(struct vcpu *v);
>  /* Get current alternate p2m table. */
>  struct p2m_domain *altp2m_get_altp2m(struct vcpu *v);
>
> +/* Switch alternate p2m for a single vcpu. */
> +bool_t altp2m_switch_vcpu_altp2m_by_id(struct vcpu *v,
> +                                       unsigned int idx);
> +
>  /* Switch alternate p2m for entire domain */
>  int altp2m_switch_domain_altp2m_by_id(struct domain *d,
>                                        unsigned int idx);
> @@ -81,6 +83,13 @@ int altp2m_set_mem_access(struct domain *d,
>                            p2m_access_t a,
>                            gfn_t gfn);
>
> +/* Alternate p2m paging mechanism. */
> +bool_t altp2m_lazy_copy(struct vcpu *v,
> +                        paddr_t gpa,
> +                        unsigned long gla,
> +                        struct npfec npfec,
> +                        struct p2m_domain **ap2m);
> +
>  /* Propagates changes made to hostp2m to affected altp2m views. */
>  void altp2m_propagate_change(struct domain *d,
>                               gfn_t sgfn,
> diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
> index 59186c9..16e33ca 100644
> --- a/xen/include/asm-arm/p2m.h
> +++ b/xen/include/asm-arm/p2m.h
> @@ -145,11 +145,7 @@ void p2m_mem_access_emulate_check(struct vcpu *v,
>      /* Not supported on ARM. */
>  }
>
> -static inline
> -void p2m_altp2m_check(struct vcpu *v, uint16_t idx)
> -{
> -    /* Not supported on ARM. */
> -}
> +void p2m_altp2m_check(struct vcpu *v, uint16_t idx);
>
>  /* Initialise vmid allocator */
>  void p2m_vmid_allocator_init(void);
>

[1] 
https://lists.xenproject.org/archives/html/xen-devel/2016-07/msg02957.html

-- 
Julien Grall


* Re: [PATCH v2 21/25] arm/altp2m: Add HVMOP_altp2m_change_gfn.
  2016-08-01 17:10 ` [PATCH v2 21/25] arm/altp2m: Add HVMOP_altp2m_change_gfn Sergej Proskurin
@ 2016-08-04 14:04   ` Julien Grall
  2016-08-06 13:45     ` Sergej Proskurin
  0 siblings, 1 reply; 159+ messages in thread
From: Julien Grall @ 2016-08-04 14:04 UTC (permalink / raw)
  To: Sergej Proskurin, xen-devel; +Cc: Stefano Stabellini

Hello Sergej,

On 01/08/16 18:10, Sergej Proskurin wrote:
> This commit adds the functionality to change mfn mappings for specified
> gfn's in altp2m views. This mechanism can be used within the context of
> VMI, e.g., to establish stealthy debugging.
>
> Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
> ---
> Cc: Stefano Stabellini <sstabellini@kernel.org>
> Cc: Julien Grall <julien.grall@arm.com>
> ---
>  xen/arch/arm/altp2m.c        | 116 +++++++++++++++++++++++++++++++++++++++++++
>  xen/arch/arm/hvm.c           |   7 ++-
>  xen/arch/arm/p2m.c           |  14 ++++++
>  xen/include/asm-arm/altp2m.h |   6 +++
>  xen/include/asm-arm/p2m.h    |   4 ++
>  5 files changed, 146 insertions(+), 1 deletion(-)
>
> diff --git a/xen/arch/arm/altp2m.c b/xen/arch/arm/altp2m.c
> index 78fc1d5..db86c14 100644
> --- a/xen/arch/arm/altp2m.c
> +++ b/xen/arch/arm/altp2m.c
> @@ -294,6 +294,122 @@ out:
>      altp2m_unlock(d);
>  }
>
> +int altp2m_change_gfn(struct domain *d,
> +                      unsigned int idx,
> +                      gfn_t old_gfn,
> +                      gfn_t new_gfn)
> +{
> +    struct p2m_domain *hp2m, *ap2m;
> +    paddr_t old_gpa = pfn_to_paddr(gfn_x(old_gfn));
> +    mfn_t mfn;
> +    xenmem_access_t xma;
> +    p2m_type_t p2mt;
> +    unsigned int level;
> +    int rc = -EINVAL;
> +
> +    static const p2m_access_t memaccess[] = {
> +#define ACCESS(ac) [XENMEM_access_##ac] = p2m_access_##ac
> +        ACCESS(n),
> +        ACCESS(r),
> +        ACCESS(w),
> +        ACCESS(rw),
> +        ACCESS(x),
> +        ACCESS(rx),
> +        ACCESS(wx),
> +        ACCESS(rwx),
> +        ACCESS(rx2rw),
> +        ACCESS(n2rwx),
> +#undef ACCESS
> +    };
> +
> +    if ( idx >= MAX_ALTP2M || d->arch.altp2m_vttbr[idx] == INVALID_VTTBR )

The second check is not safe. Another operation could destroy the altp2m 
at the same time, but because of memory ordering this thread may still 
see altp2m_vttbr as valid.


> +        return rc;
> +
> +    hp2m = p2m_get_hostp2m(d);
> +    ap2m = d->arch.altp2m_p2m[idx];
> +
> +    altp2m_lock(d);
> +
> +    /*
> +     * Flip mem_access_enabled to true when a permission is set, as to prevent
> +     * allocating or inserting super-pages.
> +     */
> +    ap2m->mem_access_enabled = true;

Can you give more details about why you need this?

> +
> +    mfn = p2m_lookup_attr(ap2m, old_gfn, &p2mt, &level, NULL, NULL);
> +
> +    /* Check whether the page needs to be reset. */
> +    if ( gfn_eq(new_gfn, INVALID_GFN) )
> +    {
> +        /* If mfn is mapped by old_gpa, remove old_gpa from the altp2m table. */
> +        if ( !mfn_eq(mfn, INVALID_MFN) )
> +        {
> +            rc = remove_altp2m_entry(d, ap2m, old_gpa, pfn_to_paddr(mfn_x(mfn)), level);

remove_altp2m_entry should take a gfn and an mfn as parameters, not 
addresses. The latter invites misuse of the API.

> +            if ( rc )
> +            {
> +                rc = -EINVAL;
> +                goto out;
> +            }
> +        }
> +
> +        rc = 0;
> +        goto out;
> +    }
> +
> +    /* Check host p2m if no valid entry in altp2m present. */
> +    if ( mfn_eq(mfn, INVALID_MFN) )
> +    {
> +        mfn = p2m_lookup_attr(hp2m, old_gfn, &p2mt, &level, NULL, &xma);
> +        if ( mfn_eq(mfn, INVALID_MFN) || (p2mt != p2m_ram_rw) )

Please add a comment to explain why the second check.

> +        {
> +            rc = -EINVAL;
> +            goto out;
> +        }
> +
> +        /* If this is a superpage, copy that first. */
> +        if ( level != 3 )
> +        {
> +            rc = modify_altp2m_entry(d, ap2m, old_gpa, pfn_to_paddr(mfn_x(mfn)),
> +                                     level, p2mt, memaccess[xma]);
> +            if ( rc )
> +            {
> +                rc = -EINVAL;
> +                goto out;
> +            }
> +        }
> +    }
> +
> +    mfn = p2m_lookup_attr(ap2m, new_gfn, &p2mt, &level, NULL, &xma);
> +
> +    /* If new_gfn is not part of altp2m, get the mapping information from hp2m */
> +    if ( mfn_eq(mfn, INVALID_MFN) )
> +        mfn = p2m_lookup_attr(hp2m, new_gfn, &p2mt, &level, NULL, &xma);
> +
> +    if ( mfn_eq(mfn, INVALID_MFN) || (p2mt != p2m_ram_rw) )

Please add a comment to explain why the second check.

> +    {
> +        rc = -EINVAL;
> +        goto out;
> +    }
> +
> +    /* Set mem access attributes - currently supporting only one (4K) page. */
> +    level = 3;
> +    rc = modify_altp2m_entry(d, ap2m, old_gpa, pfn_to_paddr(mfn_x(mfn)),

modify_altp2m_entry should take a gfn and an mfn as parameters, not 
addresses. The latter invites misuse of the API.

> +                             level, p2mt, memaccess[xma]);
> +    if ( rc )
> +    {
> +        rc = -EINVAL;
> +        goto out;
> +    }
> +
> +    rc = 0;
> +
> +out:
> +    altp2m_unlock(d);
> +
> +    return rc;
> +}
> +
> +
>  static void altp2m_vcpu_reset(struct vcpu *v)
>  {
>      struct altp2mvcpu *av = &vcpu_altp2m(v);
> diff --git a/xen/arch/arm/hvm.c b/xen/arch/arm/hvm.c
> index 00a244a..38b32de 100644
> --- a/xen/arch/arm/hvm.c
> +++ b/xen/arch/arm/hvm.c
> @@ -142,7 +142,12 @@ static int do_altp2m_op(XEN_GUEST_HANDLE_PARAM(void) arg)
>          break;
>
>      case HVMOP_altp2m_change_gfn:
> -        rc = -EOPNOTSUPP;
> +        if ( a.u.change_gfn.pad1 || a.u.change_gfn.pad2 )
> +            rc = -EINVAL;
> +        else
> +            rc = altp2m_change_gfn(d, a.u.change_gfn.view,
> +                                   _gfn(a.u.change_gfn.old_gfn),
> +                                   _gfn(a.u.change_gfn.new_gfn));
>          break;
>      }
>
> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> index bee8be7..2f4751b 100644
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -1321,6 +1321,20 @@ void guest_physmap_remove_page(struct domain *d,
>      p2m_remove_mapping(d, p2m_get_hostp2m(d), gfn, (1 << page_order), mfn);
>  }
>
> +int remove_altp2m_entry(struct domain *d, struct p2m_domain *ap2m,
> +                        paddr_t gpa, paddr_t maddr, unsigned int level)

The interface should take mfn_t and gfn_t as parameters, not addresses.

> +{
> +    paddr_t size = level_sizes[level];
> +    paddr_t mask = level_masks[level];
> +    gfn_t gfn = _gfn(paddr_to_pfn(gpa & mask));
> +    mfn_t mfn = _mfn(paddr_to_pfn(maddr & mask));
> +    unsigned long nr = paddr_to_pfn(size);
> +
> +    ASSERT(p2m_is_altp2m(ap2m));
> +
> +    return p2m_remove_mapping(d, ap2m, gfn, nr, mfn);
> +}
> +
>  int modify_altp2m_entry(struct domain *d, struct p2m_domain *ap2m,
>                          paddr_t gpa, paddr_t maddr, unsigned int level,
>                          p2m_type_t t, p2m_access_t a)
> diff --git a/xen/include/asm-arm/altp2m.h b/xen/include/asm-arm/altp2m.h
> index 8bfbc6a..64fbff7 100644
> --- a/xen/include/asm-arm/altp2m.h
> +++ b/xen/include/asm-arm/altp2m.h
> @@ -99,4 +99,10 @@ void altp2m_propagate_change(struct domain *d,
>                               p2m_type_t p2mt,
>                               p2m_access_t p2ma);
>
> +/* Change a gfn->mfn mapping */
> +int altp2m_change_gfn(struct domain *d,
> +                      unsigned int idx,
> +                      gfn_t old_gfn,
> +                      gfn_t new_gfn);
> +
>  #endif /* __ASM_ARM_ALTP2M_H */
> diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
> index 16e33ca..8433d66 100644
> --- a/xen/include/asm-arm/p2m.h
> +++ b/xen/include/asm-arm/p2m.h
> @@ -182,6 +182,10 @@ mfn_t p2m_lookup_attr(struct p2m_domain *p2m, gfn_t gfn, p2m_type_t *t,
>                        unsigned int *level, unsigned int *mattr,
>                        xenmem_access_t *xma);
>
> +/* Remove an altp2m view's entry. */
> +int remove_altp2m_entry(struct domain *d, struct p2m_domain *p2m,
> +                        paddr_t gpa, paddr_t maddr, unsigned int level);
> +
>  /* Modify an altp2m view's entry or its attributes. */
>  int modify_altp2m_entry(struct domain *d, struct p2m_domain *p2m,
>                          paddr_t gpa, paddr_t maddr, unsigned int level,
>

-- 
Julien Grall


* Re: [PATCH v2 18/25] arm/altp2m: Add HVMOP_altp2m_set_mem_access.
  2016-08-01 17:10 ` [PATCH v2 18/25] arm/altp2m: Add HVMOP_altp2m_set_mem_access Sergej Proskurin
@ 2016-08-04 14:19   ` Julien Grall
  2016-08-06 11:03     ` Sergej Proskurin
  0 siblings, 1 reply; 159+ messages in thread
From: Julien Grall @ 2016-08-04 14:19 UTC (permalink / raw)
  To: Sergej Proskurin, xen-devel; +Cc: Stefano Stabellini

Hello Sergej,

On 01/08/16 18:10, Sergej Proskurin wrote:
>  int p2m_alloc_table(struct p2m_domain *p2m)
>  {
>      unsigned int i;
> @@ -1920,7 +1948,7 @@ long p2m_set_mem_access(struct domain *d, gfn_t gfn, uint32_t nr,
>                          uint32_t start, uint32_t mask, xenmem_access_t access,
>                          unsigned int altp2m_idx)
>  {
> -    struct p2m_domain *p2m = p2m_get_hostp2m(d);
> +    struct p2m_domain *hp2m = p2m_get_hostp2m(d), *ap2m = NULL;
>      p2m_access_t a;
>      long rc = 0;
>
> @@ -1939,33 +1967,60 @@ long p2m_set_mem_access(struct domain *d, gfn_t gfn, uint32_t nr,
>  #undef ACCESS
>      };
>
> +    /* altp2m view 0 is treated as the hostp2m */
> +    if ( altp2m_idx )
> +    {
> +        if ( altp2m_idx >= MAX_ALTP2M ||
> +             d->arch.altp2m_vttbr[altp2m_idx] == INVALID_VTTBR )
> +            return -EINVAL;
> +
> +        ap2m = d->arch.altp2m_p2m[altp2m_idx];
> +    }
> +
>      switch ( access )
>      {
>      case 0 ... ARRAY_SIZE(memaccess) - 1:
>          a = memaccess[access];
>          break;
>      case XENMEM_access_default:
> -        a = p2m->default_access;
> +        a = hp2m->default_access;

Why is default_access taken from the host p2m and not the altp2m?

>          break;
>      default:
>          return -EINVAL;
>      }
>

[...]

> diff --git a/xen/include/asm-arm/altp2m.h b/xen/include/asm-arm/altp2m.h
> index a6496b7..dc41f93 100644
> --- a/xen/include/asm-arm/altp2m.h
> +++ b/xen/include/asm-arm/altp2m.h
> @@ -71,4 +71,14 @@ void altp2m_flush(struct domain *d);
>  int altp2m_destroy_by_id(struct domain *d,
>                           unsigned int idx);
>
> +/* Set memory access attributes of the gfn in the altp2m view. If the altp2m
> + * view does not contain the particular entry, copy it first from the hostp2m.
> + *
> + * Currently supports memory attribute adoptions of only one (4K) page. */

Coding style:

/*
 * Foo
 * Bar
 */

> +int altp2m_set_mem_access(struct domain *d,
> +                          struct p2m_domain *hp2m,
> +                          struct p2m_domain *ap2m,
> +                          p2m_access_t a,
> +                          gfn_t gfn);
> +
>  #endif /* __ASM_ARM_ALTP2M_H */
> diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
> index 32326cb..9859ad1 100644
> --- a/xen/include/asm-arm/p2m.h
> +++ b/xen/include/asm-arm/p2m.h
> @@ -180,6 +180,17 @@ void p2m_dump_info(struct domain *d);
>  /* Look up the MFN corresponding to a domain's GFN. */
>  mfn_t p2m_lookup(struct domain *d, gfn_t gfn, p2m_type_t *t);
>
> +/* Lookup the MFN, memory attributes, and page table level corresponding to a
> + * domain's GFN. */
> +mfn_t p2m_lookup_attr(struct p2m_domain *p2m, gfn_t gfn, p2m_type_t *t,
> +                      unsigned int *level, unsigned int *mattr,
> +                      xenmem_access_t *xma);

I don't want to see such an interface exposed outside of p2m. The outside 
world may not know what the level means. And I don't understand why you 
return "mattr" here.

> +
> +/* Modify an altp2m view's entry or its attributes. */
> +int modify_altp2m_entry(struct domain *d, struct p2m_domain *p2m,
> +                        paddr_t gpa, paddr_t maddr, unsigned int level,
> +                        p2m_type_t t, p2m_access_t a);
> +
>  /* Clean & invalidate caches corresponding to a region of guest address space */
>  int p2m_cache_flush(struct domain *d, gfn_t start, unsigned long nr);
>
> @@ -303,6 +314,16 @@ static inline int get_page_and_type(struct page_info *page,
>  /* get host p2m table */
>  #define p2m_get_hostp2m(d) (&(d)->arch.p2m)
>
> +static inline bool_t p2m_is_hostp2m(const struct p2m_domain *p2m)
> +{
> +    return p2m->p2m_class == p2m_host;
> +}
> +
> +static inline bool_t p2m_is_altp2m(const struct p2m_domain *p2m)
> +{
> +    return p2m->p2m_class == p2m_alternate;
> +}
> +

Why are those helpers only added here? They should be in the same place 
where you define the p2m_class.

>  /* vm_event and mem_access are supported on any ARM guest */
>  static inline bool_t p2m_mem_access_sanity_check(struct domain *d)
>  {
>

Regards,

-- 
Julien Grall


* Re: [PATCH v2 19/25] arm/altp2m: Add altp2m_propagate_change.
  2016-08-01 17:10 ` [PATCH v2 19/25] arm/altp2m: Add altp2m_propagate_change Sergej Proskurin
@ 2016-08-04 14:50   ` Julien Grall
  2016-08-06 11:26     ` Sergej Proskurin
  0 siblings, 1 reply; 159+ messages in thread
From: Julien Grall @ 2016-08-04 14:50 UTC (permalink / raw)
  To: Sergej Proskurin, xen-devel; +Cc: Stefano Stabellini

Hello Sergej,

On 01/08/16 18:10, Sergej Proskurin wrote:
> This commit introduces the function "altp2m_propagate_change", which is
> responsible for propagating changes applied to the host's p2m to a
> specific altp2m view or even to all of them. In this way, Xen can
> increase or decrease the guest's physmem at run-time without leaving the
> altp2m views with stale/invalid entries.
>
> Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
> ---
> Cc: Stefano Stabellini <sstabellini@kernel.org>
> Cc: Julien Grall <julien.grall@arm.com>
> ---
>  xen/arch/arm/altp2m.c        | 75 ++++++++++++++++++++++++++++++++++++++++++++
>  xen/arch/arm/p2m.c           | 14 +++++++++
>  xen/include/asm-arm/altp2m.h |  9 ++++++
>  xen/include/asm-arm/p2m.h    |  5 +++
>  4 files changed, 103 insertions(+)
>
> diff --git a/xen/arch/arm/altp2m.c b/xen/arch/arm/altp2m.c
> index f98fd73..f3c1cff 100644
> --- a/xen/arch/arm/altp2m.c
> +++ b/xen/arch/arm/altp2m.c
> @@ -133,6 +133,81 @@ out:
>      return rc;
>  }
>
> +static inline void altp2m_reset(struct p2m_domain *p2m)
> +{
> +    read_lock(&p2m->lock);

Again, read_lock does not protect you against concurrent access, only 
against someone else updating the page table.

This should be p2m_write_lock.

> +
> +    p2m_flush_table(p2m);
> +    p2m_flush_tlb(p2m);

altp2m_reset may be called on a p2m used by a running vCPU. What this 
code does is:
     1) clearing root page table
     2) free intermediate page table
     3) invalidate the TLB

Until step 3, the other TLBs may still contain entries pointing to the 
intermediate page tables. But those were freed and could therefore be 
re-used for another purpose. So steps 2 and 3 should be swapped.

I will reiterate the same message as in the previous series. Please have 
a think about the locking and memory ordering of this whole series. I 
found a lot of race conditions and I may have missed some. If you have 
any doubt, don't hesitate to ask.

> +
> +    p2m->lowest_mapped_gfn = INVALID_GFN;
> +    p2m->max_mapped_gfn = _gfn(0);
> +
> +    read_unlock(&p2m->lock);
> +}
> +
> +void altp2m_propagate_change(struct domain *d,
> +                             gfn_t sgfn,
> +                             unsigned long nr,
> +                             mfn_t smfn,
> +                             uint32_t mask,
> +                             p2m_type_t p2mt,
> +                             p2m_access_t p2ma)
> +{
> +    struct p2m_domain *p2m;
> +    mfn_t m;
> +    unsigned int i;
> +    unsigned int reset_count = 0;
> +    unsigned int last_reset_idx = ~0;
> +
> +    if ( !altp2m_active(d) )

This is not safe: d->arch.altp2m_active may be turned off just after you 
read it. Maybe you want to protect it with the altp2m_lock.

> +        return;
> +
> +    altp2m_lock(d);
> +
> +    for ( i = 0; i < MAX_ALTP2M; i++ )
> +    {
> +        if ( d->arch.altp2m_vttbr[i] == INVALID_VTTBR )
> +            continue;
> +
> +        p2m = d->arch.altp2m_p2m[i];
> +
> +        m = p2m_lookup_attr(p2m, sgfn, NULL, NULL, NULL, NULL);

What is the benefit of checking whether it already exists in the altp2m?

> +
> +        /* Check for a dropped page that may impact this altp2m. */
> +        if ( (mfn_eq(smfn, INVALID_MFN) || p2mt == p2m_invalid) &&

Why check p2mt against p2m_invalid?

> +             gfn_x(sgfn) >= gfn_x(p2m->lowest_mapped_gfn) &&
> +             gfn_x(sgfn) <= gfn_x(p2m->max_mapped_gfn) )
> +        {
> +            if ( !reset_count++ )
> +            {
> +                altp2m_reset(p2m);
> +                last_reset_idx = i;
> +            }
> +            else
> +            {
> +                /* At least 2 altp2m's impacted, so reset everything. */

So if you remove a 4KB page present in more than 2 altp2ms, you will 
flush all the p2ms. This sounds much more time consuming (you have to 
free all the intermediate page tables) than removing a single 4KB page.

> +                for ( i = 0; i < MAX_ALTP2M; i++ )
> +                {
> +                    if ( i == last_reset_idx ||
> +                         d->arch.altp2m_vttbr[i] == INVALID_VTTBR )
> +                        continue;
> +
> +                    p2m = d->arch.altp2m_p2m[i];
> +                    altp2m_reset(p2m);
> +                }
> +                goto out;
> +            }
> +        }
> +        else if ( !mfn_eq(m, INVALID_MFN) )
> +            modify_altp2m_range(d, p2m, sgfn, nr, smfn,
> +                                mask, p2mt, p2ma);

I am a bit concerned about this function. We decided to limit the size 
of the mappings to avoid long-running memory operations (see XSA-158).

With this function you multiply the duration of the operation by up to 
10 times.

Also, what if modify_altp2m_range fails?


> +    }
> +
> +out:
> +    altp2m_unlock(d);
> +}
> +
>  static void altp2m_vcpu_reset(struct vcpu *v)
>  {
>      struct altp2mvcpu *av = &vcpu_altp2m(v);
> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> index e0a7f38..31810e6 100644
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -992,6 +992,7 @@ static int apply_p2m_changes(struct domain *d,
>      const bool_t preempt = !is_idle_vcpu(current);
>      bool_t flush = false;
>      bool_t flush_pt;
> +    bool_t entry_written = false;
>      PAGE_LIST_HEAD(free_pages);
>      struct page_info *pg;
>
> @@ -1112,6 +1113,7 @@ static int apply_p2m_changes(struct domain *d,
>                                    &addr, &maddr, &flush,
>                                    t, a);
>              if ( ret < 0 ) { rc = ret ; goto out; }
> +            if ( ret ) entry_written = 1;

Please don't mix false and 1. This should be true here.

>              count += ret;
>
>              if ( ret != P2M_ONE_PROGRESS_NOP )
> @@ -1208,6 +1210,9 @@ out:
>
>      p2m_write_unlock(p2m);
>
> +    if ( rc >= 0 && entry_written && p2m_is_hostp2m(p2m) )
> +        altp2m_propagate_change(d, sgfn, nr, smfn, mask, t, a);

There are operations which do not require propagation (for instance 
RELINQUISH and CACHEFLUSH).

> +
>      if ( rc < 0 && ( op == INSERT ) &&
>           addr != start_gpaddr )
>      {
> @@ -1331,6 +1336,15 @@ int modify_altp2m_entry(struct domain *d, struct p2m_domain *ap2m,
>      return apply_p2m_changes(d, ap2m, INSERT, gfn, nr, mfn, 0, t, a);
>  }
>
> +int modify_altp2m_range(struct domain *d, struct p2m_domain *ap2m,
> +                        gfn_t sgfn, unsigned long nr, mfn_t smfn,
> +                        uint32_t m, p2m_type_t t, p2m_access_t a)
> +{
> +    ASSERT(p2m_is_altp2m(ap2m));
> +
> +    return apply_p2m_changes(d, ap2m, INSERT, sgfn, nr, smfn, m, t, a);
> +}
> +
>  int p2m_alloc_table(struct p2m_domain *p2m)
>  {
>      unsigned int i;
> diff --git a/xen/include/asm-arm/altp2m.h b/xen/include/asm-arm/altp2m.h
> index dc41f93..9aeb7d6 100644
> --- a/xen/include/asm-arm/altp2m.h
> +++ b/xen/include/asm-arm/altp2m.h
> @@ -81,4 +81,13 @@ int altp2m_set_mem_access(struct domain *d,
>                            p2m_access_t a,
>                            gfn_t gfn);
>
> +/* Propagates changes made to hostp2m to affected altp2m views. */
> +void altp2m_propagate_change(struct domain *d,
> +                             gfn_t sgfn,
> +                             unsigned long nr,
> +                             mfn_t smfn,
> +                             uint32_t mask,
> +                             p2m_type_t p2mt,
> +                             p2m_access_t p2ma);
> +
>  #endif /* __ASM_ARM_ALTP2M_H */
> diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
> index 9859ad1..59186c9 100644
> --- a/xen/include/asm-arm/p2m.h
> +++ b/xen/include/asm-arm/p2m.h
> @@ -191,6 +191,11 @@ int modify_altp2m_entry(struct domain *d, struct p2m_domain *p2m,
>                          paddr_t gpa, paddr_t maddr, unsigned int level,
>                          p2m_type_t t, p2m_access_t a);
>
> +/* Modify an altp2m view's range of entries or their attributes. */
> +int modify_altp2m_range(struct domain *d, struct p2m_domain *p2m,
> +                        gfn_t sgfn, unsigned long nr, mfn_t smfn,
> +                        uint32_t mask, p2m_type_t t, p2m_access_t a);
> +
>  /* Clean & invalidate caches corresponding to a region of guest address space */
>  int p2m_cache_flush(struct domain *d, gfn_t start, unsigned long nr);
>
>

-- 
Julien Grall


* Re: [PATCH v2 01/25] arm/altp2m: Add first altp2m HVMOP stubs.
  2016-08-03 16:54   ` Julien Grall
@ 2016-08-04 16:01     ` Sergej Proskurin
  2016-08-04 16:04       ` Julien Grall
  2016-08-09 19:16     ` Tamas K Lengyel
  1 sibling, 1 reply; 159+ messages in thread
From: Sergej Proskurin @ 2016-08-04 16:01 UTC (permalink / raw)
  To: Julien Grall, xen-devel; +Cc: Stefano Stabellini

Hi Julien,


On 08/03/2016 06:54 PM, Julien Grall wrote:
> Hello Sergej,
>
> On 01/08/16 18:10, Sergej Proskurin wrote:
>> This commit moves the altp2m-related code from x86 to ARM. Functions
>> that are not yet supported notify the caller or print a BUG message
>> stating their absence.
>>
>> Also, the struct arch_domain is extended with the altp2m_active
>> attribute, representing the current altp2m activity configuration of the
>> domain.
>>
>> Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
>> ---
>> Cc: Stefano Stabellini <sstabellini@kernel.org>
>> Cc: Julien Grall <julien.grall@arm.com>
>> ---
>> v2: Removed altp2m command-line option: Guard through HVM_PARAM_ALTP2M.
>>     Removed not used altp2m helper stubs in altp2m.h.
>> ---
>>  xen/arch/arm/hvm.c           | 79
>> ++++++++++++++++++++++++++++++++++++++++++++
>>  xen/include/asm-arm/altp2m.h |  4 +--
>>  xen/include/asm-arm/domain.h |  3 ++
>>  3 files changed, 84 insertions(+), 2 deletions(-)
>>
>> diff --git a/xen/arch/arm/hvm.c b/xen/arch/arm/hvm.c
>> index d999bde..eb524ae 100644
>> --- a/xen/arch/arm/hvm.c
>> +++ b/xen/arch/arm/hvm.c
>> @@ -32,6 +32,81 @@
>>
>>  #include <asm/hypercall.h>
>>
>> +#include <asm/altp2m.h>
>> +
>> +static int do_altp2m_op(XEN_GUEST_HANDLE_PARAM(void) arg)
>> +{
>> +    struct xen_hvm_altp2m_op a;
>> +    struct domain *d = NULL;
>> +    int rc = 0;
>> +
>> +    if ( copy_from_guest(&a, arg, 1) )
>> +        return -EFAULT;
>> +
>> +    if ( a.pad1 || a.pad2 ||
>> +         (a.version != HVMOP_ALTP2M_INTERFACE_VERSION) ||
>> +         (a.cmd < HVMOP_altp2m_get_domain_state) ||
>> +         (a.cmd > HVMOP_altp2m_change_gfn) )
>> +        return -EINVAL;
>> +
>> +    d = (a.cmd != HVMOP_altp2m_vcpu_enable_notify) ?
>> +        rcu_lock_domain_by_any_id(a.domain) :
>> rcu_lock_current_domain();
>> +
>> +    if ( d == NULL )
>> +        return -ESRCH;
>> +
>> +    if ( (a.cmd != HVMOP_altp2m_get_domain_state) &&
>> +         (a.cmd != HVMOP_altp2m_set_domain_state) &&
>> +         !d->arch.altp2m_active )
>
> Why not using altp2m_active(d) here?
>

I have already changed that within the next patch version. Thank you.

> Also this check looks quite racy. What prevents another CPU from
> disabling altp2m at the same time? How would the code behave?
>

Thank you. I will protect this part with the altp2m_lock.

Best regards,
~Sergej


* Re: [PATCH v2 01/25] arm/altp2m: Add first altp2m HVMOP stubs.
  2016-08-04 16:01     ` Sergej Proskurin
@ 2016-08-04 16:04       ` Julien Grall
  2016-08-04 16:22         ` Sergej Proskurin
  0 siblings, 1 reply; 159+ messages in thread
From: Julien Grall @ 2016-08-04 16:04 UTC (permalink / raw)
  To: Sergej Proskurin, xen-devel; +Cc: Stefano Stabellini



On 04/08/16 17:01, Sergej Proskurin wrote:
> Hi Julien,
>
>
> On 08/03/2016 06:54 PM, Julien Grall wrote:
>> Hello Sergej,
>>
>> On 01/08/16 18:10, Sergej Proskurin wrote:
>>> This commit moves the altp2m-related code from x86 to ARM. Functions
>>> that are not yet supported notify the caller or print a BUG message
>>> stating their absence.
>>>
>>> Also, the struct arch_domain is extended with the altp2m_active
>>> attribute, representing the current altp2m activity configuration of the
>>> domain.
>>>
>>> Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
>>> ---
>>> Cc: Stefano Stabellini <sstabellini@kernel.org>
>>> Cc: Julien Grall <julien.grall@arm.com>
>>> ---
>>> v2: Removed altp2m command-line option: Guard through HVM_PARAM_ALTP2M.
>>>     Removed not used altp2m helper stubs in altp2m.h.
>>> ---
>>>  xen/arch/arm/hvm.c           | 79
>>> ++++++++++++++++++++++++++++++++++++++++++++
>>>  xen/include/asm-arm/altp2m.h |  4 +--
>>>  xen/include/asm-arm/domain.h |  3 ++
>>>  3 files changed, 84 insertions(+), 2 deletions(-)
>>>
>>> diff --git a/xen/arch/arm/hvm.c b/xen/arch/arm/hvm.c
>>> index d999bde..eb524ae 100644
>>> --- a/xen/arch/arm/hvm.c
>>> +++ b/xen/arch/arm/hvm.c
>>> @@ -32,6 +32,81 @@
>>>
>>>  #include <asm/hypercall.h>
>>>
>>> +#include <asm/altp2m.h>
>>> +
>>> +static int do_altp2m_op(XEN_GUEST_HANDLE_PARAM(void) arg)
>>> +{
>>> +    struct xen_hvm_altp2m_op a;
>>> +    struct domain *d = NULL;
>>> +    int rc = 0;
>>> +
>>> +    if ( copy_from_guest(&a, arg, 1) )
>>> +        return -EFAULT;
>>> +
>>> +    if ( a.pad1 || a.pad2 ||
>>> +         (a.version != HVMOP_ALTP2M_INTERFACE_VERSION) ||
>>> +         (a.cmd < HVMOP_altp2m_get_domain_state) ||
>>> +         (a.cmd > HVMOP_altp2m_change_gfn) )
>>> +        return -EINVAL;
>>> +
>>> +    d = (a.cmd != HVMOP_altp2m_vcpu_enable_notify) ?
>>> +        rcu_lock_domain_by_any_id(a.domain) :
>>> rcu_lock_current_domain();
>>> +
>>> +    if ( d == NULL )
>>> +        return -ESRCH;
>>> +
>>> +    if ( (a.cmd != HVMOP_altp2m_get_domain_state) &&
>>> +         (a.cmd != HVMOP_altp2m_set_domain_state) &&
>>> +         !d->arch.altp2m_active )
>>
>> Why not using altp2m_active(d) here?
>>
>
> I have already changed that within the next patch version. Thank you.
>
>> Also this check looks quite racy. What prevents another CPU from
>> disabling altp2m at the same time? How would the code behave?
>>
>
> Thank you. I will protect this part with the altp2m_lock.

I have noticed that you use the altp2m_lock (it is a spinlock) in 
multiple places. So you will serialize a lot of code. Is it fine for you?

Regards,

-- 
Julien Grall


* Re: [PATCH v2 03/25] arm/altp2m: Add struct vttbr.
  2016-08-03 17:05     ` Julien Grall
@ 2016-08-04 16:11       ` Sergej Proskurin
  2016-08-04 16:15         ` Julien Grall
  0 siblings, 1 reply; 159+ messages in thread
From: Sergej Proskurin @ 2016-08-04 16:11 UTC (permalink / raw)
  To: Julien Grall, xen-devel; +Cc: Stefano Stabellini

Hi Julien,

>> Hello Sergej,
>>
>> Title: s/altp2m/p2m/

I will adapt the titles of all patches, thank you.

>>
>> On 01/08/16 18:10, Sergej Proskurin wrote:
>>> The struct vttbr introduces a simple way to precisely access the
>>> individual fields of the vttbr.
>>
>> I am not sure whether this is really helpful. You don't often seem to
>> take advantage of those fields, and the actual accesses don't seem
>> necessary (I will comment on the usage).
>>
>>> ---
>>>  xen/arch/arm/p2m.c              |  8 ++++----
>>>  xen/arch/arm/traps.c            |  2 +-
>>>  xen/include/asm-arm/p2m.h       |  2 +-
>>>  xen/include/asm-arm/processor.h | 16 ++++++++++++++++
>>>  4 files changed, 22 insertions(+), 6 deletions(-)
>>>
>>> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
>>> index 40a0b80..cbc64a1 100644
>>> --- a/xen/arch/arm/p2m.c
>>> +++ b/xen/arch/arm/p2m.c
>>> @@ -122,7 +122,7 @@ void p2m_restore_state(struct vcpu *n)
>>>      WRITE_SYSREG(hcr & ~HCR_VM, HCR_EL2);
>>>      isb();
>>>
>>> -    WRITE_SYSREG64(p2m->vttbr, VTTBR_EL2);
>>> +    WRITE_SYSREG64(p2m->vttbr.vttbr, VTTBR_EL2);
>>>      isb();
>>>
>>>      if ( is_32bit_domain(n->domain) )
>>> @@ -147,10 +147,10 @@ static void p2m_flush_tlb(struct p2m_domain *p2m)
>>>       * VMID. So switch to the VTTBR of a given P2M if different.
>>>       */
>>>      ovttbr = READ_SYSREG64(VTTBR_EL2);
>>> -    if ( ovttbr != p2m->vttbr )
>>> +    if ( ovttbr != p2m->vttbr.vttbr )
>>>      {
>>>          local_irq_save(flags);
>>> -        WRITE_SYSREG64(p2m->vttbr, VTTBR_EL2);
>>> +        WRITE_SYSREG64(p2m->vttbr.vttbr, VTTBR_EL2);
>>>          isb();
>>>      }
>>>
>>> @@ -1293,7 +1293,7 @@ static int p2m_alloc_table(struct domain *d)
>>>
>>>      p2m->root = page;
>>>
>>> -    p2m->vttbr = page_to_maddr(p2m->root) | ((uint64_t)p2m->vmid &
>>> 0xff) << 48;
>>> +    p2m->vttbr.vttbr = page_to_maddr(p2m->root) |
>>> ((uint64_t)p2m->vmid & 0xff) << 48;
>>>
>>>      /*
>>>       * Make sure that all TLBs corresponding to the new VMID are
>>> flushed
>>> diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
>>> index 06f06e3..12be7c9 100644
>>> --- a/xen/arch/arm/traps.c
>>> +++ b/xen/arch/arm/traps.c
>>> @@ -881,7 +881,7 @@ void vcpu_show_registers(const struct vcpu *v)
>>>      ctxt.ifsr32_el2 = v->arch.ifsr;
>>>  #endif
>>>
>>> -    ctxt.vttbr_el2 = v->domain->arch.p2m.vttbr;
>>> +    ctxt.vttbr_el2 = v->domain->arch.p2m.vttbr.vttbr;
>>>
>>>      _show_registers(&v->arch.cpu_info->guest_cpu_user_regs, &ctxt, 1,
>>> v);
>>>  }
>>> diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
>>> index 53c4d78..5c7cd1a 100644
>>> --- a/xen/include/asm-arm/p2m.h
>>> +++ b/xen/include/asm-arm/p2m.h
>>> @@ -33,7 +33,7 @@ struct p2m_domain {
>>>      uint8_t vmid;
>>>
>>>      /* Current Translation Table Base Register for the p2m */
>>> -    uint64_t vttbr;
>>> +    struct vttbr vttbr;
>>>
>>>      /*
>>>       * Highest guest frame that's ever been mapped in the p2m
>>> diff --git a/xen/include/asm-arm/processor.h
>>> b/xen/include/asm-arm/processor.h
>>> index 15bf890..f8ca18c 100644
>>> --- a/xen/include/asm-arm/processor.h
>>> +++ b/xen/include/asm-arm/processor.h
>>> @@ -529,6 +529,22 @@ union hsr {
>>>
>>>
>>>  };
>>> +
>>> +/* VTTBR: Virtualization Translation Table Base Register */
>>> +struct vttbr {
>>> +    union {
>>> +        struct {
>>> +            u64 baddr :40, /* variable res0: from 0-(x-1) bit */
>>
>> As mentioned on the previous series, this field is 48 bits for ARMv8
>> (see ARM D7.2.102 in DDI 0487A.j).
>>

I must have missed it during refactoring. At this point, I will
distinguish between __arm__ and __aarch64__, thank you.

>>> +                res1  :8,
>>> +                vmid  :8,
>>> +                res2  :8;
>>> +        };
>>> +        u64 vttbr;
>>> +    };
>>> +};
>>> +
>>> +#define INVALID_VTTBR (0UL)
>>> +
>>>  #endif
>>>
>>>  /* HSR.EC == HSR_CP{15,14,10}_32 */
>>>
>>
>> Regards,
>>
>

Best regards,
~Sergej



* Re: [PATCH v2 03/25] arm/altp2m: Add struct vttbr.
  2016-08-04 16:11       ` Sergej Proskurin
@ 2016-08-04 16:15         ` Julien Grall
  2016-08-06  8:54           ` Sergej Proskurin
  0 siblings, 1 reply; 159+ messages in thread
From: Julien Grall @ 2016-08-04 16:15 UTC (permalink / raw)
  To: Sergej Proskurin, xen-devel; +Cc: Stefano Stabellini



On 04/08/16 17:11, Sergej Proskurin wrote:
>>>> diff --git a/xen/include/asm-arm/processor.h
>>>> b/xen/include/asm-arm/processor.h
>>>> index 15bf890..f8ca18c 100644
>>>> --- a/xen/include/asm-arm/processor.h
>>>> +++ b/xen/include/asm-arm/processor.h
>>>> @@ -529,6 +529,22 @@ union hsr {
>>>>
>>>>
>>>>  };
>>>> +
>>>> +/* VTTBR: Virtualization Translation Table Base Register */
>>>> +struct vttbr {
>>>> +    union {
>>>> +        struct {
>>>> +            u64 baddr :40, /* variable res0: from 0-(x-1) bit */
>>>
>>> As mentioned on the previous series, this field is 48 bits for ARMv8
>>> (see ARM D7.2.102 in DDI 0487A.j).
>>>
>
> I must have missed it during refactoring. At this point, I will
> distinguish between __arm__ and __aarch64__, thank you.

After reading this series I see no point in having this union, so I 
would much prefer to see this patch dropped.

Regards,

-- 
Julien Grall


* Re: [PATCH v2 01/25] arm/altp2m: Add first altp2m HVMOP stubs.
  2016-08-04 16:04       ` Julien Grall
@ 2016-08-04 16:22         ` Sergej Proskurin
  2016-08-04 16:51           ` Julien Grall
  0 siblings, 1 reply; 159+ messages in thread
From: Sergej Proskurin @ 2016-08-04 16:22 UTC (permalink / raw)
  To: Julien Grall, xen-devel; +Cc: Stefano Stabellini


On 08/04/2016 06:04 PM, Julien Grall wrote:
>
>
> On 04/08/16 17:01, Sergej Proskurin wrote:
>> Hi Julien,
>>
>>
>> On 08/03/2016 06:54 PM, Julien Grall wrote:
>>> Hello Sergej,
>>>
>>> On 01/08/16 18:10, Sergej Proskurin wrote:
>>>> This commit moves the altp2m-related code from x86 to ARM. Functions
>>>> that are not yet supported notify the caller or print a BUG message
>>>> stating their absence.
>>>>
>>>> Also, the struct arch_domain is extended with the altp2m_active
>>>> attribute, representing the current altp2m activity configuration
>>>> of the
>>>> domain.
>>>>
>>>> Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
>>>> ---
>>>> Cc: Stefano Stabellini <sstabellini@kernel.org>
>>>> Cc: Julien Grall <julien.grall@arm.com>
>>>> ---
>>>> v2: Removed altp2m command-line option: Guard through
>>>> HVM_PARAM_ALTP2M.
>>>>     Removed not used altp2m helper stubs in altp2m.h.
>>>> ---
>>>>  xen/arch/arm/hvm.c           | 79
>>>> ++++++++++++++++++++++++++++++++++++++++++++
>>>>  xen/include/asm-arm/altp2m.h |  4 +--
>>>>  xen/include/asm-arm/domain.h |  3 ++
>>>>  3 files changed, 84 insertions(+), 2 deletions(-)
>>>>
>>>> diff --git a/xen/arch/arm/hvm.c b/xen/arch/arm/hvm.c
>>>> index d999bde..eb524ae 100644
>>>> --- a/xen/arch/arm/hvm.c
>>>> +++ b/xen/arch/arm/hvm.c
>>>> @@ -32,6 +32,81 @@
>>>>
>>>>  #include <asm/hypercall.h>
>>>>
>>>> +#include <asm/altp2m.h>
>>>> +
>>>> +static int do_altp2m_op(XEN_GUEST_HANDLE_PARAM(void) arg)
>>>> +{
>>>> +    struct xen_hvm_altp2m_op a;
>>>> +    struct domain *d = NULL;
>>>> +    int rc = 0;
>>>> +
>>>> +    if ( copy_from_guest(&a, arg, 1) )
>>>> +        return -EFAULT;
>>>> +
>>>> +    if ( a.pad1 || a.pad2 ||
>>>> +         (a.version != HVMOP_ALTP2M_INTERFACE_VERSION) ||
>>>> +         (a.cmd < HVMOP_altp2m_get_domain_state) ||
>>>> +         (a.cmd > HVMOP_altp2m_change_gfn) )
>>>> +        return -EINVAL;
>>>> +
>>>> +    d = (a.cmd != HVMOP_altp2m_vcpu_enable_notify) ?
>>>> +        rcu_lock_domain_by_any_id(a.domain) :
>>>> rcu_lock_current_domain();
>>>> +
>>>> +    if ( d == NULL )
>>>> +        return -ESRCH;
>>>> +
>>>> +    if ( (a.cmd != HVMOP_altp2m_get_domain_state) &&
>>>> +         (a.cmd != HVMOP_altp2m_set_domain_state) &&
>>>> +         !d->arch.altp2m_active )
>>>
>>> Why not using altp2m_active(d) here?
>>>
>>
>> I have already changed that within the next patch version. Thank you.
>>
>>> Also this check looks quite racy. What prevents another CPU from
>>> disabling altp2m at the same time? How would the code behave?
>>>
>>
>> Thank you. I will protect this part with the altp2m_lock.
>
> I have noticed that you use the altp2m_lock (it is a spinlock) in
> multiple places. So you will serialize a lot of code. Is it fine for you?
>

I would need to move the lock from altp2m_init_by_id to the outside.
This would not lock much more code than it already does. Apart from that,
since activating/deactivating altp2m on a specific domain should happen
very rarely (including the first time, when no altp2m structures are
initialized), it is fine with me. Unless you would like me to use a
different lock instead?

Best regards,
~Sergej



* Re: [PATCH v2 01/25] arm/altp2m: Add first altp2m HVMOP stubs.
  2016-08-04 16:22         ` Sergej Proskurin
@ 2016-08-04 16:51           ` Julien Grall
  2016-08-05  6:55             ` Sergej Proskurin
  0 siblings, 1 reply; 159+ messages in thread
From: Julien Grall @ 2016-08-04 16:51 UTC (permalink / raw)
  To: Sergej Proskurin, xen-devel; +Cc: Stefano Stabellini



On 04/08/16 17:22, Sergej Proskurin wrote:
>
> On 08/04/2016 06:04 PM, Julien Grall wrote:
>>
>>
>> On 04/08/16 17:01, Sergej Proskurin wrote:
>>> Hi Julien,
>>>
>>>
>>> On 08/03/2016 06:54 PM, Julien Grall wrote:
>>>> Hello Sergej,
>>>>
>>>> On 01/08/16 18:10, Sergej Proskurin wrote:
>>>>> This commit moves the altp2m-related code from x86 to ARM. Functions
>>>>> that are not yet supported notify the caller or print a BUG message
>>>>> stating their absence.
>>>>>
>>>>> Also, the struct arch_domain is extended with the altp2m_active
>>>>> attribute, representing the current altp2m activity configuration
>>>>> of the
>>>>> domain.
>>>>>
>>>>> Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
>>>>> ---
>>>>> Cc: Stefano Stabellini <sstabellini@kernel.org>
>>>>> Cc: Julien Grall <julien.grall@arm.com>
>>>>> ---
>>>>> v2: Removed altp2m command-line option: Guard through
>>>>> HVM_PARAM_ALTP2M.
>>>>>     Removed not used altp2m helper stubs in altp2m.h.
>>>>> ---
>>>>>  xen/arch/arm/hvm.c           | 79
>>>>> ++++++++++++++++++++++++++++++++++++++++++++
>>>>>  xen/include/asm-arm/altp2m.h |  4 +--
>>>>>  xen/include/asm-arm/domain.h |  3 ++
>>>>>  3 files changed, 84 insertions(+), 2 deletions(-)
>>>>>
>>>>> diff --git a/xen/arch/arm/hvm.c b/xen/arch/arm/hvm.c
>>>>> index d999bde..eb524ae 100644
>>>>> --- a/xen/arch/arm/hvm.c
>>>>> +++ b/xen/arch/arm/hvm.c
>>>>> @@ -32,6 +32,81 @@
>>>>>
>>>>>  #include <asm/hypercall.h>
>>>>>
>>>>> +#include <asm/altp2m.h>
>>>>> +
>>>>> +static int do_altp2m_op(XEN_GUEST_HANDLE_PARAM(void) arg)
>>>>> +{
>>>>> +    struct xen_hvm_altp2m_op a;
>>>>> +    struct domain *d = NULL;
>>>>> +    int rc = 0;
>>>>> +
>>>>> +    if ( copy_from_guest(&a, arg, 1) )
>>>>> +        return -EFAULT;
>>>>> +
>>>>> +    if ( a.pad1 || a.pad2 ||
>>>>> +         (a.version != HVMOP_ALTP2M_INTERFACE_VERSION) ||
>>>>> +         (a.cmd < HVMOP_altp2m_get_domain_state) ||
>>>>> +         (a.cmd > HVMOP_altp2m_change_gfn) )
>>>>> +        return -EINVAL;
>>>>> +
>>>>> +    d = (a.cmd != HVMOP_altp2m_vcpu_enable_notify) ?
>>>>> +        rcu_lock_domain_by_any_id(a.domain) :
>>>>> rcu_lock_current_domain();
>>>>> +
>>>>> +    if ( d == NULL )
>>>>> +        return -ESRCH;
>>>>> +
>>>>> +    if ( (a.cmd != HVMOP_altp2m_get_domain_state) &&
>>>>> +         (a.cmd != HVMOP_altp2m_set_domain_state) &&
>>>>> +         !d->arch.altp2m_active )
>>>>
>>>> Why not using altp2m_active(d) here?
>>>>
>>>
>>> I have already changed that within the next patch version. Thank you.
>>>
>>>> Also this check looks quite racy. What prevents another CPU from
>>>> disabling altp2m at the same time? How would the code behave?
>>>>
>>>
>>> Thank you. I will protect this part with the altp2m_lock.
>>
>> I have noticed that you use the altp2m_lock (it is a spinlock) in
>> multiple places. So you will serialize a lot of code. Is it fine for you?
>>
>
> I would need to move the lock from altp2m_init_by_id to the outside.
> This would not lock much more code as it already does. Apart from that,
> since activating/deactivating altp2m on a specific domain should be used
> very rarely (including the first time when no altp2m structures are
> initialized), it is fine to me. Unless, you would like me to use a
> different lock instead?

I don't know, it was an open question. There are a couple of places 
where you may need to take the altp2m_lock (such as altp2m_copy_lazy), 
so you would serialize all the accesses. If you care about performance, 
then you may want to use a different lock or method of locking (such as 
a read-write lock).

Regards,

-- 
Julien Grall


* Re: [PATCH v2 20/25] arm/altp2m: Add altp2m paging mechanism.
  2016-08-01 17:10 ` [PATCH v2 20/25] arm/altp2m: Add altp2m paging mechanism Sergej Proskurin
  2016-08-04 13:50   ` Julien Grall
@ 2016-08-04 16:59   ` Julien Grall
  2016-08-06 12:57     ` Sergej Proskurin
  1 sibling, 1 reply; 159+ messages in thread
From: Julien Grall @ 2016-08-04 16:59 UTC (permalink / raw)
  To: Sergej Proskurin, xen-devel; +Cc: Stefano Stabellini

Hi Sergej,

On 01/08/16 18:10, Sergej Proskurin wrote:
> diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
> index 12be7c9..628abd7 100644
> --- a/xen/arch/arm/traps.c
> +++ b/xen/arch/arm/traps.c

[...]

> @@ -2403,35 +2405,64 @@ static void do_trap_instr_abort_guest(struct cpu_user_regs *regs,

[...]

>      switch ( fsc )
>      {
> +    case FSC_FLT_TRANS:
> +    {
> +        if ( altp2m_active(d) )
> +        {
> +            const struct npfec npfec = {
> +                .insn_fetch = 1,
> +                .gla_valid = 1,
> +                .kind = hsr.iabt.s1ptw ? npfec_kind_in_gpt : npfec_kind_with_gla
> +            };
> +
> +            /*
> +             * Copy the entire page of the failing instruction into the
> +             * currently active altp2m view.
> +             */
> +            if ( altp2m_lazy_copy(v, gpa, gva, npfec, &p2m) )
> +                return;

I forgot to mention that I think there is a race condition here. If 
multiple vCPUs (let's say A and B) use the same altp2m, they may both 
fault here.

If vCPU A has already fixed the fault, this function will return false 
and continue. So this will lead to injecting an instruction abort into 
the guest.

> +
> +            rc = p2m_mem_access_check(gpa, gva, npfec);
> +
> +            /* Trap was triggered by mem_access, work here is done */
> +            if ( !rc )
> +                return;
> +        }
> +
> +        break;
> +    }

[...]

> @@ -2470,23 +2503,31 @@ static void do_trap_data_abort_guest(struct cpu_user_regs *regs,
>
>      switch ( fsc )
>      {
> -    case FSC_FLT_PERM:
> +    case FSC_FLT_TRANS:
>      {
> -        const struct npfec npfec = {
> -            .read_access = !dabt.write,
> -            .write_access = dabt.write,
> -            .gla_valid = 1,
> -            .kind = dabt.s1ptw ? npfec_kind_in_gpt : npfec_kind_with_gla
> -        };
> +        if ( altp2m_active(current->domain) )
> +        {
> +            const struct npfec npfec = {
> +                .read_access = !dabt.write,
> +                .write_access = dabt.write,
> +                .gla_valid = 1,
> +                .kind = dabt.s1ptw ? npfec_kind_in_gpt : npfec_kind_with_gla
> +            };
>
> -        rc = p2m_mem_access_check(info.gpa, info.gva, npfec);
> +            /*
> +             * Copy the entire page of the failing data access into the
> +             * currently active altp2m view.
> +             */
> +            if ( altp2m_lazy_copy(v, info.gpa, info.gva, npfec, &p2m) )
> +                return;

Ditto.

> +
> +            rc = p2m_mem_access_check(info.gpa, info.gva, npfec);
> +
> +            /* Trap was triggered by mem_access, work here is done */
> +            if ( !rc )
> +                return;
> +        }

Regards,

-- 
Julien Grall


* Re: [PATCH v2 07/25] arm/altp2m: Add altp2m init/teardown routines.
  2016-08-03 18:12   ` Julien Grall
@ 2016-08-05  6:53     ` Sergej Proskurin
  2016-08-05  9:20       ` Julien Grall
  2016-08-09  9:44       ` Sergej Proskurin
  0 siblings, 2 replies; 159+ messages in thread
From: Sergej Proskurin @ 2016-08-05  6:53 UTC (permalink / raw)
  To: Julien Grall, xen-devel; +Cc: Stefano Stabellini

Hi Julien,


On 08/03/2016 08:12 PM, Julien Grall wrote:
> Hello Sergej,
>
> On 01/08/16 18:10, Sergej Proskurin wrote:
>> The p2m initialization now invokes initialization routines responsible
>> for the allocation and initialization of altp2m structures. The same
>> applies to teardown routines. The functionality has been adopted from
>> the x86 altp2m implementation.
>>
>> Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
>> ---
>> Cc: Stefano Stabellini <sstabellini@kernel.org>
>> Cc: Julien Grall <julien.grall@arm.com>
>> ---
>> v2: Shared code between host/altp2m init/teardown functions.
>>     Added conditional init/teardown of altp2m.
>>     Altp2m related functions are moved to altp2m.c
>> ---
>>  xen/arch/arm/Makefile        |  1 +
>>  xen/arch/arm/altp2m.c        | 71
>> ++++++++++++++++++++++++++++++++++++++++++++
>>  xen/arch/arm/p2m.c           | 28 +++++++++++++----
>>  xen/include/asm-arm/altp2m.h |  6 ++++
>>  xen/include/asm-arm/domain.h |  4 +++
>>  xen/include/asm-arm/p2m.h    |  5 ++++
>>  6 files changed, 110 insertions(+), 5 deletions(-)
>>  create mode 100644 xen/arch/arm/altp2m.c
>>
>> diff --git a/xen/arch/arm/Makefile b/xen/arch/arm/Makefile
>> index 23aaf52..4a7f660 100644
>> --- a/xen/arch/arm/Makefile
>> +++ b/xen/arch/arm/Makefile
>> @@ -5,6 +5,7 @@ subdir-$(CONFIG_ARM_64) += efi
>>  subdir-$(CONFIG_ACPI) += acpi
>>
>>  obj-$(CONFIG_ALTERNATIVE) += alternative.o
>> +obj-y += altp2m.o
>>  obj-y += bootfdt.o
>>  obj-y += cpu.o
>>  obj-y += cpuerrata.o
>> diff --git a/xen/arch/arm/altp2m.c b/xen/arch/arm/altp2m.c
>> new file mode 100644
>> index 0000000..abbd39a
>> --- /dev/null
>> +++ b/xen/arch/arm/altp2m.c
>> @@ -0,0 +1,71 @@
>> +/*
>> + * arch/arm/altp2m.c
>> + *
>> + * Alternate p2m
>> + * Copyright (c) 2016 Sergej Proskurin <proskurin@sec.in.tum.de>
>> + *
>> + * This program is free software; you can redistribute it and/or
>> modify it
>> + * under the terms and conditions of the GNU General Public License,
>> version 2,
>> + * as published by the Free Software Foundation.
>> + *
>> + * This program is distributed in the hope it will be useful, but
>> WITHOUT ANY
>> + * WARRANTY; without even the implied warranty of MERCHANTABILITY or
>> FITNESS
>> + * FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
>> more
>> + * details.
>> + *
>> + * You should have received a copy of the GNU General Public License
>> along with
>> + * this program; If not, see <http://www.gnu.org/licenses/>.
>> + */
>> +
>> +#include <asm/p2m.h>
>> +#include <asm/altp2m.h>
>> +
>> +int altp2m_init(struct domain *d)
>> +{
>> +    unsigned int i;
>> +
>> +    spin_lock_init(&d->arch.altp2m_lock);
>> +
>> +    for ( i = 0; i < MAX_ALTP2M; i++ )
>> +    {
>> +        d->arch.altp2m_p2m[i] = NULL;
>> +        d->arch.altp2m_vttbr[i] = INVALID_VTTBR;
>
> I don't think altp2m_vttbr is useful. There is no real performance
> impact in freeing the whole altp2m when it is destroyed (see
> altp2m_destroy_by_id) and re-allocating it afterwards.
>
> The code will actually be much simpler. With this solution you will be
> able to detect whether an altp2m is available by testing whether
> altp2m_p2m[i] is NULL.
>

This is true. I did not want to free the entire altp2m every time the
hostp2m got changed while altp2m was active (see
altp2m_propagate_change). But it would not introduce much more overhead
if it did. Thank you.

>> +    }
>> +
>> +    return 0;
>> +}
>> +
>> +void altp2m_teardown(struct domain *d)
>> +{
>> +    unsigned int i;
>> +    struct p2m_domain *p2m;
>> +
>> +    altp2m_lock(d);
>
> The lock is not necessary here.
>

Fair, the domain gets destroyed anyway. Thank you.

>> +
>> +    for ( i = 0; i < MAX_ALTP2M; i++ )
>> +    {
>> +        if ( !d->arch.altp2m_p2m[i] )
>> +            continue;
>> +
>> +        p2m = d->arch.altp2m_p2m[i];
>> +        p2m_free_one(p2m);
>> +        xfree(p2m);
>> +
>> +        d->arch.altp2m_vttbr[i] = INVALID_VTTBR;
>> +        d->arch.altp2m_p2m[i] = NULL;
>
> The domain will never be used afterwards, so there is no point in
> setting altp2m_vttbr and altp2m_p2m.
>

Makes sense.

>> +    }
>> +
>> +    d->arch.altp2m_active = false;
>
> Ditto.
>

Thanks.

>> +
>> +    altp2m_unlock(d);
>> +}
>> +
>> +/*
>> + * Local variables:
>> + * mode: C
>> + * c-file-style: "BSD"
>> + * c-basic-offset: 4
>> + * tab-width: 4
>> + * indent-tabs-mode: nil
>> + * End:
>> + */
>> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
>> index ff9c0d1..29ec5e5 100644
>> --- a/xen/arch/arm/p2m.c
>> +++ b/xen/arch/arm/p2m.c
>> @@ -14,6 +14,8 @@
>>  #include <asm/hardirq.h>
>>  #include <asm/page.h>
>>
>> +#include <asm/altp2m.h>
>> +
>>  #ifdef CONFIG_ARM_64
>>  static unsigned int __read_mostly p2m_root_order;
>>  static unsigned int __read_mostly p2m_root_level;
>> @@ -1392,7 +1394,7 @@ void p2m_flush_table(struct p2m_domain *p2m)
>>          free_domheap_page(pg);
>>  }
>>
>> -static inline void p2m_free_one(struct p2m_domain *p2m)
>> +void p2m_free_one(struct p2m_domain *p2m)
>
> Please expose p2m_free_one in patch #4.
>

Ok.

>>  {
>>      p2m_flush_table(p2m);
>>
>> @@ -1415,9 +1417,13 @@ int p2m_init_one(struct domain *d, struct
>> p2m_domain *p2m)
>>      rwlock_init(&p2m->lock);
>>      INIT_PAGE_LIST_HEAD(&p2m->pages);
>>
>> -    rc = p2m_alloc_vmid(p2m);
>> -    if ( rc != 0 )
>> -        return rc;
>> +    /* Reused altp2m views keep their VMID. */
>> +    if ( p2m->vmid == INVALID_VMID )
>> +    {
>> +        rc = p2m_alloc_vmid(p2m);
>> +        if ( rc != 0 )
>> +            return rc;
>> +    }
>
> My suggestion above will avoid this kind of hack.
>
>>
>>      p2m->domain = d;
>>      p2m->access_required = false;
>> @@ -1441,6 +1447,9 @@ static void p2m_teardown_hostp2m(struct domain *d)
>>
>>  void p2m_teardown(struct domain *d)
>>  {
>> +    if ( altp2m_enabled(d) )
>> +        altp2m_teardown(d);
>> +
>>      p2m_teardown_hostp2m(d);
>>  }
>>
>> @@ -1460,7 +1469,16 @@ static int p2m_init_hostp2m(struct domain *d)
>>
>>  int p2m_init(struct domain *d)
>>  {
>> -    return p2m_init_hostp2m(d);
>> +    int rc;
>> +
>> +    rc = p2m_init_hostp2m(d);
>> +    if ( rc )
>> +        return rc;
>> +
>> +    if ( altp2m_enabled(d) )
>
> I am a bit skeptical that you can fully use altp2m with this check.
> p2m_init is called at the very beginning, when the domain is
> allocated. So HVM_PARAM_ALTP2M will not be set yet.
>

I will respond to this comment after I have made sure this is the case.
You are right: the need for this call depends on the point in the
domain initialization process at which libxl sets the parameter
HVM_PARAM_ALTP2M. Thank you.

>> +        rc = altp2m_init(d);
>> +
>> +    return rc;
>>  }
>>
>>  int relinquish_p2m_mapping(struct domain *d)
>> diff --git a/xen/include/asm-arm/altp2m.h b/xen/include/asm-arm/altp2m.h
>> index d47b249..79ea66b 100644
>> --- a/xen/include/asm-arm/altp2m.h
>> +++ b/xen/include/asm-arm/altp2m.h
>> @@ -22,6 +22,9 @@
>>
>>  #include <xen/sched.h>
>>
>> +#define altp2m_lock(d)    spin_lock(&(d)->arch.altp2m_lock)
>> +#define altp2m_unlock(d)  spin_unlock(&(d)->arch.altp2m_lock)
>> +
>>  #define altp2m_enabled(d)
>> ((d)->arch.hvm_domain.params[HVM_PARAM_ALTP2M])
>>
>>  /* Alternate p2m on/off per domain */
>> @@ -38,4 +41,7 @@ static inline uint16_t altp2m_vcpu_idx(const struct
>> vcpu *v)
>>      return 0;
>>  }
>>
>> +int altp2m_init(struct domain *d);
>> +void altp2m_teardown(struct domain *d);
>> +
>>  #endif /* __ASM_ARM_ALTP2M_H */
>> diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
>> index cc4bda0..3c25ea5 100644
>> --- a/xen/include/asm-arm/domain.h
>> +++ b/xen/include/asm-arm/domain.h
>> @@ -129,6 +129,10 @@ struct arch_domain
>>
>>      /* altp2m: allow multiple copies of host p2m */
>>      bool_t altp2m_active;
>> +    struct p2m_domain *altp2m_p2m[MAX_ALTP2M];
>> +    uint64_t altp2m_vttbr[MAX_ALTP2M];
>> +    /* Covers access to members of the struct altp2m. */
>
> I cannot find any "struct altp2m" in the code.
>

Stale comment, thank you.

>> +    spinlock_t altp2m_lock;
>>  }  __cacheline_aligned;
>>
>>  struct arch_vcpu
>> diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
>> index 1f9c370..24a1f61 100644
>> --- a/xen/include/asm-arm/p2m.h
>> +++ b/xen/include/asm-arm/p2m.h
>> @@ -9,6 +9,8 @@
>>  #include <xen/p2m-common.h>
>>  #include <public/memory.h>
>>
>> +#define MAX_ALTP2M 10           /* ARM might contain an arbitrary
>> number of
>> +                                   altp2m views. */
>>  #define paddr_bits PADDR_BITS
>>
>>  /* Holds the bit size of IPAs in p2m tables.  */
>> @@ -212,6 +214,9 @@ void guest_physmap_remove_page(struct domain *d,
>>
>>  mfn_t gfn_to_mfn(struct domain *d, gfn_t gfn);
>>
>> +/* Release resources held by the p2m structure. */
>> +void p2m_free_one(struct p2m_domain *p2m);
>> +
>>  /*
>>   * Populate-on-demand
>>   */
>>
>

Best regards,
~Sergej



* Re: [PATCH v2 01/25] arm/altp2m: Add first altp2m HVMOP stubs.
  2016-08-04 16:51           ` Julien Grall
@ 2016-08-05  6:55             ` Sergej Proskurin
  0 siblings, 0 replies; 159+ messages in thread
From: Sergej Proskurin @ 2016-08-05  6:55 UTC (permalink / raw)
  To: Julien Grall, xen-devel; +Cc: Stefano Stabellini

Hi Julien,


On 08/04/2016 06:51 PM, Julien Grall wrote:
>
>
> On 04/08/16 17:22, Sergej Proskurin wrote:
>>
>> On 08/04/2016 06:04 PM, Julien Grall wrote:
>>>
>>>
>>> On 04/08/16 17:01, Sergej Proskurin wrote:
>>>> Hi Julien,
>>>>
>>>>
>>>> On 08/03/2016 06:54 PM, Julien Grall wrote:
>>>>> Hello Sergej,
>>>>>
>>>>> On 01/08/16 18:10, Sergej Proskurin wrote:
>>>>>> This commit moves the altp2m-related code from x86 to ARM. Functions
>>>>>> that are not yet supported notify the caller or print a BUG message
>>>>>> stating their absence.
>>>>>>
>>>>>> Also, the struct arch_domain is extended with the altp2m_active
>>>>>> attribute, representing the current altp2m activity configuration
>>>>>> of the
>>>>>> domain.
>>>>>>
>>>>>> Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
>>>>>> ---
>>>>>> Cc: Stefano Stabellini <sstabellini@kernel.org>
>>>>>> Cc: Julien Grall <julien.grall@arm.com>
>>>>>> ---
>>>>>> v2: Removed altp2m command-line option: Guard through
>>>>>> HVM_PARAM_ALTP2M.
>>>>>>     Removed not used altp2m helper stubs in altp2m.h.
>>>>>> ---
>>>>>>  xen/arch/arm/hvm.c           | 79
>>>>>> ++++++++++++++++++++++++++++++++++++++++++++
>>>>>>  xen/include/asm-arm/altp2m.h |  4 +--
>>>>>>  xen/include/asm-arm/domain.h |  3 ++
>>>>>>  3 files changed, 84 insertions(+), 2 deletions(-)
>>>>>>
>>>>>> diff --git a/xen/arch/arm/hvm.c b/xen/arch/arm/hvm.c
>>>>>> index d999bde..eb524ae 100644
>>>>>> --- a/xen/arch/arm/hvm.c
>>>>>> +++ b/xen/arch/arm/hvm.c
>>>>>> @@ -32,6 +32,81 @@
>>>>>>
>>>>>>  #include <asm/hypercall.h>
>>>>>>
>>>>>> +#include <asm/altp2m.h>
>>>>>> +
>>>>>> +static int do_altp2m_op(XEN_GUEST_HANDLE_PARAM(void) arg)
>>>>>> +{
>>>>>> +    struct xen_hvm_altp2m_op a;
>>>>>> +    struct domain *d = NULL;
>>>>>> +    int rc = 0;
>>>>>> +
>>>>>> +    if ( copy_from_guest(&a, arg, 1) )
>>>>>> +        return -EFAULT;
>>>>>> +
>>>>>> +    if ( a.pad1 || a.pad2 ||
>>>>>> +         (a.version != HVMOP_ALTP2M_INTERFACE_VERSION) ||
>>>>>> +         (a.cmd < HVMOP_altp2m_get_domain_state) ||
>>>>>> +         (a.cmd > HVMOP_altp2m_change_gfn) )
>>>>>> +        return -EINVAL;
>>>>>> +
>>>>>> +    d = (a.cmd != HVMOP_altp2m_vcpu_enable_notify) ?
>>>>>> +        rcu_lock_domain_by_any_id(a.domain) :
>>>>>> rcu_lock_current_domain();
>>>>>> +
>>>>>> +    if ( d == NULL )
>>>>>> +        return -ESRCH;
>>>>>> +
>>>>>> +    if ( (a.cmd != HVMOP_altp2m_get_domain_state) &&
>>>>>> +         (a.cmd != HVMOP_altp2m_set_domain_state) &&
>>>>>> +         !d->arch.altp2m_active )
>>>>>
>>>>> Why not use altp2m_active(d) here?
>>>>>
>>>>
>>>> I have already changed that within the next patch version. Thank you.
>>>>
>>>>> Also this check looks quite racy. What prevents another CPU from
>>>>> disabling altp2m at the same time? How would the code behave?
>>>>>
>>>>
>>>> Thank you. I will protect this part with the altp2m_lock.
>>>
>>> I have noticed that you use the altp2m_lock (it is a spinlock) in
>>> multiple places. So you will serialize a lot of code. Is it fine for
>>> you?
>>>
>>
>> I would need to move the lock from altp2m_init_by_id to the outside.
>> This would not lock much more code than it already does. Apart from that,
>> since activating/deactivating altp2m on a specific domain should happen
>> very rarely (including the first time, when no altp2m structures are
>> initialized), it is fine with me. Unless you would like me to use a
>> different lock instead?
>
> I don't know, it was an open question. There are a couple of places
> where you may need to add the altp2m_lock (such as altp2m_lazy_copy),
> so you would serialize all the accesses. If you care about performance,
> then you may want to use a different lock or method of locking (such
> as a read-write lock).
>

Fair enough. I will go through the locks and think about your
suggestion, thank you.

Best regards,
~Sergej


* Re: [PATCH v2 04/25] arm/altp2m: Move hostp2m init/teardown to individual functions.
  2016-08-03 17:40   ` Julien Grall
@ 2016-08-05  7:26     ` Sergej Proskurin
  2016-08-05  9:16       ` Julien Grall
  0 siblings, 1 reply; 159+ messages in thread
From: Sergej Proskurin @ 2016-08-05  7:26 UTC (permalink / raw)
  To: Julien Grall, xen-devel; +Cc: Stefano Stabellini

Hi Julien,


On 08/03/2016 07:40 PM, Julien Grall wrote:
> Hello Sergej,
>
> Title: s/altp2m/p2m/ and please drop the full stop.

Ok.

>
> On 01/08/16 18:10, Sergej Proskurin wrote:
>> This commit pulls out generic init/teardown functionality out of
>> p2m_init and p2m_teardown into p2m_init_one, p2m_free_one, and
>> p2m_flush_table functions.  This allows our future implementation to
>> reuse existing code for the initialization/teardown of altp2m views.
>
> Please avoid mixing up code movement and new additions. This makes the
> code more difficult to review.

I will remove unnecessary code movements.

>
> Also, you don't mention the new changes in the commit message.

Since this patch is new to the patch series (the patch that got split is
#07, where I have commented on the changes), I did not add any, but
rather described the patch without being specific to the patch version.

>
> After reading the patch, I think it should really be divided, with an
> explanation of why you split it like that.
>

Ok.

>>
>> Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
>> ---
>> Cc: Stefano Stabellini <sstabellini@kernel.org>
>> Cc: Julien Grall <julien.grall@arm.com>
>> ---
>> v2: Added the function p2m_flush_table to the previous version.
>> ---
>>  xen/arch/arm/p2m.c        | 74
>> +++++++++++++++++++++++++++++++++++++----------
>>  xen/include/asm-arm/p2m.h | 11 +++++++
>>  2 files changed, 70 insertions(+), 15 deletions(-)
>>
>> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
>> index cbc64a1..17f3299 100644
>> --- a/xen/arch/arm/p2m.c
>> +++ b/xen/arch/arm/p2m.c
>> @@ -1360,50 +1360,94 @@ static void p2m_free_vmid(struct domain *d)
>>      spin_unlock(&vmid_alloc_lock);
>>  }
>>
>> -void p2m_teardown(struct domain *d)
>> +/* Reset this p2m table to be empty */
>> +void p2m_flush_table(struct p2m_domain *p2m)
>
> Any function exported should have its prototype in an header within
> the same patch.
>

I will change that.

>>  {
>> -    struct p2m_domain *p2m = &d->arch.p2m;
>> -    struct page_info *pg;
>> +    struct page_info *page, *pg;
>> +    unsigned int i;
>> +
>> +    page = p2m->root;
>
>
> This function can be called with p2m->root equal to NULL (see the
> check in p2m_free_one).
>

I will add the check, thank you.

>> +
>> +    /* Clear all concatenated first level pages */
>> +    for ( i = 0; i < P2M_ROOT_PAGES; i++ )
>> +        clear_and_clean_page(page + i);
>>
>> +    /* Free the rest of the trie pages back to the paging pool */
>>      while ( (pg = page_list_remove_head(&p2m->pages)) )
>>          free_domheap_page(pg);
>> +}
>> +
>> +static inline void p2m_free_one(struct p2m_domain *p2m)
>
> Why inline here? Also, it seems that you export the function later.
> Why don't you do it here?
>

I will do that. Thank you.

> Finally, I think this function should be rename p2m_teardown_one to
> match the callers' name.
>

Ok.

>> +{
>> +    p2m_flush_table(p2m);
>> +
>> +    /* Free VMID and reset VTTBR */
>> +    p2m_free_vmid(p2m->domain);
>
> Why do you move the call to p2m_free_vmid?
>

When flushing a table, I did not want to free the associated VMID, as it
would need to be allocated again right afterwards (see
altp2m_propagate_change and altp2m_reset). Since this would also need to
be done in functions like p2m_altp2m_reset, I moved this call to
p2m_free_one. I believe there is no need to free the VMIDs if the
associated p2m is not freed as well.

>> +    p2m->vttbr.vttbr = INVALID_VTTBR;
>
> Why do you reset vttbr, the p2m will never be used afterwards.
>

Fair. I did that just for the sake of completeness.

>>
>>      if ( p2m->root )
>>          free_domheap_pages(p2m->root, P2M_ROOT_ORDER);
>>
>>      p2m->root = NULL;
>>
>> -    p2m_free_vmid(d);
>> -
>>      radix_tree_destroy(&p2m->mem_access_settings, NULL);
>>  }
>>
>> -int p2m_init(struct domain *d)
>> +int p2m_init_one(struct domain *d, struct p2m_domain *p2m)
>
> Any function exported should have its prototype in an header within
> the same patch.
>

I will change that, thank you.

>>  {
>> -    struct p2m_domain *p2m = &d->arch.p2m;
>>      int rc = 0;
>>
>>      rwlock_init(&p2m->lock);
>>      INIT_PAGE_LIST_HEAD(&p2m->pages);
>>
>> -    p2m->vmid = INVALID_VMID;
>> -
>
> Why this is dropped?
>

This will be shown in patch #07. We reuse altp2m views and check whether
a p2m was flushed by checking for a valid VMID.

>>      rc = p2m_alloc_vmid(d);
>>      if ( rc != 0 )
>>          return rc;
>>
>> -    p2m->max_mapped_gfn = _gfn(0);
>> -    p2m->lowest_mapped_gfn = _gfn(ULONG_MAX);
>> -
>> -    p2m->default_access = p2m_access_rwx;
>> +    p2m->domain = d;
>> +    p2m->access_required = false;
>>      p2m->mem_access_enabled = false;
>> +    p2m->default_access = p2m_access_rwx;
>> +    p2m->root = NULL;
>> +    p2m->max_mapped_gfn = _gfn(0);
>> +    p2m->lowest_mapped_gfn = INVALID_GFN;
>
> Please don't move code when it is not necessary. This makes the code
> more difficult to review.
>

Ok.

>> +    p2m->vttbr.vttbr = INVALID_VTTBR;
>>      radix_tree_init(&p2m->mem_access_settings);
>>
>> -    rc = p2m_alloc_table(d);
>> -
>
> The function p2m_init_one should fully initialize the p2m (i.e
> allocate the table).
>

The function p2m_init_one is currently also called for initializing the
hostp2m, which is not dynamically allocated. Since we are sharing code
between the hostp2m and altp2m initialization, I solved it that way. We
could always allocate the hostp2m dynamically; that would solve it quite
easily without additional checks. The other solution would be to simply
check whether the p2m is NULL and perform the additional p2m allocation.
What do you think?

> Why doesn't altp2m_destroy_by_id free the p2m entirely?
> This would simplify this series a lot and avoid spreading p2m
> initialization everywhere.
>

Same reason. I will change that accordingly, thank you.

>>      return rc;
>>  }
>>
>> +static void p2m_teardown_hostp2m(struct domain *d)
>> +{
>> +    struct p2m_domain *p2m = p2m_get_hostp2m(d);
>> +
>> +    p2m_free_one(p2m);
>> +}
>> +
>> +void p2m_teardown(struct domain *d)
>> +{
>> +    p2m_teardown_hostp2m(d);
>> +}
>> +
>> +static int p2m_init_hostp2m(struct domain *d)
>> +{
>> +    int rc;
>> +    struct p2m_domain *p2m = p2m_get_hostp2m(d);
>> +
>> +    p2m->p2m_class = p2m_host;
>> +
>> +    rc = p2m_init_one(d, p2m);
>> +    if ( rc )
>> +        return rc;
>> +
>> +    return p2m_alloc_table(d);
>> +}
>> +
>> +int p2m_init(struct domain *d)
>> +{
>> +    return p2m_init_hostp2m(d);
>> +}
>> +
>>  int relinquish_p2m_mapping(struct domain *d)
>>  {
>>      struct p2m_domain *p2m = &d->arch.p2m;
>> diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
>> index 5c7cd1a..1f9c370 100644
>> --- a/xen/include/asm-arm/p2m.h
>> +++ b/xen/include/asm-arm/p2m.h
>> @@ -18,6 +18,11 @@ struct domain;
>>
>>  extern void memory_type_changed(struct domain *);
>>
>> +typedef enum {
>> +    p2m_host,
>> +    p2m_alternate,
>> +} p2m_class_t;
>> +
>
> This addition should really be in a separate patch.
>

The function p2m_init_hostp2m uses p2m_host for initialization. I can,
however, introduce a patch before this one to make it easier for the
reviewer.

>>  /* Per-p2m-table state */
>>  struct p2m_domain {
>>      /* Lock that protects updates to the p2m */
>> @@ -78,6 +83,12 @@ struct p2m_domain {
>>       * enough available bits to store this information.
>>       */
>>      struct radix_tree_root mem_access_settings;
>> +
>> +    /* Choose between: host/alternate */
>> +    p2m_class_t p2m_class;
>> +
>> +    /* Back pointer to domain */
>> +    struct domain *domain;
>
> Same here. With justification why we want it.
>

Ok.

>>  };
>>
>>  /*
>>

Thank you very much.

Best regards,
~Sergej



* Re: [PATCH v2 04/25] arm/altp2m: Move hostp2m init/teardown to individual functions.
  2016-08-05  7:26     ` Sergej Proskurin
@ 2016-08-05  9:16       ` Julien Grall
  2016-08-06  8:43         ` Sergej Proskurin
  0 siblings, 1 reply; 159+ messages in thread
From: Julien Grall @ 2016-08-05  9:16 UTC (permalink / raw)
  To: Sergej Proskurin, xen-devel; +Cc: Stefano Stabellini



On 05/08/16 08:26, Sergej Proskurin wrote:
> Hi Julien,

Hello Sergej,

>
> On 08/03/2016 07:40 PM, Julien Grall wrote:

[...]

>>> +{
>>> +    p2m_flush_table(p2m);
>>> +
>>> +    /* Free VMID and reset VTTBR */
>>> +    p2m_free_vmid(p2m->domain);
>>
>> Why do you move the call to p2m_free_vmid?
>>
>
> When flushing a table, I did not want to free the associated VMID, as it
> would need to be allocated right afterwards (see altp2m_propagate_change
> and altp2m_reset). Since this would need to be done also in functions
> like p2m_altp2m_reset, I moved this call to p2m_free_one. I believe
> there is no need to free the VMIDs if the associated p2m is not freed as
> well.

That does not answer my question. You moved p2m_free_vmid within 
p2m_free_one. It used to be at the end of the function and now it is at 
the beginning. See...

>>> +    p2m->vttbr.vttbr = INVALID_VTTBR;

[...]

>>>
>>>      if ( p2m->root )
>>>          free_domheap_pages(p2m->root, P2M_ROOT_ORDER);
>>>
>>>      p2m->root = NULL;
>>>
>>> -    p2m_free_vmid(d);
>>> -

here. So please don't move code unless there is a good reason. This 
series is already quite difficult to read.

>>>      radix_tree_destroy(&p2m->mem_access_settings, NULL);
>>>  }
>>>
>>> -int p2m_init(struct domain *d)
>>> +int p2m_init_one(struct domain *d, struct p2m_domain *p2m)
>>
>> Any function exported should have its prototype in an header within
>> the same patch.
>>
>
> I will change that, thank you.
>
>>>  {
>>> -    struct p2m_domain *p2m = &d->arch.p2m;
>>>      int rc = 0;
>>>
>>>      rwlock_init(&p2m->lock);
>>>      INIT_PAGE_LIST_HEAD(&p2m->pages);
>>>
>>> -    p2m->vmid = INVALID_VMID;
>>> -
>>
>> Why this is dropped?
>>
>
> This will be shown in patch #07. We reuse altp2m views and check whether
> a p2m was flushed by checking for a valid VMID.

Which is wrong. A flushed altp2m should not need to be reinitialized 
(i.e. resetting the lock, ...).

A flushed altp2m only needs to have max_mapped_gfn and lowest_mapped_gfn 
reset.

[...]

>>> +    p2m->vttbr.vttbr = INVALID_VTTBR;
>>>      radix_tree_init(&p2m->mem_access_settings);
>>>
>>> -    rc = p2m_alloc_table(d);
>>> -
>>
>> The function p2m_init_one should fully initialize the p2m (i.e
>> allocate the table).
>>
>
> The function p2m_init_one is currently called also for initializing the
> hostp2m, which is not dynamically allocated. Since we are sharing code
> between the hostp2m and altp2m initialization, I solved it that way. We
> could always allocate the hostp2m dynamically, that would solve it quite
> easily without additional checks. The other solution can simply check,
> whether the p2m is NULL and perform additional p2m allocation. What do
> you think?

I don't understand your point here. It does not matter whether the 
hostp2m is allocated dynamically or not.

p2m_init_one should fully initialize a p2m, and this does not depend on 
dynamic allocation.

[...]

>>> diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
>>> index 5c7cd1a..1f9c370 100644
>>> --- a/xen/include/asm-arm/p2m.h
>>> +++ b/xen/include/asm-arm/p2m.h
>>> @@ -18,6 +18,11 @@ struct domain;
>>>
>>>  extern void memory_type_changed(struct domain *);
>>>
>>> +typedef enum {
>>> +    p2m_host,
>>> +    p2m_alternate,
>>> +} p2m_class_t;
>>> +
>>
>> This addition should really be in a separate patch.
>>
>
> The function p2m_init_hostp2m uses p2m_host for initialization. I can
> introduce a patch before this one, however to make it easier for the
> reviewer.

You did not get my point here. A patch should do one thing: moving code 
or adding new code. Not both at the same time.

Adding p2m_class_t and setting p2m_host should really be done in a 
separate patch. This will ease the review of this patch series.

Regards,

-- 
Julien Grall


* Re: [PATCH v2 07/25] arm/altp2m: Add altp2m init/teardown routines.
  2016-08-05  6:53     ` Sergej Proskurin
@ 2016-08-05  9:20       ` Julien Grall
  2016-08-06  8:30         ` Sergej Proskurin
  2016-08-09  9:44       ` Sergej Proskurin
  1 sibling, 1 reply; 159+ messages in thread
From: Julien Grall @ 2016-08-05  9:20 UTC (permalink / raw)
  To: Sergej Proskurin, xen-devel; +Cc: Stefano Stabellini

On 05/08/16 07:53, Sergej Proskurin wrote:
> Hi Julien,

Hello Sergej,

> On 08/03/2016 08:12 PM, Julien Grall wrote:
>> On 01/08/16 18:10, Sergej Proskurin wrote:
>>> +int altp2m_init(struct domain *d)
>>> +{
>>> +    unsigned int i;
>>> +
>>> +    spin_lock_init(&d->arch.altp2m_lock);
>>> +
>>> +    for ( i = 0; i < MAX_ALTP2M; i++ )
>>> +    {
>>> +        d->arch.altp2m_p2m[i] = NULL;
>>> +        d->arch.altp2m_vttbr[i] = INVALID_VTTBR;
>>
>> I don't think altp2m_vttbr is useful. There is no real performance
>> impact in freeing the whole altp2m when it is destroyed (see
>> altp2m_destroy_by_id) and re-allocating it afterwards.
>>
>> The code will actually be much simpler. With this solution you will be
>> able to detect whether an altp2m is available by testing whether
>> altp2m_p2m[i] is NULL.
>>
>
> This is true. I did not want to free the entire altp2m every time the
> hostp2m got changed while altp2m was active (see
> altp2m_propagate_change). But it would not introduce much more overhead
> if it did. Thank you.

I think you misunderstood my point. When you flush the altp2m you only 
need to reset lowest_mapped_gfn and max_mapped_gfn, aside from freeing 
the intermediate page tables.

The altp2m should be fully freed when it gets destroyed; nobody will use 
it anyway. This will also simplify the logic a lot.

Regards,

-- 
Julien Grall


* Re: [PATCH v2 07/25] arm/altp2m: Add altp2m init/teardown routines.
  2016-08-05  9:20       ` Julien Grall
@ 2016-08-06  8:30         ` Sergej Proskurin
  0 siblings, 0 replies; 159+ messages in thread
From: Sergej Proskurin @ 2016-08-06  8:30 UTC (permalink / raw)
  To: Julien Grall, xen-devel; +Cc: Stefano Stabellini

Hi Julien,


On 08/05/2016 11:20 AM, Julien Grall wrote:
> On 05/08/16 07:53, Sergej Proskurin wrote:
>> Hi Julien,
>
> Hello Sergej,
>
>> On 08/03/2016 08:12 PM, Julien Grall wrote:
>>> On 01/08/16 18:10, Sergej Proskurin wrote:
>>>> +int altp2m_init(struct domain *d)
>>>> +{
>>>> +    unsigned int i;
>>>> +
>>>> +    spin_lock_init(&d->arch.altp2m_lock);
>>>> +
>>>> +    for ( i = 0; i < MAX_ALTP2M; i++ )
>>>> +    {
>>>> +        d->arch.altp2m_p2m[i] = NULL;
>>>> +        d->arch.altp2m_vttbr[i] = INVALID_VTTBR;
>>>
>>> I don't think altp2m_vttbr is useful. There is no real performance
>>> impact in freeing the whole altp2m when it is destroyed (see
>>> altp2m_destroy_by_id) and re-allocating it afterwards.
>>>
>>> The code will actually be much simpler. With this solution you will be
>>> able to detect whether an altp2m is available by testing if
>>> altp2m_p2m[i] is NULL.
>>>
>>
>> This is true. I did not want to free the entire altp2m every time
>> the hostp2m got changed while altp2m was active (see
>> altp2m_propagate_change). But it would not introduce much more overhead
>> when it does. Thank you.
>
> I think you misunderstood my point. When you flush the altp2m you only
> need to reset lowest_mapped_gfn and max_mapped_gfn, besides freeing the
> intermediate page tables.
>

I see your point. I will consider your suggestion in the next patch.
Thank you.

> The altp2m should be fully freed when it gets destroyed; nobody will
> use it anyway. This will also greatly simplify the logic.

Best regards,
~Sergej



* Re: [PATCH v2 04/25] arm/altp2m: Move hostp2m init/teardown to individual functions.
  2016-08-05  9:16       ` Julien Grall
@ 2016-08-06  8:43         ` Sergej Proskurin
  2016-08-06 13:26           ` Julien Grall
  0 siblings, 1 reply; 159+ messages in thread
From: Sergej Proskurin @ 2016-08-06  8:43 UTC (permalink / raw)
  To: Julien Grall, xen-devel; +Cc: Stefano Stabellini

Hi Julien,


On 08/05/2016 11:16 AM, Julien Grall wrote:
>
>
> On 05/08/16 08:26, Sergej Proskurin wrote:
>> Hi Julien,
>
> Hello Sergej,
>
>>
>> On 08/03/2016 07:40 PM, Julien Grall wrote:
>
> [...]
>
>>>> +{
>>>> +    p2m_flush_table(p2m);
>>>> +
>>>> +    /* Free VMID and reset VTTBR */
>>>> +    p2m_free_vmid(p2m->domain);
>>>
>>> Why do you move the call to p2m_free_vmid?
>>>
>>
>> When flushing a table, I did not want to free the associated VMID, as it
>> would need to be allocated right afterwards (see altp2m_propagate_change
>> and altp2m_reset). Since this would need to be done also in functions
>> like p2m_altp2m_reset, I moved this call to p2m_free_one. I believe
>> there is no need to free the VMIDs if the associated p2m is not freed as
>> well.
>
> That does not answer my question. You moved p2m_free_vmid within
> p2m_free_one. It used to be at the end of the function and now it is
> at the beginning. See...
>
>>>> +    p2m->vttbr.vttbr = INVALID_VTTBR;
>
> [...]
>
>>>>
>>>>      if ( p2m->root )
>>>>          free_domheap_pages(p2m->root, P2M_ROOT_ORDER);
>>>>
>>>>      p2m->root = NULL;
>>>>
>>>> -    p2m_free_vmid(d);
>>>> -
>
> here. So please don't move code unless there is a good reason. This
> series is already quite difficult to read.
>

This move did not have any particular reason except that, to me, it
appeared more logical and cleaner to read this way. Apart from that:
this patch creates two functions out of one (in the case of the former
p2m_teardown). Because of this, I did not even think of preserving a
certain function call order, as the former function no longer exists in
the way it used to.

>>>>      radix_tree_destroy(&p2m->mem_access_settings, NULL);
>>>>  }
>>>>
>>>> -int p2m_init(struct domain *d)
>>>> +int p2m_init_one(struct domain *d, struct p2m_domain *p2m)
>>>
>>> Any function exported should have its prototype in an header within
>>> the same patch.
>>>
>>
>> I will change that, thank you.
>>
>>>>  {
>>>> -    struct p2m_domain *p2m = &d->arch.p2m;
>>>>      int rc = 0;
>>>>
>>>>      rwlock_init(&p2m->lock);
>>>>      INIT_PAGE_LIST_HEAD(&p2m->pages);
>>>>
>>>> -    p2m->vmid = INVALID_VMID;
>>>> -
>>>
>>> Why this is dropped?
>>>
>>
>> This will be shown in patch #07. We reuse altp2m views and check whether
>> a p2m was flushed by checking for a valid VMID.
>
> Which is wrong. A flushed altp2m should not need to be reinitialized
> (i.e. resetting the lock...).
>
> A flushed altp2m only need to have max_mapped_gfn and
> lowest_mapped_gfn reset.
>
> [...]
>

I will try to adjust this function considering your suggestion in the
patch #07.

>>>> +    p2m->vttbr.vttbr = INVALID_VTTBR;
>>>>      radix_tree_init(&p2m->mem_access_settings);
>>>>
>>>> -    rc = p2m_alloc_table(d);
>>>> -
>>>
>>> The function p2m_init_one should fully initialize the p2m (i.e
>>> allocate the table).
>>>
>>
>> The function p2m_init_one is currently called also for initializing the
>> hostp2m, which is not dynamically allocated. Since we are sharing code
>> between the hostp2m and altp2m initialization, I solved it that way. We
>> could always allocate the hostp2m dynamically, that would solve it quite
>> easily without additional checks. The other solution can simply check,
>> whether the p2m is NULL and perform additional p2m allocation. What do
>> you think?
>
> I don't understand your point here. It does not matter whether the
> hostp2m is allocated dynamically or not.
>
> p2m_init_one should fully initialize a p2m and this does not depends
> on dynamic allocation.
>
> [...]
>

Yes, you are correct. I will try to avoid spreading the p2m
allocation across multiple functions. Thank you.
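Julien's point here, sketched with hypothetical stripped-down types (not the actual Xen structures): p2m_init_one should fully initialize a p2m, including allocating its root table, regardless of whether the p2m object itself is embedded in the domain struct (hostp2m) or dynamically allocated (altp2m).

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical, stripped-down p2m. */
struct p2m {
    void *root;
    int initialized;
};

/* Fully initialize a p2m: set up its state and allocate the root table.
 * Works identically whether *p2m lives inside the domain struct
 * (hostp2m) or was xzalloc'd (altp2m). */
static int p2m_init_one(struct p2m *p2m)
{
    p2m->initialized = 1;
    p2m->root = calloc(1, 4096);  /* stand-in for alloc_domheap_pages */
    return p2m->root ? 0 : -1;    /* -ENOMEM in the real code */
}

static void p2m_free_one(struct p2m *p2m)
{
    free(p2m->root);
    p2m->root = NULL;
    p2m->initialized = 0;
}
```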

>>>> diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
>>>> index 5c7cd1a..1f9c370 100644
>>>> --- a/xen/include/asm-arm/p2m.h
>>>> +++ b/xen/include/asm-arm/p2m.h
>>>> @@ -18,6 +18,11 @@ struct domain;
>>>>
>>>>  extern void memory_type_changed(struct domain *);
>>>>
>>>> +typedef enum {
>>>> +    p2m_host,
>>>> +    p2m_alternate,
>>>> +} p2m_class_t;
>>>> +
>>>
>>> This addition should really be in a separate patch.
>>>
>>
>> The function p2m_init_hostp2m uses p2m_host for initialization. I can
>> introduce a patch before this one, however to make it easier for the
>> reviewer.
>
> You did not get my point here. A patch should do one thing: moving
> code or adding new code. Not both at the same.
>
> Adding p2m_class_t and setting p2m_host should really be done in a
> separate patch. This will ease the review this patch series.

Fair enough. Thank you.

Best regards,
~Sergej



* Re: [PATCH v2 03/25] arm/altp2m: Add struct vttbr.
  2016-08-04 16:15         ` Julien Grall
@ 2016-08-06  8:54           ` Sergej Proskurin
  2016-08-06 13:20             ` Julien Grall
  0 siblings, 1 reply; 159+ messages in thread
From: Sergej Proskurin @ 2016-08-06  8:54 UTC (permalink / raw)
  To: Julien Grall, xen-devel; +Cc: Stefano Stabellini

Hi Julien,


On 08/04/2016 06:15 PM, Julien Grall wrote:
>
>
> On 04/08/16 17:11, Sergej Proskurin wrote:
>>>>> diff --git a/xen/include/asm-arm/processor.h
>>>>> b/xen/include/asm-arm/processor.h
>>>>> index 15bf890..f8ca18c 100644
>>>>> --- a/xen/include/asm-arm/processor.h
>>>>> +++ b/xen/include/asm-arm/processor.h
>>>>> @@ -529,6 +529,22 @@ union hsr {
>>>>>
>>>>>
>>>>>  };
>>>>> +
>>>>> +/* VTTBR: Virtualization Translation Table Base Register */
>>>>> +struct vttbr {
>>>>> +    union {
>>>>> +        struct {
>>>>> +            u64 baddr :40, /* variable res0: from 0-(x-1) bit */
>>>>
>>>> As mentioned on the previous series, this field is 48 bits for ARMv8
>>>> (see ARM D7.2.102 in DDI 0487A.j).
>>>>
>>
>> I must have missed it during refactoring. At this point, I will
>> distinguish between __arm__ and __aarch64__, thank you.
>
> After reading this series I see no point having this union. So I would
> much prefer to see this patch dropped.
>

I can do that. However, I do not understand why we would prefer using
error-prone bit operations for VTTBR initialization over a unified and
simple way of initializing and using the VTTBR, including the VMID and
the root table address.
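The trade-off under discussion, as a hedged sketch: the field layout below mirrors the patch (ARMv7 VTTBR: BADDR[39:0], VMID at bits [55:48]), but note that C bit-field ordering is implementation-defined, so the equivalence in this example holds on the usual little-endian GCC/Clang ABIs rather than by guarantee of the C standard.

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of the struct vttbr from the patch (ARMv7 layout). */
struct vttbr {
    union {
        struct {
            uint64_t baddr:40;   /* bits 0-39: root table base address */
            uint64_t res0:8;     /* bits 40-47: reserved */
            uint64_t vmid:8;     /* bits 48-55: VMID */
            uint64_t res1:8;     /* bits 56-63: reserved */
        };
        uint64_t vttbr;          /* the whole register value */
    };
};

/* The open-coded alternative used before the patch. */
static uint64_t make_vttbr(uint64_t baddr, uint8_t vmid)
{
    return baddr | ((uint64_t)vmid & 0xff) << 48;
}
```

On a little-endian toolchain that allocates bit-fields from the least significant bit, assigning the named fields produces the same 64-bit value as the shift-and-mask expression.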

Best regards,
~Sergej


* Re: [PATCH v2 05/25] arm/altp2m: Rename and extend p2m_alloc_table.
  2016-08-03 17:57   ` Julien Grall
@ 2016-08-06  8:57     ` Sergej Proskurin
  0 siblings, 0 replies; 159+ messages in thread
From: Sergej Proskurin @ 2016-08-06  8:57 UTC (permalink / raw)
  To: Julien Grall, xen-devel; +Cc: Stefano Stabellini

Hi Julien,


On 08/03/2016 07:57 PM, Julien Grall wrote:
> Hello Sergej,
>
> Title: s/altp2m/p2m/
>
> On 01/08/16 18:10, Sergej Proskurin wrote:
>> The initially named function "p2m_alloc_table" allocated pages solely
>> required for the host's p2m. The new implementation leaves p2m
>> allocation related parts inside of this function to generally initialize
>> p2m/altp2m tables. This is done generically, as the host's p2m and
>> altp2m tables are allocated similarly. Since this function will be used
>> by the altp2m initialization routines, it is not made static. In
>> addition, this commit provides the overlay function "p2m_table_init"
>> that is used for the host's p2m allocation/initialization.
>>
>> Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
>> ---
>> Cc: Stefano Stabellini <sstabellini@kernel.org>
>> Cc: Julien Grall <julien.grall@arm.com>
>> ---
>> v2: Removed altp2m table initialization from "p2m_table_init".
>> ---
>>  xen/arch/arm/p2m.c | 29 +++++++++++++++++++++++------
>>  1 file changed, 23 insertions(+), 6 deletions(-)
>>
>> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
>> index 17f3299..63fe3d9 100644
>> --- a/xen/arch/arm/p2m.c
>> +++ b/xen/arch/arm/p2m.c
>> @@ -1277,11 +1277,11 @@ void guest_physmap_remove_page(struct domain *d,
>>      p2m_remove_mapping(d, gfn, (1 << page_order), mfn);
>>  }
>>
>> -static int p2m_alloc_table(struct domain *d)
>> +int p2m_alloc_table(struct p2m_domain *p2m)
>
> The function p2m_alloc_table should not be exposed outside of the p2m.
> I explained why in the previous patch.
>

It will be considered in the next patch.

>>  {
>> -    struct p2m_domain *p2m = &d->arch.p2m;
>> -    struct page_info *page;
>>      unsigned int i;
>> +    struct page_info *page;
>> +    struct vttbr *vttbr = &p2m->vttbr;
>>
>>      page = alloc_domheap_pages(NULL, P2M_ROOT_ORDER, 0);
>>      if ( page == NULL )
>> @@ -1293,11 +1293,28 @@ static int p2m_alloc_table(struct domain *d)
>>
>>      p2m->root = page;
>>
>> -    p2m->vttbr.vttbr = page_to_maddr(p2m->root) |
>> ((uint64_t)p2m->vmid & 0xff) << 48;
>> +    /* Initialize the VTTBR associated with the allocated p2m table. */
>> +    vttbr->vttbr = 0;
>> +    vttbr->vmid = p2m->vmid & 0xff;
>> +    vttbr->baddr = page_to_maddr(p2m->root);
>
> This change does not belong to this patch. If we want to use VTTBR, it
> should be in patch #3.
>

This will not be needed if we decide to drop patch #03.

>> +
>> +    return 0;
>> +}
>> +
>> +static int p2m_table_init(struct domain *d)
>> +{
>> +    int rc;
>> +    struct p2m_domain *p2m = p2m_get_hostp2m(d);
>> +
>> +    rc = p2m_alloc_table(p2m);
>> +    if ( rc != 0 )
>> +        return -ENOMEM;
>> +
>> +    d->arch.altp2m_active = false;
>
> Please avoid spreading the altp2m changes everywhere. The addition
> of altp2m to the p2m code should really be a single patch.
>

Ok, thank you. I will move this part to altp2m_init.

>>
>>      /*
>>       * Make sure that all TLBs corresponding to the new VMID are
>> flushed
>> -     * before using it
>> +     * before using it.
>>       */
>>      p2m_flush_tlb(p2m);
>>
>> @@ -1440,7 +1457,7 @@ static int p2m_init_hostp2m(struct domain *d)
>>      if ( rc )
>>          return rc;
>>
>> -    return p2m_alloc_table(d);
>> +    return p2m_table_init(d);
>>  }
>>
>>  int p2m_init(struct domain *d)
>>
>
>

Best regards,
~Sergej




* Re: [PATCH v2 06/25] arm/altp2m: Cosmetic fixes - function prototypes.
  2016-08-03 18:02   ` Julien Grall
@ 2016-08-06  9:00     ` Sergej Proskurin
  0 siblings, 0 replies; 159+ messages in thread
From: Sergej Proskurin @ 2016-08-06  9:00 UTC (permalink / raw)
  To: Julien Grall, xen-devel; +Cc: Stefano Stabellini

Hi Julien,


On 08/03/2016 08:02 PM, Julien Grall wrote:
> Hello Sergej,
>
> Title: s/altp2m/p2m/ and please remove the full stop.
>
> Also this is not really a cosmetic change.

Ok, thank you.

>
> On 01/08/16 18:10, Sergej Proskurin wrote:
>> This commit changes the prototype of the following functions:
>> - p2m_alloc_vmid
>> - p2m_free_vmid
>>
>> Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
>> ---
>> Cc: Stefano Stabellini <sstabellini@kernel.org>
>> Cc: Julien Grall <julien.grall@arm.com>
>> ---
>>  xen/arch/arm/p2m.c | 12 +++++-------
>>  1 file changed, 5 insertions(+), 7 deletions(-)
>>
>> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
>> index 63fe3d9..ff9c0d1 100644
>> --- a/xen/arch/arm/p2m.c
>> +++ b/xen/arch/arm/p2m.c
>> @@ -1337,11 +1337,10 @@ void p2m_vmid_allocator_init(void)
>>      set_bit(INVALID_VMID, vmid_mask);
>>  }
>>
>> -static int p2m_alloc_vmid(struct domain *d)
>> +static int p2m_alloc_vmid(struct p2m_domain *p2m)
>
> I am not a big fan of this interface change. I would much prefer if
> the VMID was returned and set to p2m->vmid by the caller.
>
> So this would return either the VMID or INVALID_VMID.
>

Fair enough. I will change this in the next patch series.
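The interface Julien suggests could look like the following sketch (hypothetical names, with a plain byte-array bitmap standing in for Xen's vmid_mask): p2m_alloc_vmid returns either a fresh VMID or INVALID_VMID, and the caller stores the result in p2m->vmid itself.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define MAX_VMID     256
#define INVALID_VMID 0   /* assumed reserved value for this sketch */

/* Hypothetical stand-in for Xen's VMID bitmap. */
static uint8_t vmid_mask[MAX_VMID / 8];

static void vmid_init(void)
{
    memset(vmid_mask, 0, sizeof(vmid_mask));
    vmid_mask[0] |= 1;   /* reserve INVALID_VMID */
}

/* Return a fresh VMID, or INVALID_VMID on exhaustion. The caller
 * assigns the result to p2m->vmid, as suggested in the review. */
static uint16_t p2m_alloc_vmid(void)
{
    for ( unsigned int nr = 0; nr < MAX_VMID; nr++ )
        if ( !(vmid_mask[nr / 8] & (1u << (nr % 8))) )
        {
            vmid_mask[nr / 8] |= 1u << (nr % 8);
            return nr;
        }
    return INVALID_VMID;
}

static void p2m_free_vmid(uint16_t vmid)
{
    if ( vmid != INVALID_VMID )
        vmid_mask[vmid / 8] &= ~(1u << (vmid % 8));
}
```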

>>  {
>> -    struct p2m_domain *p2m = &d->arch.p2m;
>> -
>>      int rc, nr;
>> +    struct domain *d = p2m->domain;
>>
>>      spin_lock(&vmid_alloc_lock);
>>
>> @@ -1367,9 +1366,8 @@ out:
>>      return rc;
>>  }
>>
>> -static void p2m_free_vmid(struct domain *d)
>> +static void p2m_free_vmid(struct p2m_domain *p2m)
>
> Likewise here. It would be better if the function took a vmid as a
> parameter.
>

Ok.

>>  {
>> -    struct p2m_domain *p2m = &d->arch.p2m;
>>      spin_lock(&vmid_alloc_lock);
>>      if ( p2m->vmid != INVALID_VMID )
>>          clear_bit(p2m->vmid, vmid_mask);
>> @@ -1399,7 +1397,7 @@ static inline void p2m_free_one(struct
>> p2m_domain *p2m)
>>      p2m_flush_table(p2m);
>>
>>      /* Free VMID and reset VTTBR */
>> -    p2m_free_vmid(p2m->domain);
>> +    p2m_free_vmid(p2m);
>>      p2m->vttbr.vttbr = INVALID_VTTBR;
>>
>>      if ( p2m->root )
>> @@ -1417,7 +1415,7 @@ int p2m_init_one(struct domain *d, struct
>> p2m_domain *p2m)
>>      rwlock_init(&p2m->lock);
>>      INIT_PAGE_LIST_HEAD(&p2m->pages);
>>
>> -    rc = p2m_alloc_vmid(d);
>> +    rc = p2m_alloc_vmid(p2m);
>>      if ( rc != 0 )
>>          return rc;
>>
>>
>

Best regards,
~Sergej


* Re: [PATCH v2 08/25] arm/altp2m: Add HVMOP_altp2m_set_domain_state.
  2016-08-03 18:41   ` Julien Grall
@ 2016-08-06  9:03     ` Sergej Proskurin
  2016-08-06  9:36     ` Sergej Proskurin
  1 sibling, 0 replies; 159+ messages in thread
From: Sergej Proskurin @ 2016-08-06  9:03 UTC (permalink / raw)
  To: Julien Grall, xen-devel; +Cc: Stefano Stabellini

Hi Julien,


On 08/03/2016 08:41 PM, Julien Grall wrote:
> Hello Sergej,
>
> On 01/08/16 18:10, Sergej Proskurin wrote:
>> The HVMOP_altp2m_set_domain_state allows activating altp2m on a
>> specific domain. This commit adopts the x86
>> HVMOP_altp2m_set_domain_state implementation. Note that the function
>> altp2m_flush is currently implemented in the form of a stub.
>>
>> Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
>> ---
>> Cc: Stefano Stabellini <sstabellini@kernel.org>
>> Cc: Julien Grall <julien.grall@arm.com>
>> ---
>> v2: Dynamically allocate memory for altp2m views only when needed.
>>     Move altp2m related helpers to altp2m.c.
>>     p2m_flush_tlb is made publicly accessible.
>> ---
>>  xen/arch/arm/altp2m.c          | 116
>> +++++++++++++++++++++++++++++++++++++++++
>>  xen/arch/arm/hvm.c             |  34 +++++++++++-
>>  xen/arch/arm/p2m.c             |   2 +-
>>  xen/include/asm-arm/altp2m.h   |  15 ++++++
>>  xen/include/asm-arm/domain.h   |   9 ++++
>>  xen/include/asm-arm/flushtlb.h |   4 ++
>>  xen/include/asm-arm/p2m.h      |  11 ++++
>>  7 files changed, 189 insertions(+), 2 deletions(-)
>>
>> diff --git a/xen/arch/arm/altp2m.c b/xen/arch/arm/altp2m.c
>> index abbd39a..767f233 100644
>> --- a/xen/arch/arm/altp2m.c
>> +++ b/xen/arch/arm/altp2m.c
>> @@ -19,6 +19,122 @@
>>
>>  #include <asm/p2m.h>
>>  #include <asm/altp2m.h>
>> +#include <asm/flushtlb.h>
>> +
>> +struct p2m_domain *altp2m_get_altp2m(struct vcpu *v)
>> +{
>> +    unsigned int index = vcpu_altp2m(v).p2midx;
>> +
>> +    if ( index == INVALID_ALTP2M )
>> +        return NULL;
>> +
>> +    BUG_ON(index >= MAX_ALTP2M);
>> +
>> +    return v->domain->arch.altp2m_p2m[index];
>> +}
>> +
>> +static void altp2m_vcpu_reset(struct vcpu *v)
>> +{
>> +    struct altp2mvcpu *av = &vcpu_altp2m(v);
>> +
>> +    av->p2midx = INVALID_ALTP2M;
>> +}
>> +
>> +void altp2m_vcpu_initialise(struct vcpu *v)
>> +{
>> +    if ( v != current )
>> +        vcpu_pause(v);
>> +
>> +    altp2m_vcpu_reset(v);
>
> I don't understand why you call altp2m_vcpu_reset which will set
> p2midx to INVALID_ALTP2M but a line after you set to 0.
>

It is a leftover from the x86 implementation. Since we do not need to
reset further fields, I can remove the call to altp2m_vcpu_reset.

>> +    vcpu_altp2m(v).p2midx = 0;
>> +    atomic_inc(&altp2m_get_altp2m(v)->active_vcpus);
>> +
>> +    if ( v != current )
>> +        vcpu_unpause(v);
>> +}
>> +
>> +void altp2m_vcpu_destroy(struct vcpu *v)
>> +{
>> +    struct p2m_domain *p2m;
>> +
>> +    if ( v != current )
>> +        vcpu_pause(v);
>> +
>> +    if ( (p2m = altp2m_get_altp2m(v)) )
>> +        atomic_dec(&p2m->active_vcpus);
>> +
>> +    altp2m_vcpu_reset(v);
>> +
>> +    if ( v != current )
>> +        vcpu_unpause(v);
>> +}
>> +
>> +static int altp2m_init_helper(struct domain *d, unsigned int idx)
>> +{
>> +    int rc;
>> +    struct p2m_domain *p2m = d->arch.altp2m_p2m[idx];
>> +
>> +    if ( p2m == NULL )
>> +    {
>> +        /* Allocate a new, zeroed altp2m view. */
>> +        p2m = xzalloc(struct p2m_domain);
>> +        if ( p2m == NULL)
>> +        {
>> +            rc = -ENOMEM;
>> +            goto err;
>> +        }
>> +    }
>
> Why don't you re-allocate the p2m from scratch?
>
>> +
>> +    /* Initialize the new altp2m view. */
>> +    rc = p2m_init_one(d, p2m);
>> +    if ( rc )
>> +        goto err;
>> +
>> +    /* Allocate a root table for the altp2m view. */
>> +    rc = p2m_alloc_table(p2m);
>> +    if ( rc )
>> +        goto err;
>> +
>> +    p2m->p2m_class = p2m_alternate;
>> +    p2m->access_required = 1;
>
> Please use true here, although I am not sure why you want to enable
> the access by default.
>
>> +    _atomic_set(&p2m->active_vcpus, 0);
>> +
>> +    d->arch.altp2m_p2m[idx] = p2m;
>> +    d->arch.altp2m_vttbr[idx] = p2m->vttbr.vttbr;
>> +
>> +    /*
>> +     * Make sure that all TLBs corresponding to the current VMID are
>> flushed
>> +     * before using it.
>> +     */
>> +    p2m_flush_tlb(p2m);
>> +
>> +    return rc;
>> +
>> +err:
>> +    if ( p2m )
>> +        xfree(p2m);
>> +
>> +    d->arch.altp2m_p2m[idx] = NULL;
>> +
>> +    return rc;
>> +}
>> +
>> +int altp2m_init_by_id(struct domain *d, unsigned int idx)
>> +{
>> +    int rc = -EINVAL;
>> +
>> +    if ( idx >= MAX_ALTP2M )
>> +        return rc;
>> +
>> +    altp2m_lock(d);
>> +
>> +    if ( d->arch.altp2m_vttbr[idx] == INVALID_VTTBR )
>> +        rc = altp2m_init_helper(d, idx);
>> +
>> +    altp2m_unlock(d);
>> +
>> +    return rc;
>> +}
>>
>>  int altp2m_init(struct domain *d)
>>  {
>> diff --git a/xen/arch/arm/hvm.c b/xen/arch/arm/hvm.c
>> index 01a3243..78370c6 100644
>> --- a/xen/arch/arm/hvm.c
>> +++ b/xen/arch/arm/hvm.c
>> @@ -80,8 +80,40 @@ static int
>> do_altp2m_op(XEN_GUEST_HANDLE_PARAM(void) arg)
>>          break;
>>
>>      case HVMOP_altp2m_set_domain_state:
>> -        rc = -EOPNOTSUPP;
>> +    {
>> +        struct vcpu *v;
>> +        bool_t ostate;
>> +
>> +        if ( !altp2m_enabled(d) )
>> +        {
>> +            rc = -EINVAL;
>> +            break;
>> +        }
>> +
>> +        ostate = d->arch.altp2m_active;
>> +        d->arch.altp2m_active = !!a.u.domain_state.state;
>> +
>> +        /* If the alternate p2m state has changed, handle
>> appropriately */
>> +        if ( (d->arch.altp2m_active != ostate) &&
>> +             (ostate || !(rc = altp2m_init_by_id(d, 0))) )
>> +        {
>> +            for_each_vcpu( d, v )
>> +            {
>> +                if ( !ostate )
>> +                    altp2m_vcpu_initialise(v);
>> +                else
>> +                    altp2m_vcpu_destroy(v);
>> +            }
>
> The implementation of this hvmop param looks racy to me. What
> prevents two CPUs from running in this function at the same time? One will
> destroy, whilst the other one will initialize.
>
> It might even be possible to have both doing the initialization
> because there is no synchronization barrier for altp2m_active.
>
>> +
>> +            /*
>> +             * The altp2m_active state has been deactivated. It is
>> now safe to
>> +             * flush all altp2m views -- including altp2m[0].
>> +             */
>> +            if ( ostate )
>> +                altp2m_flush(d);
>
> The function altp2m_flush is defined afterwards (in patch #9). Please
> make sure that all the patches compile one by one.
>
>> +        }
>>          break;
>> +    }
>>
>>      case HVMOP_altp2m_vcpu_enable_notify:
>>          rc = -EOPNOTSUPP;
>> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
>> index 29ec5e5..8afea11 100644
>> --- a/xen/arch/arm/p2m.c
>> +++ b/xen/arch/arm/p2m.c
>> @@ -139,7 +139,7 @@ void p2m_restore_state(struct vcpu *n)
>>      isb();
>>  }
>>
>> -static void p2m_flush_tlb(struct p2m_domain *p2m)
>> +void p2m_flush_tlb(struct p2m_domain *p2m)
>
> This should ideally be in a separate patch.
>
>>  {
>>      unsigned long flags = 0;
>>      uint64_t ovttbr;
>> diff --git a/xen/include/asm-arm/altp2m.h b/xen/include/asm-arm/altp2m.h
>> index 79ea66b..a33c740 100644
>> --- a/xen/include/asm-arm/altp2m.h
>> +++ b/xen/include/asm-arm/altp2m.h
>> @@ -22,6 +22,8 @@
>>
>>  #include <xen/sched.h>
>>
>> +#define INVALID_ALTP2M    0xffff
>> +
>>  #define altp2m_lock(d)    spin_lock(&(d)->arch.altp2m_lock)
>>  #define altp2m_unlock(d)  spin_unlock(&(d)->arch.altp2m_lock)
>>
>> @@ -44,4 +46,17 @@ static inline uint16_t altp2m_vcpu_idx(const
>> struct vcpu *v)
>>  int altp2m_init(struct domain *d);
>>  void altp2m_teardown(struct domain *d);
>>
>> +void altp2m_vcpu_initialise(struct vcpu *v);
>> +void altp2m_vcpu_destroy(struct vcpu *v);
>> +
>> +/* Make a specific alternate p2m valid. */
>> +int altp2m_init_by_id(struct domain *d,
>> +                      unsigned int idx);
>> +
>> +/* Flush all the alternate p2m's for a domain */
>> +static inline void altp2m_flush(struct domain *d)
>> +{
>> +    /* Not yet implemented. */
>> +}
>> +
>>  #endif /* __ASM_ARM_ALTP2M_H */
>> diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
>> index 3c25ea5..63a9650 100644
>> --- a/xen/include/asm-arm/domain.h
>> +++ b/xen/include/asm-arm/domain.h
>> @@ -135,6 +135,12 @@ struct arch_domain
>>      spinlock_t altp2m_lock;
>>  }  __cacheline_aligned;
>>
>> +struct altp2mvcpu {
>> +    uint16_t p2midx; /* alternate p2m index */
>> +};
>> +
>> +#define vcpu_altp2m(v) ((v)->arch.avcpu)
>
>
> Please avoid having half of altp2m defined in altp2m.h and the other
> half in domain.h.
>
>> +
>>  struct arch_vcpu
>>  {
>>      struct {
>> @@ -264,6 +270,9 @@ struct arch_vcpu
>>      struct vtimer phys_timer;
>>      struct vtimer virt_timer;
>>      bool_t vtimer_initialized;
>> +
>> +    /* Alternate p2m context */
>> +    struct altp2mvcpu avcpu;
>>  }  __cacheline_aligned;
>>
>>  void vcpu_show_execution_state(struct vcpu *);
>> diff --git a/xen/include/asm-arm/flushtlb.h
>> b/xen/include/asm-arm/flushtlb.h
>> index 329fbb4..57c3c34 100644
>> --- a/xen/include/asm-arm/flushtlb.h
>> +++ b/xen/include/asm-arm/flushtlb.h
>> @@ -2,6 +2,7 @@
>>  #define __ASM_ARM_FLUSHTLB_H__
>>
>>  #include <xen/cpumask.h>
>> +#include <asm/p2m.h>
>>
>>  /*
>>   * Filter the given set of CPUs, removing those that definitely
>> flushed their
>> @@ -25,6 +26,9 @@ do
>> {                                                                    \
>>  /* Flush specified CPUs' TLBs */
>>  void flush_tlb_mask(const cpumask_t *mask);
>>
>> +/* Flush CPU's TLBs for the specified domain */
>> +void p2m_flush_tlb(struct p2m_domain *p2m);
>> +
>
> This function should be declared in p2m.h and not flushtlb.h.
>
>>  #endif /* __ASM_ARM_FLUSHTLB_H__ */
>>  /*
>>   * Local variables:
>> diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
>> index 24a1f61..f13f285 100644
>> --- a/xen/include/asm-arm/p2m.h
>> +++ b/xen/include/asm-arm/p2m.h
>> @@ -9,6 +9,8 @@
>>  #include <xen/p2m-common.h>
>>  #include <public/memory.h>
>>
>> +#include <asm/atomic.h>
>> +
>>  #define MAX_ALTP2M 10           /* ARM might contain an arbitrary
>> number of
>>                                     altp2m views. */
>>  #define paddr_bits PADDR_BITS
>> @@ -86,6 +88,9 @@ struct p2m_domain {
>>       */
>>      struct radix_tree_root mem_access_settings;
>>
>> +    /* Alternate p2m: count of vcpu's currently using this p2m. */
>> +    atomic_t active_vcpus;
>> +
>>      /* Choose between: host/alternate */
>>      p2m_class_t p2m_class;
>>
>> @@ -214,6 +219,12 @@ void guest_physmap_remove_page(struct domain *d,
>>
>>  mfn_t gfn_to_mfn(struct domain *d, gfn_t gfn);
>>
>> +/* Allocates page table for a p2m. */
>> +int p2m_alloc_table(struct p2m_domain *p2m);
>> +
>> +/* Initialize the p2m structure. */
>> +int p2m_init_one(struct domain *d, struct p2m_domain *p2m);
>
> These declarations belong to the patch that exported them, not here.
>
>> +
>>  /* Release resources held by the p2m structure. */
>>  void p2m_free_one(struct p2m_domain *p2m);
>>
>>
>
> Regards,
>



* Re: [PATCH v2 08/25] arm/altp2m: Add HVMOP_altp2m_set_domain_state.
  2016-08-03 18:41   ` Julien Grall
  2016-08-06  9:03     ` Sergej Proskurin
@ 2016-08-06  9:36     ` Sergej Proskurin
  2016-08-06 14:18       ` Julien Grall
                         ` (2 more replies)
  1 sibling, 3 replies; 159+ messages in thread
From: Sergej Proskurin @ 2016-08-06  9:36 UTC (permalink / raw)
  To: Julien Grall, xen-devel; +Cc: Stefano Stabellini

(I did not finish answering all questions in the previous mail)


Hi Julien,

On 08/03/2016 08:41 PM, Julien Grall wrote:
> Hello Sergej,
>
> On 01/08/16 18:10, Sergej Proskurin wrote:
>> The HVMOP_altp2m_set_domain_state allows activating altp2m on a
>> specific domain. This commit adopts the x86
>> HVMOP_altp2m_set_domain_state implementation. Note that the function
>> altp2m_flush is currently implemented in the form of a stub.
>>
>> Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
>> ---
>> Cc: Stefano Stabellini <sstabellini@kernel.org>
>> Cc: Julien Grall <julien.grall@arm.com>
>> ---
>> v2: Dynamically allocate memory for altp2m views only when needed.
>>     Move altp2m related helpers to altp2m.c.
>>     p2m_flush_tlb is made publicly accessible.
>> ---
>>  xen/arch/arm/altp2m.c          | 116
>> +++++++++++++++++++++++++++++++++++++++++
>>  xen/arch/arm/hvm.c             |  34 +++++++++++-
>>  xen/arch/arm/p2m.c             |   2 +-
>>  xen/include/asm-arm/altp2m.h   |  15 ++++++
>>  xen/include/asm-arm/domain.h   |   9 ++++
>>  xen/include/asm-arm/flushtlb.h |   4 ++
>>  xen/include/asm-arm/p2m.h      |  11 ++++
>>  7 files changed, 189 insertions(+), 2 deletions(-)
>>
>> diff --git a/xen/arch/arm/altp2m.c b/xen/arch/arm/altp2m.c
>> index abbd39a..767f233 100644
>> --- a/xen/arch/arm/altp2m.c
>> +++ b/xen/arch/arm/altp2m.c
>> @@ -19,6 +19,122 @@
>>
>>  #include <asm/p2m.h>
>>  #include <asm/altp2m.h>
>> +#include <asm/flushtlb.h>
>> +
>> +struct p2m_domain *altp2m_get_altp2m(struct vcpu *v)
>> +{
>> +    unsigned int index = vcpu_altp2m(v).p2midx;
>> +
>> +    if ( index == INVALID_ALTP2M )
>> +        return NULL;
>> +
>> +    BUG_ON(index >= MAX_ALTP2M);
>> +
>> +    return v->domain->arch.altp2m_p2m[index];
>> +}
>> +
>> +static void altp2m_vcpu_reset(struct vcpu *v)
>> +{
>> +    struct altp2mvcpu *av = &vcpu_altp2m(v);
>> +
>> +    av->p2midx = INVALID_ALTP2M;
>> +}
>> +
>> +void altp2m_vcpu_initialise(struct vcpu *v)
>> +{
>> +    if ( v != current )
>> +        vcpu_pause(v);
>> +
>> +    altp2m_vcpu_reset(v);
>
> I don't understand why you call altp2m_vcpu_reset which will set
> p2midx to INVALID_ALTP2M but a line after you set to 0.
>

It is a leftover from the x86 implementation. Since we do not need to
reset further fields, I can remove the call to altp2m_vcpu_reset.

>> +    vcpu_altp2m(v).p2midx = 0;
>> +    atomic_inc(&altp2m_get_altp2m(v)->active_vcpus);
>> +
>> +    if ( v != current )
>> +        vcpu_unpause(v);
>> +}
>> +
>> +void altp2m_vcpu_destroy(struct vcpu *v)
>> +{
>> +    struct p2m_domain *p2m;
>> +
>> +    if ( v != current )
>> +        vcpu_pause(v);
>> +
>> +    if ( (p2m = altp2m_get_altp2m(v)) )
>> +        atomic_dec(&p2m->active_vcpus);
>> +
>> +    altp2m_vcpu_reset(v);
>> +
>> +    if ( v != current )
>> +        vcpu_unpause(v);
>> +}
>> +
>> +static int altp2m_init_helper(struct domain *d, unsigned int idx)
>> +{
>> +    int rc;
>> +    struct p2m_domain *p2m = d->arch.altp2m_p2m[idx];
>> +
>> +    if ( p2m == NULL )
>> +    {
>> +        /* Allocate a new, zeroed altp2m view. */
>> +        p2m = xzalloc(struct p2m_domain);
>> +        if ( p2m == NULL)
>> +        {
>> +            rc = -ENOMEM;
>> +            goto err;
>> +        }
>> +    }
>
> Why don't you re-allocate the p2m from scratch?
>

The reason is still the reuse of already allocated altp2m's, e.g.,
within the context of altp2m_propagate_change and altp2m_reset. We have
already discussed this in patch #07.

>> +
>> +    /* Initialize the new altp2m view. */
>> +    rc = p2m_init_one(d, p2m);
>> +    if ( rc )
>> +        goto err;
>> +
>> +    /* Allocate a root table for the altp2m view. */
>> +    rc = p2m_alloc_table(p2m);
>> +    if ( rc )
>> +        goto err;
>> +
>> +    p2m->p2m_class = p2m_alternate;
>> +    p2m->access_required = 1;
>
> Please use true here, although I am not sure why you want to enable
> the access by default.
>

Will do.

p2m->access_required is true by default in the x86 implementation. Also,
there is currently no way to manually set access_required on altp2m.
Besides, I do not see a scenario where it makes sense to run altp2m
without access_required set to true.

>> +    _atomic_set(&p2m->active_vcpus, 0);
>> +
>> +    d->arch.altp2m_p2m[idx] = p2m;
>> +    d->arch.altp2m_vttbr[idx] = p2m->vttbr.vttbr;
>> +
>> +    /*
>> +     * Make sure that all TLBs corresponding to the current VMID are
>> flushed
>> +     * before using it.
>> +     */
>> +    p2m_flush_tlb(p2m);
>> +
>> +    return rc;
>> +
>> +err:
>> +    if ( p2m )
>> +        xfree(p2m);
>> +
>> +    d->arch.altp2m_p2m[idx] = NULL;
>> +
>> +    return rc;
>> +}
>> +
>> +int altp2m_init_by_id(struct domain *d, unsigned int idx)
>> +{
>> +    int rc = -EINVAL;
>> +
>> +    if ( idx >= MAX_ALTP2M )
>> +        return rc;
>> +
>> +    altp2m_lock(d);
>> +
>> +    if ( d->arch.altp2m_vttbr[idx] == INVALID_VTTBR )
>> +        rc = altp2m_init_helper(d, idx);
>> +
>> +    altp2m_unlock(d);
>> +
>> +    return rc;
>> +}
>>
>>  int altp2m_init(struct domain *d)
>>  {
>> diff --git a/xen/arch/arm/hvm.c b/xen/arch/arm/hvm.c
>> index 01a3243..78370c6 100644
>> --- a/xen/arch/arm/hvm.c
>> +++ b/xen/arch/arm/hvm.c
>> @@ -80,8 +80,40 @@ static int
>> do_altp2m_op(XEN_GUEST_HANDLE_PARAM(void) arg)
>>          break;
>>
>>      case HVMOP_altp2m_set_domain_state:
>> -        rc = -EOPNOTSUPP;
>> +    {
>> +        struct vcpu *v;
>> +        bool_t ostate;
>> +
>> +        if ( !altp2m_enabled(d) )
>> +        {
>> +            rc = -EINVAL;
>> +            break;
>> +        }
>> +
>> +        ostate = d->arch.altp2m_active;
>> +        d->arch.altp2m_active = !!a.u.domain_state.state;
>> +
>> +        /* If the alternate p2m state has changed, handle
>> appropriately */
>> +        if ( (d->arch.altp2m_active != ostate) &&
>> +             (ostate || !(rc = altp2m_init_by_id(d, 0))) )
>> +        {
>> +            for_each_vcpu( d, v )
>> +            {
>> +                if ( !ostate )
>> +                    altp2m_vcpu_initialise(v);
>> +                else
>> +                    altp2m_vcpu_destroy(v);
>> +            }
>
> The implementation of this hvmop param looks racy to me. What does
> prevent to CPU running in this function at the same time? One will
> destroy, whilst the other one will initialize.
>
> It might even be possible to have both doing the initialization
> because there is no synchronization barrier for altp2m_active.

We have discussed this issue in patch #01.
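
For reference, the shape of the fix discussed there can be modeled outside of
Xen. Below is a minimal sketch (all names — domain_model, set_domain_state —
are hypothetical, and a pthread mutex stands in for the per-domain
altp2m_lock) of serializing the whole read/toggle/apply sequence, so two
concurrent HVMOP_altp2m_set_domain_state calls cannot both observe the old
state and both run the initialization path:

```c
#include <assert.h>
#include <pthread.h>
#include <stdbool.h>

/* Hypothetical model of the domain state toggle; the mutex stands in for
 * the per-domain altp2m_lock. */
struct domain_model {
    pthread_mutex_t altp2m_lock;
    bool altp2m_active;
    int init_calls;     /* would be: altp2m_vcpu_initialise() per vCPU */
    int destroy_calls;  /* would be: altp2m_vcpu_destroy() per vCPU */
};

static void set_domain_state(struct domain_model *d, bool state)
{
    pthread_mutex_lock(&d->altp2m_lock);

    /* ostate is read and altp2m_active written under the same lock, so
     * concurrent callers cannot both see the old state. */
    bool ostate = d->altp2m_active;
    d->altp2m_active = state;

    if ( d->altp2m_active != ostate )
    {
        if ( !ostate )
            d->init_calls++;
        else
            d->destroy_calls++;
    }

    pthread_mutex_unlock(&d->altp2m_lock);
}
```

With the lock held across the whole sequence, repeating the same request is
idempotent: a second "enable" call sees no state change and does not
re-initialize the vCPUs.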

>
>> +
>> +            /*
>> +             * The altp2m_active state has been deactivated. It is
>> now safe to
>> +             * flush all altp2m views -- including altp2m[0].
>> +             */
>> +            if ( ostate )
>> +                altp2m_flush(d);
>
> The function altp2m_flush is defined afterwards (in patch #9). Please
> make sure that all the patches compile one by one.
>

The patches compile one by one. Please note that there is an
altp2m_flush stub within this patch:

+/* Flush all the alternate p2m's for a domain */
+static inline void altp2m_flush(struct domain *d)
+{
+    /* Not yet implemented. */
+}


>> +        }
>>          break;
>> +    }
>>
>>      case HVMOP_altp2m_vcpu_enable_notify:
>>          rc = -EOPNOTSUPP;
>> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
>> index 29ec5e5..8afea11 100644
>> --- a/xen/arch/arm/p2m.c
>> +++ b/xen/arch/arm/p2m.c
>> @@ -139,7 +139,7 @@ void p2m_restore_state(struct vcpu *n)
>>      isb();
>>  }
>>
>> -static void p2m_flush_tlb(struct p2m_domain *p2m)
>> +void p2m_flush_tlb(struct p2m_domain *p2m)
>
> This should ideally be in a separate patch.
>

Ok.

>>  {
>>      unsigned long flags = 0;
>>      uint64_t ovttbr;
>> diff --git a/xen/include/asm-arm/altp2m.h b/xen/include/asm-arm/altp2m.h
>> index 79ea66b..a33c740 100644
>> --- a/xen/include/asm-arm/altp2m.h
>> +++ b/xen/include/asm-arm/altp2m.h
>> @@ -22,6 +22,8 @@
>>
>>  #include <xen/sched.h>
>>
>> +#define INVALID_ALTP2M    0xffff
>> +
>>  #define altp2m_lock(d)    spin_lock(&(d)->arch.altp2m_lock)
>>  #define altp2m_unlock(d)  spin_unlock(&(d)->arch.altp2m_lock)
>>
>> @@ -44,4 +46,17 @@ static inline uint16_t altp2m_vcpu_idx(const
>> struct vcpu *v)
>>  int altp2m_init(struct domain *d);
>>  void altp2m_teardown(struct domain *d);
>>
>> +void altp2m_vcpu_initialise(struct vcpu *v);
>> +void altp2m_vcpu_destroy(struct vcpu *v);
>> +
>> +/* Make a specific alternate p2m valid. */
>> +int altp2m_init_by_id(struct domain *d,
>> +                      unsigned int idx);
>> +
>> +/* Flush all the alternate p2m's for a domain */
>> +static inline void altp2m_flush(struct domain *d)
>> +{
>> +    /* Not yet implemented. */
>> +}
>> +
>>  #endif /* __ASM_ARM_ALTP2M_H */
>> diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
>> index 3c25ea5..63a9650 100644
>> --- a/xen/include/asm-arm/domain.h
>> +++ b/xen/include/asm-arm/domain.h
>> @@ -135,6 +135,12 @@ struct arch_domain
>>      spinlock_t altp2m_lock;
>>  }  __cacheline_aligned;
>>
>> +struct altp2mvcpu {
>> +    uint16_t p2midx; /* alternate p2m index */
>> +};
>> +
>> +#define vcpu_altp2m(v) ((v)->arch.avcpu)
>
>
> Please avoid to have half of altp2m  defined in altp2m.h and the other
> half in domain.h.
>

Ok, thank you.

>> +
>>  struct arch_vcpu
>>  {
>>      struct {
>> @@ -264,6 +270,9 @@ struct arch_vcpu
>>      struct vtimer phys_timer;
>>      struct vtimer virt_timer;
>>      bool_t vtimer_initialized;
>> +
>> +    /* Alternate p2m context */
>> +    struct altp2mvcpu avcpu;
>>  }  __cacheline_aligned;
>>
>>  void vcpu_show_execution_state(struct vcpu *);
>> diff --git a/xen/include/asm-arm/flushtlb.h
>> b/xen/include/asm-arm/flushtlb.h
>> index 329fbb4..57c3c34 100644
>> --- a/xen/include/asm-arm/flushtlb.h
>> +++ b/xen/include/asm-arm/flushtlb.h
>> @@ -2,6 +2,7 @@
>>  #define __ASM_ARM_FLUSHTLB_H__
>>
>>  #include <xen/cpumask.h>
>> +#include <asm/p2m.h>
>>
>>  /*
>>   * Filter the given set of CPUs, removing those that definitely
>> flushed their
>> @@ -25,6 +26,9 @@ do
>> {                                                                    \
>>  /* Flush specified CPUs' TLBs */
>>  void flush_tlb_mask(const cpumask_t *mask);
>>
>> +/* Flush CPU's TLBs for the specified domain */
>> +void p2m_flush_tlb(struct p2m_domain *p2m);
>> +
>
> This function should be declared in p2m.h and not flushtlb.h.
>

Ok, thank you.

>>  #endif /* __ASM_ARM_FLUSHTLB_H__ */
>>  /*
>>   * Local variables:
>> diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
>> index 24a1f61..f13f285 100644
>> --- a/xen/include/asm-arm/p2m.h
>> +++ b/xen/include/asm-arm/p2m.h
>> @@ -9,6 +9,8 @@
>>  #include <xen/p2m-common.h>
>>  #include <public/memory.h>
>>
>> +#include <asm/atomic.h>
>> +
>>  #define MAX_ALTP2M 10           /* ARM might contain an arbitrary
>> number of
>>                                     altp2m views. */
>>  #define paddr_bits PADDR_BITS
>> @@ -86,6 +88,9 @@ struct p2m_domain {
>>       */
>>      struct radix_tree_root mem_access_settings;
>>
>> +    /* Alternate p2m: count of vcpu's currently using this p2m. */
>> +    atomic_t active_vcpus;
>> +
>>      /* Choose between: host/alternate */
>>      p2m_class_t p2m_class;
>>
>> @@ -214,6 +219,12 @@ void guest_physmap_remove_page(struct domain *d,
>>
>>  mfn_t gfn_to_mfn(struct domain *d, gfn_t gfn);
>>
>> +/* Allocates page table for a p2m. */
>> +int p2m_alloc_table(struct p2m_domain *p2m);
>> +
>> +/* Initialize the p2m structure. */
>> +int p2m_init_one(struct domain *d, struct p2m_domain *p2m);
>
> These declarations belong to the patch that exported them not. Not here.
>

I will change that.

>> +
>>  /* Release resources held by the p2m structure. */
>>  void p2m_free_one(struct p2m_domain *p2m);
>>
>>
>

Best regards,
~Sergej


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 159+ messages in thread

* Re: [PATCH v2 09/25] arm/altp2m: Add altp2m table flushing routine.
  2016-08-03 18:44   ` Julien Grall
@ 2016-08-06  9:45     ` Sergej Proskurin
  0 siblings, 0 replies; 159+ messages in thread
From: Sergej Proskurin @ 2016-08-06  9:45 UTC (permalink / raw)
  To: Julien Grall, xen-devel; +Cc: Stefano Stabellini

Hi Julien,


On 08/03/2016 08:44 PM, Julien Grall wrote:
> Hello Sergej,
>
> On 01/08/16 18:10, Sergej Proskurin wrote:
>> The current implementation differentiates between flushing and
>> destroying altp2m views. This commit adds the function altp2m_flush,
>> which allows flushing all or individual altp2m views without destroying
>> the entire table. In this way, altp2m views can be reused at a later
>> point in time.
>>
>> Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
>> ---
>> Cc: Stefano Stabellini <sstabellini@kernel.org>
>> Cc: Julien Grall <julien.grall@arm.com>
>> ---
>> v2: Pages in p2m->pages are not cleared in p2m_flush_table anymore.
>>     VMID is freed in p2m_free_one.
>>     Cosmetic fixes.
>> ---
>>  xen/arch/arm/altp2m.c        | 38
>> ++++++++++++++++++++++++++++++++++++++
>>  xen/include/asm-arm/altp2m.h |  5 +----
>>  xen/include/asm-arm/p2m.h    |  3 +++
>>  3 files changed, 42 insertions(+), 4 deletions(-)
>>
>> diff --git a/xen/arch/arm/altp2m.c b/xen/arch/arm/altp2m.c
>> index 767f233..e73424c 100644
>> --- a/xen/arch/arm/altp2m.c
>> +++ b/xen/arch/arm/altp2m.c
>> @@ -151,6 +151,44 @@ int altp2m_init(struct domain *d)
>>      return 0;
>>  }
>>
>> +void altp2m_flush(struct domain *d)
>> +{
>> +    unsigned int i;
>> +    struct p2m_domain *p2m;
>> +
>> +    /*
>> +     * If altp2m is active, we are not allowed to flush altp2m[0].
>> This special
>> +     * view is considered as the hostp2m as long as altp2m is active.
>> +     */
>> +    ASSERT(!altp2m_active(d));
>
> Because of the race condition I mentioned in the previous patch (#8),
> this ASSERT may be hit randomly.
>

I will handle it by removing the race condition, as discussed in patches
#1 and #8.

>> +
>> +    altp2m_lock(d);
>> +
>> +    for ( i = 0; i < MAX_ALTP2M; i++ )
>> +    {
>> +        if ( d->arch.altp2m_vttbr[i] == INVALID_VTTBR )
>> +            continue;
>> +
>> +        p2m = d->arch.altp2m_p2m[i];
>> +
>> +        read_lock(&p2m->lock);
>
> You should use p2m_lock rather than re-inventing your own. If
> p2m_*lock helpers need to be exposed, then expose them.
>
> Also, this should be a write_lock; otherwise someone else could access
> the p2m at the same time.

Ok, thank you.

>
>> +
>> +        p2m_flush_table(p2m);
>> +
>> +        /*
>> +         * Reset VTTBR.
>> +         *
>> +         * Note that VMID is not freed so that it can be reused later.
>> +         */
>> +        p2m->vttbr.vttbr = INVALID_VTTBR;
>> +        d->arch.altp2m_vttbr[i] = INVALID_VTTBR;
>> +
>> +        read_unlock(&p2m->lock);
>
> I would much prefer if the p2m is fully destroyed rather than
> re-initialized.
>

We have discussed this in patch #07. In case of altp2m_reset, I will
still need to simply flush the p2m (without destroying it entirely), but
I will do it in a rather different way than it is implemented right now.
To avoid any inconveniences, I will make some suggestions when I reach
that point of the implementation.

>> +    }
>> +
>> +    altp2m_unlock(d);
>> +}
>> +
>>  void altp2m_teardown(struct domain *d)
>>  {
>>      unsigned int i;
>> diff --git a/xen/include/asm-arm/altp2m.h b/xen/include/asm-arm/altp2m.h
>> index a33c740..3ba82a8 100644
>> --- a/xen/include/asm-arm/altp2m.h
>> +++ b/xen/include/asm-arm/altp2m.h
>> @@ -54,9 +54,6 @@ int altp2m_init_by_id(struct domain *d,
>>                        unsigned int idx);
>>
>>  /* Flush all the alternate p2m's for a domain */
>> -static inline void altp2m_flush(struct domain *d)
>> -{
>> -    /* Not yet implemented. */
>> -}
>> +void altp2m_flush(struct domain *d);
>>
>>  #endif /* __ASM_ARM_ALTP2M_H */
>> diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
>> index f13f285..32326cb 100644
>> --- a/xen/include/asm-arm/p2m.h
>> +++ b/xen/include/asm-arm/p2m.h
>> @@ -222,6 +222,9 @@ mfn_t gfn_to_mfn(struct domain *d, gfn_t gfn);
>>  /* Allocates page table for a p2m. */
>>  int p2m_alloc_table(struct p2m_domain *p2m);
>>
>> +/* Flushes the page table held by the p2m. */
>> +void p2m_flush_table(struct p2m_domain *p2m);
>> +
>>  /* Initialize the p2m structure. */
>>  int p2m_init_one(struct domain *d, struct p2m_domain *p2m);
>>
>>
>

Best regards,
~Sergej




* Re: [PATCH v2 10/25] arm/altp2m: Add HVMOP_altp2m_create_p2m.
  2016-08-03 18:48   ` Julien Grall
@ 2016-08-06  9:46     ` Sergej Proskurin
  0 siblings, 0 replies; 159+ messages in thread
From: Sergej Proskurin @ 2016-08-06  9:46 UTC (permalink / raw)
  To: Julien Grall, xen-devel; +Cc: Stefano Stabellini

Hi Julien,


On 08/03/2016 08:48 PM, Julien Grall wrote:
> Hello Sergej,
>
> On 01/08/16 18:10, Sergej Proskurin wrote:
>> Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
>> ---
>> Cc: Stefano Stabellini <sstabellini@kernel.org>
>> Cc: Julien Grall <julien.grall@arm.com>
>> ---
>> v2: Cosmetic fixes.
>> ---
>>  xen/arch/arm/altp2m.c        | 23 +++++++++++++++++++++++
>>  xen/arch/arm/hvm.c           |  3 ++-
>>  xen/include/asm-arm/altp2m.h |  4 ++++
>>  3 files changed, 29 insertions(+), 1 deletion(-)
>>
>> diff --git a/xen/arch/arm/altp2m.c b/xen/arch/arm/altp2m.c
>> index e73424c..c22d2e4 100644
>> --- a/xen/arch/arm/altp2m.c
>> +++ b/xen/arch/arm/altp2m.c
>> @@ -136,6 +136,29 @@ int altp2m_init_by_id(struct domain *d, unsigned
>> int idx)
>>      return rc;
>>  }
>>
>> +int altp2m_init_next(struct domain *d, uint16_t *idx)
>> +{
>> +    int rc = -EINVAL;
>> +    unsigned int i;
>> +
>> +    altp2m_lock(d);
>> +
>> +    for ( i = 0; i < MAX_ALTP2M; i++ )
>> +    {
>> +        if ( d->arch.altp2m_vttbr[i] != INVALID_VTTBR )
>> +            continue;
>> +
>> +        rc = altp2m_init_helper(d, i);
>> +        *idx = (uint16_t) i;
>
> The cast is not necessary. You could make i uint16_t.
>

Ok.

>> +
>> +        break;
>> +    }
>> +
>> +    altp2m_unlock(d);
>> +
>> +    return rc;
>> +}
>> +
>>  int altp2m_init(struct domain *d)
>>  {
>>      unsigned int i;
>> diff --git a/xen/arch/arm/hvm.c b/xen/arch/arm/hvm.c
>> index 78370c6..063a06b 100644
>> --- a/xen/arch/arm/hvm.c
>> +++ b/xen/arch/arm/hvm.c
>> @@ -120,7 +120,8 @@ static int
>> do_altp2m_op(XEN_GUEST_HANDLE_PARAM(void) arg)
>>          break;
>>
>>      case HVMOP_altp2m_create_p2m:
>> -        rc = -EOPNOTSUPP;
>> +        if ( !(rc = altp2m_init_next(d, &a.u.view.view)) )
>> +            rc = __copy_to_guest(arg, &a, 1) ? -EFAULT : 0;
>>          break;
>>
>>      case HVMOP_altp2m_destroy_p2m:
>> diff --git a/xen/include/asm-arm/altp2m.h b/xen/include/asm-arm/altp2m.h
>> index 3ba82a8..3ecae27 100644
>> --- a/xen/include/asm-arm/altp2m.h
>> +++ b/xen/include/asm-arm/altp2m.h
>> @@ -53,6 +53,10 @@ void altp2m_vcpu_destroy(struct vcpu *v);
>>  int altp2m_init_by_id(struct domain *d,
>>                        unsigned int idx);
>>
>> +/* Find an available alternate p2m and make it valid */
>
> The comment and the implementation don't match the name of the
> function. I would rename the function altp2m_find_available or
> something similar.
>

Ok, thank you.

>> +int altp2m_init_next(struct domain *d,
>> +                     uint16_t *idx);
>> +
>>  /* Flush all the alternate p2m's for a domain */
>>  void altp2m_flush(struct domain *d);
>>
>>
>

Best regards,
~Sergej



* Re: [PATCH v2 11/25] arm/altp2m: Add HVMOP_altp2m_destroy_p2m.
  2016-08-04 11:46   ` Julien Grall
@ 2016-08-06  9:54     ` Sergej Proskurin
  2016-08-06 13:36       ` Julien Grall
  0 siblings, 1 reply; 159+ messages in thread
From: Sergej Proskurin @ 2016-08-06  9:54 UTC (permalink / raw)
  To: Julien Grall, xen-devel; +Cc: Stefano Stabellini

Hi Julien,


On 08/04/2016 01:46 PM, Julien Grall wrote:
> Hello Sergej,
>
> On 01/08/16 18:10, Sergej Proskurin wrote:
>> Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
>> ---
>> Cc: Stefano Stabellini <sstabellini@kernel.org>
>> Cc: Julien Grall <julien.grall@arm.com>
>> ---
>> v2: Substituted the call to tlb_flush for p2m_flush_table.
>>     Added comments.
>>     Cosmetic fixes.
>> ---
>>  xen/arch/arm/altp2m.c        | 50
>> ++++++++++++++++++++++++++++++++++++++++++++
>>  xen/arch/arm/hvm.c           |  2 +-
>>  xen/include/asm-arm/altp2m.h |  4 ++++
>>  3 files changed, 55 insertions(+), 1 deletion(-)
>>
>> diff --git a/xen/arch/arm/altp2m.c b/xen/arch/arm/altp2m.c
>> index c22d2e4..80ed553 100644
>> --- a/xen/arch/arm/altp2m.c
>> +++ b/xen/arch/arm/altp2m.c
>> @@ -212,6 +212,56 @@ void altp2m_flush(struct domain *d)
>>      altp2m_unlock(d);
>>  }
>>
>> +int altp2m_destroy_by_id(struct domain *d, unsigned int idx)
>> +{
>> +    struct p2m_domain *p2m;
>> +    int rc = -EBUSY;
>> +
>> +    /*
>> +     * The altp2m[0] is considered as the hostp2m and is used as a
>> safe harbor
>> +     * to which you can switch as long as altp2m is active. After
>> deactivating
>> +     * altp2m, the system switches back to the original hostp2m
>> view. That is,
>> +     * altp2m[0] should only be destroyed/flushed/freed, when altp2m is
>> +     * deactivated.
>> +     */
>> +    if ( !idx || idx >= MAX_ALTP2M )
>> +        return rc;
>> +
>> +    domain_pause_except_self(d);
>> +
>> +    altp2m_lock(d);
>> +
>> +    if ( d->arch.altp2m_vttbr[idx] != INVALID_VTTBR )
>> +    {
>> +        p2m = d->arch.altp2m_p2m[idx];
>> +
>> +        if ( !_atomic_read(p2m->active_vcpus) )
>> +        {
>> +            read_lock(&p2m->lock);
>
> Please avoid open-coding read_lock and use the p2m_*lock helpers. Also,
> a read lock does not prevent multiple threads from accessing the p2m at
> the same time.
>

Thank you very much.

>> +
>> +            p2m_flush_table(p2m);
>> +
>> +            /*
>> +             * Reset VTTBR.
>> +             *
>> +             * Note that VMID is not freed so that it can be reused
>> later.
>> +             */
>> +            p2m->vttbr.vttbr = INVALID_VTTBR;
>> +            d->arch.altp2m_vttbr[idx] = INVALID_VTTBR;
>> +
>> +            read_unlock(&p2m->lock);
>
> Why did you decide to reset the p2m rather than free it? The code
> would be simpler with the latter.
>

* First, to be as similar to the x86 implementation as possible.
* Second, simply to reuse already allocated p2m's without additional
overhead.

If you do not agree with this choice, I can adapt the implementation
accordingly.

>> +
>> +            rc = 0;
>> +        }
>> +    }
>> +
>> +    altp2m_unlock(d);
>> +
>> +    domain_unpause_except_self(d);
>> +
>> +    return rc;
>> +}
>> +
>>  void altp2m_teardown(struct domain *d)
>>  {
>>      unsigned int i;
>> diff --git a/xen/arch/arm/hvm.c b/xen/arch/arm/hvm.c
>> index 063a06b..df29cdc 100644
>> --- a/xen/arch/arm/hvm.c
>> +++ b/xen/arch/arm/hvm.c
>> @@ -125,7 +125,7 @@ static int
>> do_altp2m_op(XEN_GUEST_HANDLE_PARAM(void) arg)
>>          break;
>>
>>      case HVMOP_altp2m_destroy_p2m:
>> -        rc = -EOPNOTSUPP;
>> +        rc = altp2m_destroy_by_id(d, a.u.view.view);
>>          break;
>>
>>      case HVMOP_altp2m_switch_p2m:
>> diff --git a/xen/include/asm-arm/altp2m.h b/xen/include/asm-arm/altp2m.h
>> index 3ecae27..afa1580 100644
>> --- a/xen/include/asm-arm/altp2m.h
>> +++ b/xen/include/asm-arm/altp2m.h
>> @@ -60,4 +60,8 @@ int altp2m_init_next(struct domain *d,
>>  /* Flush all the alternate p2m's for a domain */
>>  void altp2m_flush(struct domain *d);
>>
>> +/* Make a specific alternate p2m invalid */
>> +int altp2m_destroy_by_id(struct domain *d,
>> +                         unsigned int idx);
>> +
>>  #endif /* __ASM_ARM_ALTP2M_H */
>>
>

Best regards,
~Sergej




* Re: [PATCH v2 12/25] arm/altp2m: Add HVMOP_altp2m_switch_p2m.
  2016-08-04 11:51   ` Julien Grall
@ 2016-08-06 10:13     ` Sergej Proskurin
  0 siblings, 0 replies; 159+ messages in thread
From: Sergej Proskurin @ 2016-08-06 10:13 UTC (permalink / raw)
  To: Julien Grall, xen-devel; +Cc: Stefano Stabellini

Hi Julien,


On 08/04/2016 01:51 PM, Julien Grall wrote:
> Hello Sergej,
>
> On 01/08/16 18:10, Sergej Proskurin wrote:
>> Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
>> ---
>> Cc: Stefano Stabellini <sstabellini@kernel.org>
>> Cc: Julien Grall <julien.grall@arm.com>
>> ---
>>  xen/arch/arm/altp2m.c        | 32 ++++++++++++++++++++++++++++++++
>>  xen/arch/arm/hvm.c           |  2 +-
>>  xen/include/asm-arm/altp2m.h |  4 ++++
>>  3 files changed, 37 insertions(+), 1 deletion(-)
>>
>> diff --git a/xen/arch/arm/altp2m.c b/xen/arch/arm/altp2m.c
>> index 80ed553..7404f42 100644
>> --- a/xen/arch/arm/altp2m.c
>> +++ b/xen/arch/arm/altp2m.c
>> @@ -33,6 +33,38 @@ struct p2m_domain *altp2m_get_altp2m(struct vcpu *v)
>>      return v->domain->arch.altp2m_p2m[index];
>>  }
>>
>> +int altp2m_switch_domain_altp2m_by_id(struct domain *d, unsigned int
>> idx)
>> +{
>> +    struct vcpu *v;
>> +    int rc = -EINVAL;
>> +
>> +    if ( idx >= MAX_ALTP2M )
>> +        return rc;
>> +
>> +    domain_pause_except_self(d);
>> +
>> +    altp2m_lock(d);
>> +
>> +    if ( d->arch.altp2m_vttbr[idx] != INVALID_VTTBR )
>> +    {
>> +        for_each_vcpu( d, v )
>> +            if ( idx != vcpu_altp2m(v).p2midx )
>> +            {
>> +                atomic_dec(&altp2m_get_altp2m(v)->active_vcpus);
>> +                vcpu_altp2m(v).p2midx = idx;
>> +                atomic_inc(&altp2m_get_altp2m(v)->active_vcpus);
>> +            }
>> +
>> +        rc = 0;
>> +    }
>
> If a domain is calling the function on itself, the current vCPU will
> not switch to the new altp2m because the VTTBR is only restored during
> a context switch.

Can a switch from a guest to Xen really result in no context switch at
all? I just saw the asserts in context_switch:

[...]
ASSERT(prev != next);
[...]

However, I thought that every trap/hypercall to Xen would eventually
restore the vCPU's state. In that case, I will need to write to
VTTBR_EL2 directly. Thank you.

>
> However, I am not sure why you store a p2midx rather than directly a
> pointer to the p2m.
>

Again, this idea has also been adopted from the x86 implementation, and
I really do not see any reason to change it. Do you?

>> +
>> +    altp2m_unlock(d);
>> +
>> +    domain_unpause_except_self(d);
>> +
>> +    return rc;
>> +}
>> +
>>  static void altp2m_vcpu_reset(struct vcpu *v)
>>  {
>>      struct altp2mvcpu *av = &vcpu_altp2m(v);
>> diff --git a/xen/arch/arm/hvm.c b/xen/arch/arm/hvm.c
>> index df29cdc..3b508df 100644
>> --- a/xen/arch/arm/hvm.c
>> +++ b/xen/arch/arm/hvm.c
>> @@ -129,7 +129,7 @@ static int
>> do_altp2m_op(XEN_GUEST_HANDLE_PARAM(void) arg)
>>          break;
>>
>>      case HVMOP_altp2m_switch_p2m:
>> -        rc = -EOPNOTSUPP;
>> +        rc = altp2m_switch_domain_altp2m_by_id(d, a.u.view.view);
>>          break;
>>
>>      case HVMOP_altp2m_set_mem_access:
>> diff --git a/xen/include/asm-arm/altp2m.h b/xen/include/asm-arm/altp2m.h
>> index afa1580..790bb33 100644
>> --- a/xen/include/asm-arm/altp2m.h
>> +++ b/xen/include/asm-arm/altp2m.h
>> @@ -49,6 +49,10 @@ void altp2m_teardown(struct domain *d);
>>  void altp2m_vcpu_initialise(struct vcpu *v);
>>  void altp2m_vcpu_destroy(struct vcpu *v);
>>
>> +/* Switch alternate p2m for entire domain */
>> +int altp2m_switch_domain_altp2m_by_id(struct domain *d,
>> +                                      unsigned int idx);
>> +
>>  /* Make a specific alternate p2m valid. */
>>  int altp2m_init_by_id(struct domain *d,
>>                        unsigned int idx);
>>
>

Best regards,
~Sergej




* Re: [PATCH v2 13/25] arm/altp2m: Make p2m_restore_state ready for altp2m.
  2016-08-04 11:55   ` Julien Grall
@ 2016-08-06 10:20     ` Sergej Proskurin
  0 siblings, 0 replies; 159+ messages in thread
From: Sergej Proskurin @ 2016-08-06 10:20 UTC (permalink / raw)
  To: Julien Grall, xen-devel; +Cc: Stefano Stabellini

Hi Julien,


On 08/04/2016 01:55 PM, Julien Grall wrote:
> Hello,
>
> On 01/08/16 18:10, Sergej Proskurin wrote:
>> This commit adapts the function "p2m_restore_state" so that the
>> currently active altp2m table is considered during state restoration.
>>
>> Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
>> ---
>> Cc: Stefano Stabellini <sstabellini@kernel.org>
>> Cc: Julien Grall <julien.grall@arm.com>
>> ---
>>  xen/arch/arm/p2m.c           | 4 +++-
>>  xen/include/asm-arm/altp2m.h | 3 +++
>>  2 files changed, 6 insertions(+), 1 deletion(-)
>>
>> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
>> index 8afea11..bcad51f 100644
>> --- a/xen/arch/arm/p2m.c
>> +++ b/xen/arch/arm/p2m.c
>> @@ -115,7 +115,9 @@ void p2m_save_state(struct vcpu *p)
>>  void p2m_restore_state(struct vcpu *n)
>>  {
>>      register_t hcr;
>> -    struct p2m_domain *p2m = &n->domain->arch.p2m;
>> +    struct domain *d = n->domain;
>> +    struct p2m_domain *p2m = unlikely(altp2m_active(d)) ?
>> +                             altp2m_get_altp2m(n) : p2m_get_hostp2m(d);
>
> This seems to be a common idiom in multiple place. I would like to see
> a helper for this.

Ok.

>
> However, I think this could be avoided if you store a pointer to the
> p2m directly in arch_vcpu.
>

This answers my question in the previous patch (patch #12). It would
definitely make things easier in some cases. I will check if we can do
it without too much trouble.
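
For what it's worth, the helper Julien asks for could look like the sketch
below. It is a self-contained model — all names here (domain_model,
vcpu_model, p2m_get_active_p2m, MAX_ALTP2M_MODEL) are illustrative
stand-ins for Xen's structures, not the real ones:

```c
#include <assert.h>
#include <stdbool.h>

#define MAX_ALTP2M_MODEL 10

/* Illustrative stand-ins for Xen's p2m_domain / arch_domain / arch_vcpu. */
struct p2m_model { int id; };

struct domain_model {
    bool altp2m_active;
    struct p2m_model hostp2m;
    struct p2m_model altp2m[MAX_ALTP2M_MODEL];
};

struct vcpu_model {
    struct domain_model *domain;
    unsigned int p2midx;    /* vcpu_altp2m(v).p2midx */
};

/* The common idiom from p2m_restore_state, factored out once: return the
 * vCPU's active altp2m view if altp2m is enabled, the host p2m otherwise. */
static struct p2m_model *p2m_get_active_p2m(struct vcpu_model *v)
{
    struct domain_model *d = v->domain;

    return d->altp2m_active ? &d->altp2m[v->p2midx] : &d->hostp2m;
}
```

Storing a direct p2m pointer in the vCPU, as suggested, would make even this
helper unnecessary on the lookup path.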

>>
>>      if ( is_idle_vcpu(n) )
>>          return;
>> diff --git a/xen/include/asm-arm/altp2m.h b/xen/include/asm-arm/altp2m.h
>> index 790bb33..a6496b7 100644
>> --- a/xen/include/asm-arm/altp2m.h
>> +++ b/xen/include/asm-arm/altp2m.h
>> @@ -49,6 +49,9 @@ void altp2m_teardown(struct domain *d);
>>  void altp2m_vcpu_initialise(struct vcpu *v);
>>  void altp2m_vcpu_destroy(struct vcpu *v);
>>
>> +/* Get current alternate p2m table. */
>> +struct p2m_domain *altp2m_get_altp2m(struct vcpu *v);
>> +
>
> This should be added in the patch where the function was added (i.e
> patch #8).
>

Ok.

>>  /* Switch alternate p2m for entire domain */
>>  int altp2m_switch_domain_altp2m_by_id(struct domain *d,
>>                                        unsigned int idx);
>>
>

Best regards,
~Sergej




* Re: [PATCH v2 14/25] arm/altp2m: Make get_page_from_gva ready for altp2m.
  2016-08-04 11:59   ` Julien Grall
@ 2016-08-06 10:38     ` Sergej Proskurin
  2016-08-06 13:45       ` Julien Grall
  0 siblings, 1 reply; 159+ messages in thread
From: Sergej Proskurin @ 2016-08-06 10:38 UTC (permalink / raw)
  To: Julien Grall, xen-devel; +Cc: Stefano Stabellini

Hi Julien,


On 08/04/2016 01:59 PM, Julien Grall wrote:
> Hello Sergej,
>
> On 01/08/16 18:10, Sergej Proskurin wrote:
>> The function get_page_from_gva uses ARM's hardware support to translate
>> gva's to machine addresses. This function is used, among others, for
>> memory regulation purposes, e.g, within the context of memory
>> ballooning.
>> To ensure correct behavior while altp2m is in use, we use the host's p2m
>> table for the associated gva to ma translation. This is required at this
>> point, as altp2m lazily copies pages from the host's p2m and might even
>> be flushed because of changes to the host's p2m (as is done within
>> the context of memory ballooning).
>
> I was expecting to see some change in
> p2m_mem_access_check_and_get_page. Is there any reason to not fix it?
>
>

I have not yet encountered any issues with
p2m_mem_access_check_and_get_page. According to the ARM ARM, ATS1C** (see
gva_to_ipa_par) translates a VA to an IPA at non-secure privilege levels
(as is the case here). Thus, the stage 2 translation represented by the
(alt)p2m is not really involved at this point, which makes an extension
unnecessary.

Or did you have anything else in mind?

>> Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
>> ---
>> Cc: Stefano Stabellini <sstabellini@kernel.org>
>> Cc: Julien Grall <julien.grall@arm.com>
>> ---
>>  xen/arch/arm/p2m.c | 31 +++++++++++++++++++++++++++++--
>>  1 file changed, 29 insertions(+), 2 deletions(-)
>>
>> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
>> index bcad51f..784f8da 100644
>> --- a/xen/arch/arm/p2m.c
>> +++ b/xen/arch/arm/p2m.c
>> @@ -1614,7 +1614,7 @@ struct page_info *get_page_from_gva(struct vcpu
>> *v, vaddr_t va,
>>                                      unsigned long flags)
>>  {
>>      struct domain *d = v->domain;
>> -    struct p2m_domain *p2m = &d->arch.p2m;
>> +    struct p2m_domain *p2m = p2m_get_hostp2m(d);
>
> This is more a clean-up than necessary. I would prefer to see a patch
> modifying all "&d->arch.p2m" by p2m_get_hostp2m in one go.
>

I will add a patch, which will do just that. Thank you.

>>      struct page_info *page = NULL;
>>      paddr_t maddr = 0;
>>      int rc;
>> @@ -1628,7 +1628,34 @@ struct page_info *get_page_from_gva(struct
>> vcpu *v, vaddr_t va,
>>
>>      p2m_read_lock(p2m);
>>
>> -    rc = gvirt_to_maddr(va, &maddr, flags);
>> +    /*
>> +     * If altp2m is active, we still read the gva from the hostp2m,
>> as it
>
> s/still/need to/
>

Ok.

>> +     * contains all valid mappings while the currently active altp2m
>> view might
>> +     * not have the required gva mapping yet.
>> +     */
>> +    if ( unlikely(altp2m_active(d)) )
>> +    {
>> +        unsigned long irq_flags = 0;
>> +        uint64_t ovttbr = READ_SYSREG64(VTTBR_EL2);
>> +
>> +        if ( ovttbr != p2m->vttbr.vttbr )
>> +        {
>> +            local_irq_save(irq_flags);
>> +            WRITE_SYSREG64(p2m->vttbr.vttbr, VTTBR_EL2);
>> +            isb();
>> +        }
>> +
>> +        rc = gvirt_to_maddr(va, &maddr, flags);
>> +
>> +        if ( ovttbr != p2m->vttbr.vttbr )
>> +        {
>> +            WRITE_SYSREG64(ovttbr, VTTBR_EL2);
>> +            isb();
>> +            local_irq_restore(irq_flags);
>> +        }
>
> The pattern is very similar to what p2m_flush_tlb does. Can we get
> macro/helpers to avoid duplicate code?
>

Sure thing :)
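
As a starting point, the duplicated load/restore sequence might be factored
as below. This is only a sketch: VTTBR_EL2 is modeled by a plain variable so
the control flow can be exercised outside the hypervisor; in Xen the marked
assignments would be READ_SYSREG64/WRITE_SYSREG64 plus isb(), with interrupts
saved and restored around the region, and the helper names are hypothetical:

```c
#include <assert.h>
#include <stdint.h>

/* Stand-in for the VTTBR_EL2 system register. */
static uint64_t fake_vttbr_el2;

/* Load the given VTTBR if it is not already active; return the previous
 * value so the caller can restore it afterwards. */
static uint64_t p2m_load_vttbr(uint64_t vttbr)
{
    uint64_t ovttbr = fake_vttbr_el2;   /* READ_SYSREG64(VTTBR_EL2) */

    if ( ovttbr != vttbr )
        fake_vttbr_el2 = vttbr;         /* WRITE_SYSREG64(...); isb(); */

    return ovttbr;
}

/* Restore the previously saved VTTBR, skipping the write if nothing
 * changed in between. */
static void p2m_restore_vttbr(uint64_t ovttbr)
{
    if ( fake_vttbr_el2 != ovttbr )
        fake_vttbr_el2 = ovttbr;        /* WRITE_SYSREG64(...); isb(); */
}
```

Both p2m_flush_tlb and the get_page_from_gva hunk above could then bracket
their work with these two calls instead of open-coding the pattern.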

>> +    }
>> +    else
>> +        rc = gvirt_to_maddr(va, &maddr, flags);
>>
>>      if ( rc )
>>          goto err;
>>
>

Best regards,
~Sergej




* Re: [PATCH v2 15/25] arm/altp2m: Extend __p2m_lookup.
  2016-08-04 12:04   ` Julien Grall
@ 2016-08-06 10:44     ` Sergej Proskurin
  0 siblings, 0 replies; 159+ messages in thread
From: Sergej Proskurin @ 2016-08-06 10:44 UTC (permalink / raw)
  To: Julien Grall, xen-devel; +Cc: Stefano Stabellini

Hi Julien,


On 08/04/2016 02:04 PM, Julien Grall wrote:
> Hello Sergej,
>
> On 01/08/16 18:10, Sergej Proskurin wrote:
>> This commit extends the functionality of the function "__p2m_lookup".
>> The function "__p2m_lookup" performs the necessary steps gathering
>> information concerning memory attributes and the p2m table level a
>> specific gfn is mapped to. Thus, we extend the function's prototype so
>> that the caller can optionally get these information for further
>> processing.
>>
>> Also, we extend the function prototype of "__p2m_lookup" to hold an
>> argument of type "struct p2m_domain*", as we need to distinguish between
>> the host's p2m and different altp2m views. While doing so, we needed to
>> extend the function's prototypes of the following functions:
>>
>> * __p2m_get_mem_access
>> * p2m_mem_access_and_get_page
>>
>> Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
>> ---
>> Cc: Stefano Stabellini <sstabellini@kernel.org>
>> Cc: Julien Grall <julien.grall@arm.com>
>> ---
>>  xen/arch/arm/p2m.c | 59
>> ++++++++++++++++++++++++++++++++++++------------------
>>  1 file changed, 39 insertions(+), 20 deletions(-)
>>
>> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
>> index 784f8da..326e343 100644
>> --- a/xen/arch/arm/p2m.c
>> +++ b/xen/arch/arm/p2m.c
>> @@ -168,15 +168,22 @@ void p2m_flush_tlb(struct p2m_domain *p2m)
>>      }
>>  }
>>
>> +static int __p2m_get_mem_access(struct p2m_domain*, gfn_t,
>> xenmem_access_t*);
>> +
>>  /*
>>   * Lookup the MFN corresponding to a domain's GFN.
>>   *
>>   * There are no processor functions to do a stage 2 only lookup
>> therefore we
>>   * do a a software walk.
>> + *
>> + * Optionally, __p2m_lookup takes arguments to provide information
>> about the
>> + * p2m type, the p2m table level the paddr is mapped to, associated mem
>> + * attributes, and memory access rights.
>>   */
>> -static mfn_t __p2m_lookup(struct domain *d, gfn_t gfn, p2m_type_t *t)
>> +static mfn_t __p2m_lookup(struct p2m_domain *p2m, gfn_t gfn,
>> p2m_type_t *t,
>> +                          unsigned int *level, unsigned int *mattr,
>> +                          xenmem_access_t *xma)
>
> Please give a look at my series "xen/arm: Rework the P2M code to
> follow break-before-make sequence" [1], it will expose a clean
> interface to retrieve all the necessary information.
>

Alright, thank you.

>>  mfn_t p2m_lookup(struct domain *d, gfn_t gfn, p2m_type_t *t)
>>  {
>>      mfn_t ret;
>> -    struct p2m_domain *p2m = &d->arch.p2m;
>> +    struct p2m_domain *p2m = p2m_get_hostp2m(d);
>>
>>      p2m_read_lock(p2m);
>> -    ret = __p2m_lookup(d, gfn, t);
>> +    ret = __p2m_lookup(p2m, gfn, t, NULL, NULL, NULL);
>>      p2m_read_unlock(p2m);
>>
>>      return ret;
>> @@ -479,10 +499,9 @@ static int p2m_create_table(struct p2m_domain
>> *p2m, lpae_t *entry,
>>      return 0;
>>  }
>>
>> -static int __p2m_get_mem_access(struct domain *d, gfn_t gfn,
>> +static int __p2m_get_mem_access(struct p2m_domain *p2m, gfn_t gfn,
>>                                  xenmem_access_t *access)
>>  {
>> -    struct p2m_domain *p2m = p2m_get_hostp2m(d);
>>      void *i;
>>      unsigned int index;
>>
>> @@ -525,7 +544,7 @@ static int __p2m_get_mem_access(struct domain *d,
>> gfn_t gfn,
>>           * No setting was found in the Radix tree. Check if the
>>           * entry exists in the page-tables.
>>           */
>> -        mfn_t mfn = __p2m_lookup(d, gfn, NULL);
>> +        mfn_t mfn = __p2m_lookup(p2m, gfn, NULL, NULL, NULL, NULL);
>>
>>          if ( mfn_eq(mfn, INVALID_MFN) )
>>              return -ESRCH;
>> @@ -1519,7 +1538,7 @@ mfn_t gfn_to_mfn(struct domain *d, gfn_t gfn)
>>   * we indeed found a conflicting mem_access setting.
>>   */
>>  static struct page_info*
>> -p2m_mem_access_check_and_get_page(vaddr_t gva, unsigned long flag)
>> +p2m_mem_access_check_and_get_page(struct p2m_domain *p2m, vaddr_t
>> gva, unsigned long flag)
>>  {
>>      long rc;
>>      paddr_t ipa;
>> @@ -1539,7 +1558,7 @@ p2m_mem_access_check_and_get_page(vaddr_t gva,
>> unsigned long flag)
>>       * We do this first as this is faster in the default case when no
>>       * permission is set on the page.
>>       */
>> -    rc = __p2m_get_mem_access(current->domain, gfn, &xma);
>> +    rc = __p2m_get_mem_access(p2m, gfn, &xma);
>>      if ( rc < 0 )
>>          goto err;
>>
>> @@ -1588,7 +1607,7 @@ p2m_mem_access_check_and_get_page(vaddr_t gva,
>> unsigned long flag)
>>       * We had a mem_access permission limiting the access, but the
>> page type
>>       * could also be limiting, so we need to check that as well.
>>       */
>> -    mfn = __p2m_lookup(current->domain, gfn, &t);
>> +    mfn = __p2m_lookup(p2m, gfn, &t, NULL, NULL, NULL);
>>      if ( mfn_eq(mfn, INVALID_MFN) )
>>          goto err;
>>
>> @@ -1671,7 +1690,7 @@ struct page_info *get_page_from_gva(struct vcpu
>> *v, vaddr_t va,
>>
>>  err:
>>      if ( !page && p2m->mem_access_enabled )
>> -        page = p2m_mem_access_check_and_get_page(va, flags);
>> +        page = p2m_mem_access_check_and_get_page(p2m, va, flags);
>
> p2m_mem_access_check_and_get_page should take a vCPU in parameter and
> not a p2m.
>
> I know the function is already buggy because it assumes that the vCPU
> == current. But let's not break this function further.
>

Alright.

>>
>>      p2m_read_unlock(p2m);
>>
>> @@ -1948,7 +1967,7 @@ int p2m_get_mem_access(struct domain *d, gfn_t
>> gfn,
>>      struct p2m_domain *p2m = p2m_get_hostp2m(d);
>>
>>      p2m_read_lock(p2m);
>> -    ret = __p2m_get_mem_access(d, gfn, access);
>> +    ret = __p2m_get_mem_access(p2m, gfn, access);
>>      p2m_read_unlock(p2m);
>>
>>      return ret;
>>
>
> Regards,
>
> [1]
> https://lists.xenproject.org/archives/html/xen-devel/2016-07/msg02952.html
>

Best regards,
~Sergej



* Re: [PATCH v2 17/25] arm/altp2m: Cosmetic fixes - function prototypes.
  2016-08-04 12:06   ` Julien Grall
@ 2016-08-06 10:46     ` Sergej Proskurin
  0 siblings, 0 replies; 159+ messages in thread
From: Sergej Proskurin @ 2016-08-06 10:46 UTC (permalink / raw)
  To: Julien Grall, xen-devel; +Cc: Stefano Stabellini

Hi Julien,


On 08/04/2016 02:06 PM, Julien Grall wrote:
> Hello Sergej,
>
> On 01/08/16 18:10, Sergej Proskurin wrote:
>> This commit changes the prototype of the following functions:
>> - apply_p2m_changes
>> - apply_one_level
>> - p2m_insert_mapping
>> - p2m_remove_mapping
>>
>> These changes are required as our implementation reuses most of the
>> existing ARM p2m implementation to set page table attributes of the
>> individual altp2m views. Therefore, exiting function prototypes have
>> been extended to hold another argument (of type struct p2m_domain *).
>> This allows to specify the p2m/altp2m domain that should be processed by
>> the individual function -- instead of accessing the host's default p2m
>> domain.
>
> I would prefer if you rebase this series on top of "xen/arm: Rework
> the P2M code to follow break-before-make sequence" [1]. This will
> offer a cleaner interface for altp2m.
>

Alright, thank you.

>>
>> Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
>> ---
>> Cc: Stefano Stabellini <sstabellini@kernel.org>
>> Cc: Julien Grall <julien.grall@arm.com>
>> ---
>> v2: Adoption of the functions "__p2m_lookup" and "__p2m_get_mem_access"
>>     have been moved out of this commit.
>> ---
>>  xen/arch/arm/p2m.c | 49
>> +++++++++++++++++++++++++------------------------
>>  1 file changed, 25 insertions(+), 24 deletions(-)
>>
>> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
>> index 53258e1..d4b7c92 100644
>> --- a/xen/arch/arm/p2m.c
>> +++ b/xen/arch/arm/p2m.c
>> @@ -702,6 +702,7 @@ static int p2m_shatter_page(struct p2m_domain *p2m,
>>   * -ve == (-Exxx) error.
>>   */
>>  static int apply_one_level(struct domain *d,
>> +                           struct p2m_domain *p2m,
>>                             lpae_t *entry,
>>                             unsigned int level,
>>                             bool_t flush_cache,
>> @@ -717,7 +718,6 @@ static int apply_one_level(struct domain *d,
>>      const paddr_t level_size = level_sizes[level];
>>      const paddr_t level_mask = level_masks[level];
>>
>> -    struct p2m_domain *p2m = &d->arch.p2m;
>>      lpae_t pte;
>>      const lpae_t orig_pte = *entry;
>>      int rc;
>> @@ -955,6 +955,7 @@ static void update_reference_mapping(struct
>> page_info *page,
>>  }
>>
>>  static int apply_p2m_changes(struct domain *d,
>> +                     struct p2m_domain *p2m,
>>                       enum p2m_operation op,
>>                       gfn_t sgfn,
>>                       unsigned long nr,
>> @@ -967,7 +968,6 @@ static int apply_p2m_changes(struct domain *d,
>>      paddr_t end_gpaddr = pfn_to_paddr(gfn_x(sgfn) + nr);
>>      paddr_t maddr = pfn_to_paddr(mfn_x(smfn));
>>      int rc, ret;
>> -    struct p2m_domain *p2m = &d->arch.p2m;
>>      lpae_t *mappings[4] = { NULL, NULL, NULL, NULL };
>>      struct page_info *pages[4] = { NULL, NULL, NULL, NULL };
>>      paddr_t addr;
>> @@ -1093,7 +1093,7 @@ static int apply_p2m_changes(struct domain *d,
>>              lpae_t *entry = &mappings[level][offset];
>>              lpae_t old_entry = *entry;
>>
>> -            ret = apply_one_level(d, entry,
>> +            ret = apply_one_level(d, p2m, entry,
>>                                    level, flush_pt, op,
>>                                    start_gpaddr, end_gpaddr,
>>                                    &addr, &maddr, &flush,
>> @@ -1178,7 +1178,7 @@ static int apply_p2m_changes(struct domain *d,
>>  out:
>>      if ( flush )
>>      {
>> -        p2m_flush_tlb(&d->arch.p2m);
>> +        p2m_flush_tlb(p2m);
>>          ret = iommu_iotlb_flush(d, gfn_x(sgfn), nr);
>>          if ( !rc )
>>              rc = ret;
>> @@ -1205,31 +1205,33 @@ out:
>>           * addr keeps the address of the end of the last
>> successfully-inserted
>>           * mapping.
>>           */
>> -        apply_p2m_changes(d, REMOVE, sgfn, gfn - gfn_x(sgfn), smfn,
>> -                          0, p2m_invalid, d->arch.p2m.default_access);
>> +        apply_p2m_changes(d, p2m, REMOVE, sgfn, gfn - gfn_x(sgfn),
>> smfn,
>> +                          0, p2m_invalid, p2m->default_access);
>>      }
>>
>>      return rc;
>>  }
>>
>>  static inline int p2m_insert_mapping(struct domain *d,
>> +                                     struct p2m_domain *p2m,
>>                                       gfn_t start_gfn,
>>                                       unsigned long nr,
>>                                       mfn_t mfn,
>>                                       p2m_type_t t)
>>  {
>> -    return apply_p2m_changes(d, INSERT, start_gfn, nr, mfn,
>> -                             0, t, d->arch.p2m.default_access);
>> +    return apply_p2m_changes(d, p2m, INSERT, start_gfn, nr, mfn,
>> +                             0, t, p2m->default_access);
>>  }
>>
>>  static inline int p2m_remove_mapping(struct domain *d,
>> +                                     struct p2m_domain *p2m,
>>                                       gfn_t start_gfn,
>>                                       unsigned long nr,
>>                                       mfn_t mfn)
>>  {
>> -    return apply_p2m_changes(d, REMOVE, start_gfn, nr, mfn,
>> +    return apply_p2m_changes(d, p2m, REMOVE, start_gfn, nr, mfn,
>>                               /* arguments below not used when
>> removing mapping */
>> -                             0, p2m_invalid,
>> d->arch.p2m.default_access);
>> +                             0, p2m_invalid, p2m->default_access);
>>  }
>>
>>  int map_regions_rw_cache(struct domain *d,
>> @@ -1237,7 +1239,7 @@ int map_regions_rw_cache(struct domain *d,
>>                           unsigned long nr,
>>                           mfn_t mfn)
>>  {
>> -    return p2m_insert_mapping(d, gfn, nr, mfn, p2m_mmio_direct_c);
>> +    return p2m_insert_mapping(d, p2m_get_hostp2m(d), gfn, nr, mfn,
>> p2m_mmio_direct_c);
>>  }
>>
>>  int unmap_regions_rw_cache(struct domain *d,
>> @@ -1245,7 +1247,7 @@ int unmap_regions_rw_cache(struct domain *d,
>>                             unsigned long nr,
>>                             mfn_t mfn)
>>  {
>> -    return p2m_remove_mapping(d, gfn, nr, mfn);
>> +    return p2m_remove_mapping(d, p2m_get_hostp2m(d), gfn, nr, mfn);
>>  }
>>
>>  int map_mmio_regions(struct domain *d,
>> @@ -1253,7 +1255,7 @@ int map_mmio_regions(struct domain *d,
>>                       unsigned long nr,
>>                       mfn_t mfn)
>>  {
>> -    return p2m_insert_mapping(d, start_gfn, nr, mfn,
>> p2m_mmio_direct_nc);
>> +    return p2m_insert_mapping(d, p2m_get_hostp2m(d), start_gfn, nr,
>> mfn, p2m_mmio_direct_nc);
>>  }
>>
>>  int unmap_mmio_regions(struct domain *d,
>> @@ -1261,7 +1263,7 @@ int unmap_mmio_regions(struct domain *d,
>>                         unsigned long nr,
>>                         mfn_t mfn)
>>  {
>> -    return p2m_remove_mapping(d, start_gfn, nr, mfn);
>> +    return p2m_remove_mapping(d, p2m_get_hostp2m(d), start_gfn, nr,
>> mfn);
>>  }
>>
>>  int map_dev_mmio_region(struct domain *d,
>> @@ -1291,14 +1293,14 @@ int guest_physmap_add_entry(struct domain *d,
>>                              unsigned long page_order,
>>                              p2m_type_t t)
>>  {
>> -    return p2m_insert_mapping(d, gfn, (1 << page_order), mfn, t);
>> +    return p2m_insert_mapping(d, p2m_get_hostp2m(d), gfn, (1 <<
>> page_order), mfn, t);
>>  }
>>
>>  void guest_physmap_remove_page(struct domain *d,
>>                                 gfn_t gfn,
>>                                 mfn_t mfn, unsigned int page_order)
>>  {
>> -    p2m_remove_mapping(d, gfn, (1 << page_order), mfn);
>> +    p2m_remove_mapping(d, p2m_get_hostp2m(d), gfn, (1 <<
>> page_order), mfn);
>>  }
>>
>>  int p2m_alloc_table(struct p2m_domain *p2m)
>> @@ -1505,26 +1507,25 @@ int p2m_init(struct domain *d)
>>
>>  int relinquish_p2m_mapping(struct domain *d)
>>  {
>> -    struct p2m_domain *p2m = &d->arch.p2m;
>> +    struct p2m_domain *p2m = p2m_get_hostp2m(d);
>>      unsigned long nr;
>>
>>      nr = gfn_x(p2m->max_mapped_gfn) - gfn_x(p2m->lowest_mapped_gfn);
>>
>> -    return apply_p2m_changes(d, RELINQUISH, p2m->lowest_mapped_gfn, nr,
>> -                             INVALID_MFN, 0, p2m_invalid,
>> -                             d->arch.p2m.default_access);
>> +    return apply_p2m_changes(d, p2m, RELINQUISH,
>> p2m->lowest_mapped_gfn, nr,
>> +                             INVALID_MFN, 0, p2m_invalid,
>> p2m->default_access);
>>  }
>>
>>  int p2m_cache_flush(struct domain *d, gfn_t start, unsigned long nr)
>>  {
>> -    struct p2m_domain *p2m = &d->arch.p2m;
>> +    struct p2m_domain *p2m = p2m_get_hostp2m(d);
>>      gfn_t end = gfn_add(start, nr);
>>
>>      start = gfn_max(start, p2m->lowest_mapped_gfn);
>>      end = gfn_min(end, p2m->max_mapped_gfn);
>>
>> -    return apply_p2m_changes(d, CACHEFLUSH, start, nr, INVALID_MFN,
>> -                             0, p2m_invalid,
>> d->arch.p2m.default_access);
>> +    return apply_p2m_changes(d, p2m, CACHEFLUSH, start, nr,
>> INVALID_MFN,
>> +                             0, p2m_invalid, p2m->default_access);
>>  }
>>
>>  mfn_t gfn_to_mfn(struct domain *d, gfn_t gfn)
>> @@ -1963,7 +1964,7 @@ long p2m_set_mem_access(struct domain *d, gfn_t
>> gfn, uint32_t nr,
>>          return 0;
>>      }
>>
>> -    rc = apply_p2m_changes(d, MEMACCESS, gfn_add(gfn, start),
>> +    rc = apply_p2m_changes(d, p2m, MEMACCESS, gfn_add(gfn, start),
>>                             (nr - start), INVALID_MFN, mask, 0, a);
>>      if ( rc < 0 )
>>          return rc;
>>
>
> [1]
> https://lists.xenproject.org/archives/html/xen-devel/2016-07/msg02952.html
>

Best regards,
~Sergej


* Re: [PATCH v2 18/25] arm/altp2m: Add HVMOP_altp2m_set_mem_access.
  2016-08-04 14:19   ` Julien Grall
@ 2016-08-06 11:03     ` Sergej Proskurin
  2016-08-06 14:26       ` Julien Grall
  0 siblings, 1 reply; 159+ messages in thread
From: Sergej Proskurin @ 2016-08-06 11:03 UTC (permalink / raw)
  To: Julien Grall, xen-devel; +Cc: Stefano Stabellini

Hi Julien,


On 08/04/2016 04:19 PM, Julien Grall wrote:
> Hello Sergej,
>
> On 01/08/16 18:10, Sergej Proskurin wrote:
>>  int p2m_alloc_table(struct p2m_domain *p2m)
>>  {
>>      unsigned int i;
>> @@ -1920,7 +1948,7 @@ long p2m_set_mem_access(struct domain *d, gfn_t
>> gfn, uint32_t nr,
>>                          uint32_t start, uint32_t mask,
>> xenmem_access_t access,
>>                          unsigned int altp2m_idx)
>>  {
>> -    struct p2m_domain *p2m = p2m_get_hostp2m(d);
>> +    struct p2m_domain *hp2m = p2m_get_hostp2m(d), *ap2m = NULL;
>>      p2m_access_t a;
>>      long rc = 0;
>>
>> @@ -1939,33 +1967,60 @@ long p2m_set_mem_access(struct domain *d,
>> gfn_t gfn, uint32_t nr,
>>  #undef ACCESS
>>      };
>>
>> +    /* altp2m view 0 is treated as the hostp2m */
>> +    if ( altp2m_idx )
>> +    {
>> +        if ( altp2m_idx >= MAX_ALTP2M ||
>> +             d->arch.altp2m_vttbr[altp2m_idx] == INVALID_VTTBR )
>> +            return -EINVAL;
>> +
>> +        ap2m = d->arch.altp2m_p2m[altp2m_idx];
>> +    }
>> +
>>      switch ( access )
>>      {
>>      case 0 ... ARRAY_SIZE(memaccess) - 1:
>>          a = memaccess[access];
>>          break;
>>      case XENMEM_access_default:
>> -        a = p2m->default_access;
>> +        a = hp2m->default_access;
>
> Why the default_access is set the host p2m and not the alt p2m?
>

Currently, we have no way of manually setting or getting the
default_access of an altp2m view. It might make sense to extend the
interface so that the default_access of the individual views can be set
explicitly. Thinking about it, this would benefit the entire
architecture: the current propagate-change operation simply flushes the
altp2m views and expects them to be lazily refilled with the hostp2m's
entries. Because of this, I believe changing entries in the hostp2m
while altp2m is active would effectively render the altp2m functionality
useless. What do you think?
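To illustrate what I mean, here is a minimal sketch. All names and types
below are illustrative stand-ins, not Xen's actual API: the idea is that
each p2m (hostp2m or altp2m view) carries its own default_access, so a
lookup falls back to the view's own default instead of always consulting
the host's.

```c
#include <assert.h>

/* Hypothetical stand-ins for Xen's p2m types. */
typedef enum { ACCESS_N, ACCESS_R, ACCESS_RW, ACCESS_RWX } access_t;

struct p2m_view {
    int is_altp2m;           /* non-zero for an altp2m view */
    access_t default_access; /* per-view default, settable via a
                                hypothetical extended interface */
};

/* Resolve the default access of the view itself, rather than always
 * using the hostp2m's default as the current code does. */
static access_t view_default_access(const struct p2m_view *v)
{
    return v->default_access;
}
```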

>>          break;
>>      default:
>>          return -EINVAL;
>>      }
>>
>
> [...]
>
>> diff --git a/xen/include/asm-arm/altp2m.h b/xen/include/asm-arm/altp2m.h
>> index a6496b7..dc41f93 100644
>> --- a/xen/include/asm-arm/altp2m.h
>> +++ b/xen/include/asm-arm/altp2m.h
>> @@ -71,4 +71,14 @@ void altp2m_flush(struct domain *d);
>>  int altp2m_destroy_by_id(struct domain *d,
>>                           unsigned int idx);
>>
>> +/* Set memory access attributes of the gfn in the altp2m view. If
>> the altp2m
>> + * view does not contain the particular entry, copy it first from
>> the hostp2m.
>> + *
>> + * Currently supports memory attribute adoptions of only one (4K)
>> page. */
>
> Coding style:
>
> /*
>  *  Foo
>  *  Bar
>  */
>

Thank you.

>> +int altp2m_set_mem_access(struct domain *d,
>> +                          struct p2m_domain *hp2m,
>> +                          struct p2m_domain *ap2m,
>> +                          p2m_access_t a,
>> +                          gfn_t gfn);
>> +
>>  #endif /* __ASM_ARM_ALTP2M_H */
>> diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
>> index 32326cb..9859ad1 100644
>> --- a/xen/include/asm-arm/p2m.h
>> +++ b/xen/include/asm-arm/p2m.h
>> @@ -180,6 +180,17 @@ void p2m_dump_info(struct domain *d);
>>  /* Look up the MFN corresponding to a domain's GFN. */
>>  mfn_t p2m_lookup(struct domain *d, gfn_t gfn, p2m_type_t *t);
>>
>> +/* Lookup the MFN, memory attributes, and page table level
>> corresponding to a
>> + * domain's GFN. */
>> +mfn_t p2m_lookup_attr(struct p2m_domain *p2m, gfn_t gfn, p2m_type_t *t,
>> +                      unsigned int *level, unsigned int *mattr,
>> +                      xenmem_access_t *xma);
>
> I don't want to see a such interface expose outside of p2m. The
> outside world may not know what means the level. And I don't
> understand why you return "mattr" here.
>

In the current implementation, mattr is indeed no longer needed. Still,
I wanted to hear your opinion first; I will gladly remove mattr from the
prototype.

Concerning the exposure of p2m_lookup_attr: agreed. However, I am not
sure how else we could provide the required functionality to altp2m.c
without duplicating large parts of the code. In the previous patch, you
mentioned that we should rather share code to obtain the required
values. Now we do...

Do you have another idea how we could solve this issue?

>> +
>> +/* Modify an altp2m view's entry or its attributes. */
>> +int modify_altp2m_entry(struct domain *d, struct p2m_domain *p2m,
>> +                        paddr_t gpa, paddr_t maddr, unsigned int level,
>> +                        p2m_type_t t, p2m_access_t a);
>> +
>>  /* Clean & invalidate caches corresponding to a region of guest
>> address space */
>>  int p2m_cache_flush(struct domain *d, gfn_t start, unsigned long nr);
>>
>> @@ -303,6 +314,16 @@ static inline int get_page_and_type(struct
>> page_info *page,
>>  /* get host p2m table */
>>  #define p2m_get_hostp2m(d) (&(d)->arch.p2m)
>>
>> +static inline bool_t p2m_is_hostp2m(const struct p2m_domain *p2m)
>> +{
>> +    return p2m->p2m_class == p2m_host;
>> +}
>> +
>> +static inline bool_t p2m_is_altp2m(const struct p2m_domain *p2m)
>> +{
>> +    return p2m->p2m_class == p2m_alternate;
>> +}
>> +
>
> Why those helpers are only added here? They should be at the same
> place you define the p2m_class.
>

They are used here for the first time. But ok, I can do that.

>>  /* vm_event and mem_access are supported on any ARM guest */
>>  static inline bool_t p2m_mem_access_sanity_check(struct domain *d)
>>  {
>>
>

Best regards,
~Sergej



* Re: [PATCH v2 19/25] arm/altp2m: Add altp2m_propagate_change.
  2016-08-04 14:50   ` Julien Grall
@ 2016-08-06 11:26     ` Sergej Proskurin
  2016-08-06 13:52       ` Julien Grall
  0 siblings, 1 reply; 159+ messages in thread
From: Sergej Proskurin @ 2016-08-06 11:26 UTC (permalink / raw)
  To: Julien Grall, xen-devel; +Cc: Stefano Stabellini

Hi Julien,


On 08/04/2016 04:50 PM, Julien Grall wrote:
> Hello Sergej,
>
> On 01/08/16 18:10, Sergej Proskurin wrote:
>> This commit introduces the function "altp2m_propagate_change" that is
>> responsible to propagate changes applied to the host's p2m to a specific
>> or even all altp2m views. In this way, Xen can in-/decrease the guest's
>> physmem at run-time without leaving the altp2m views with
>> stalled/invalid entries.
>>
>> Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
>> ---
>> Cc: Stefano Stabellini <sstabellini@kernel.org>
>> Cc: Julien Grall <julien.grall@arm.com>
>> ---
>>  xen/arch/arm/altp2m.c        | 75
>> ++++++++++++++++++++++++++++++++++++++++++++
>>  xen/arch/arm/p2m.c           | 14 +++++++++
>>  xen/include/asm-arm/altp2m.h |  9 ++++++
>>  xen/include/asm-arm/p2m.h    |  5 +++
>>  4 files changed, 103 insertions(+)
>>
>> diff --git a/xen/arch/arm/altp2m.c b/xen/arch/arm/altp2m.c
>> index f98fd73..f3c1cff 100644
>> --- a/xen/arch/arm/altp2m.c
>> +++ b/xen/arch/arm/altp2m.c
>> @@ -133,6 +133,81 @@ out:
>>      return rc;
>>  }
>>
>> +static inline void altp2m_reset(struct p2m_domain *p2m)
>> +{
>> +    read_lock(&p2m->lock);
>
> Again read_lock does not protect you against concurrent access. Only
> against someone else update the page table.
>
> This should be p2m_write_lock.
>

Thank you.

>> +
>> +    p2m_flush_table(p2m);
>> +    p2m_flush_tlb(p2m);
>
> altp2m_reset may be called on a p2m used by a running vCPU. What this
> code does is:
>     1) clearing root page table
>     2) free intermediate page table
>     3) invalidate the TLB
>
> Until step 3, the other TLBs may contain entries pointing the
> intermediate page table. But they were freed and could therefore be
> re-used for another purpose. So step 2 and 3 should be inverted.

This is an excellent tip. Thank you!
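For my own notes, the corrected ordering can be sketched as below. The
helper names are stand-ins; the invariant is that the TLB invalidation
must happen *before* the intermediate page tables are freed, so a stale
TLB entry can never point at memory that has already been reused.

```c
#include <assert.h>

/* Record the order in which the steps run. */
static int step, tlb_flushed_at, tables_freed_at;

static void clear_root_table(void)         { ++step; }
static void flush_stage2_tlb(void)         { tlb_flushed_at = ++step; }
static void free_intermediate_tables(void) { tables_freed_at = ++step; }

/* Corrected reset sequence for a view possibly in use by another vCPU. */
static void altp2m_reset_sketch(void)
{
    clear_root_table();          /* 1) clear the root page table      */
    flush_stage2_tlb();          /* 2) invalidate the TLBs first ...  */
    free_intermediate_tables();  /* 3) ... then free the tables       */
}
```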

>
> I will re-iterate same message as in the previous series. Please have
> a think about the locking and memory ordering of all this series. I
> found a lot of race condition and I may have miss someone. If you have
> a doubt don't hesitate to ask.
>

I definitely will, thank you for your offer :)

>> +
>> +    p2m->lowest_mapped_gfn = INVALID_GFN;
>> +    p2m->max_mapped_gfn = _gfn(0);
>> +
>> +    read_unlock(&p2m->lock);
>> +}
>> +
>> +void altp2m_propagate_change(struct domain *d,
>> +                             gfn_t sgfn,
>> +                             unsigned long nr,
>> +                             mfn_t smfn,
>> +                             uint32_t mask,
>> +                             p2m_type_t p2mt,
>> +                             p2m_access_t p2ma)
>> +{
>> +    struct p2m_domain *p2m;
>> +    mfn_t m;
>> +    unsigned int i;
>> +    unsigned int reset_count = 0;
>> +    unsigned int last_reset_idx = ~0;
>> +
>> +    if ( !altp2m_active(d) )
>
> This is not safe. d->arch.altp2m_active maybe be turn off just after
> you read. Maybe you want to protect it with altp2m_lock.
>

This is true. Thanks!

>> +        return;
>> +
>> +    altp2m_lock(d);
>> +
>> +    for ( i = 0; i < MAX_ALTP2M; i++ )
>> +    {
>> +        if ( d->arch.altp2m_vttbr[i] == INVALID_VTTBR )
>> +            continue;
>> +
>> +        p2m = d->arch.altp2m_p2m[i];
>> +
>> +        m = p2m_lookup_attr(p2m, sgfn, NULL, NULL, NULL, NULL);
>
> What is the benefits to check if it already exists in the altp2m?
>

The current implementation changes the entry directly in the particular
altp2m view: if the entry already exists there, it can be modified in
place without having to flush the entire view.

>> +
>> +        /* Check for a dropped page that may impact this altp2m. */
>> +        if ( (mfn_eq(smfn, INVALID_MFN) || p2mt == p2m_invalid) &&
>
> Why the check to p2mt against p2m_invalid?
>

I have encountered p2m entries, or rather calls to apply_p2m_changes,
with valid MFNs but invalid types. That is, a page drop would not be
recognized if we checked only for an invalid MFN.
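As a small sketch of the dropped-page test I mean (the names mirror
Xen's, but the types are plain stand-ins here): a drop must be detected
both when the MFN is invalid and when a valid MFN arrives together with
the p2m_invalid type.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Stand-in types; Xen wraps these differently. */
typedef uint64_t mfn_t;
#define INVALID_MFN ((mfn_t)~0ULL)
typedef enum { p2m_invalid, p2m_ram_rw, p2m_mmio_direct_nc } p2m_type_t;

/* A page drop is signalled either by an invalid MFN or by a valid MFN
 * carrying the p2m_invalid type. */
static bool is_page_drop(mfn_t smfn, p2m_type_t t)
{
    return smfn == INVALID_MFN || t == p2m_invalid;
}
```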

>> +             gfn_x(sgfn) >= gfn_x(p2m->lowest_mapped_gfn) &&
>> +             gfn_x(sgfn) <= gfn_x(p2m->max_mapped_gfn) )
>> +        {
>> +            if ( !reset_count++ )
>> +            {
>> +                altp2m_reset(p2m);
>> +                last_reset_idx = i;
>> +            }
>> +            else
>> +            {
>> +                /* At least 2 altp2m's impacted, so reset
>> everything. */
>
> So if you remove a 4KB page in more than 2 altp2m, you will flush all
> the p2m. This sounds really more time consuming (you have to free all
> the intermediate page table) than removing a single 4KB page.

I agree: the solution was adopted directly from the x86 implementation
and needs further design reconsideration. However, at this point we
would like to finalize the overall architecture for ARM before going
into further restructuring.

>
>> +                for ( i = 0; i < MAX_ALTP2M; i++ )
>> +                {
>> +                    if ( i == last_reset_idx ||
>> +                         d->arch.altp2m_vttbr[i] == INVALID_VTTBR )
>> +                        continue;
>> +
>> +                    p2m = d->arch.altp2m_p2m[i];
>> +                    altp2m_reset(p2m);
>> +                }
>> +                goto out;
>> +            }
>> +        }
>> +        else if ( !mfn_eq(m, INVALID_MFN) )
>> +            modify_altp2m_range(d, p2m, sgfn, nr, smfn,
>> +                                mask, p2mt, p2ma);
>
> I am a bit concerned about this function. We decided to limit the size
> of the mapping to avoid long running memory operations (see XSA-158).
>
> With this function you multiply up to 10 times the duration of the
> operation.
>

I see your point. However, on a running system, modifications of the
hostp2m (while altp2m is active) should be very rare. An alternative
would be to simply flush the altp2m views on every hostp2m modification.
IMO, however, that would not bring a real performance gain either, due
to the de-allocation and freeing of the altp2m tables. Do you have
another idea?

> Also, what is modify_altp2m_range has failed?
>

This is true; it is not considered at this point. I will propagate the
error back to apply_p2m_changes.

>
>> +    }
>> +
>> +out:
>> +    altp2m_unlock(d);
>> +}
>> +
>>  static void altp2m_vcpu_reset(struct vcpu *v)
>>  {
>>      struct altp2mvcpu *av = &vcpu_altp2m(v);
>> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
>> index e0a7f38..31810e6 100644
>> --- a/xen/arch/arm/p2m.c
>> +++ b/xen/arch/arm/p2m.c
>> @@ -992,6 +992,7 @@ static int apply_p2m_changes(struct domain *d,
>>      const bool_t preempt = !is_idle_vcpu(current);
>>      bool_t flush = false;
>>      bool_t flush_pt;
>> +    bool_t entry_written = false;
>>      PAGE_LIST_HEAD(free_pages);
>>      struct page_info *pg;
>>
>> @@ -1112,6 +1113,7 @@ static int apply_p2m_changes(struct domain *d,
>>                                    &addr, &maddr, &flush,
>>                                    t, a);
>>              if ( ret < 0 ) { rc = ret ; goto out; }
>> +            if ( ret ) entry_written = 1;
>
> Please don't mix false and 1. This should be true here.
>

I will set the variable to true.

>>              count += ret;
>>
>>              if ( ret != P2M_ONE_PROGRESS_NOP )
>> @@ -1208,6 +1210,9 @@ out:
>>
>>      p2m_write_unlock(p2m);
>>
>> +    if ( rc >= 0 && entry_written && p2m_is_hostp2m(p2m) )
>> +        altp2m_propagate_change(d, sgfn, nr, smfn, mask, t, a);
>
> There are operation which does not require propagation (for instance
> RELINQUISH and CACHEFLUSH).
>

True. Thank you.

>> +
>>      if ( rc < 0 && ( op == INSERT ) &&
>>           addr != start_gpaddr )
>>      {
>> @@ -1331,6 +1336,15 @@ int modify_altp2m_entry(struct domain *d,
>> struct p2m_domain *ap2m,
>>      return apply_p2m_changes(d, ap2m, INSERT, gfn, nr, mfn, 0, t, a);
>>  }
>>
>> +int modify_altp2m_range(struct domain *d, struct p2m_domain *ap2m,
>> +                        gfn_t sgfn, unsigned long nr, mfn_t smfn,
>> +                        uint32_t m, p2m_type_t t, p2m_access_t a)
>> +{
>> +    ASSERT(p2m_is_altp2m(ap2m));
>> +
>> +    return apply_p2m_changes(d, ap2m, INSERT, sgfn, nr, smfn, m, t, a);
>> +}
>> +
>>  int p2m_alloc_table(struct p2m_domain *p2m)
>>  {
>>      unsigned int i;
>> diff --git a/xen/include/asm-arm/altp2m.h b/xen/include/asm-arm/altp2m.h
>> index dc41f93..9aeb7d6 100644
>> --- a/xen/include/asm-arm/altp2m.h
>> +++ b/xen/include/asm-arm/altp2m.h
>> @@ -81,4 +81,13 @@ int altp2m_set_mem_access(struct domain *d,
>>                            p2m_access_t a,
>>                            gfn_t gfn);
>>
>> +/* Propagates changes made to hostp2m to affected altp2m views. */
>> +void altp2m_propagate_change(struct domain *d,
>> +                             gfn_t sgfn,
>> +                             unsigned long nr,
>> +                             mfn_t smfn,
>> +                             uint32_t mask,
>> +                             p2m_type_t p2mt,
>> +                             p2m_access_t p2ma);
>> +
>>  #endif /* __ASM_ARM_ALTP2M_H */
>> diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
>> index 9859ad1..59186c9 100644
>> --- a/xen/include/asm-arm/p2m.h
>> +++ b/xen/include/asm-arm/p2m.h
>> @@ -191,6 +191,11 @@ int modify_altp2m_entry(struct domain *d, struct
>> p2m_domain *p2m,
>>                          paddr_t gpa, paddr_t maddr, unsigned int level,
>>                          p2m_type_t t, p2m_access_t a);
>>
>> +/* Modify an altp2m view's range of entries or their attributes. */
>> +int modify_altp2m_range(struct domain *d, struct p2m_domain *p2m,
>> +                        gfn_t sgfn, unsigned long nr, mfn_t smfn,
>> +                        uint32_t mask, p2m_type_t t, p2m_access_t a);
>> +
>>  /* Clean & invalidate caches corresponding to a region of guest
>> address space */
>>  int p2m_cache_flush(struct domain *d, gfn_t start, unsigned long nr);
>>
>>
>


Best regards,
~Sergej



* Re: [PATCH v2 20/25] arm/altp2m: Add altp2m paging mechanism.
  2016-08-04 13:50   ` Julien Grall
@ 2016-08-06 12:51     ` Sergej Proskurin
  2016-08-06 14:14       ` Julien Grall
  0 siblings, 1 reply; 159+ messages in thread
From: Sergej Proskurin @ 2016-08-06 12:51 UTC (permalink / raw)
  To: Julien Grall, xen-devel; +Cc: Stefano Stabellini

Hi Julien,


On 08/04/2016 03:50 PM, Julien Grall wrote:
> Hello Sergej,
>
> On 01/08/16 18:10, Sergej Proskurin wrote:
>> This commit adds the function "altp2m_lazy_copy" implementing the altp2m
>> paging mechanism. The function "altp2m_lazy_copy" lazily copies the
>> hostp2m's mapping into the currently active altp2m view on 2nd stage
>> translation violations on instruction or data access. Every altp2m
>> violation generates a vm_event.
>
> I think you want to "translation fault" and not "violation". The
> latter looks more a permission issue whilst it is not the case here.
>

The implemented paging mechanism also covers permission issues, which is
the reason why I chose the term violation. By this, I mean that the
implementation covers traps to Xen occured due to 2nd stage permission
violations (as altp2m view's might have set different access permissions
to trap on). Also, the implementation covers translation faults due to
not present entries in the active altp2m view, as well.

> However I am not sure what you mean by "every altp2m violation
> generates a vm_event". Do you mean the userspace will be aware of it?
>

No. Every time, the altp2m's configuration lets the guest trap into Xen
due to a lack of memory access permissions (e.g., on execution of a rw
page), we fill the associated fields in the req buffer in
mem_access_check so that the management domain receives the information
required to understand what kind of altp2m violation just happened.
Based on this information, it might decide what to do next (perform
additional checks or simply change the altp2m view to continue guest
execution).

>>
>> Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
>> ---
>> Cc: Stefano Stabellini <sstabellini@kernel.org>
>> Cc: Julien Grall <julien.grall@arm.com>
>> ---
>>  xen/arch/arm/altp2m.c        |  86 ++++++++++++++++++++++++++++++
>>  xen/arch/arm/p2m.c           |   6 +++
>>  xen/arch/arm/traps.c         | 124
>> +++++++++++++++++++++++++++++++------------
>>  xen/include/asm-arm/altp2m.h |  15 ++++--
>>  xen/include/asm-arm/p2m.h    |   6 +--
>>  5 files changed, 196 insertions(+), 41 deletions(-)
>>
>> diff --git a/xen/arch/arm/altp2m.c b/xen/arch/arm/altp2m.c
>> index f3c1cff..78fc1d5 100644
>> --- a/xen/arch/arm/altp2m.c
>> +++ b/xen/arch/arm/altp2m.c
>> @@ -33,6 +33,32 @@ struct p2m_domain *altp2m_get_altp2m(struct vcpu *v)
>>      return v->domain->arch.altp2m_p2m[index];
>>  }
>>
>> +bool_t altp2m_switch_vcpu_altp2m_by_id(struct vcpu *v, unsigned int
>> idx)
>> +{
>> +    struct domain *d = v->domain;
>> +    bool_t rc = 0;
>
> Please use true/false rather than 0/1.
>

Ok.

>> +
>> +    if ( idx >= MAX_ALTP2M )
>> +        return rc;
>> +
>> +    altp2m_lock(d);
>> +
>> +    if ( d->arch.altp2m_vttbr[idx] != INVALID_VTTBR )
>> +    {
>> +        if ( idx != vcpu_altp2m(v).p2midx )
>> +        {
>> +            atomic_dec(&altp2m_get_altp2m(v)->active_vcpus);
>> +            vcpu_altp2m(v).p2midx = idx;
>> +            atomic_inc(&altp2m_get_altp2m(v)->active_vcpus);
>> +        }
>> +        rc = 1;
>> +    }
>> +
>> +    altp2m_unlock(d);
>> +
>> +    return rc;
>> +}
>> +
>
> You implement 2 distinct features in this patch which make really
> difficult to read and they are not all described in the commit message:
>
>  * Implementation of altp2m_switch_vcpu_altp2m_by_id and p2m_alp2m_check
>  * Implementation of lazy copy when receiving a data abort
>
> So please split it.
>

Ok.

>>  int altp2m_switch_domain_altp2m_by_id(struct domain *d, unsigned int
>> idx)
>>  {
>>      struct vcpu *v;
>> @@ -133,6 +159,66 @@ out:
>>      return rc;
>>  }
>>
>> +bool_t altp2m_lazy_copy(struct vcpu *v,
>> +                        paddr_t gpa,
>> +                        unsigned long gva,
>
> this should be vaddr_t
>

True. Thank you.

>> +                        struct npfec npfec,
>> +                        struct p2m_domain **ap2m)
>
> Why do you need the parameter ap2m? None of the callers make use of it
> except setting it.
>

True. Another leftover from the x86 implementation. I will change that.

>> +{
>> +    struct domain *d = v->domain;
>> +    struct p2m_domain *hp2m = p2m_get_hostp2m(v->domain);
>
> p2m_get_hostp2m(d);
>

Thanks.

>> +    p2m_type_t p2mt;
>> +    xenmem_access_t xma;
>> +    gfn_t gfn = _gfn(paddr_to_pfn(gpa));
>> +    mfn_t mfn;
>> +    unsigned int level;
>> +    int rc = 0;
>
> Please use true/false rather than 0/1. Also this should be bool_t.
>

Ok.

>> +
>> +    static const p2m_access_t memaccess[] = {
>> +#define ACCESS(ac) [XENMEM_access_##ac] = p2m_access_##ac
>> +        ACCESS(n),
>> +        ACCESS(r),
>> +        ACCESS(w),
>> +        ACCESS(rw),
>> +        ACCESS(x),
>> +        ACCESS(rx),
>> +        ACCESS(wx),
>> +        ACCESS(rwx),
>> +        ACCESS(rx2rw),
>> +        ACCESS(n2rwx),
>> +#undef ACCESS
>> +    };
>> +
>> +    *ap2m = altp2m_get_altp2m(v);
>> +    if ( *ap2m == NULL)
>> +        return 0;
>> +
>> +    /* Check if entry is part of the altp2m view */
>> +    mfn = p2m_lookup_attr(*ap2m, gfn, NULL, NULL, NULL, NULL);
>> +    if ( !mfn_eq(mfn, INVALID_MFN) )
>> +        goto out;
>> +
>> +    /* Check if entry is part of the host p2m view */
>> +    mfn = p2m_lookup_attr(hp2m, gfn, &p2mt, &level, NULL, &xma);
>> +    if ( mfn_eq(mfn, INVALID_MFN) )
>> +        goto out;
>
> This is quite racy. The page could be removed from the host p2m by the
> time you have added it in the altp2m because you have no lock.
>

In the last patch series, I proposed to lock the hp2m->lock
(write_lock), which is not a good solution at this point, as it would
potentially create lots of contention on hp2m. Also, we would need to
export __p2m_lookup or remove the lock from p2m_lookup_attr, which is
not a good solution either.

The addition of the altp2m_lock could help in this case: If a page would
be removed from the hostp2m before we added it in the altp2m view, the
propagate function would need to wait until lazy_copy would finish and
eventually remove it from the altp2m view. But on the other hand, it
would highly decrease the performance on a multi core system.

If I understand that correctly, a better solution would be to use a
p2m_read_lock(hp2m) as we would still allow reading but not writing (in
case the hp2m gets entries removed in apply_p2m_changes). That is, I
would set it right before p2m_lookup_attr(hp2m, ...) and release it
right after modify_altp2m_entry. This solution would not present a
bottleneck on the lazy_copy mechanism, and simultaneously prevent hp2m
from changing. What do you think?

>> +
>> +    rc = modify_altp2m_entry(d, *ap2m, gpa,
>> pfn_to_paddr(mfn_x(mfn)), level,
>> +                             p2mt, memaccess[xma]);
>
> Please avoid to mix bool and int even though today we have implicitly
> conversion.
>

Ok.

>> +    if ( rc )
>> +    {
>> +        gdprintk(XENLOG_ERR, "failed to set entry for %lx -> %lx p2m
>> %lx\n",
>> +                (unsigned long)gpa, (unsigned
>> long)(paddr_to_pfn(mfn_x(mfn))),
>
> By using (unsigned long) you will truncate the address on ARM32
> because we are able to support up to 40 bits address.
>
> Also why do you print the full address? The guest physical address may
> not be page-aligned so it will confuse the user.
>

x86 leftover. I will change that.

>> +                (unsigned long)*ap2m);
>
> It does not seem really helpful to print the pointer here. You will
> not be able to exploit it when reading the log. Also this should be
> printed with "%p" and not using a cast.
>

Another x86 leftover. I will change that.

>> +        domain_crash(hp2m->domain);
>> +    }
>> +
>> +    rc = 1;
>> +
>> +out:
>> +    return rc;
>> +}
>> +
>>  static inline void altp2m_reset(struct p2m_domain *p2m)
>>  {
>>      read_lock(&p2m->lock);
>> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
>> index 31810e6..bee8be7 100644
>> --- a/xen/arch/arm/p2m.c
>> +++ b/xen/arch/arm/p2m.c
>> @@ -1812,6 +1812,12 @@ void __init setup_virt_paging(void)
>>      smp_call_function(setup_virt_paging_one, (void *)val, 1);
>>  }
>>
>> +void p2m_altp2m_check(struct vcpu *v, uint16_t idx)
>> +{
>> +    if ( altp2m_active(v->domain) )
>> +        altp2m_switch_vcpu_altp2m_by_id(v, idx);
>> +}
>> +
>
> I am not sure why this function lives here.
>

This function is used in ./xen/common/vm_event.c. Since the function is
used from a common place and already named p2m_* I did not want to pull
it out of p2m.c (and have a p2m_* file/function prefix in altp2m.c).
However, I could move the function to altp2m.c rename it globally (also
in the x86 implementation). Or, I could simply move it to altp2m despite
the name. What would you prefer?

>>  bool_t p2m_mem_access_check(paddr_t gpa, vaddr_t gla, const struct
>> npfec npfec)
>>  {
>>      int rc;
>> diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
>> index 12be7c9..628abd7 100644
>> --- a/xen/arch/arm/traps.c
>> +++ b/xen/arch/arm/traps.c
>> @@ -48,6 +48,8 @@
>>  #include <asm/vgic.h>
>>  #include <asm/cpuerrata.h>
>>
>> +#include <asm/altp2m.h>
>> +
>>  /* The base of the stack must always be double-word aligned, which
>> means
>>   * that both the kernel half of struct cpu_user_regs (which is
>> pushed in
>>   * entry.S) and struct cpu_info (which lives at the bottom of a Xen
>> @@ -2403,35 +2405,64 @@ static void do_trap_instr_abort_guest(struct
>> cpu_user_regs *regs,
>>      int rc;
>>      register_t gva = READ_SYSREG(FAR_EL2);
>>      uint8_t fsc = hsr.iabt.ifsc & ~FSC_LL_MASK;
>> +    struct vcpu *v = current;
>> +    struct domain *d = v->domain;
>> +    struct p2m_domain *p2m = NULL;
>> +    paddr_t gpa;
>> +
>> +    if ( hpfar_is_valid(hsr.iabt.s1ptw, fsc) )
>> +        gpa = get_faulting_ipa(gva);
>> +    else
>> +    {
>> +        /*
>> +         * Flush the TLB to make sure the DTLB is clear before
>> +         * doing GVA->IPA translation. If we got here because of
>> +         * an entry only present in the ITLB, this translation may
>> +         * still be inaccurate.
>> +         */
>> +        flush_tlb_local();
>> +
>> +        rc = gva_to_ipa(gva, &gpa, GV2M_READ);
>> +        if ( rc == -EFAULT )
>> +            return; /* Try again */
>> +    }
>
> This code movement should really be a separate patch.
>

Ok.

>>
>>      switch ( fsc )
>>      {
>> +    case FSC_FLT_TRANS:
>> +    {
>> +        if ( altp2m_active(d) )
>> +        {
>> +            const struct npfec npfec = {
>> +                .insn_fetch = 1,
>> +                .gla_valid = 1,
>> +                .kind = hsr.iabt.s1ptw ? npfec_kind_in_gpt :
>> npfec_kind_with_gla
>> +            };
>> +
>> +            /*
>> +             * Copy the entire page of the failing instruction into the
>
> I think "page" is misleading here. altp2m_lazy_copy is able to copy a
> superpage mapping also.
>

Ok.

>> +             * currently active altp2m view.
>> +             */
>> +            if ( altp2m_lazy_copy(v, gpa, gva, npfec, &p2m) )
>> +                return;
>> +
>> +            rc = p2m_mem_access_check(gpa, gva, npfec);
>
> Why do you call p2m_mem_access_check here? If you are here it is for a
> translation fault which you handle via altp2m_lazy_copy.
>

Right. I have experienced that the test systems jump into the case
FSC_FLT_TRANS, right after I have lazily-copied the page into the
associated altp2m view. Not sure what the issue might be here.

>> +
>> +            /* Trap was triggered by mem_access, work here is done */
>> +            if ( !rc )
>> +                return;
>> +        }
>> +
>> +        break;
>> +    }
>>      case FSC_FLT_PERM:
>>      {
>> -        paddr_t gpa;
>>          const struct npfec npfec = {
>>              .insn_fetch = 1,
>>              .gla_valid = 1,
>>              .kind = hsr.iabt.s1ptw ? npfec_kind_in_gpt :
>> npfec_kind_with_gla
>>          };
>>
>> -        if ( hpfar_is_valid(hsr.iabt.s1ptw, fsc) )
>> -            gpa = get_faulting_ipa(gva);
>> -        else
>> -        {
>> -            /*
>> -             * Flush the TLB to make sure the DTLB is clear before
>> -             * doing GVA->IPA translation. If we got here because of
>> -             * an entry only present in the ITLB, this translation may
>> -             * still be inaccurate.
>> -             */
>> -            flush_tlb_local();
>> -
>> -            rc = gva_to_ipa(gva, &gpa, GV2M_READ);
>> -            if ( rc == -EFAULT )
>> -                return; /* Try again */
>> -        }
>> -
>>          rc = p2m_mem_access_check(gpa, gva, npfec);
>>
>>          /* Trap was triggered by mem_access, work here is done */
>> @@ -2451,6 +2482,8 @@ static void do_trap_data_abort_guest(struct
>> cpu_user_regs *regs,
>>      int rc;
>>      mmio_info_t info;
>>      uint8_t fsc = hsr.dabt.dfsc & ~FSC_LL_MASK;
>> +    struct vcpu *v = current;
>> +    struct p2m_domain *p2m = NULL;
>>
>>      info.dabt = dabt;
>>  #ifdef CONFIG_ARM_32
>> @@ -2459,7 +2492,7 @@ static void do_trap_data_abort_guest(struct
>> cpu_user_regs *regs,
>>      info.gva = READ_SYSREG64(FAR_EL2);
>>  #endif
>>
>> -    if ( hpfar_is_valid(hsr.iabt.s1ptw, fsc) )
>> +    if ( hpfar_is_valid(hsr.dabt.s1ptw, fsc) )
>>          info.gpa = get_faulting_ipa(info.gva);
>>      else
>>      {
>> @@ -2470,23 +2503,31 @@ static void do_trap_data_abort_guest(struct
>> cpu_user_regs *regs,
>>
>>      switch ( fsc )
>>      {
>> -    case FSC_FLT_PERM:
>> +    case FSC_FLT_TRANS:
>>      {
>> -        const struct npfec npfec = {
>> -            .read_access = !dabt.write,
>> -            .write_access = dabt.write,
>> -            .gla_valid = 1,
>> -            .kind = dabt.s1ptw ? npfec_kind_in_gpt :
>> npfec_kind_with_gla
>> -        };
>> +        if ( altp2m_active(current->domain) )
>
> I would much prefer to this checking altp2m only if the MMIO was not
> emulated (so moving the code afterwards). This will avoid to add
> overhead when access the virtual interrupt controller.

I am not sure whether I understood your request. Could you be more
specific please? What excatly shall be moved where?

>
> Also, how would that fit with break-before-make sequence introduced in
> [1]?

I will try to answer this question after I had a look at [1].

>
>> +        {
>> +            const struct npfec npfec = {
>> +                .read_access = !dabt.write,
>> +                .write_access = dabt.write,
>> +                .gla_valid = 1,
>> +                .kind = dabt.s1ptw ? npfec_kind_in_gpt :
>> npfec_kind_with_gla
>> +            };
>>
>> -        rc = p2m_mem_access_check(info.gpa, info.gva, npfec);
>> +            /*
>> +             * Copy the entire page of the failing data access into the
>> +             * currently active altp2m view.
>> +             */
>> +            if ( altp2m_lazy_copy(v, info.gpa, info.gva, npfec, &p2m) )
>> +                return;
>> +
>> +            rc = p2m_mem_access_check(info.gpa, info.gva, npfec);
>
> Similar question here.
>

Same issue as above.

>> +
>> +            /* Trap was triggered by mem_access, work here is done */
>> +            if ( !rc )
>> +                return;
>> +        }
>>
>> -        /* Trap was triggered by mem_access, work here is done */
>> -        if ( !rc )
>> -            return;
>> -        break;
>> -    }
>> -    case FSC_FLT_TRANS:
>>          if ( dabt.s1ptw )
>>              goto bad_data_abort;
>>
>> @@ -2515,6 +2556,23 @@ static void do_trap_data_abort_guest(struct
>> cpu_user_regs *regs,
>>              return;
>>          }
>>          break;
>> +    }
>> +    case FSC_FLT_PERM:
>> +    {
>> +        const struct npfec npfec = {
>> +            .read_access = !dabt.write,
>> +            .write_access = dabt.write,
>> +            .gla_valid = 1,
>> +            .kind = dabt.s1ptw ? npfec_kind_in_gpt :
>> npfec_kind_with_gla
>> +        };
>> +
>> +        rc = p2m_mem_access_check(info.gpa, info.gva, npfec);
>> +
>> +        /* Trap was triggered by mem_access, work here is done */
>> +        if ( !rc )
>> +            return;
>> +        break;
>> +    }
>
> Why did you move the case handling FSC_FLT_PERM?
>

I really important reason: I moved it simply because int(FSC_FLT_TRANS)
< int(FSC_FLT_PERM). I can move it back if you like.

>>      default:
>>          gprintk(XENLOG_WARNING, "Unsupported DFSC: HSR=%#x DFSC=%#x\n",
>>                  hsr.bits, dabt.dfsc);
>> diff --git a/xen/include/asm-arm/altp2m.h b/xen/include/asm-arm/altp2m.h
>> index 9aeb7d6..8bfbc6a 100644
>> --- a/xen/include/asm-arm/altp2m.h
>> +++ b/xen/include/asm-arm/altp2m.h
>> @@ -38,9 +38,7 @@ static inline bool_t altp2m_active(const struct
>> domain *d)
>>  /* Alternate p2m VCPU */
>>  static inline uint16_t altp2m_vcpu_idx(const struct vcpu *v)
>>  {
>> -    /* Not implemented on ARM, should not be reached. */
>> -    BUG();
>> -    return 0;
>> +    return vcpu_altp2m(v).p2midx;
>>  }
>>
>>  int altp2m_init(struct domain *d);
>> @@ -52,6 +50,10 @@ void altp2m_vcpu_destroy(struct vcpu *v);
>>  /* Get current alternate p2m table. */
>>  struct p2m_domain *altp2m_get_altp2m(struct vcpu *v);
>>
>> +/* Switch alternate p2m for a single vcpu. */
>> +bool_t altp2m_switch_vcpu_altp2m_by_id(struct vcpu *v,
>> +                                       unsigned int idx);
>> +
>>  /* Switch alternate p2m for entire domain */
>>  int altp2m_switch_domain_altp2m_by_id(struct domain *d,
>>                                        unsigned int idx);
>> @@ -81,6 +83,13 @@ int altp2m_set_mem_access(struct domain *d,
>>                            p2m_access_t a,
>>                            gfn_t gfn);
>>
>> +/* Alternate p2m paging mechanism. */
>> +bool_t altp2m_lazy_copy(struct vcpu *v,
>> +                        paddr_t gpa,
>> +                        unsigned long gla,
>> +                        struct npfec npfec,
>> +                        struct p2m_domain **ap2m);
>> +
>>  /* Propagates changes made to hostp2m to affected altp2m views. */
>>  void altp2m_propagate_change(struct domain *d,
>>                               gfn_t sgfn,
>> diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
>> index 59186c9..16e33ca 100644
>> --- a/xen/include/asm-arm/p2m.h
>> +++ b/xen/include/asm-arm/p2m.h
>> @@ -145,11 +145,7 @@ void p2m_mem_access_emulate_check(struct vcpu *v,
>>      /* Not supported on ARM. */
>>  }
>>
>> -static inline
>> -void p2m_altp2m_check(struct vcpu *v, uint16_t idx)
>> -{
>> -    /* Not supported on ARM. */
>> -}
>> +void p2m_altp2m_check(struct vcpu *v, uint16_t idx);
>>
>>  /* Initialise vmid allocator */
>>  void p2m_vmid_allocator_init(void);
>>
>
> [1]
> https://lists.xenproject.org/archives/html/xen-devel/2016-07/msg02957.html
>

Best regards,
~Sergej


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 159+ messages in thread

* Re: [PATCH v2 20/25] arm/altp2m: Add altp2m paging mechanism.
  2016-08-04 16:59   ` Julien Grall
@ 2016-08-06 12:57     ` Sergej Proskurin
  2016-08-06 14:21       ` Julien Grall
  0 siblings, 1 reply; 159+ messages in thread
From: Sergej Proskurin @ 2016-08-06 12:57 UTC (permalink / raw)
  To: Julien Grall, xen-devel; +Cc: Stefano Stabellini

Hi Julien,


On 08/04/2016 06:59 PM, Julien Grall wrote:
> Hi Sergej,
>
> On 01/08/16 18:10, Sergej Proskurin wrote:
>> diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
>> index 12be7c9..628abd7 100644
>> --- a/xen/arch/arm/traps.c
>> +++ b/xen/arch/arm/traps.c
>
> [...]
>
>> @@ -2403,35 +2405,64 @@ static void do_trap_instr_abort_guest(struct
>> cpu_user_regs *regs,
>
> [...]
>
>>      switch ( fsc )
>>      {
>> +    case FSC_FLT_TRANS:
>> +    {
>> +        if ( altp2m_active(d) )
>> +        {
>> +            const struct npfec npfec = {
>> +                .insn_fetch = 1,
>> +                .gla_valid = 1,
>> +                .kind = hsr.iabt.s1ptw ? npfec_kind_in_gpt :
>> npfec_kind_with_gla
>> +            };
>> +
>> +            /*
>> +             * Copy the entire page of the failing instruction into the
>> +             * currently active altp2m view.
>> +             */
>> +            if ( altp2m_lazy_copy(v, gpa, gva, npfec, &p2m) )
>> +                return;
>
> I forgot to mention that I think there is a race condition here. If
> multiple vCPU (let say A and B) use the same altp2m, they may fault here.
>
> If vCPU A already fixed the fault, this function will return false and
> continue. So this will lead to inject an instruction abort to the guest.
>

I believe this is exactly what I have experienced in the last days. I
have applied Tamas' patch [0] but it did not entirely solve the issue. I
will provide more information about the exact behavior a bit later.

>> +
>> +            rc = p2m_mem_access_check(gpa, gva, npfec);
>> +
>> +            /* Trap was triggered by mem_access, work here is done */
>> +            if ( !rc )
>> +                return;
>> +        }
>> +
>> +        break;
>> +    }
>
> [...]
>
>> @@ -2470,23 +2503,31 @@ static void do_trap_data_abort_guest(struct
>> cpu_user_regs *regs,
>>
>>      switch ( fsc )
>>      {
>> -    case FSC_FLT_PERM:
>> +    case FSC_FLT_TRANS:
>>      {
>> -        const struct npfec npfec = {
>> -            .read_access = !dabt.write,
>> -            .write_access = dabt.write,
>> -            .gla_valid = 1,
>> -            .kind = dabt.s1ptw ? npfec_kind_in_gpt :
>> npfec_kind_with_gla
>> -        };
>> +        if ( altp2m_active(current->domain) )
>> +        {
>> +            const struct npfec npfec = {
>> +                .read_access = !dabt.write,
>> +                .write_access = dabt.write,
>> +                .gla_valid = 1,
>> +                .kind = dabt.s1ptw ? npfec_kind_in_gpt :
>> npfec_kind_with_gla
>> +            };
>>
>> -        rc = p2m_mem_access_check(info.gpa, info.gva, npfec);
>> +            /*
>> +             * Copy the entire page of the failing data access into the
>> +             * currently active altp2m view.
>> +             */
>> +            if ( altp2m_lazy_copy(v, info.gpa, info.gva, npfec, &p2m) )
>> +                return;
>
> Ditto.
>

Ok.

>> +
>> +            rc = p2m_mem_access_check(info.gpa, info.gva, npfec);
>> +
>> +            /* Trap was triggered by mem_access, work here is done */
>> +            if ( !rc )
>> +                return;
>> +        }

Best regards,
~Sergej

[0] https://github.com/tklengyel/xen branch arm_mem_access_reinject


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 159+ messages in thread

* Re: [PATCH v2 03/25] arm/altp2m: Add struct vttbr.
  2016-08-06  8:54           ` Sergej Proskurin
@ 2016-08-06 13:20             ` Julien Grall
  2016-08-06 13:48               ` Sergej Proskurin
  0 siblings, 1 reply; 159+ messages in thread
From: Julien Grall @ 2016-08-06 13:20 UTC (permalink / raw)
  To: Sergej Proskurin, xen-devel; +Cc: Stefano Stabellini



On 06/08/2016 09:54, Sergej Proskurin wrote:
> Hi Julien,

Hello Sergej,

> On 08/04/2016 06:15 PM, Julien Grall wrote:
>>
>>
>> On 04/08/16 17:11, Sergej Proskurin wrote:
>>>>>> diff --git a/xen/include/asm-arm/processor.h
>>>>>> b/xen/include/asm-arm/processor.h
>>>>>> index 15bf890..f8ca18c 100644
>>>>>> --- a/xen/include/asm-arm/processor.h
>>>>>> +++ b/xen/include/asm-arm/processor.h
>>>>>> @@ -529,6 +529,22 @@ union hsr {
>>>>>>
>>>>>>
>>>>>>  };
>>>>>> +
>>>>>> +/* VTTBR: Virtualization Translation Table Base Register */
>>>>>> +struct vttbr {
>>>>>> +    union {
>>>>>> +        struct {
>>>>>> +            u64 baddr :40, /* variable res0: from 0-(x-1) bit */
>>>>>
>>>>> As mentioned on the previous series, this field is 48 bits for ARMv8
>>>>> (see ARM D7.2.102 in DDI 0487A.j).
>>>>>
>>>
>>> I must have missed it during refactoring. At this point, I will
>>> distinguish between __arm__ and __aarch64__, thank you.
>>
>> After reading this series I see no point having this union. So I would
>> much prefer to see this patch dropped.
>>
>
> I can do that. However, I do not understand why we would prefer using
> error prone bit operations for VTTBR initialization instead of having a
> unified and simple way of initializing and using the VTTBR including the
> VMID and the root table address.

The VTTBR only needs to be initialized in one place and we don't care 
accessing the fields. So I don't see the benefit to introduce a 
structure for that.

Regards,

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 159+ messages in thread

* Re: [PATCH v2 04/25] arm/altp2m: Move hostp2m init/teardown to individual functions.
  2016-08-06  8:43         ` Sergej Proskurin
@ 2016-08-06 13:26           ` Julien Grall
  2016-08-06 13:50             ` Sergej Proskurin
  0 siblings, 1 reply; 159+ messages in thread
From: Julien Grall @ 2016-08-06 13:26 UTC (permalink / raw)
  To: Sergej Proskurin, xen-devel; +Cc: Stefano Stabellini

On 06/08/2016 09:43, Sergej Proskurin wrote:
> Hi Julien,

Hello Sergej,

> On 08/05/2016 11:16 AM, Julien Grall wrote:
>> On 05/08/16 08:26, Sergej Proskurin wrote:
>>> On 08/03/2016 07:40 PM, Julien Grall wrote:

[...]

>>>>> +    p2m->vttbr.vttbr = INVALID_VTTBR;
>>
>> [...]
>>
>>>>>
>>>>>      if ( p2m->root )
>>>>>          free_domheap_pages(p2m->root, P2M_ROOT_ORDER);
>>>>>
>>>>>      p2m->root = NULL;
>>>>>
>>>>> -    p2m_free_vmid(d);
>>>>> -
>>
>> here. So please don't move code unless there is a good reason. This
>> series is already quite difficult to read.
>>
>
> This move did not have any particular reason except that for me, it
> appeared to be more logical and cleaner to read this way. Apart from
> that: This patch creates two functions out of one (in the case of the
> former p2m_teardown). Because of this, I did not even think of
> preserving a certain function call order as the former function does not
> exit in a way it used to anymore.

In this specific case, the call p2m_free_vmid is at the end of function 
to match the reverse order of p2m_init.

Regardless that I cannot see why moving p2m_free_vmid ealier will be 
more logical.

Anyway, you don't move code within a function unless there is a reason. 
And this should really be outside of a patch doing bigger.

A series like altp2m takes me about a full working day to review. So 
please don't make it more difficult to review.

Regards,

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 159+ messages in thread

* Re: [PATCH v2 11/25] arm/altp2m: Add HVMOP_altp2m_destroy_p2m.
  2016-08-06  9:54     ` Sergej Proskurin
@ 2016-08-06 13:36       ` Julien Grall
  2016-08-06 13:51         ` Sergej Proskurin
  0 siblings, 1 reply; 159+ messages in thread
From: Julien Grall @ 2016-08-06 13:36 UTC (permalink / raw)
  To: Sergej Proskurin, xen-devel; +Cc: Stefano Stabellini



On 06/08/2016 10:54, Sergej Proskurin wrote:
> Hi Julien,

Hello Sergej,

> On 08/04/2016 01:46 PM, Julien Grall wrote:
>> Hello Sergej,
>>
>> On 01/08/16 18:10, Sergej Proskurin wrote:
>>> Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
>>> ---
>>> Cc: Stefano Stabellini <sstabellini@kernel.org>
>>> Cc: Julien Grall <julien.grall@arm.com>
>>> ---
>>> v2: Substituted the call to tlb_flush for p2m_flush_table.
>>>     Added comments.
>>>     Cosmetic fixes.
>>> ---
>>>  xen/arch/arm/altp2m.c        | 50
>>> ++++++++++++++++++++++++++++++++++++++++++++
>>>  xen/arch/arm/hvm.c           |  2 +-
>>>  xen/include/asm-arm/altp2m.h |  4 ++++
>>>  3 files changed, 55 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/xen/arch/arm/altp2m.c b/xen/arch/arm/altp2m.c
>>> index c22d2e4..80ed553 100644
>>> --- a/xen/arch/arm/altp2m.c
>>> +++ b/xen/arch/arm/altp2m.c

[...]

>>> +
>>> +            p2m_flush_table(p2m);
>>> +
>>> +            /*
>>> +             * Reset VTTBR.
>>> +             *
>>> +             * Note that VMID is not freed so that it can be reused
>>> later.
>>> +             */
>>> +            p2m->vttbr.vttbr = INVALID_VTTBR;
>>> +            d->arch.altp2m_vttbr[idx] = INVALID_VTTBR;
>>> +
>>> +            read_unlock(&p2m->lock);
>>
>> Why did you decide to reset the p2m rather than free it? The code
>> would be simpler with the latter.
>>
>
> * First, to be as similar to the x86 implementation as possible.
> * Second, simply to reuse already allocated p2m's without additional
> overhead.

I don't really buy those arguments. The x86 implementation may not fit 
the ARM model. Initializing a p2m on ARM is very quick: allocating the 
p2m and the root pages (up to 2 page).

So the overhead is really minimal. Therefore, I don't see any reason to 
introduce complex bookkeeping when NULL can be used for the same purpose.

Regards,

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 159+ messages in thread

* Re: [PATCH v2 14/25] arm/altp2m: Make get_page_from_gva ready for altp2m.
  2016-08-06 10:38     ` Sergej Proskurin
@ 2016-08-06 13:45       ` Julien Grall
  2016-08-06 16:58         ` Sergej Proskurin
  0 siblings, 1 reply; 159+ messages in thread
From: Julien Grall @ 2016-08-06 13:45 UTC (permalink / raw)
  To: Sergej Proskurin, xen-devel; +Cc: Stefano Stabellini



On 06/08/2016 11:38, Sergej Proskurin wrote:
> Hi Julien,

Hello Serge,

> On 08/04/2016 01:59 PM, Julien Grall wrote:
>> Hello Sergej,
>>
>> On 01/08/16 18:10, Sergej Proskurin wrote:
>>> The function get_page_from_gva uses ARM's hardware support to translate
>>> gva's to machine addresses. This function is used, among others, for
>>> memory regulation purposes, e.g, within the context of memory
>>> ballooning.
>>> To ensure correct behavior while altp2m is in use, we use the host's p2m
>>> table for the associated gva to ma translation. This is required at this
>>> point, as altp2m lazily copies pages from the host's p2m and even might
>>> be flushed because of changes to the host's p2m (as it is done within
>>> the context of memory ballooning).
>>
>> I was expecting to see some change in
>> p2m_mem_access_check_and_get_page. Is there any reason to not fix it?
>>
>>
>
> I did not yet encounter any issues with
> p2m_mem_access_check_and_get_page. According to ARM ARM, ATS1C** (see
> gva_to_ipa_par) translates VA to IPA in non-secure privilege levels (as
> it is the the case here). Thus, the 2nd level translation represented by
> the (alt)p2m is not really considered at this point and hence make an
> extension obsolete.
>
> Or did you have anything else in mind?

The stage-1 page tables are living in the guest memory. So every time 
you access an entry in the page table, you have to translate the IPA 
(guest physical address) into a PA.

However, the underlying memory of those page table may have restriction 
permission or does not exist in the altp2m at all. So the translation 
will fail.

Regards,

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 159+ messages in thread

* Re: [PATCH v2 21/25] arm/altp2m: Add HVMOP_altp2m_change_gfn.
  2016-08-04 14:04   ` Julien Grall
@ 2016-08-06 13:45     ` Sergej Proskurin
  2016-08-06 14:34       ` Julien Grall
  0 siblings, 1 reply; 159+ messages in thread
From: Sergej Proskurin @ 2016-08-06 13:45 UTC (permalink / raw)
  To: Julien Grall, xen-devel; +Cc: Stefano Stabellini

Hi Julien,


On 08/04/2016 04:04 PM, Julien Grall wrote:
> Hello Sergej,
>
> On 01/08/16 18:10, Sergej Proskurin wrote:
>> This commit adds the functionality to change mfn mappings for specified
>> gfn's in altp2m views. This mechanism can be used within the context of
>> VMI, e.g., to establish stealthy debugging.
>>
>> Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
>> ---
>> Cc: Stefano Stabellini <sstabellini@kernel.org>
>> Cc: Julien Grall <julien.grall@arm.com>
>> ---
>>  xen/arch/arm/altp2m.c        | 116
>> +++++++++++++++++++++++++++++++++++++++++++
>>  xen/arch/arm/hvm.c           |   7 ++-
>>  xen/arch/arm/p2m.c           |  14 ++++++
>>  xen/include/asm-arm/altp2m.h |   6 +++
>>  xen/include/asm-arm/p2m.h    |   4 ++
>>  5 files changed, 146 insertions(+), 1 deletion(-)
>>
>> diff --git a/xen/arch/arm/altp2m.c b/xen/arch/arm/altp2m.c
>> index 78fc1d5..db86c14 100644
>> --- a/xen/arch/arm/altp2m.c
>> +++ b/xen/arch/arm/altp2m.c
>> @@ -294,6 +294,122 @@ out:
>>      altp2m_unlock(d);
>>  }
>>
>> +int altp2m_change_gfn(struct domain *d,
>> +                      unsigned int idx,
>> +                      gfn_t old_gfn,
>> +                      gfn_t new_gfn)
>> +{
>> +    struct p2m_domain *hp2m, *ap2m;
>> +    paddr_t old_gpa = pfn_to_paddr(gfn_x(old_gfn));
>> +    mfn_t mfn;
>> +    xenmem_access_t xma;
>> +    p2m_type_t p2mt;
>> +    unsigned int level;
>> +    int rc = -EINVAL;
>> +
>> +    static const p2m_access_t memaccess[] = {
>> +#define ACCESS(ac) [XENMEM_access_##ac] = p2m_access_##ac
>> +        ACCESS(n),
>> +        ACCESS(r),
>> +        ACCESS(w),
>> +        ACCESS(rw),
>> +        ACCESS(x),
>> +        ACCESS(rx),
>> +        ACCESS(wx),
>> +        ACCESS(rwx),
>> +        ACCESS(rx2rw),
>> +        ACCESS(n2rwx),
>> +#undef ACCESS
>> +    };
>> +
>> +    if ( idx >= MAX_ALTP2M || d->arch.altp2m_vttbr[idx] ==
>> INVALID_VTTBR )
>
> The second check is not safe. Another operation could destroy the
> altp2m at the same time, but because of memory ordering this thread
> may still see altp2m_vttbr as valid.
>

Ok, I will move the altp2m_lock further up, right before the check. Thank you.

>> +        return rc;
>> +
>> +    hp2m = p2m_get_hostp2m(d);
>> +    ap2m = d->arch.altp2m_p2m[idx];
>> +
>> +    altp2m_lock(d);
>> +
>> +    /*
>> +     * Flip mem_access_enabled to true when a permission is set, as
>> to prevent
>> +     * allocating or inserting super-pages.
>> +     */
>> +    ap2m->mem_access_enabled = true;
>
> Can you give more details about why you need this?
>

Similar to altp2m_set_mem_access: if we remap a page that is part of a
superpage in the hostp2m, we first map the superpage in the form of 512
pages into the ap2m and then change only one of them. So we set
mem_access_enabled to true to shatter the superpage on the ap2m side.
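As a rough sketch of that shattering step (a toy model with a flat array, not the actual p2m code): the superpage's 512 constituent 4K mappings are first copied into the altp2m's level-3 table, after which a single entry can diverge without affecting its neighbours.

```c
#include <assert.h>
#include <stdint.h>

#define L3_ENTRIES 512  /* 4K entries covered by one 2MB superpage */

/* Populate a level-3 "table" from a contiguous 2MB superpage mapping. */
static void shatter_superpage(uint64_t base_mfn, uint64_t table[L3_ENTRIES])
{
    for (int i = 0; i < L3_ENTRIES; i++)
        table[i] = base_mfn + i;
}

/* Once shattered, a single 4K entry can be remapped in isolation. */
static uint64_t remap_one(uint64_t table[L3_ENTRIES], int idx,
                          uint64_t new_mfn)
{
    uint64_t old = table[idx];
    table[idx] = new_mfn;
    return old;
}
```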

>> +
>> +    mfn = p2m_lookup_attr(ap2m, old_gfn, &p2mt, &level, NULL, NULL);
>> +
>> +    /* Check whether the page needs to be reset. */
>> +    if ( gfn_eq(new_gfn, INVALID_GFN) )
>> +    {
>> +        /* If mfn is mapped by old_gpa, remove old_gpa from the
>> altp2m table. */
>> +        if ( !mfn_eq(mfn, INVALID_MFN) )
>> +        {
>> +            rc = remove_altp2m_entry(d, ap2m, old_gpa,
>> pfn_to_paddr(mfn_x(mfn)), level);
>
> remove_altp2m_entry should take a gfn and mfn in parameter and not an
> address. The latter is a call for misusage of the API.
>

Ok. This will also remove the need for level_sizes/level_masks in the
associated function.
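For illustration, here is what the typed interface could look like; the gfn_t/mfn_t wrappers below are simplified stand-ins for Xen's typesafe frame numbers, and remove_altp2m_entry_sketch is a hypothetical signature, not the actual patch.

```c
#include <assert.h>
#include <stdint.h>

/* Simplified stand-ins for Xen's typesafe frame-number wrappers. */
typedef struct { uint64_t g; } gfn_t;
typedef struct { uint64_t m; } mfn_t;
#define _gfn(x) ((gfn_t){ (x) })
#define _mfn(x) ((mfn_t){ (x) })
static inline uint64_t gfn_x(gfn_t g) { return g.g; }
static inline uint64_t mfn_x(mfn_t m) { return m.m; }

/* 4K pages covered by an entry at a given lookup level with a 4K
 * granule (3 = 4K page, 2 = 2M block, 1 = 1G block). */
static inline unsigned long covered_pages(unsigned int level)
{
    return 1UL << ((3 - level) * 9);
}

/* Hypothetical reworked signature: frame numbers instead of raw
 * addresses, so callers cannot confuse addresses with frames and the
 * level_masks arithmetic disappears from the implementation. */
static long remove_altp2m_entry_sketch(gfn_t gfn, mfn_t mfn,
                                       unsigned int level)
{
    long nr = (long)covered_pages(level);
    (void)gfn; (void)mfn;
    /* the real code would call p2m_remove_mapping(d, ap2m, gfn, nr, mfn) */
    return nr;
}
```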

>> +            if ( rc )
>> +            {
>> +                rc = -EINVAL;
>> +                goto out;
>> +            }
>> +        }
>> +
>> +        rc = 0;
>> +        goto out;
>> +    }
>> +
>> +    /* Check host p2m if no valid entry in altp2m present. */
>> +    if ( mfn_eq(mfn, INVALID_MFN) )
>> +    {
>> +        mfn = p2m_lookup_attr(hp2m, old_gfn, &p2mt, &level, NULL,
>> &xma);
>> +        if ( mfn_eq(mfn, INVALID_MFN) || (p2mt != p2m_ram_rw) )
>
> Please add a comment to explain why the second check.
>

Ok, I will. It has the same reason as in patch #19: it is not sufficient
to simply check for an invalid MFN, as the type might be invalid as well.
Also, the x86 implementation did not allow remapping a gfn to a shared page.

>> +        {
>> +            rc = -EINVAL;
>> +            goto out;
>> +        }
>> +
>> +        /* If this is a superpage, copy that first. */
>> +        if ( level != 3 )
>> +        {
>> +            rc = modify_altp2m_entry(d, ap2m, old_gpa,
>> pfn_to_paddr(mfn_x(mfn)),
>> +                                     level, p2mt, memaccess[xma]);
>> +            if ( rc )
>> +            {
>> +                rc = -EINVAL;
>> +                goto out;
>> +            }
>> +        }
>> +    }
>> +
>> +    mfn = p2m_lookup_attr(ap2m, new_gfn, &p2mt, &level, NULL, &xma);
>> +
>> +    /* If new_gfn is not part of altp2m, get the mapping information
>> from hp2m */
>> +    if ( mfn_eq(mfn, INVALID_MFN) )
>> +        mfn = p2m_lookup_attr(hp2m, new_gfn, &p2mt, &level, NULL,
>> &xma);
>> +
>> +    if ( mfn_eq(mfn, INVALID_MFN) || (p2mt != p2m_ram_rw) )
>
> Please add a comment to explain why the second check.
>

Same reason as above.

>> +    {
>> +        rc = -EINVAL;
>> +        goto out;
>> +    }
>> +
>> +    /* Set mem access attributes - currently supporting only one
>> (4K) page. */
>> +    level = 3;
>> +    rc = modify_altp2m_entry(d, ap2m, old_gpa,
>> pfn_to_paddr(mfn_x(mfn)),
>
> modify_altp2m_entry should take a gfn and mfn in parameter and not an
> address. The latter is a call for misusage of the API.
>

Ok.

>> +                             level, p2mt, memaccess[xma]);
>> +    if ( rc )
>> +    {
>> +        rc = -EINVAL;
>> +        goto out;
>> +    }
>> +
>> +    rc = 0;
>> +
>> +out:
>> +    altp2m_unlock(d);
>> +
>> +    return rc;
>> +}
>> +
>> +
>>  static void altp2m_vcpu_reset(struct vcpu *v)
>>  {
>>      struct altp2mvcpu *av = &vcpu_altp2m(v);
>> diff --git a/xen/arch/arm/hvm.c b/xen/arch/arm/hvm.c
>> index 00a244a..38b32de 100644
>> --- a/xen/arch/arm/hvm.c
>> +++ b/xen/arch/arm/hvm.c
>> @@ -142,7 +142,12 @@ static int
>> do_altp2m_op(XEN_GUEST_HANDLE_PARAM(void) arg)
>>          break;
>>
>>      case HVMOP_altp2m_change_gfn:
>> -        rc = -EOPNOTSUPP;
>> +        if ( a.u.change_gfn.pad1 || a.u.change_gfn.pad2 )
>> +            rc = -EINVAL;
>> +        else
>> +            rc = altp2m_change_gfn(d, a.u.change_gfn.view,
>> +                                   _gfn(a.u.change_gfn.old_gfn),
>> +                                   _gfn(a.u.change_gfn.new_gfn));
>>          break;
>>      }
>>
>> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
>> index bee8be7..2f4751b 100644
>> --- a/xen/arch/arm/p2m.c
>> +++ b/xen/arch/arm/p2m.c
>> @@ -1321,6 +1321,20 @@ void guest_physmap_remove_page(struct domain *d,
>>      p2m_remove_mapping(d, p2m_get_hostp2m(d), gfn, (1 <<
>> page_order), mfn);
>>  }
>>
>> +int remove_altp2m_entry(struct domain *d, struct p2m_domain *ap2m,
>> +                        paddr_t gpa, paddr_t maddr, unsigned int level)
>
> The interface should take mfn_t and gfn_t in parameter and not address.
>

Ok.

>> +{
>> +    paddr_t size = level_sizes[level];
>> +    paddr_t mask = level_masks[level];
>> +    gfn_t gfn = _gfn(paddr_to_pfn(gpa & mask));
>> +    mfn_t mfn = _mfn(paddr_to_pfn(maddr & mask));
>> +    unsigned long nr = paddr_to_pfn(size);
>> +
>> +    ASSERT(p2m_is_altp2m(ap2m));
>> +
>> +    return p2m_remove_mapping(d, ap2m, gfn, nr, mfn);
>> +}
>> +
>>  int modify_altp2m_entry(struct domain *d, struct p2m_domain *ap2m,
>>                          paddr_t gpa, paddr_t maddr, unsigned int level,
>>                          p2m_type_t t, p2m_access_t a)
>> diff --git a/xen/include/asm-arm/altp2m.h b/xen/include/asm-arm/altp2m.h
>> index 8bfbc6a..64fbff7 100644
>> --- a/xen/include/asm-arm/altp2m.h
>> +++ b/xen/include/asm-arm/altp2m.h
>> @@ -99,4 +99,10 @@ void altp2m_propagate_change(struct domain *d,
>>                               p2m_type_t p2mt,
>>                               p2m_access_t p2ma);
>>
>> +/* Change a gfn->mfn mapping */
>> +int altp2m_change_gfn(struct domain *d,
>> +                      unsigned int idx,
>> +                      gfn_t old_gfn,
>> +                      gfn_t new_gfn);
>> +
>>  #endif /* __ASM_ARM_ALTP2M_H */
>> diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
>> index 16e33ca..8433d66 100644
>> --- a/xen/include/asm-arm/p2m.h
>> +++ b/xen/include/asm-arm/p2m.h
>> @@ -182,6 +182,10 @@ mfn_t p2m_lookup_attr(struct p2m_domain *p2m,
>> gfn_t gfn, p2m_type_t *t,
>>                        unsigned int *level, unsigned int *mattr,
>>                        xenmem_access_t *xma);
>>
>> +/* Remove an altp2m view's entry. */
>> +int remove_altp2m_entry(struct domain *d, struct p2m_domain *p2m,
>> +                        paddr_t gpa, paddr_t maddr, unsigned int
>> level);
>> +
>>  /* Modify an altp2m view's entry or its attributes. */
>>  int modify_altp2m_entry(struct domain *d, struct p2m_domain *p2m,
>>                          paddr_t gpa, paddr_t maddr, unsigned int level,
>>
>

Best regards,
~Sergej




* Re: [PATCH v2 03/25] arm/altp2m: Add struct vttbr.
  2016-08-06 13:20             ` Julien Grall
@ 2016-08-06 13:48               ` Sergej Proskurin
  0 siblings, 0 replies; 159+ messages in thread
From: Sergej Proskurin @ 2016-08-06 13:48 UTC (permalink / raw)
  To: Julien Grall, xen-devel; +Cc: Stefano Stabellini



On 08/06/2016 03:20 PM, Julien Grall wrote:
>
>
> On 06/08/2016 09:54, Sergej Proskurin wrote:
>> Hi Julien,
>
> Hello Sergej,
>
>> On 08/04/2016 06:15 PM, Julien Grall wrote:
>>>
>>>
>>> On 04/08/16 17:11, Sergej Proskurin wrote:
>>>>>>> diff --git a/xen/include/asm-arm/processor.h
>>>>>>> b/xen/include/asm-arm/processor.h
>>>>>>> index 15bf890..f8ca18c 100644
>>>>>>> --- a/xen/include/asm-arm/processor.h
>>>>>>> +++ b/xen/include/asm-arm/processor.h
>>>>>>> @@ -529,6 +529,22 @@ union hsr {
>>>>>>>
>>>>>>>
>>>>>>>  };
>>>>>>> +
>>>>>>> +/* VTTBR: Virtualization Translation Table Base Register */
>>>>>>> +struct vttbr {
>>>>>>> +    union {
>>>>>>> +        struct {
>>>>>>> +            u64 baddr :40, /* variable res0: from 0-(x-1) bit */
>>>>>>
>>>>>> As mentioned on the previous series, this field is 48 bits for ARMv8
>>>>>> (see ARM D7.2.102 in DDI 0487A.j).
>>>>>>
>>>>
>>>> I must have missed it during refactoring. At this point, I will
>>>> distinguish between __arm__ and __aarch64__, thank you.
>>>
>>> After reading this series I see no point having this union. So I would
>>> much prefer to see this patch dropped.
>>>
>>
>> I can do that. However, I do not understand why we would prefer using
>> error-prone bit operations for VTTBR initialization instead of having a
>> unified and simple way of initializing and using the VTTBR, including the
>> VMID and the root table address.
>
> The VTTBR only needs to be initialized in one place and we don't care
> accessing the fields. So I don't see the benefit to introduce a
> structure for that.
>

Ok. I will drop this patch.

Best regards,
~Sergej



* Re: [PATCH v2 04/25] arm/altp2m: Move hostp2m init/teardown to individual functions.
  2016-08-06 13:26           ` Julien Grall
@ 2016-08-06 13:50             ` Sergej Proskurin
  0 siblings, 0 replies; 159+ messages in thread
From: Sergej Proskurin @ 2016-08-06 13:50 UTC (permalink / raw)
  To: Julien Grall, xen-devel; +Cc: Stefano Stabellini



On 08/06/2016 03:26 PM, Julien Grall wrote:
> On 06/08/2016 09:43, Sergej Proskurin wrote:
>> Hi Julien,
>
> Hello Sergej,
>
>> On 08/05/2016 11:16 AM, Julien Grall wrote:
>>> On 05/08/16 08:26, Sergej Proskurin wrote:
>>>> On 08/03/2016 07:40 PM, Julien Grall wrote:
>
> [...]
>
>>>>>> +    p2m->vttbr.vttbr = INVALID_VTTBR;
>>>
>>> [...]
>>>
>>>>>>
>>>>>>      if ( p2m->root )
>>>>>>          free_domheap_pages(p2m->root, P2M_ROOT_ORDER);
>>>>>>
>>>>>>      p2m->root = NULL;
>>>>>>
>>>>>> -    p2m_free_vmid(d);
>>>>>> -
>>>
>>> here. So please don't move code unless there is a good reason. This
>>> series is already quite difficult to read.
>>>
>>
>> This move did not have any particular reason except that, to me, it
>> appeared more logical and cleaner to read this way. Apart from
>> that, this patch creates two functions out of one (in the case of the
>> former p2m_teardown). Because of this, I did not even think of
>> preserving a certain function call order, as the former function no
>> longer exits the way it used to.
>
> In this specific case, the call to p2m_free_vmid is at the end of the
> function to match the reverse order of p2m_init.
>
> Regardless, I cannot see why moving p2m_free_vmid earlier would be
> more logical.
>
> Anyway, you don't move code within a function unless there is a
> reason. And this should really be outside of a patch doing bigger
> changes.
>
> A series like altp2m takes me about a full working day to review. So
> please don't make it more difficult to review.
>

I will try to avoid such code movements. Thank you.

Best regards,
~Sergej



* Re: [PATCH v2 11/25] arm/altp2m: Add HVMOP_altp2m_destroy_p2m.
  2016-08-06 13:36       ` Julien Grall
@ 2016-08-06 13:51         ` Sergej Proskurin
  0 siblings, 0 replies; 159+ messages in thread
From: Sergej Proskurin @ 2016-08-06 13:51 UTC (permalink / raw)
  To: Julien Grall, xen-devel; +Cc: Stefano Stabellini



On 08/06/2016 03:36 PM, Julien Grall wrote:
>
>
> On 06/08/2016 10:54, Sergej Proskurin wrote:
>> Hi Julien,
>
> Hello Sergej,
>
>> On 08/04/2016 01:46 PM, Julien Grall wrote:
>>> Hello Sergej,
>>>
>>> On 01/08/16 18:10, Sergej Proskurin wrote:
>>>> Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
>>>> ---
>>>> Cc: Stefano Stabellini <sstabellini@kernel.org>
>>>> Cc: Julien Grall <julien.grall@arm.com>
>>>> ---
>>>> v2: Substituted the call to tlb_flush for p2m_flush_table.
>>>>     Added comments.
>>>>     Cosmetic fixes.
>>>> ---
>>>>  xen/arch/arm/altp2m.c        | 50
>>>> ++++++++++++++++++++++++++++++++++++++++++++
>>>>  xen/arch/arm/hvm.c           |  2 +-
>>>>  xen/include/asm-arm/altp2m.h |  4 ++++
>>>>  3 files changed, 55 insertions(+), 1 deletion(-)
>>>>
>>>> diff --git a/xen/arch/arm/altp2m.c b/xen/arch/arm/altp2m.c
>>>> index c22d2e4..80ed553 100644
>>>> --- a/xen/arch/arm/altp2m.c
>>>> +++ b/xen/arch/arm/altp2m.c
>
> [...]
>
>>>> +
>>>> +            p2m_flush_table(p2m);
>>>> +
>>>> +            /*
>>>> +             * Reset VTTBR.
>>>> +             *
>>>> +             * Note that VMID is not freed so that it can be reused
>>>> later.
>>>> +             */
>>>> +            p2m->vttbr.vttbr = INVALID_VTTBR;
>>>> +            d->arch.altp2m_vttbr[idx] = INVALID_VTTBR;
>>>> +
>>>> +            read_unlock(&p2m->lock);
>>>
>>> Why did you decide to reset the p2m rather than free it? The code
>>> would be simpler with the latter.
>>>
>>
>> * First, to be as similar to the x86 implementation as possible.
>> * Second, simply to reuse already allocated p2m's without additional
>> overhead.
>
> I don't really buy those arguments. The x86 implementation may not fit
> the ARM model. Initializing a p2m on ARM is very quick: allocating the
> p2m and the root pages (up to 2 pages).
>
> So the overhead is really minimal. Therefore, I don't see any reason
> to introduce complex bookkeeping when NULL can be used for the same
> purpose.
>

Alright, I will adjust the implementation accordingly. Thank you.

Best regards,
~Sergej



* Re: [PATCH v2 19/25] arm/altp2m: Add altp2m_propagate_change.
  2016-08-06 11:26     ` Sergej Proskurin
@ 2016-08-06 13:52       ` Julien Grall
  2016-08-06 17:06         ` Sergej Proskurin
  0 siblings, 1 reply; 159+ messages in thread
From: Julien Grall @ 2016-08-06 13:52 UTC (permalink / raw)
  To: Sergej Proskurin, xen-devel; +Cc: Stefano Stabellini



On 06/08/2016 12:26, Sergej Proskurin wrote:
> Hi Julien,

Hello Sergej,

> On 08/04/2016 04:50 PM, Julien Grall wrote:
>> On 01/08/16 18:10, Sergej Proskurin wrote:
>>> +
>>> +        /* Check for a dropped page that may impact this altp2m. */
>>> +        if ( (mfn_eq(smfn, INVALID_MFN) || p2mt == p2m_invalid) &&
>>
>> Why the check to p2mt against p2m_invalid?
>>
>
> I have encountered p2m entries, or rather calls to p2m_apply_change, with
> valid MFNs but invalid types. That is, a page drop would not be
> recognized if checked only against an invalid MFN.

That is because currently the REMOVE operation keeps the MFN valid, to be 
able to check whether the wanted mapping has been removed.

However, I don't think it is safe to assume that p2m_invalid will 
always mean "REMOVE". It is actually used in other cases as well.

>
>>> +             gfn_x(sgfn) >= gfn_x(p2m->lowest_mapped_gfn) &&
>>> +             gfn_x(sgfn) <= gfn_x(p2m->max_mapped_gfn) )
>>> +        {
>>> +            if ( !reset_count++ )
>>> +            {
>>> +                altp2m_reset(p2m);
>>> +                last_reset_idx = i;
>>> +            }
>>> +            else
>>> +            {
>>> +                /* At least 2 altp2m's impacted, so reset
>>> everything. */
>>
>> So if you remove a 4KB page in more than 2 altp2m, you will flush all
>> the p2m. This sounds really more time consuming (you have to free all
>> the intermediate page table) than removing a single 4KB page.
>
> I agree: the solution has been directly adopted from the x86
> implementation and needs further design reconsideration. However, at
> this point we would like to finish the overall architecture for ARM
> before we go into further restructuring.
>
>>
>>> +                for ( i = 0; i < MAX_ALTP2M; i++ )
>>> +                {
>>> +                    if ( i == last_reset_idx ||
>>> +                         d->arch.altp2m_vttbr[i] == INVALID_VTTBR )
>>> +                        continue;
>>> +
>>> +                    p2m = d->arch.altp2m_p2m[i];
>>> +                    altp2m_reset(p2m);
>>> +                }
>>> +                goto out;
>>> +            }
>>> +        }
>>> +        else if ( !mfn_eq(m, INVALID_MFN) )
>>> +            modify_altp2m_range(d, p2m, sgfn, nr, smfn,
>>> +                                mask, p2mt, p2ma);
>>
>> I am a bit concerned about this function. We decided to limit the size
>> of the mapping to avoid long running memory operations (see XSA-158).
>>
>> With this function you multiply up to 10 times the duration of the
>> operation.
>>
>
> I see your point. However, on an active system, adaptations of the hostp2m
> (while altp2m is active) should be very limited. Another solution would
> be to simply flush the altp2m views on every hostp2m modification. IMO
> this, however, would not really be a huge performance gain due to the
> de-allocation and freeing of the altp2m tables. Do you have another idea?

I would need to have a think. However, it is wrong to assume that changes 
to the hostp2m will be limited when altp2m is active. A guest is free to 
increase/decrease its memory reservation.

You always need to consider the worst case and not the best case (i.e. a 
guest behaving nicely).

Regards,

-- 
Julien Grall



* Re: [PATCH v2 20/25] arm/altp2m: Add altp2m paging mechanism.
  2016-08-06 12:51     ` Sergej Proskurin
@ 2016-08-06 14:14       ` Julien Grall
  2016-08-06 17:28         ` Sergej Proskurin
  0 siblings, 1 reply; 159+ messages in thread
From: Julien Grall @ 2016-08-06 14:14 UTC (permalink / raw)
  To: Sergej Proskurin, xen-devel; +Cc: Stefano Stabellini

On 06/08/2016 13:51, Sergej Proskurin wrote:
> Hi Julien,

Hello Sergej,

> On 08/04/2016 03:50 PM, Julien Grall wrote:
>> Hello Sergej,
>>
>> On 01/08/16 18:10, Sergej Proskurin wrote:
>>> This commit adds the function "altp2m_lazy_copy" implementing the altp2m
>>> paging mechanism. The function "altp2m_lazy_copy" lazily copies the
>>> hostp2m's mapping into the currently active altp2m view on 2nd stage
>>> translation violations on instruction or data access. Every altp2m
>>> violation generates a vm_event.
>>
>> I think you want to "translation fault" and not "violation". The
>> latter looks more a permission issue whilst it is not the case here.
>>
>
> The implemented paging mechanism also covers permission issues, which is
> the reason why I chose the term violation. By this, I mean that the
> implementation covers traps to Xen that occurred due to 2nd stage permission
> violations (as altp2m views might have set different access permissions
> to trap on). Also, the implementation covers translation faults due to
> non-present entries in the active altp2m view.

FSC_FLT_TRANS can only happen for a translation fault. And you don't 
modify FSC_FLT_PERM (except moving code around, as usual...). So why are 
you talking about permission violations?

>
>> However I am not sure what you mean by "every altp2m violation
>> generates a vm_event". Do you mean the userspace will be aware of it?
>>
>
> No. Every time the altp2m's configuration makes the guest trap into Xen
> due to a lack of memory access permissions (e.g., on execution of a rw
> page), we fill the associated fields in the req buffer in
> mem_access_check so that the management domain receives the information
> required to understand what kind of altp2m violation just happened.
> Based on this information, it might decide what to do next (perform
> additional checks or simply change the altp2m view to continue guest
> execution).

You will receive a FSC_FLT_PERM in this case and not a FSC_FLT_TRANS.

[...]

>>> +                        struct npfec npfec,
>>> +                        struct p2m_domain **ap2m)
>>
>> Why do you need the parameter ap2m? None of the callers make use of it
>> except setting it.
>>
>
> True. Another leftover from the x86 implementation. I will change that.
>
>>> +{
>>> +    struct domain *d = v->domain;
>>> +    struct p2m_domain *hp2m = p2m_get_hostp2m(v->domain);
>>
>> p2m_get_hostp2m(d);
>>
>
> Thanks.
>
>>> +    p2m_type_t p2mt;
>>> +    xenmem_access_t xma;
>>> +    gfn_t gfn = _gfn(paddr_to_pfn(gpa));
>>> +    mfn_t mfn;
>>> +    unsigned int level;
>>> +    int rc = 0;
>>
>> Please use true/false rather than 0/1. Also this should be bool_t.
>>
>
> Ok.
>
>>> +
>>> +    static const p2m_access_t memaccess[] = {
>>> +#define ACCESS(ac) [XENMEM_access_##ac] = p2m_access_##ac
>>> +        ACCESS(n),
>>> +        ACCESS(r),
>>> +        ACCESS(w),
>>> +        ACCESS(rw),
>>> +        ACCESS(x),
>>> +        ACCESS(rx),
>>> +        ACCESS(wx),
>>> +        ACCESS(rwx),
>>> +        ACCESS(rx2rw),
>>> +        ACCESS(n2rwx),
>>> +#undef ACCESS
>>> +    };
>>> +
>>> +    *ap2m = altp2m_get_altp2m(v);
>>> +    if ( *ap2m == NULL)
>>> +        return 0;
>>> +
>>> +    /* Check if entry is part of the altp2m view */
>>> +    mfn = p2m_lookup_attr(*ap2m, gfn, NULL, NULL, NULL, NULL);
>>> +    if ( !mfn_eq(mfn, INVALID_MFN) )
>>> +        goto out;
>>> +
>>> +    /* Check if entry is part of the host p2m view */
>>> +    mfn = p2m_lookup_attr(hp2m, gfn, &p2mt, &level, NULL, &xma);
>>> +    if ( mfn_eq(mfn, INVALID_MFN) )
>>> +        goto out;
>>
>> This is quite racy. The page could be removed from the host p2m by the
>> time you have added it in the altp2m because you have no lock.
>>
>
> In the last patch series, I proposed taking hp2m->lock as a write_lock,
> which is not a good solution at this point, as it would
> potentially create lots of contention on the hp2m. Also, we would need to
> export __p2m_lookup or remove the lock from p2m_lookup_attr, which is
> not a good solution either.

The P2M lock has been converted to a read-write lock. So it will not 
create contention, as multiple reads can be done concurrently.

You should be more concerned about security than contention. The former 
may lead to exploits or to corrupting Xen. The latter will only impact 
performance.

>
> The addition of the altp2m_lock could help in this case: if a page were
> to be removed from the hostp2m before we added it to the altp2m view, the
> propagate function would need to wait until lazy_copy finished and
> eventually remove it from the altp2m view. On the other hand, it
> would significantly decrease performance on a multi-core system.
>
> If I understand that correctly, a better solution would be to use a
> p2m_read_lock(hp2m) as we would still allow reading but not writing (in
> case the hp2m gets entries removed in apply_p2m_changes). That is, I
> would set it right before p2m_lookup_attr(hp2m, ...) and release it
> right after modify_altp2m_entry. This solution would not present a
> bottleneck on the lazy_copy mechanism, and simultaneously prevent hp2m
> from changing. What do you think?

That's the solution I would like to see and the only safe one.

>
>>> +
>>> +    rc = modify_altp2m_entry(d, *ap2m, gpa,
>>> pfn_to_paddr(mfn_x(mfn)), level,
>>> +                             p2mt, memaccess[xma]);
>>
>> Please avoid to mix bool and int even though today we have implicitly
>> conversion.
>>
>
> Ok.
>
>>> +    if ( rc )
>>> +    {
>>> +        gdprintk(XENLOG_ERR, "failed to set entry for %lx -> %lx p2m
>>> %lx\n",
>>> +                (unsigned long)gpa, (unsigned
>>> long)(paddr_to_pfn(mfn_x(mfn))),
>>
>> By using (unsigned long) you will truncate the address on ARM32
>> because we are able to support up to 40 bits address.
>>
>> Also why do you print the full address? The guest physical address may
>> not be page-aligned so it will confuse the user.
>>
>
> x86 leftover. I will change that.

It is not in the x86 code....

>>> +                (unsigned long)*ap2m);
>>
>> It does not seem really helpful to print the pointer here. You will
>> not be able to exploit it when reading the log. Also this should be
>> printed with "%p" and not using a cast.
>>
>
> Another x86 leftover. I will change that.
>
>>> +        domain_crash(hp2m->domain);
>>> +    }
>>> +
>>> +    rc = 1;
>>> +
>>> +out:
>>> +    return rc;
>>> +}
>>> +
>>>  static inline void altp2m_reset(struct p2m_domain *p2m)
>>>  {
>>>      read_lock(&p2m->lock);
>>> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
>>> index 31810e6..bee8be7 100644
>>> --- a/xen/arch/arm/p2m.c
>>> +++ b/xen/arch/arm/p2m.c
>>> @@ -1812,6 +1812,12 @@ void __init setup_virt_paging(void)
>>>      smp_call_function(setup_virt_paging_one, (void *)val, 1);
>>>  }
>>>
>>> +void p2m_altp2m_check(struct vcpu *v, uint16_t idx)
>>> +{
>>> +    if ( altp2m_active(v->domain) )
>>> +        altp2m_switch_vcpu_altp2m_by_id(v, idx);
>>> +}
>>> +
>>
>> I am not sure why this function lives here.
>>
>
> This function is used in ./xen/common/vm_event.c. Since the function is
> used from a common place and is already named p2m_*, I did not want to pull
> it out of p2m.c (and have a p2m_* function prefix in altp2m.c).
> However, I could move the function to altp2m.c and rename it globally (also
> in the x86 implementation). Or, I could simply move it to altp2m.c despite
> the name. What would you prefer?

It seems that the altp2m functions on x86 will be moved from p2m.c to 
altp2m.c. So you may want to rename the function here.

[...]

>>> +             * currently active altp2m view.
>>> +             */
>>> +            if ( altp2m_lazy_copy(v, gpa, gva, npfec, &p2m) )
>>> +                return;
>>> +
>>> +            rc = p2m_mem_access_check(gpa, gva, npfec);
>>
>> Why do you call p2m_mem_access_check here? If you are here it is for a
>> translation fault which you handle via altp2m_lazy_copy.
>>
>
> Right. I have observed that the test systems jump into the case
> FSC_FLT_TRANS right after I have lazily copied the page into the
> associated altp2m view. I am not sure what the issue might be here.

I don't understand. What do you mean?

>
>>> +
>>> +            /* Trap was triggered by mem_access, work here is done */
>>> +            if ( !rc )
>>> +                return;
>>> +        }
>>> +
>>> +        break;
>>> +    }
>>>      case FSC_FLT_PERM:
>>>      {
>>> -        paddr_t gpa;
>>>          const struct npfec npfec = {
>>>              .insn_fetch = 1,
>>>              .gla_valid = 1,
>>>              .kind = hsr.iabt.s1ptw ? npfec_kind_in_gpt :
>>> npfec_kind_with_gla
>>>          };
>>>
>>> -        if ( hpfar_is_valid(hsr.iabt.s1ptw, fsc) )
>>> -            gpa = get_faulting_ipa(gva);
>>> -        else
>>> -        {
>>> -            /*
>>> -             * Flush the TLB to make sure the DTLB is clear before
>>> -             * doing GVA->IPA translation. If we got here because of
>>> -             * an entry only present in the ITLB, this translation may
>>> -             * still be inaccurate.
>>> -             */
>>> -            flush_tlb_local();
>>> -
>>> -            rc = gva_to_ipa(gva, &gpa, GV2M_READ);
>>> -            if ( rc == -EFAULT )
>>> -                return; /* Try again */
>>> -        }
>>> -
>>>          rc = p2m_mem_access_check(gpa, gva, npfec);
>>>
>>>          /* Trap was triggered by mem_access, work here is done */
>>> @@ -2451,6 +2482,8 @@ static void do_trap_data_abort_guest(struct
>>> cpu_user_regs *regs,
>>>      int rc;
>>>      mmio_info_t info;
>>>      uint8_t fsc = hsr.dabt.dfsc & ~FSC_LL_MASK;
>>> +    struct vcpu *v = current;
>>> +    struct p2m_domain *p2m = NULL;
>>>
>>>      info.dabt = dabt;
>>>  #ifdef CONFIG_ARM_32
>>> @@ -2459,7 +2492,7 @@ static void do_trap_data_abort_guest(struct
>>> cpu_user_regs *regs,
>>>      info.gva = READ_SYSREG64(FAR_EL2);
>>>  #endif
>>>
>>> -    if ( hpfar_is_valid(hsr.iabt.s1ptw, fsc) )
>>> +    if ( hpfar_is_valid(hsr.dabt.s1ptw, fsc) )
>>>          info.gpa = get_faulting_ipa(info.gva);
>>>      else
>>>      {
>>> @@ -2470,23 +2503,31 @@ static void do_trap_data_abort_guest(struct
>>> cpu_user_regs *regs,
>>>
>>>      switch ( fsc )
>>>      {
>>> -    case FSC_FLT_PERM:
>>> +    case FSC_FLT_TRANS:
>>>      {
>>> -        const struct npfec npfec = {
>>> -            .read_access = !dabt.write,
>>> -            .write_access = dabt.write,
>>> -            .gla_valid = 1,
>>> -            .kind = dabt.s1ptw ? npfec_kind_in_gpt :
>>> npfec_kind_with_gla
>>> -        };
>>> +        if ( altp2m_active(current->domain) )
>>
>> I would much prefer to this checking altp2m only if the MMIO was not
>> emulated (so moving the code afterwards). This will avoid to add
>> overhead when access the virtual interrupt controller.
>
> I am not sure whether I understood your request. Could you be more
> specific, please? What exactly shall be moved where?

With this patch, the translation fault will do:
	1) Check altp2m
	2) Emulate MMIO

So if altp2m is enabled, you will add overhead for any MMIO access (such 
as the virtual interrupt controller).

I would much prefer to see 2) then 1).
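Sketched as plain control flow (purely illustrative, not the actual handler): MMIO emulation is attempted first, so the common case of a trapped virtual-interrupt-controller access never pays for the altp2m lookup.

```c
#include <assert.h>
#include <stdbool.h>

enum handled { BY_MMIO, BY_ALTP2M, UNHANDLED };

/* Preferred order on a stage-2 translation fault: 2) emulate MMIO
 * first, then 1) fall back to the altp2m lazy copy. */
static enum handled handle_trans_fault(bool is_mmio, bool in_hostp2m)
{
    if (is_mmio)            /* no altp2m overhead for emulated MMIO   */
        return BY_MMIO;
    if (in_hostp2m)         /* ~ altp2m_lazy_copy() fills the view    */
        return BY_ALTP2M;
    return UNHANDLED;       /* genuine bad abort                      */
}
```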

[...]

>>> +
>>> +            /* Trap was triggered by mem_access, work here is done */
>>> +            if ( !rc )
>>> +                return;
>>> +        }
>>>
>>> -        /* Trap was triggered by mem_access, work here is done */
>>> -        if ( !rc )
>>> -            return;
>>> -        break;
>>> -    }
>>> -    case FSC_FLT_TRANS:
>>>          if ( dabt.s1ptw )
>>>              goto bad_data_abort;
>>>
>>> @@ -2515,6 +2556,23 @@ static void do_trap_data_abort_guest(struct
>>> cpu_user_regs *regs,
>>>              return;
>>>          }
>>>          break;
>>> +    }
>>> +    case FSC_FLT_PERM:
>>> +    {
>>> +        const struct npfec npfec = {
>>> +            .read_access = !dabt.write,
>>> +            .write_access = dabt.write,
>>> +            .gla_valid = 1,
>>> +            .kind = dabt.s1ptw ? npfec_kind_in_gpt :
>>> npfec_kind_with_gla
>>> +        };
>>> +
>>> +        rc = p2m_mem_access_check(info.gpa, info.gva, npfec);
>>> +
>>> +        /* Trap was triggered by mem_access, work here is done */
>>> +        if ( !rc )
>>> +            return;
>>> +        break;
>>> +    }
>>
>> Why did you move the case handling FSC_FLT_PERM?
>>
>
> No really important reason: I moved it simply because int(FSC_FLT_TRANS)
> < int(FSC_FLT_PERM). I can move it back if you like.

Again, you should not make such a change in the same patch that 
introduces new code. It took me a while to understand that you were 
only moving code.

This series is already difficult enough to review. So please don't 
create extra work by mixing code movement and new code in the same patch.


Regards,

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


* Re: [PATCH v2 08/25] arm/altp2m: Add HVMOP_altp2m_set_domain_state.
  2016-08-06  9:36     ` Sergej Proskurin
@ 2016-08-06 14:18       ` Julien Grall
  2016-08-06 14:21       ` Julien Grall
  2016-08-11  9:08       ` Julien Grall
  2 siblings, 0 replies; 159+ messages in thread
From: Julien Grall @ 2016-08-06 14:18 UTC (permalink / raw)
  To: Sergej Proskurin, xen-devel; +Cc: Stefano Stabellini

Hello Sergej,

On 06/08/2016 10:36, Sergej Proskurin wrote:
> (I did not finish answering all questions in the previous mail)
> On 08/03/2016 08:41 PM, Julien Grall wrote:
>> On 01/08/16 18:10, Sergej Proskurin wrote:

[...]

>>> +
>>> +    /* Initialize the new altp2m view. */
>>> +    rc = p2m_init_one(d, p2m);
>>> +    if ( rc )
>>> +        goto err;
>>> +
>>> +    /* Allocate a root table for the altp2m view. */
>>> +    rc = p2m_alloc_table(p2m);
>>> +    if ( rc )
>>> +        goto err;
>>> +
>>> +    p2m->p2m_class = p2m_alternate;
>>> +    p2m->access_required = 1;
>>
>> Please use true here. Although, I am not sure why you want to enable
>> the access by default.
>>
>
> Will do.
>
> p2m->access_required is true by default in the x86 implementation. Also,
> there is currently no way to manually set access_required on altp2m.
> Besides, I do not see a scenario, where it makes sense to run altp2m
> without access_required set to true.

I am afraid to say that "x86 does it" is not an argument. When I am 
reading an ARM series, I don't necessarily look at the x86 code, which 
by the way does not give any explanation of why it is set to true by 
default.

Please document it in the code.
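A minimal sketch of what such in-code documentation could look like; the struct and enum here are stand-ins for the real Xen types, and the rationale in the comment is the one discussed above:

```c
#include <stdbool.h>

enum p2m_class_kind { p2m_host, p2m_alternate };

/* Minimal stand-in for the relevant p2m fields (hypothetical). */
struct p2m_domain {
    enum p2m_class_kind p2m_class;
    bool access_required;
};

/*
 * Sketch of the view initialisation with the rationale documented in
 * the code: altp2m views are aimed at introspection, so access checks
 * are enforced unconditionally; there is currently no interface to
 * toggle access_required for an individual view.
 */
static void altp2m_init_view(struct p2m_domain *p2m)
{
    p2m->p2m_class = p2m_alternate;
    p2m->access_required = true;
}

bool view_requires_access(void)
{
    struct p2m_domain p = { 0 };
    altp2m_init_view(&p);
    return p.access_required;
}
```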

Regards,

-- 
Julien Grall


* Re: [PATCH v2 20/25] arm/altp2m: Add altp2m paging mechanism.
  2016-08-06 12:57     ` Sergej Proskurin
@ 2016-08-06 14:21       ` Julien Grall
  2016-08-06 17:35         ` Sergej Proskurin
  2016-08-10  9:32         ` Sergej Proskurin
  0 siblings, 2 replies; 159+ messages in thread
From: Julien Grall @ 2016-08-06 14:21 UTC (permalink / raw)
  To: Sergej Proskurin, xen-devel; +Cc: Stefano Stabellini



On 06/08/2016 13:57, Sergej Proskurin wrote:
> Hi Julien,

Hello Sergej,


> On 08/04/2016 06:59 PM, Julien Grall wrote:
>> Hi Sergej,
>>
>> On 01/08/16 18:10, Sergej Proskurin wrote:
>>> diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
>>> index 12be7c9..628abd7 100644
>>> --- a/xen/arch/arm/traps.c
>>> +++ b/xen/arch/arm/traps.c
>>
>> [...]
>>
>>> @@ -2403,35 +2405,64 @@ static void do_trap_instr_abort_guest(struct cpu_user_regs *regs,
>>
>> [...]
>>
>>>      switch ( fsc )
>>>      {
>>> +    case FSC_FLT_TRANS:
>>> +    {
>>> +        if ( altp2m_active(d) )
>>> +        {
>>> +            const struct npfec npfec = {
>>> +                .insn_fetch = 1,
>>> +                .gla_valid = 1,
>>>> +                .kind = hsr.iabt.s1ptw ? npfec_kind_in_gpt : npfec_kind_with_gla
>>> +            };
>>> +
>>> +            /*
>>> +             * Copy the entire page of the failing instruction into the
>>> +             * currently active altp2m view.
>>> +             */
>>> +            if ( altp2m_lazy_copy(v, gpa, gva, npfec, &p2m) )
>>> +                return;
>>
>> I forgot to mention that I think there is a race condition here. If
>> multiple vCPU (let say A and B) use the same altp2m, they may fault here.
>>
>> If vCPU A already fixed the fault, this function will return false and
>> continue. So this will lead to inject an instruction abort to the guest.
>>
>
> I believe this is exactly what I have experienced in the last days. I
> have applied Tamas' patch [0] but it did not entirely solve the issue. I
> will provide more information about the exact behavior a bit later.
>
>>> +
>>> +            rc = p2m_mem_access_check(gpa, gva, npfec);
>>> +
>>> +            /* Trap was triggered by mem_access, work here is done */
>>> +            if ( !rc )
>>> +                return;
>>> +        }
>>> +
>>> +        break;
>>> +    }
>>
>> [...]
>>
>>> @@ -2470,23 +2503,31 @@ static void do_trap_data_abort_guest(struct cpu_user_regs *regs,
>>>
>>>      switch ( fsc )
>>>      {
>>> -    case FSC_FLT_PERM:
>>> +    case FSC_FLT_TRANS:
>>>      {
>>> -        const struct npfec npfec = {
>>> -            .read_access = !dabt.write,
>>> -            .write_access = dabt.write,
>>> -            .gla_valid = 1,
>>> -            .kind = dabt.s1ptw ? npfec_kind_in_gpt : npfec_kind_with_gla
>>> -        };
>>> +        if ( altp2m_active(current->domain) )
>>> +        {
>>> +            const struct npfec npfec = {
>>> +                .read_access = !dabt.write,
>>> +                .write_access = dabt.write,
>>> +                .gla_valid = 1,
>>>> +                .kind = dabt.s1ptw ? npfec_kind_in_gpt : npfec_kind_with_gla
>>> +            };
>>>
>>> -        rc = p2m_mem_access_check(info.gpa, info.gva, npfec);
>>> +            /*
>>> +             * Copy the entire page of the failing data access into the
>>> +             * currently active altp2m view.
>>> +             */
>>> +            if ( altp2m_lazy_copy(v, info.gpa, info.gva, npfec, &p2m) )
>>> +                return;
>>
>> Ditto.
>>
>
> Ok.
>
>>> +
>>> +            rc = p2m_mem_access_check(info.gpa, info.gva, npfec);
>>> +
>>> +            /* Trap was triggered by mem_access, work here is done */
>>> +            if ( !rc )
>>> +                return;
>>> +        }
>
> Best regards,
> ~Sergej
>
> [0] https://github.com/tklengyel/xen branch arm_mem_access_reinject
>

-- 
Julien Grall



* Re: [PATCH v2 18/25] arm/altp2m: Add HVMOP_altp2m_set_mem_access.
  2016-08-06 11:03     ` Sergej Proskurin
@ 2016-08-06 14:26       ` Julien Grall
  0 siblings, 0 replies; 159+ messages in thread
From: Julien Grall @ 2016-08-06 14:26 UTC (permalink / raw)
  To: Sergej Proskurin, xen-devel; +Cc: Stefano Stabellini



On 06/08/2016 12:03, Sergej Proskurin wrote:
> Hi Julien,

Hello Sergej,

> On 08/04/2016 04:19 PM, Julien Grall wrote:
>> Hello Sergej,
>>
>> On 01/08/16 18:10, Sergej Proskurin wrote:
>>>  int p2m_alloc_table(struct p2m_domain *p2m)
>>>  {
>>>      unsigned int i;
>>> @@ -1920,7 +1948,7 @@ long p2m_set_mem_access(struct domain *d, gfn_t
>>> gfn, uint32_t nr,
>>>                          uint32_t start, uint32_t mask,
>>> xenmem_access_t access,
>>>                          unsigned int altp2m_idx)
>>>  {
>>> -    struct p2m_domain *p2m = p2m_get_hostp2m(d);
>>> +    struct p2m_domain *hp2m = p2m_get_hostp2m(d), *ap2m = NULL;
>>>      p2m_access_t a;
>>>      long rc = 0;
>>>
>>> @@ -1939,33 +1967,60 @@ long p2m_set_mem_access(struct domain *d,
>>> gfn_t gfn, uint32_t nr,
>>>  #undef ACCESS
>>>      };
>>>
>>> +    /* altp2m view 0 is treated as the hostp2m */
>>> +    if ( altp2m_idx )
>>> +    {
>>> +        if ( altp2m_idx >= MAX_ALTP2M ||
>>> +             d->arch.altp2m_vttbr[altp2m_idx] == INVALID_VTTBR )
>>> +            return -EINVAL;
>>> +
>>> +        ap2m = d->arch.altp2m_p2m[altp2m_idx];
>>> +    }
>>> +
>>>      switch ( access )
>>>      {
>>>      case 0 ... ARRAY_SIZE(memaccess) - 1:
>>>          a = memaccess[access];
>>>          break;
>>>      case XENMEM_access_default:
>>> -        a = p2m->default_access;
>>> +        a = hp2m->default_access;
>>
>> Why the default_access is set the host p2m and not the alt p2m?
>>
>
> Currently, we don't have a way of manually setting/getting the
> default_access in altp2m. Maybe it would make sense to extend the
> interface to explicitly set the default_access of the individual views.
> As I think about it, this would benefit the entire architecture, as the
> current propagate-change operation simply flushes the altp2m views and
> expects them to be lazily refilled with the hostp2m's entries. Because
> of this, I believe the altp2m functionality would be rendered obsolete
> if the system tried to change entries in the hostp2m while altp2m was
> active. What do you think?

Sounds good. However, this needs to be documented in the code.

[...]

>>> +int altp2m_set_mem_access(struct domain *d,
>>> +                          struct p2m_domain *hp2m,
>>> +                          struct p2m_domain *ap2m,
>>> +                          p2m_access_t a,
>>> +                          gfn_t gfn);
>>> +
>>>  #endif /* __ASM_ARM_ALTP2M_H */
>>> diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
>>> index 32326cb..9859ad1 100644
>>> --- a/xen/include/asm-arm/p2m.h
>>> +++ b/xen/include/asm-arm/p2m.h
>>> @@ -180,6 +180,17 @@ void p2m_dump_info(struct domain *d);
>>>  /* Look up the MFN corresponding to a domain's GFN. */
>>>  mfn_t p2m_lookup(struct domain *d, gfn_t gfn, p2m_type_t *t);
>>>
>>> +/* Lookup the MFN, memory attributes, and page table level corresponding to a
>>> + * domain's GFN. */
>>> +mfn_t p2m_lookup_attr(struct p2m_domain *p2m, gfn_t gfn, p2m_type_t *t,
>>> +                      unsigned int *level, unsigned int *mattr,
>>> +                      xenmem_access_t *xma);
>>
>> I don't want to see such an interface exposed outside of the p2m. The
>> outside world may not know what the level means. And I don't
>> understand why you return "mattr" here.
>>
>
> In the current implementation, mattr is indeed not needed anymore. Yet,
> I did want to hear your opinion first. So, I will gladly remove mattr
> from the prototype.
>
> Concerning the exposure of p2m_lookup_attr: Agreed. However, I am not
> sure how else we could get the required functionality from altp2m.c
> without duplicating big parts of the code. In the previous patch, you
> have mentioned that we should rather share code to get the required
> values. Now we do...
>
> Do you have another idea how we could solve this issue?

Please have a look at the functions p2m_get_entry and p2m_set_entry I 
introduced in [1].

Regards,

[1] 
https://lists.xenproject.org/archives/html/xen-devel/2016-07/msg02952.html

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 159+ messages in thread

* Re: [PATCH v2 21/25] arm/altp2m: Add HVMOP_altp2m_change_gfn.
  2016-08-06 13:45     ` Sergej Proskurin
@ 2016-08-06 14:34       ` Julien Grall
  2016-08-06 17:42         ` Sergej Proskurin
  0 siblings, 1 reply; 159+ messages in thread
From: Julien Grall @ 2016-08-06 14:34 UTC (permalink / raw)
  To: Sergej Proskurin, xen-devel; +Cc: Stefano Stabellini



On 06/08/2016 14:45, Sergej Proskurin wrote:
> Hi Julien,

Hello Sergej,

> On 08/04/2016 04:04 PM, Julien Grall wrote:
>> On 01/08/16 18:10, Sergej Proskurin wrote:
>>> +        return rc;
>>> +
>>> +    hp2m = p2m_get_hostp2m(d);
>>> +    ap2m = d->arch.altp2m_p2m[idx];
>>> +
>>> +    altp2m_lock(d);
>>> +
>>> +    /*
>>> +     * Flip mem_access_enabled to true when a permission is set, as to prevent
>>> +     * allocating or inserting super-pages.
>>> +     */
>>> +    ap2m->mem_access_enabled = true;
>>
>> Can you give more details about why you need this?
>>
>
> Similar to altp2m_set_mem_access, if we remap a page that is part of a
> superpage in the hostp2m, we first map the superpage in the form of 512
> pages into the ap2m and then change only one page. So, we set
> mem_access_enabled to true to shatter the superpage on the ap2m side.

mem_access_enabled should only be set when mem access is enabled and 
nothing else.

I don't understand why you want to avoid superpages in the altp2m. If 
the host mapping you copy is a superpage, then the altp2m mapping should 
be a superpage.

The code is able to cope with inserting a mapping in the middle of a 
superpage without mem_access_enabled.
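For reference, the shattering Sergej describes amounts to expanding one block entry into a full table of next-level entries, after which a single page can diverge from the host mapping. A toy model of that mechanism (sizes and names are illustrative; the real page-table code is considerably more involved):

```c
#define PTES_PER_TABLE 512   /* a 2MB block shatters into 512 x 4KB */

static unsigned long table[PTES_PER_TABLE];

/*
 * Toy model of shattering: expand a 2MB block mapping starting at
 * base_mfn into 512 contiguous 4KB entries, then let exactly one
 * entry diverge. Returns the new mfn of the remapped entry.
 */
unsigned long shatter_and_remap(unsigned long base_mfn,
                                unsigned int idx, unsigned long new_mfn)
{
    unsigned int i;

    for ( i = 0; i < PTES_PER_TABLE; i++ )
        table[i] = base_mfn + i;   /* identical to the block mapping */

    table[idx] = new_mfn;          /* only this page diverges */
    return table[idx];
}
```

Julien's point stands independently of the sketch: the p2m code performs this split when inserting into the middle of a superpage, without needing mem_access_enabled to be set.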

>
>>> +
>>> +    mfn = p2m_lookup_attr(ap2m, old_gfn, &p2mt, &level, NULL, NULL);
>>> +
>>> +    /* Check whether the page needs to be reset. */
>>> +    if ( gfn_eq(new_gfn, INVALID_GFN) )
>>> +    {
>>> +        /* If mfn is mapped by old_gpa, remove old_gpa from the altp2m table. */
>>> +        if ( !mfn_eq(mfn, INVALID_MFN) )
>>> +        {
>>> +            rc = remove_altp2m_entry(d, ap2m, old_gpa, pfn_to_paddr(mfn_x(mfn)), level);
>>
>> remove_altp2m_entry should take a gfn and mfn in parameter and not an
>> address. The latter is a call for misusage of the API.
>>
>
> Ok. This will also remove the need for level_sizes/level_masks in the
> associated function.
>
>>> +            if ( rc )
>>> +            {
>>> +                rc = -EINVAL;
>>> +                goto out;
>>> +            }
>>> +        }
>>> +
>>> +        rc = 0;
>>> +        goto out;
>>> +    }
>>> +
>>> +    /* Check host p2m if no valid entry in altp2m present. */
>>> +    if ( mfn_eq(mfn, INVALID_MFN) )
>>> +    {
>>> +        mfn = p2m_lookup_attr(hp2m, old_gfn, &p2mt, &level, NULL, &xma);
>>> +        if ( mfn_eq(mfn, INVALID_MFN) || (p2mt != p2m_ram_rw) )
>>
>> Please add a comment to explain why the second check.
>>
>
> Ok, I will. It has the same reason as in patch #19: it is not sufficient
> to simply check for invalid MFNs, as the type might be invalid. Also,
> the x86 implementation did not allow remapping a gfn to a shared page.

Patch #19 has a different check, which does not explain this one. With 
(p2mt != p2m_ram_rw), only guest read-write RAM can effectively be 
remapped, which is different from saying that shared pages cannot be 
remapped.

BTW, ARM does not support shared pages.

This also leads to my question: why not allow p2m_ram_ro?

>
>>> +        {
>>> +            rc = -EINVAL;
>>> +            goto out;
>>> +        }
>>> +
>>> +        /* If this is a superpage, copy that first. */
>>> +        if ( level != 3 )
>>> +        {
>>> +            rc = modify_altp2m_entry(d, ap2m, old_gpa, pfn_to_paddr(mfn_x(mfn)),
>>> +                                     level, p2mt, memaccess[xma]);
>>> +            if ( rc )
>>> +            {
>>> +                rc = -EINVAL;
>>> +                goto out;
>>> +            }
>>> +        }
>>> +    }
>>> +
>>> +    mfn = p2m_lookup_attr(ap2m, new_gfn, &p2mt, &level, NULL, &xma);
>>> +
>>> +    /* If new_gfn is not part of altp2m, get the mapping information from hp2m */
>>> +    if ( mfn_eq(mfn, INVALID_MFN) )
>>> +        mfn = p2m_lookup_attr(hp2m, new_gfn, &p2mt, &level, NULL, &xma);
>>> +
>>> +    if ( mfn_eq(mfn, INVALID_MFN) || (p2mt != p2m_ram_rw) )
>>
>> Please add a comment to explain why the second check.
>>
>
> Same reason as above.

Then add a comment in the code.

Regards,

-- 
Julien Grall


* Re: [PATCH v2 14/25] arm/altp2m: Make get_page_from_gva ready for altp2m.
  2016-08-06 13:45       ` Julien Grall
@ 2016-08-06 16:58         ` Sergej Proskurin
  2016-08-11  8:33           ` Julien Grall
  0 siblings, 1 reply; 159+ messages in thread
From: Sergej Proskurin @ 2016-08-06 16:58 UTC (permalink / raw)
  To: Julien Grall, xen-devel; +Cc: Stefano Stabellini

Hi Julien,


On 08/06/2016 03:45 PM, Julien Grall wrote:
>
>
> On 06/08/2016 11:38, Sergej Proskurin wrote:
>> Hi Julien,
>
> Hello Serge,
>
>> On 08/04/2016 01:59 PM, Julien Grall wrote:
>>> Hello Sergej,
>>>
>>> On 01/08/16 18:10, Sergej Proskurin wrote:
>>>> The function get_page_from_gva uses ARM's hardware support to
>>>> translate
>>>> gva's to machine addresses. This function is used, among others, for
>>>> memory regulation purposes, e.g, within the context of memory
>>>> ballooning.
>>>> To ensure correct behavior while altp2m is in use, we use the
>>>> host's p2m
>>>> table for the associated gva to ma translation. This is required at
>>>> this
>>>> point, as altp2m lazily copies pages from the host's p2m and even
>>>> might
>>>> be flushed because of changes to the host's p2m (as it is done within
>>>> the context of memory ballooning).
>>>
>>> I was expecting to see some change in
>>> p2m_mem_access_check_and_get_page. Is there any reason to not fix it?
>>>
>>>
>>
>> I did not yet encounter any issues with
>> p2m_mem_access_check_and_get_page. According to the ARM ARM, ATS1C**
>> (see gva_to_ipa_par) translates VA to IPA at non-secure privilege
>> levels (as is the case here). Thus, the 2nd stage translation
>> represented by the (alt)p2m is not really considered at this point,
>> which makes an extension unnecessary.
>>
>> Or did you have anything else in mind?
>
> The stage-1 page tables are living in the guest memory. So every time
> you access an entry in the page table, you have to translate the IPA
> (guest physical address) into a PA.
>
> However, the underlying memory of those page table may have
> restriction permission or does not exist in the altp2m at all. So the
> translation will fail.
>

Please correct me if I am wrong, but as far as I understand, the function
p2m_mem_access_check_and_get_page is called only from get_page_from_gva,
and only if the page translation within get_page_from_gva was not
successful. Because we use the hostp2m's 2nd stage translation table,
including the original memory access permissions (please note the short
sequence where we temporarily reset the VTTBR_EL2 to the hostp2m's if
altp2m is active), potential faults (which would lead to a call of
p2m_mem_access_check_and_get_page) must have reasons beyond altp2m.

Best regards,
~Sergej




* Re: [PATCH v2 19/25] arm/altp2m: Add altp2m_propagate_change.
  2016-08-06 13:52       ` Julien Grall
@ 2016-08-06 17:06         ` Sergej Proskurin
  0 siblings, 0 replies; 159+ messages in thread
From: Sergej Proskurin @ 2016-08-06 17:06 UTC (permalink / raw)
  To: Julien Grall, xen-devel; +Cc: Stefano Stabellini

Hi Julien,


On 08/06/2016 03:52 PM, Julien Grall wrote:
>
>
> On 06/08/2016 12:26, Sergej Proskurin wrote:
>> Hi Julien,
>
> Hello Sergej,
>
>> On 08/04/2016 04:50 PM, Julien Grall wrote:
>>> On 01/08/16 18:10, Sergej Proskurin wrote:
>>>> +
>>>> +        /* Check for a dropped page that may impact this altp2m. */
>>>> +        if ( (mfn_eq(smfn, INVALID_MFN) || p2mt == p2m_invalid) &&
>>>
>>> Why the check to p2mt against p2m_invalid?
>>>
>>
>> I have encountered p2m entries or rather calls to p2m_apply_change with
>> valid MFN's, however, invalid types. That is, a page drop would not be
>> recognized if checked only against an invalid MFN.
>
> Because currently the REMOVE operation has a valid MFN, to be able to
> check whether the wanted mapping has been removed.
>
> However, I don't think it is safe to assume that p2m_invalid will
> always mean "REMOVE". It is actually used in different cases.
>

Alright. To cope with this issue, I could simply check for INSERT/REMOVE
operations before calling altp2m_propagate_change. Then, it should
indeed be sufficient to check only for an invalid mfn. Thank you.
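The fix sketched here can be expressed as a pair of toy predicates; the enum values and INVALID_MFN_STUB below are illustrative stand-ins, not the real Xen definitions:

```c
#define INVALID_MFN_STUB (~0UL)   /* stand-in for INVALID_MFN */

enum p2m_op { OP_INSERT, OP_REMOVE, OP_RELINQUISH };

/* Propagate hostp2m changes to the altp2m views only for INSERT/REMOVE. */
int propagate_needed(enum p2m_op op)
{
    return op == OP_INSERT || op == OP_REMOVE;
}

/*
 * With the operation filtered first, an invalid mfn alone identifies a
 * dropped page; the p2m type no longer needs to be inspected.
 */
int is_page_drop(unsigned long smfn)
{
    return smfn == INVALID_MFN_STUB;
}
```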

>>
>>>> +             gfn_x(sgfn) >= gfn_x(p2m->lowest_mapped_gfn) &&
>>>> +             gfn_x(sgfn) <= gfn_x(p2m->max_mapped_gfn) )
>>>> +        {
>>>> +            if ( !reset_count++ )
>>>> +            {
>>>> +                altp2m_reset(p2m);
>>>> +                last_reset_idx = i;
>>>> +            }
>>>> +            else
>>>> +            {
>>>> +                /* At least 2 altp2m's impacted, so reset
>>>> everything. */
>>>
>>> So if you remove a 4KB page in more than 2 altp2m, you will flush all
>>> the p2m. This sounds really more time consuming (you have to free all
>>> the intermediate page table) than removing a single 4KB page.
>>
>> I agree: The solution has been directly adopted from the x86
>> implementation. It needs further design reconsideration. However, at
>> this point we would like to finish the overall architecture for ARM
>> before we go into further reconstructions.
>>
>>>
>>>> +                for ( i = 0; i < MAX_ALTP2M; i++ )
>>>> +                {
>>>> +                    if ( i == last_reset_idx ||
>>>> +                         d->arch.altp2m_vttbr[i] == INVALID_VTTBR )
>>>> +                        continue;
>>>> +
>>>> +                    p2m = d->arch.altp2m_p2m[i];
>>>> +                    altp2m_reset(p2m);
>>>> +                }
>>>> +                goto out;
>>>> +            }
>>>> +        }
>>>> +        else if ( !mfn_eq(m, INVALID_MFN) )
>>>> +            modify_altp2m_range(d, p2m, sgfn, nr, smfn,
>>>> +                                mask, p2mt, p2ma);
>>>
>>> I am a bit concerned about this function. We decided to limit the size
>>> of the mapping to avoid long running memory operations (see XSA-158).
>>>
>>> With this function you multiply up to 10 times the duration of the
>>> operation.
>>>
>>
>> I see your point. However, on an active system, adaptions of the hostp2m
>> (while altp2m is active) should be very limited. Another solution would
>> be to simply flush the altp2m views on every hostp2m modification. IMO
>> this, however, would not really be a huge performance gain due to
>> de-allocation and freeing of the altp2m tables. Do you have another
>> idea?
>
> I would need to have a think. However, it is wrong to assume that
> changes to the hostp2m will be limited when altp2m is active. A guest is
> free to increase/decrease its memory reservation.
>
> You always need to consider the worst case and not the best case (i.e.
> a guest behaving nicely).

Fair enough.

Best regards,
~Sergej



* Re: [PATCH v2 20/25] arm/altp2m: Add altp2m paging mechanism.
  2016-08-06 14:14       ` Julien Grall
@ 2016-08-06 17:28         ` Sergej Proskurin
  0 siblings, 0 replies; 159+ messages in thread
From: Sergej Proskurin @ 2016-08-06 17:28 UTC (permalink / raw)
  To: Julien Grall, xen-devel; +Cc: Stefano Stabellini

Hi Julien,


On 08/06/2016 04:14 PM, Julien Grall wrote:
> On 06/08/2016 13:51, Sergej Proskurin wrote:
>> Hi Julien,
>
> Hello Sergej,
>
>> On 08/04/2016 03:50 PM, Julien Grall wrote:
>>> Hello Sergej,
>>>
>>> On 01/08/16 18:10, Sergej Proskurin wrote:
>>>> This commit adds the function "altp2m_lazy_copy" implementing the
>>>> altp2m
>>>> paging mechanism. The function "altp2m_lazy_copy" lazily copies the
>>>> hostp2m's mapping into the currently active altp2m view on 2nd stage
>>>> translation violations on instruction or data access. Every altp2m
>>>> violation generates a vm_event.
>>>
>>> I think you want to "translation fault" and not "violation". The
>>> latter looks more a permission issue whilst it is not the case here.
>>>
>>
>> The implemented paging mechanism also covers permission issues, which is
>> the reason why I chose the term violation. By this, I mean that the
>> implementation covers traps to Xen that occurred due to 2nd stage
>> permission violations (as altp2m views might have set different access
>> permissions to trap on). Also, the implementation covers translation
>> faults due to entries not present in the active altp2m view, as well.
>
> FSC_FLT_TRANS can only happen for a translation fault. And you don't
> modify FSC_FLT_PERM (except moving code around, as usual...). So why
> are you talking about permission violations?
>

I'm afraid there was a misunderstanding from my side. You are right, I
will change this in the next patch.

>>
>>> However I am not sure what you mean by "every altp2m violation
>>> generates a vm_event". Do you mean the userspace will be aware of it?
>>>
>>
>> No. Every time, the altp2m's configuration lets the guest trap into Xen
>> due to a lack of memory access permissions (e.g., on execution of a rw
>> page), we fill the associated fields in the req buffer in
>> mem_access_check so that the management domain receives the information
>> required to understand what kind of altp2m violation just happened.
>> Based on this information, it might decide what to do next (perform
>> additional checks or simply change the altp2m view to continue guest
>> execution).
>
> You will receive a FSC_FLT_PERM in this case and not a FSC_FLT_TRANS.
>

I do agree now.

> [...]
>
>>>> +                        struct npfec npfec,
>>>> +                        struct p2m_domain **ap2m)
>>>
>>> Why do you need the parameter ap2m? None of the callers make use of it
>>> except setting it.
>>>
>>
>> True. Another leftover from the x86 implementation. I will change that.
>>
>>>> +{
>>>> +    struct domain *d = v->domain;
>>>> +    struct p2m_domain *hp2m = p2m_get_hostp2m(v->domain);
>>>
>>> p2m_get_hostp2m(d);
>>>
>>
>> Thanks.
>>
>>>> +    p2m_type_t p2mt;
>>>> +    xenmem_access_t xma;
>>>> +    gfn_t gfn = _gfn(paddr_to_pfn(gpa));
>>>> +    mfn_t mfn;
>>>> +    unsigned int level;
>>>> +    int rc = 0;
>>>
>>> Please use true/false rather than 0/1. Also this should be bool_t.
>>>
>>
>> Ok.
>>
>>>> +
>>>> +    static const p2m_access_t memaccess[] = {
>>>> +#define ACCESS(ac) [XENMEM_access_##ac] = p2m_access_##ac
>>>> +        ACCESS(n),
>>>> +        ACCESS(r),
>>>> +        ACCESS(w),
>>>> +        ACCESS(rw),
>>>> +        ACCESS(x),
>>>> +        ACCESS(rx),
>>>> +        ACCESS(wx),
>>>> +        ACCESS(rwx),
>>>> +        ACCESS(rx2rw),
>>>> +        ACCESS(n2rwx),
>>>> +#undef ACCESS
>>>> +    };
>>>> +
>>>> +    *ap2m = altp2m_get_altp2m(v);
>>>> +    if ( *ap2m == NULL)
>>>> +        return 0;
>>>> +
>>>> +    /* Check if entry is part of the altp2m view */
>>>> +    mfn = p2m_lookup_attr(*ap2m, gfn, NULL, NULL, NULL, NULL);
>>>> +    if ( !mfn_eq(mfn, INVALID_MFN) )
>>>> +        goto out;
>>>> +
>>>> +    /* Check if entry is part of the host p2m view */
>>>> +    mfn = p2m_lookup_attr(hp2m, gfn, &p2mt, &level, NULL, &xma);
>>>> +    if ( mfn_eq(mfn, INVALID_MFN) )
>>>> +        goto out;
>>>
>>> This is quite racy. The page could be removed from the host p2m by the
>>> time you have added it in the altp2m because you have no lock.
>>>
>>
>> In the last patch series, I proposed to lock the hp2m->lock
>> (write_lock), which is not a good solution at this point, as it would
>> potentially create lots of contention on hp2m. Also, we would need to
>> export __p2m_lookup or remove the lock from p2m_lookup_attr, which is
>> not a good solution either.
>
> The P2M lock has been converted to a read-write lock. So it will not
> create contention as multiple read can be done concurrently.
>
> You should be more concerned about security than contention. The
> former may lead to exploit or corrupting Xen. The latter will only
> impact performance.
>
>>
>> The addition of the altp2m_lock could help in this case: If a page would
>> be removed from the hostp2m before we added it in the altp2m view, the
>> propagate function would need to wait until lazy_copy would finish and
>> eventually remove it from the altp2m view. But on the other hand, it
>> would highly decrease the performance on a multi core system.
>>
>> If I understand that correctly, a better solution would be to use a
>> p2m_read_lock(hp2m) as we would still allow reading but not writing (in
>> case the hp2m gets entries removed in apply_p2m_changes). That is, I
>> would set it right before p2m_lookup_attr(hp2m, ...) and release it
>> right after modify_altp2m_entry. This solution would not present a
>> bottleneck on the lazy_copy mechanism, and simultaneously prevent hp2m
>> from changing. What do you think?
>
> That's the solution I would like to see and the only safe one.
>

Great, I will do that. Thank you.

>>
>>>> +
>>>> +    rc = modify_altp2m_entry(d, *ap2m, gpa, pfn_to_paddr(mfn_x(mfn)), level,
>>>> +                             p2mt, memaccess[xma]);
>>>
>>> Please avoid to mix bool and int even though today we have implicitly
>>> conversion.
>>>
>>
>> Ok.
>>
>>>> +    if ( rc )
>>>> +    {
>>>> +        gdprintk(XENLOG_ERR, "failed to set entry for %lx -> %lx p2m %lx\n",
>>>> +                (unsigned long)gpa, (unsigned long)(paddr_to_pfn(mfn_x(mfn))),
>>>
>>> By using (unsigned long) you will truncate the address on ARM32
>>> because we are able to support up to 40-bit addresses.
>>>
>>> Also why do you print the full address? The guest physical address may
>>> not be page-aligned so it will confuse the user.
>>>
>>
>> x86 leftover. I will change that.
>
> It is not in the x86 code....
>
>>>> +                (unsigned long)*ap2m);
>>>
>>> It does not seem really helpful to print the pointer here. You will
>>> not be able to exploit it when reading the log. Also this should be
>>> printed with "%p" and not using a cast.
>>>
>>
>> Another x86 leftover. I will change that.
>>
>>>> +        domain_crash(hp2m->domain);
>>>> +    }
>>>> +
>>>> +    rc = 1;
>>>> +
>>>> +out:
>>>> +    return rc;
>>>> +}
>>>> +
>>>>  static inline void altp2m_reset(struct p2m_domain *p2m)
>>>>  {
>>>>      read_lock(&p2m->lock);
>>>> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
>>>> index 31810e6..bee8be7 100644
>>>> --- a/xen/arch/arm/p2m.c
>>>> +++ b/xen/arch/arm/p2m.c
>>>> @@ -1812,6 +1812,12 @@ void __init setup_virt_paging(void)
>>>>      smp_call_function(setup_virt_paging_one, (void *)val, 1);
>>>>  }
>>>>
>>>> +void p2m_altp2m_check(struct vcpu *v, uint16_t idx)
>>>> +{
>>>> +    if ( altp2m_active(v->domain) )
>>>> +        altp2m_switch_vcpu_altp2m_by_id(v, idx);
>>>> +}
>>>> +
>>>
>>> I am not sure why this function lives here.
>>>
>>
>> This function is used in ./xen/common/vm_event.c. Since the function is
>> used from a common place and is already named p2m_*, I did not want to
>> pull it out of p2m.c (and have a p2m_* function prefix in altp2m.c).
>> However, I could move the function to altp2m.c and rename it globally
>> (also in the x86 implementation). Or, I could simply move it to altp2m.c
>> despite the name. What would you prefer?
>
> It seems that the altp2m functions on x86 will be moved from p2m.c to
> altp2m.c. So you may want to rename the function here.
>

Alright.

> [...]
>
>>>> +             * currently active altp2m view.
>>>> +             */
>>>> +            if ( altp2m_lazy_copy(v, gpa, gva, npfec, &p2m) )
>>>> +                return;
>>>> +
>>>> +            rc = p2m_mem_access_check(gpa, gva, npfec);
>>>
>>> Why do you call p2m_mem_access_check here? If you are here it is for a
>>> translation fault which you handle via altp2m_lazy_copy.
>>>
>>
>> Right. I have experienced that the test systems jump into the case
>> FSC_FLT_TRANS, right after I have lazily-copied the page into the
>> associated altp2m view. Not sure what the issue might be here.
>
> I don't understand. What do you mean?
>

Never mind: It was the misunderstanding I was mentioning at the
beginning of this email. The call to p2m_mem_access_check will be
removed in the next patch. Thank you.

>>
>>>> +
>>>> +            /* Trap was triggered by mem_access, work here is done */
>>>> +            if ( !rc )
>>>> +                return;
>>>> +        }
>>>> +
>>>> +        break;
>>>> +    }
>>>>      case FSC_FLT_PERM:
>>>>      {
>>>> -        paddr_t gpa;
>>>>          const struct npfec npfec = {
>>>>              .insn_fetch = 1,
>>>>              .gla_valid = 1,
>>>>              .kind = hsr.iabt.s1ptw ? npfec_kind_in_gpt :
>>>> npfec_kind_with_gla
>>>>          };
>>>>
>>>> -        if ( hpfar_is_valid(hsr.iabt.s1ptw, fsc) )
>>>> -            gpa = get_faulting_ipa(gva);
>>>> -        else
>>>> -        {
>>>> -            /*
>>>> -             * Flush the TLB to make sure the DTLB is clear before
>>>> -             * doing GVA->IPA translation. If we got here because of
>>>> -             * an entry only present in the ITLB, this translation
>>>> may
>>>> -             * still be inaccurate.
>>>> -             */
>>>> -            flush_tlb_local();
>>>> -
>>>> -            rc = gva_to_ipa(gva, &gpa, GV2M_READ);
>>>> -            if ( rc == -EFAULT )
>>>> -                return; /* Try again */
>>>> -        }
>>>> -
>>>>          rc = p2m_mem_access_check(gpa, gva, npfec);
>>>>
>>>>          /* Trap was triggered by mem_access, work here is done */
>>>> @@ -2451,6 +2482,8 @@ static void do_trap_data_abort_guest(struct
>>>> cpu_user_regs *regs,
>>>>      int rc;
>>>>      mmio_info_t info;
>>>>      uint8_t fsc = hsr.dabt.dfsc & ~FSC_LL_MASK;
>>>> +    struct vcpu *v = current;
>>>> +    struct p2m_domain *p2m = NULL;
>>>>
>>>>      info.dabt = dabt;
>>>>  #ifdef CONFIG_ARM_32
>>>> @@ -2459,7 +2492,7 @@ static void do_trap_data_abort_guest(struct
>>>> cpu_user_regs *regs,
>>>>      info.gva = READ_SYSREG64(FAR_EL2);
>>>>  #endif
>>>>
>>>> -    if ( hpfar_is_valid(hsr.iabt.s1ptw, fsc) )
>>>> +    if ( hpfar_is_valid(hsr.dabt.s1ptw, fsc) )
>>>>          info.gpa = get_faulting_ipa(info.gva);
>>>>      else
>>>>      {
>>>> @@ -2470,23 +2503,31 @@ static void do_trap_data_abort_guest(struct
>>>> cpu_user_regs *regs,
>>>>
>>>>      switch ( fsc )
>>>>      {
>>>> -    case FSC_FLT_PERM:
>>>> +    case FSC_FLT_TRANS:
>>>>      {
>>>> -        const struct npfec npfec = {
>>>> -            .read_access = !dabt.write,
>>>> -            .write_access = dabt.write,
>>>> -            .gla_valid = 1,
>>>> -            .kind = dabt.s1ptw ? npfec_kind_in_gpt :
>>>> npfec_kind_with_gla
>>>> -        };
>>>> +        if ( altp2m_active(current->domain) )
>>>
>>> I would much prefer to have this check altp2m only if the MMIO was not
>>> emulated (so moving the code afterwards). This will avoid adding
>>> overhead when accessing the virtual interrupt controller.
>>
>> I am not sure whether I understood your request. Could you be more
>> specific please? What exactly shall be moved where?
>
> With this patch, the translation fault will do:
>     1) Check altp2m
>     2) Emulate MMIO
>
> So if altp2m is enabled, you will add overhead for any MMIO access
> (such as the virtual interrupt controller).
>
> I would much prefer to see 2) then 1).
>

I see. I will change that. Thanks.
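The suggested ordering can be sketched as follows. This is a minimal, self-contained illustration; the helper names (try_handle_mmio, altp2m_lazy_copy_stub) and the toy MMIO range are invented, not the real Xen functions:

```c
#include <assert.h>
#include <stdbool.h>

/* Toy stand-ins: a fake MMIO region check and a lazy-copy that succeeds. */
static bool try_handle_mmio(unsigned long gpa) { return gpa >= 0x8000000; }
static bool altp2m_lazy_copy_stub(unsigned long gpa) { (void)gpa; return true; }

static bool handle_translation_fault(unsigned long gpa, bool altp2m_on)
{
    /* 1) Emulate MMIO first, so the common case (e.g. accesses to the
     *    virtual interrupt controller) does not pay the altp2m cost. */
    if ( try_handle_mmio(gpa) )
        return true;

    /* 2) Only then consult altp2m and lazily copy the faulting page. */
    if ( altp2m_on && altp2m_lazy_copy_stub(gpa) )
        return true;

    return false; /* neither handled it: inject an abort into the guest */
}
```

With this order, enabling altp2m adds no overhead to ordinary emulated MMIO accesses.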

> [...]
>
>>>> +
>>>> +            /* Trap was triggered by mem_access, work here is done */
>>>> +            if ( !rc )
>>>> +                return;
>>>> +        }
>>>>
>>>> -        /* Trap was triggered by mem_access, work here is done */
>>>> -        if ( !rc )
>>>> -            return;
>>>> -        break;
>>>> -    }
>>>> -    case FSC_FLT_TRANS:
>>>>          if ( dabt.s1ptw )
>>>>              goto bad_data_abort;
>>>>
>>>> @@ -2515,6 +2556,23 @@ static void do_trap_data_abort_guest(struct
>>>> cpu_user_regs *regs,
>>>>              return;
>>>>          }
>>>>          break;
>>>> +    }
>>>> +    case FSC_FLT_PERM:
>>>> +    {
>>>> +        const struct npfec npfec = {
>>>> +            .read_access = !dabt.write,
>>>> +            .write_access = dabt.write,
>>>> +            .gla_valid = 1,
>>>> +            .kind = dabt.s1ptw ? npfec_kind_in_gpt :
>>>> npfec_kind_with_gla
>>>> +        };
>>>> +
>>>> +        rc = p2m_mem_access_check(info.gpa, info.gva, npfec);
>>>> +
>>>> +        /* Trap was triggered by mem_access, work here is done */
>>>> +        if ( !rc )
>>>> +            return;
>>>> +        break;
>>>> +    }
>>>
>>> Why did you move the case handling FSC_FLT_PERM?
>>>
>>
>> No really important reason: I moved it simply because int(FSC_FLT_TRANS)
>> < int(FSC_FLT_PERM). I can move it back if you like.
>
> Again, you should not make such a change in the same patch that
> introduces new code. It took me a while to understand that you only move
> code.
>
> This series is already difficult enough to review. So please don't add
> extra work by mixing code movement and new code in the same patch.
>

I will try to avoid unnecessary code movement.

Best regards,
~Sergej


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 159+ messages in thread

* Re: [PATCH v2 20/25] arm/altp2m: Add altp2m paging mechanism.
  2016-08-06 14:21       ` Julien Grall
@ 2016-08-06 17:35         ` Sergej Proskurin
  2016-08-10  9:32         ` Sergej Proskurin
  1 sibling, 0 replies; 159+ messages in thread
From: Sergej Proskurin @ 2016-08-06 17:35 UTC (permalink / raw)
  To: Julien Grall, xen-devel; +Cc: Stefano Stabellini

Hi Julien,

I just wanted to point out that this email did not contain any content from
your side.

On 08/06/2016 04:21 PM, Julien Grall wrote:
>
>
> On 06/08/2016 13:57, Sergej Proskurin wrote:
>> Hi Julien,
>
> Hello Sergej,
>
>
>> On 08/04/2016 06:59 PM, Julien Grall wrote:
>>> Hi Sergej,
>>>
>>> On 01/08/16 18:10, Sergej Proskurin wrote:
>>>> diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
>>>> index 12be7c9..628abd7 100644
>>>> --- a/xen/arch/arm/traps.c
>>>> +++ b/xen/arch/arm/traps.c
>>>
>>> [...]
>>>
>>>> @@ -2403,35 +2405,64 @@ static void do_trap_instr_abort_guest(struct
>>>> cpu_user_regs *regs,
>>>
>>> [...]
>>>
>>>>      switch ( fsc )
>>>>      {
>>>> +    case FSC_FLT_TRANS:
>>>> +    {
>>>> +        if ( altp2m_active(d) )
>>>> +        {
>>>> +            const struct npfec npfec = {
>>>> +                .insn_fetch = 1,
>>>> +                .gla_valid = 1,
>>>> +                .kind = hsr.iabt.s1ptw ? npfec_kind_in_gpt :
>>>> npfec_kind_with_gla
>>>> +            };
>>>> +
>>>> +            /*
>>>> +             * Copy the entire page of the failing instruction
>>>> into the
>>>> +             * currently active altp2m view.
>>>> +             */
>>>> +            if ( altp2m_lazy_copy(v, gpa, gva, npfec, &p2m) )
>>>> +                return;
>>>
>>> I forgot to mention that I think there is a race condition here. If
>>> multiple vCPUs (let's say A and B) use the same altp2m, they may fault
>>> here.
>>>
>>> If vCPU A has already fixed the fault, this function will return false
>>> and continue. So this will lead to injecting an instruction abort into
>>> the guest.
>>>
>>
>> I believe this is exactly what I have experienced in the last days. I
>> have applied Tamas' patch [0] but it did not entirely solve the issue. I
>> will provide more information about the exact behavior a bit later.
>>
>>>> +
>>>> +            rc = p2m_mem_access_check(gpa, gva, npfec);
>>>> +
>>>> +            /* Trap was triggered by mem_access, work here is done */
>>>> +            if ( !rc )
>>>> +                return;
>>>> +        }
>>>> +
>>>> +        break;
>>>> +    }
>>>
>>> [...]
>>>
>>>> @@ -2470,23 +2503,31 @@ static void do_trap_data_abort_guest(struct
>>>> cpu_user_regs *regs,
>>>>
>>>>      switch ( fsc )
>>>>      {
>>>> -    case FSC_FLT_PERM:
>>>> +    case FSC_FLT_TRANS:
>>>>      {
>>>> -        const struct npfec npfec = {
>>>> -            .read_access = !dabt.write,
>>>> -            .write_access = dabt.write,
>>>> -            .gla_valid = 1,
>>>> -            .kind = dabt.s1ptw ? npfec_kind_in_gpt :
>>>> npfec_kind_with_gla
>>>> -        };
>>>> +        if ( altp2m_active(current->domain) )
>>>> +        {
>>>> +            const struct npfec npfec = {
>>>> +                .read_access = !dabt.write,
>>>> +                .write_access = dabt.write,
>>>> +                .gla_valid = 1,
>>>> +                .kind = dabt.s1ptw ? npfec_kind_in_gpt :
>>>> npfec_kind_with_gla
>>>> +            };
>>>>
>>>> -        rc = p2m_mem_access_check(info.gpa, info.gva, npfec);
>>>> +            /*
>>>> +             * Copy the entire page of the failing data access
>>>> into the
>>>> +             * currently active altp2m view.
>>>> +             */
>>>> +            if ( altp2m_lazy_copy(v, info.gpa, info.gva, npfec,
>>>> &p2m) )
>>>> +                return;
>>>
>>> Ditto.
>>>
>>
>> Ok.
>>
>>>> +
>>>> +            rc = p2m_mem_access_check(info.gpa, info.gva, npfec);
>>>> +
>>>> +            /* Trap was triggered by mem_access, work here is done */
>>>> +            if ( !rc )
>>>> +                return;
>>>> +        }
>>
>> Best regards,
>> ~Sergej
>>
>> [0] https://github.com/tklengyel/xen branch arm_mem_access_reinject
>>
>

Best regards,
~Sergej


* Re: [PATCH v2 21/25] arm/altp2m: Add HVMOP_altp2m_change_gfn.
  2016-08-06 14:34       ` Julien Grall
@ 2016-08-06 17:42         ` Sergej Proskurin
  2016-08-11  9:21           ` Julien Grall
  0 siblings, 1 reply; 159+ messages in thread
From: Sergej Proskurin @ 2016-08-06 17:42 UTC (permalink / raw)
  To: Julien Grall, xen-devel; +Cc: Stefano Stabellini

Hi Julien,


On 08/06/2016 04:34 PM, Julien Grall wrote:
>
>
> On 06/08/2016 14:45, Sergej Proskurin wrote:
>> Hi Julien,
>
> Hello Sergej,
>
>> On 08/04/2016 04:04 PM, Julien Grall wrote:
>>> On 01/08/16 18:10, Sergej Proskurin wrote:
>>>> +        return rc;
>>>> +
>>>> +    hp2m = p2m_get_hostp2m(d);
>>>> +    ap2m = d->arch.altp2m_p2m[idx];
>>>> +
>>>> +    altp2m_lock(d);
>>>> +
>>>> +    /*
>>>> +     * Flip mem_access_enabled to true when a permission is set, as
>>>> to prevent
>>>> +     * allocating or inserting super-pages.
>>>> +     */
>>>> +    ap2m->mem_access_enabled = true;
>>>
>>> Can you give more details about why you need this?
>>>
>>
>> Similar to altp2m_set_mem_access, if we remap a page that is part of a
>> superpage in the hostp2m, we first map the superpage in the form of 512
>> pages into the ap2m and then change only one page. So, we set
>> mem_access_enabled to true to shatter the superpage on the ap2m side.
>
> mem_access_enabled should only be set when mem access is enabled and
> nothing else.
>
> I don't understand why you want to avoid superpages in the altp2m. If
> the host mapping you copy is a superpage, then the altp2m mapping should
> be a superpage.
>
> The code is able to cope with inserting a mapping in the middle of a
> superpage without mem_access_enabled.
>

Alright, I will try it out in the next patch. Thank you.
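For illustration, "shattering" a superpage when a single small page inside it is modified can be sketched as below, which is the behaviour Julien notes the p2m code already handles without abusing mem_access_enabled. All names here (toy_entry, shatter_and_modify) are hypothetical; this is not the Xen implementation:

```c
#include <assert.h>
#include <stdbool.h>

#define ENTRIES_PER_TABLE 512

/* Toy page-table entry: either one superpage or 512 small mappings. */
struct toy_entry {
    bool is_superpage;
    unsigned long base_mfn;                  /* valid while a superpage   */
    unsigned long small[ENTRIES_PER_TABLE];  /* valid once shattered      */
};

/* Replace one small mapping inside a (toy) superpage: first expand the
 * superpage into 512 contiguous small entries, then overwrite just one. */
static void shatter_and_modify(struct toy_entry *e, unsigned int idx,
                               unsigned long new_mfn)
{
    if ( e->is_superpage )
    {
        for ( unsigned int i = 0; i < ENTRIES_PER_TABLE; i++ )
            e->small[i] = e->base_mfn + i;   /* preserve old contents */
        e->is_superpage = false;
    }
    e->small[idx] = new_mfn;                 /* the one changed page */
}
```

The point of the review comment is that this shattering happens as a side effect of inserting the new mapping, so no extra flag is needed to force it.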

>>
>>>> +
>>>> +    mfn = p2m_lookup_attr(ap2m, old_gfn, &p2mt, &level, NULL, NULL);
>>>> +
>>>> +    /* Check whether the page needs to be reset. */
>>>> +    if ( gfn_eq(new_gfn, INVALID_GFN) )
>>>> +    {
>>>> +        /* If mfn is mapped by old_gpa, remove old_gpa from the
>>>> altp2m table. */
>>>> +        if ( !mfn_eq(mfn, INVALID_MFN) )
>>>> +        {
>>>> +            rc = remove_altp2m_entry(d, ap2m, old_gpa,
>>>> pfn_to_paddr(mfn_x(mfn)), level);
>>>
>>> remove_altp2m_entry should take a gfn and mfn in parameter and not an
>>> address. The latter is a call for misusage of the API.
>>>
>>
>> Ok. This will also remove the need for level_sizes/level_masks in the
>> associated function.
>>
>>>> +            if ( rc )
>>>> +            {
>>>> +                rc = -EINVAL;
>>>> +                goto out;
>>>> +            }
>>>> +        }
>>>> +
>>>> +        rc = 0;
>>>> +        goto out;
>>>> +    }
>>>> +
>>>> +    /* Check host p2m if no valid entry in altp2m present. */
>>>> +    if ( mfn_eq(mfn, INVALID_MFN) )
>>>> +    {
>>>> +        mfn = p2m_lookup_attr(hp2m, old_gfn, &p2mt, &level, NULL,
>>>> &xma);
>>>> +        if ( mfn_eq(mfn, INVALID_MFN) || (p2mt != p2m_ram_rw) )
>>>
>>> Please add a comment to explain why the second check.
>>>
>>
>> Ok, I will. It has the same reason as in patch #19: it is not sufficient
>> to simply check for invalid MFNs as the type might be invalid. Also,
>> the x86 implementation did not allow remapping a gfn to a shared page.
>
> Patch #19 has a different check which does not explain this one. (p2mt
> != p2m_ram_rw) means only guest read-write RAM can effectively be
> remapped, which is different from saying that shared pages cannot be
> remapped.
>
> BTW, ARM does not support shared pages.
>
> This also leads to my question: why not allow p2m_ram_ro?
>

I don't see a reason why not. Thank you. I will remove the check.

>>
>>>> +        {
>>>> +            rc = -EINVAL;
>>>> +            goto out;
>>>> +        }
>>>> +
>>>> +        /* If this is a superpage, copy that first. */
>>>> +        if ( level != 3 )
>>>> +        {
>>>> +            rc = modify_altp2m_entry(d, ap2m, old_gpa,
>>>> pfn_to_paddr(mfn_x(mfn)),
>>>> +                                     level, p2mt, memaccess[xma]);
>>>> +            if ( rc )
>>>> +            {
>>>> +                rc = -EINVAL;
>>>> +                goto out;
>>>> +            }
>>>> +        }
>>>> +    }
>>>> +
>>>> +    mfn = p2m_lookup_attr(ap2m, new_gfn, &p2mt, &level, NULL, &xma);
>>>> +
>>>> +    /* If new_gfn is not part of altp2m, get the mapping information
>>>> from hp2m */
>>>> +    if ( mfn_eq(mfn, INVALID_MFN) )
>>>> +        mfn = p2m_lookup_attr(hp2m, new_gfn, &p2mt, &level, NULL,
>>>> &xma);
>>>> +
>>>> +    if ( mfn_eq(mfn, INVALID_MFN) || (p2mt != p2m_ram_rw) )
>>>
>>> Please add a comment to explain why the second check.
>>>
>>
>> Same reason as above.
>
> Then add a comment in the code.

I will also remove this check.

Best regards,
~Sergej



* Re: [PATCH v2 00/25] arm/altp2m: Introducing altp2m to ARM.
  2016-08-04 11:13                                             ` George Dunlap
@ 2016-08-08  4:44                                               ` Tamas K Lengyel
  0 siblings, 0 replies; 159+ messages in thread
From: Tamas K Lengyel @ 2016-08-08  4:44 UTC (permalink / raw)
  To: George Dunlap
  Cc: Stefano Stabellini, George Dunlap, Andrew Cooper, Xen-devel,
	Julien Grall, Sergej Proskurin

On Thu, Aug 4, 2016 at 5:13 AM, George Dunlap <george.dunlap@citrix.com> wrote:
> On 03/08/16 19:21, Tamas K Lengyel wrote:
>>> Although the behavior is very different compare to what x86 does. By default
>>> the guest will be able to play with altp2m.
>>
>> Personally my life would be a lot easier on x86 too if the default XSM
>> behavior was external-use only for altp2m.
>
> A patch that allows altp2m to be exposed only to external tools
> sounds like something that would be a useful improvement.
>

Alright, to do this what I would actually prefer is extending the
current altp2m setup to distinguish between the various use-cases.
Right now it's all-or-nothing (i.e. exposed to both in-guest and
external tools) if we flip the altp2m_supported boolean in the
hvm_function_table, triggered by specifying altp2m=1 in the domain
config. What we would need is a way to specify that the intended mode
is external-only. Being able to specify this, we could do the XSM
check with XSM_PRIV when the mode is external-only. If the mode is to
enable access to in-guest tools as well, we would do the check as
it is right now (XSM_TARGET). What are your thoughts on that route?
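A mode-dependent check along these lines could be sketched as follows. The enum and function names are invented for illustration and are not the actual Xen XSM interface; they only mirror the XSM_PRIV-vs-XSM_TARGET distinction described above:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical altp2m modes, mirroring the proposal above. */
enum altp2m_mode { ALTP2M_DISABLED, ALTP2M_MIXED, ALTP2M_EXTERNAL_ONLY };

static bool altp2m_op_allowed(enum altp2m_mode mode,
                              bool caller_is_toolstack,
                              bool caller_is_target_guest)
{
    switch ( mode )
    {
    case ALTP2M_EXTERNAL_ONLY:
        /* XSM_PRIV-like default: privileged (external) callers only. */
        return caller_is_toolstack;
    case ALTP2M_MIXED:
        /* XSM_TARGET-like default: the target guest itself may also call. */
        return caller_is_toolstack || caller_is_target_guest;
    default:
        return false; /* altp2m not enabled for this domain */
    }
}
```

In external-only mode an in-guest caller would thus be rejected up front, while the current mixed behaviour is preserved when requested.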

Tamas


* Re: [PATCH v2 07/25] arm/altp2m: Add altp2m init/teardown routines.
  2016-08-05  6:53     ` Sergej Proskurin
  2016-08-05  9:20       ` Julien Grall
@ 2016-08-09  9:44       ` Sergej Proskurin
  1 sibling, 0 replies; 159+ messages in thread
From: Sergej Proskurin @ 2016-08-09  9:44 UTC (permalink / raw)
  To: xen-devel, julien.grall; +Cc: sstabellini

Hi Julien,


>>> @@ -1460,7 +1469,16 @@ static int p2m_init_hostp2m(struct domain *d)
>>>
>>>  int p2m_init(struct domain *d)
>>>  {
>>> -    return p2m_init_hostp2m(d);
>>> +    int rc;
>>> +
>>> +    rc = p2m_init_hostp2m(d);
>>> +    if ( rc )
>>> +        return rc;
>>> +
>>> +    if ( altp2m_enabled(d) )
>> I am a bit skeptical that you can fully use altp2m with this check.
>> p2m_init is called at the very beginning, when the domain is
>> allocated. So HVM_PARAM_ALTP2M will not be set.
>>
> I will respond to this answer, after I have made sure this is the case.
> You are right, the need for this call depends on which point in the
> domain initialization process libxl sets the parameter HVM_PARAM_ALTP2M.
> Thank you.
>

You were right. I have removed this check from the next patch series.
Thank you.

Best regards,
~Sergej



* Re: [PATCH v2 01/25] arm/altp2m: Add first altp2m HVMOP stubs.
  2016-08-03 16:54   ` Julien Grall
  2016-08-04 16:01     ` Sergej Proskurin
@ 2016-08-09 19:16     ` Tamas K Lengyel
  2016-08-10  9:52       ` Julien Grall
  1 sibling, 1 reply; 159+ messages in thread
From: Tamas K Lengyel @ 2016-08-09 19:16 UTC (permalink / raw)
  To: Julien Grall; +Cc: Sergej Proskurin, Stefano Stabellini, Xen-devel

On Wed, Aug 3, 2016 at 10:54 AM, Julien Grall <julien.grall@arm.com> wrote:
> Hello Sergej,
>
>
> On 01/08/16 18:10, Sergej Proskurin wrote:
>>
>> This commit moves the altp2m-related code from x86 to ARM. Functions
>> that are not yet supported notify the caller or print a BUG message
>> stating their absence.
>>
>> Also, the struct arch_domain is extended with the altp2m_active
>> attribute, representing the current altp2m activity configuration of the
>> domain.
>>
>> Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
>> ---
>> Cc: Stefano Stabellini <sstabellini@kernel.org>
>> Cc: Julien Grall <julien.grall@arm.com>
>> ---
>> v2: Removed altp2m command-line option: Guard through HVM_PARAM_ALTP2M.
>>     Removed not used altp2m helper stubs in altp2m.h.
>> ---
>>  xen/arch/arm/hvm.c           | 79
>> ++++++++++++++++++++++++++++++++++++++++++++
>>  xen/include/asm-arm/altp2m.h |  4 +--
>>  xen/include/asm-arm/domain.h |  3 ++
>>  3 files changed, 84 insertions(+), 2 deletions(-)
>>
>> diff --git a/xen/arch/arm/hvm.c b/xen/arch/arm/hvm.c
>> index d999bde..eb524ae 100644
>> --- a/xen/arch/arm/hvm.c
>> +++ b/xen/arch/arm/hvm.c
>> @@ -32,6 +32,81 @@
>>
>>  #include <asm/hypercall.h>
>>
>> +#include <asm/altp2m.h>
>> +
>> +static int do_altp2m_op(XEN_GUEST_HANDLE_PARAM(void) arg)
>> +{
>> +    struct xen_hvm_altp2m_op a;
>> +    struct domain *d = NULL;
>> +    int rc = 0;
>> +
>> +    if ( copy_from_guest(&a, arg, 1) )
>> +        return -EFAULT;
>> +
>> +    if ( a.pad1 || a.pad2 ||
>> +         (a.version != HVMOP_ALTP2M_INTERFACE_VERSION) ||
>> +         (a.cmd < HVMOP_altp2m_get_domain_state) ||
>> +         (a.cmd > HVMOP_altp2m_change_gfn) )
>> +        return -EINVAL;
>> +
>> +    d = (a.cmd != HVMOP_altp2m_vcpu_enable_notify) ?
>> +        rcu_lock_domain_by_any_id(a.domain) : rcu_lock_current_domain();
>> +
>> +    if ( d == NULL )
>> +        return -ESRCH;
>> +
>> +    if ( (a.cmd != HVMOP_altp2m_get_domain_state) &&
>> +         (a.cmd != HVMOP_altp2m_set_domain_state) &&
>> +         !d->arch.altp2m_active )
>
>
> Why not using altp2m_active(d) here?
>
> Also this check looks quite racy. What prevents another CPU from
> disabling altp2m at the same time? How would the code behave?

There is a rcu_lock_domain_by_any_id before we get to this check here,
so any other CPU looking to disable altp2m would be waiting there for
the current op to finish up, so there is no race condition AFAICT.

Tamas


* Re: [PATCH v2 20/25] arm/altp2m: Add altp2m paging mechanism.
  2016-08-06 14:21       ` Julien Grall
  2016-08-06 17:35         ` Sergej Proskurin
@ 2016-08-10  9:32         ` Sergej Proskurin
  2016-08-11  8:47           ` Julien Grall
  1 sibling, 1 reply; 159+ messages in thread
From: Sergej Proskurin @ 2016-08-10  9:32 UTC (permalink / raw)
  To: Julien Grall, xen-devel; +Cc: Stefano Stabellini

Hi Julien,


>>> [...]
>>>
>>>>      switch ( fsc )
>>>>      {
>>>> +    case FSC_FLT_TRANS:
>>>> +    {
>>>> +        if ( altp2m_active(d) )
>>>> +        {
>>>> +            const struct npfec npfec = {
>>>> +                .insn_fetch = 1,
>>>> +                .gla_valid = 1,
>>>> +                .kind = hsr.iabt.s1ptw ? npfec_kind_in_gpt :
>>>> npfec_kind_with_gla
>>>> +            };
>>>> +
>>>> +            /*
>>>> +             * Copy the entire page of the failing instruction
>>>> into the
>>>> +             * currently active altp2m view.
>>>> +             */
>>>> +            if ( altp2m_lazy_copy(v, gpa, gva, npfec, &p2m) )
>>>> +                return;
>>>
>>> I forgot to mention that I think there is a race condition here. If
>>> multiple vCPUs (let's say A and B) use the same altp2m, they may fault
>>> here.
>>>
>>> If vCPU A has already fixed the fault, this function will return false
>>> and continue. So this will lead to injecting an instruction abort into
>>> the guest.
>>>

I have solved this issue as well:

In altp2m_lazy_copy, we check whether the faulting address is already
mapped in the current altp2m view. The only reason why the current
altp2m should have a valid entry for the apparently faulting address is
that it was previously (almost simultaneously) mapped by another vcpu.
That is, if the mapping for the faulting address is valid in the altp2m,
we return true and hence let the guest retry (without injecting an
instruction/data abort exception) to access the address in question.
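The race resolution described above can be sketched as a toy model. The names (view, lazy_copy_check, INVALID_MFN as an all-ones value) are illustrative stand-ins, not the real altp2m_lazy_copy implementation:

```c
#include <assert.h>
#include <stdbool.h>

#define TOY_GFNS    16
#define INVALID_MFN (~0UL)

/* Toy altp2m view: gfn -> mfn, all entries initially invalid. */
static unsigned long view[TOY_GFNS];

static void view_init(void)
{
    for ( unsigned int i = 0; i < TOY_GFNS; i++ )
        view[i] = INVALID_MFN;
}

/* If the "faulting" gfn is already valid in the view, another vCPU must
 * have fixed the fault almost simultaneously; report success so the
 * guest simply retries instead of receiving an instruction/data abort. */
static bool lazy_copy_check(unsigned long gfn, unsigned long host_mfn)
{
    if ( view[gfn] != INVALID_MFN )
        return true;          /* racing vCPU already mapped it: retry */

    view[gfn] = host_mfn;     /* copy the host mapping into the view  */
    return true;
}
```

Either way the handler returns true, so neither of the racing vCPUs injects a spurious abort into the guest.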

Best regards,
~Sergej



* Re: [PATCH v2 01/25] arm/altp2m: Add first altp2m HVMOP stubs.
  2016-08-09 19:16     ` Tamas K Lengyel
@ 2016-08-10  9:52       ` Julien Grall
  2016-08-10 14:49         ` Tamas K Lengyel
  0 siblings, 1 reply; 159+ messages in thread
From: Julien Grall @ 2016-08-10  9:52 UTC (permalink / raw)
  To: Tamas K Lengyel; +Cc: Sergej Proskurin, Stefano Stabellini, Xen-devel

Hello Tamas,

On 09/08/2016 21:16, Tamas K Lengyel wrote:
> On Wed, Aug 3, 2016 at 10:54 AM, Julien Grall <julien.grall@arm.com> wrote:
>> Hello Sergej,
>>
>>
>> On 01/08/16 18:10, Sergej Proskurin wrote:
>>>
>>> This commit moves the altp2m-related code from x86 to ARM. Functions
>>> that are not yet supported notify the caller or print a BUG message
>>> stating their absence.
>>>
>>> Also, the struct arch_domain is extended with the altp2m_active
>>> attribute, representing the current altp2m activity configuration of the
>>> domain.
>>>
>>> Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
>>> ---
>>> Cc: Stefano Stabellini <sstabellini@kernel.org>
>>> Cc: Julien Grall <julien.grall@arm.com>
>>> ---
>>> v2: Removed altp2m command-line option: Guard through HVM_PARAM_ALTP2M.
>>>     Removed not used altp2m helper stubs in altp2m.h.
>>> ---
>>>  xen/arch/arm/hvm.c           | 79
>>> ++++++++++++++++++++++++++++++++++++++++++++
>>>  xen/include/asm-arm/altp2m.h |  4 +--
>>>  xen/include/asm-arm/domain.h |  3 ++
>>>  3 files changed, 84 insertions(+), 2 deletions(-)
>>>
>>> diff --git a/xen/arch/arm/hvm.c b/xen/arch/arm/hvm.c
>>> index d999bde..eb524ae 100644
>>> --- a/xen/arch/arm/hvm.c
>>> +++ b/xen/arch/arm/hvm.c
>>> @@ -32,6 +32,81 @@
>>>
>>>  #include <asm/hypercall.h>
>>>
>>> +#include <asm/altp2m.h>
>>> +
>>> +static int do_altp2m_op(XEN_GUEST_HANDLE_PARAM(void) arg)
>>> +{
>>> +    struct xen_hvm_altp2m_op a;
>>> +    struct domain *d = NULL;
>>> +    int rc = 0;
>>> +
>>> +    if ( copy_from_guest(&a, arg, 1) )
>>> +        return -EFAULT;
>>> +
>>> +    if ( a.pad1 || a.pad2 ||
>>> +         (a.version != HVMOP_ALTP2M_INTERFACE_VERSION) ||
>>> +         (a.cmd < HVMOP_altp2m_get_domain_state) ||
>>> +         (a.cmd > HVMOP_altp2m_change_gfn) )
>>> +        return -EINVAL;
>>> +
>>> +    d = (a.cmd != HVMOP_altp2m_vcpu_enable_notify) ?
>>> +        rcu_lock_domain_by_any_id(a.domain) : rcu_lock_current_domain();
>>> +
>>> +    if ( d == NULL )
>>> +        return -ESRCH;
>>> +
>>> +    if ( (a.cmd != HVMOP_altp2m_get_domain_state) &&
>>> +         (a.cmd != HVMOP_altp2m_set_domain_state) &&
>>> +         !d->arch.altp2m_active )
>>
>>
>> Why not using altp2m_active(d) here?
>>
>> Also this check looks quite racy. What prevents another CPU from
>> disabling altp2m at the same time? How would the code behave?
>
> There is a rcu_lock_domain_by_any_id before we get to this check here,
> so any other CPU looking to disable altp2m would be waiting there for
> the current op to finish up, so there is no race condition AFAICT.

No, rcu_lock_domain_by_any_id only prevents the domain from being fully
destroyed by "locking" the rcu. It does not prevent multiple concurrent
accesses. You can look at the code if you are not convinced.

Regards,

-- 
Julien Grall


* Re: [PATCH v2 01/25] arm/altp2m: Add first altp2m HVMOP stubs.
  2016-08-10  9:52       ` Julien Grall
@ 2016-08-10 14:49         ` Tamas K Lengyel
  2016-08-11  8:17           ` Julien Grall
  0 siblings, 1 reply; 159+ messages in thread
From: Tamas K Lengyel @ 2016-08-10 14:49 UTC (permalink / raw)
  To: Julien Grall; +Cc: Sergej Proskurin, Stefano Stabellini, Xen-devel



On Aug 10, 2016 03:52, "Julien Grall" <julien.grall@arm.com> wrote:
>
> Hello Tamas,
>
>
> On 09/08/2016 21:16, Tamas K Lengyel wrote:
>>
> On Wed, Aug 3, 2016 at 10:54 AM, Julien Grall <julien.grall@arm.com> wrote:
>>>
>>> Hello Sergej,
>>>
>>>
>>> On 01/08/16 18:10, Sergej Proskurin wrote:
>>>>
>>>>
>>>> This commit moves the altp2m-related code from x86 to ARM. Functions
>>>> that are not yet supported notify the caller or print a BUG message
>>>> stating their absence.
>>>>
>>>> Also, the struct arch_domain is extended with the altp2m_active
>>>> attribute, representing the current altp2m activity configuration of
>>>> the domain.
>>>>
>>>> Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
>>>> ---
>>>> Cc: Stefano Stabellini <sstabellini@kernel.org>
>>>> Cc: Julien Grall <julien.grall@arm.com>
>>>> ---
>>>> v2: Removed altp2m command-line option: Guard through HVM_PARAM_ALTP2M.
>>>>     Removed not used altp2m helper stubs in altp2m.h.
>>>> ---
>>>>  xen/arch/arm/hvm.c           | 79
>>>> ++++++++++++++++++++++++++++++++++++++++++++
>>>>  xen/include/asm-arm/altp2m.h |  4 +--
>>>>  xen/include/asm-arm/domain.h |  3 ++
>>>>  3 files changed, 84 insertions(+), 2 deletions(-)
>>>>
>>>> diff --git a/xen/arch/arm/hvm.c b/xen/arch/arm/hvm.c
>>>> index d999bde..eb524ae 100644
>>>> --- a/xen/arch/arm/hvm.c
>>>> +++ b/xen/arch/arm/hvm.c
>>>> @@ -32,6 +32,81 @@
>>>>
>>>>  #include <asm/hypercall.h>
>>>>
>>>> +#include <asm/altp2m.h>
>>>> +
>>>> +static int do_altp2m_op(XEN_GUEST_HANDLE_PARAM(void) arg)
>>>> +{
>>>> +    struct xen_hvm_altp2m_op a;
>>>> +    struct domain *d = NULL;
>>>> +    int rc = 0;
>>>> +
>>>> +    if ( copy_from_guest(&a, arg, 1) )
>>>> +        return -EFAULT;
>>>> +
>>>> +    if ( a.pad1 || a.pad2 ||
>>>> +         (a.version != HVMOP_ALTP2M_INTERFACE_VERSION) ||
>>>> +         (a.cmd < HVMOP_altp2m_get_domain_state) ||
>>>> +         (a.cmd > HVMOP_altp2m_change_gfn) )
>>>> +        return -EINVAL;
>>>> +
>>>> +    d = (a.cmd != HVMOP_altp2m_vcpu_enable_notify) ?
>>>> +        rcu_lock_domain_by_any_id(a.domain) : rcu_lock_current_domain();
>>>> +
>>>> +    if ( d == NULL )
>>>> +        return -ESRCH;
>>>> +
>>>> +    if ( (a.cmd != HVMOP_altp2m_get_domain_state) &&
>>>> +         (a.cmd != HVMOP_altp2m_set_domain_state) &&
>>>> +         !d->arch.altp2m_active )
>>>
>>>
>>>
>>> Why not using altp2m_active(d) here?
>>>
>>> Also this check looks quite racy. What prevents another CPU from
>>> disabling altp2m at the same time? How would the code behave?
>>
>>
>> There is a rcu_lock_domain_by_any_id before we get to this check here,
>> so any other CPU looking to disable altp2m would be waiting there for
>> the current op to finish up, so there is no race condition AFAICT.
>
>
> No, rcu_lock_domain_by_any_id only prevents the domain from being fully
> destroyed by "locking" the rcu. It does not prevent multiple concurrent
> accesses. You can look at the code if you are not convinced.
>

Ah, thanks for clarifying. Then indeed there could be concurrency issues
if there are multiple tools accessing this interface. Normally that
doesn't happen, but it's probably a good idea to enforce it anyway.

Thanks,
Tamas


* Re: [PATCH v2 01/25] arm/altp2m: Add first altp2m HVMOP stubs.
  2016-08-10 14:49         ` Tamas K Lengyel
@ 2016-08-11  8:17           ` Julien Grall
  2016-08-11 14:41             ` Tamas K Lengyel
  0 siblings, 1 reply; 159+ messages in thread
From: Julien Grall @ 2016-08-11  8:17 UTC (permalink / raw)
  To: Tamas K Lengyel; +Cc: Sergej Proskurin, Stefano Stabellini, Xen-devel

Hello Tamas,

On 10/08/2016 16:49, Tamas K Lengyel wrote:
> On Aug 10, 2016 03:52, "Julien Grall" <julien.grall@arm.com> wrote:
>> On 09/08/2016 21:16, Tamas K Lengyel wrote:
>>> On Wed, Aug 3, 2016 at 10:54 AM, Julien Grall <julien.grall@arm.com> wrote:
>>> There is a rcu_lock_domain_by_any_id before we get to this check here,
>>> so any other CPU looking to disable altp2m would be waiting there for
>>> the current op to finish up, so there is no race condition AFAICT.
>>
>>
>> No, rcu_lock_domain_by_any_id only prevents the domain to be fully
> destroyed by "locking" the rcu. It does not prevent multiple concurrent
> access. You can look at the code if you are not convinced.
>>
>
> Ah thanks for clarifying. Then indeed there could be concurrency issues
> if there are multiple tools accessing this interface. Normally that
> doesn't happen though but probably a good idea to enforce it anyway.

Well, you need to think about the worst case scenario when you implement 
an interface. If you don't lock properly, the state in Xen may be 
corrupted. For instance Xen may think altp2m is active whilst it is not 
properly initialized.

Regards,

-- 
Julien Grall


* Re: [PATCH v2 14/25] arm/altp2m: Make get_page_from_gva ready for altp2m.
  2016-08-06 16:58         ` Sergej Proskurin
@ 2016-08-11  8:33           ` Julien Grall
  0 siblings, 0 replies; 159+ messages in thread
From: Julien Grall @ 2016-08-11  8:33 UTC (permalink / raw)
  To: Sergej Proskurin, xen-devel; +Cc: Stefano Stabellini



On 06/08/2016 18:58, Sergej Proskurin wrote:
> Hi Julien,

Hello Sergej,

> On 08/06/2016 03:45 PM, Julien Grall wrote:
>>
>>
>> On 06/08/2016 11:38, Sergej Proskurin wrote:
>>> Hi Julien,
>>
>> Hello Serge,
>>
>>> On 08/04/2016 01:59 PM, Julien Grall wrote:
>>>> Hello Sergej,
>>>>
>>>> On 01/08/16 18:10, Sergej Proskurin wrote:
>>>>> The function get_page_from_gva uses ARM's hardware support to
>>>>> translate
>>>>> gva's to machine addresses. This function is used, among others, for
>>>>> memory regulation purposes, e.g, within the context of memory
>>>>> ballooning.
>>>>> To ensure correct behavior while altp2m is in use, we use the
>>>>> host's p2m
>>>>> table for the associated gva to ma translation. This is required at
>>>>> this
>>>>> point, as altp2m lazily copies pages from the host's p2m and even
>>>>> might
>>>>> be flushed because of changes to the host's p2m (as it is done within
>>>>> the context of memory ballooning).
>>>>
>>>> I was expecting to see some change in
>>>> p2m_mem_access_check_and_get_page. Is there any reason to not fix it?
>>>>
>>>>
>>>
>>> I did not yet encounter any issues with
>>> p2m_mem_access_check_and_get_page. According to ARM ARM, ATS1C** (see
>>> gva_to_ipa_par) translates VA to IPA in non-secure privilege levels (as
>>> it is the case here). Thus, the 2nd stage translation represented by
>>> the (alt)p2m is not really considered at this point, which makes an
>>> extension unnecessary.
>>>
>>> Or did you have anything else in mind?
>>
>> The stage-1 page tables are living in the guest memory. So every time
>> you access an entry in the page table, you have to translate the IPA
>> (guest physical address) into a PA.
>>
>> However, the underlying memory of those page table may have
>> restriction permission or does not exist in the altp2m at all. So the
>> translation will fail.
>>
>
> Please correct me if I am wrong but as far as I understand: the function
> p2m_mem_access_check_and_get_page is called only from get_page_from_gva.
> Also it is called only if the page translation within the function
> get_page_from_gva was not successful. Because of the fact that we use
> the hostp2m's 2nd stage translation table including the original memory
> access permissions (please note the short sequence, where we temporarily
> reset the VTTBR_EL2 of the hostp2m if altp2m is active), potential
> faults (which would lead to the call of the function
> p2m_mem_access_check_and_get_page) must have reasons beyond altp2m.

The translation in get_page_from_gva may fail if the permission in the 
hostp2m has been restricted by memaccess (for instance because 
default_access is not p2m_access_rwx).

So you will fall back to p2m_mem_access_check_and_get_page. This function 
calls gva_to_ipa, which will use the altp2m to do the translation.

Therefore I think you need to modify p2m_mem_access_check_and_get_page 
to cope with altp2m.

Regards,

-- 
Julien Grall


* Re: [PATCH v2 20/25] arm/altp2m: Add altp2m paging mechanism.
  2016-08-10  9:32         ` Sergej Proskurin
@ 2016-08-11  8:47           ` Julien Grall
  2016-08-11 17:13             ` Sergej Proskurin
  0 siblings, 1 reply; 159+ messages in thread
From: Julien Grall @ 2016-08-11  8:47 UTC (permalink / raw)
  To: Sergej Proskurin, xen-devel; +Cc: Stefano Stabellini



On 10/08/2016 11:32, Sergej Proskurin wrote:
> Hi Julien,

Hello Sergej,

>>>> [...]
>>>>
>>>>>      switch ( fsc )
>>>>>      {
>>>>> +    case FSC_FLT_TRANS:
>>>>> +    {
>>>>> +        if ( altp2m_active(d) )
>>>>> +        {
>>>>> +            const struct npfec npfec = {
>>>>> +                .insn_fetch = 1,
>>>>> +                .gla_valid = 1,
>>>>> +                .kind = hsr.iabt.s1ptw ? npfec_kind_in_gpt :
>>>>> npfec_kind_with_gla
>>>>> +            };
>>>>> +
>>>>> +            /*
>>>>> +             * Copy the entire page of the failing instruction
>>>>> into the
>>>>> +             * currently active altp2m view.
>>>>> +             */
>>>>> +            if ( altp2m_lazy_copy(v, gpa, gva, npfec, &p2m) )
>>>>> +                return;
>>>>
>>>> I forgot to mention that I think there is a race condition here. If
>>>> multiple vCPU (let say A and B) use the same altp2m, they may fault
>>>> here.
>>>>
>>>> If vCPU A already fixed the fault, this function will return false and
>>>> continue. So this will lead to inject an instruction abort to the
>>>> guest.
>>>>
>
> I have solved this issue as well:
>
> In altp2m_lazy_copy, we check whether the faulting address is already
> mapped in the current altp2m view. The only reason why the current
> altp2m should have a valid entry for the apparently faulting address is
> that it was previously (almost simultaneously) mapped by another vcpu.
> That is, if the mapping for the faulting address is valid in the altp2m,
> we return true and hence let the guest retry (without injecting an
> instruction/data abort exception) to access the address in question.

I am afraid that your description does not match the implementation of 
altp2m_lazy_copy in this version of the patch series.

If you find a valid entry in the altp2m, you will return 0 (i.e. false). 
This will lead to injecting an abort into the guest.

Regards,

-- 
Julien Grall


* Re: [PATCH v2 08/25] arm/altp2m: Add HVMOP_altp2m_set_domain_state.
  2016-08-06  9:36     ` Sergej Proskurin
  2016-08-06 14:18       ` Julien Grall
  2016-08-06 14:21       ` Julien Grall
@ 2016-08-11  9:08       ` Julien Grall
  2 siblings, 0 replies; 159+ messages in thread
From: Julien Grall @ 2016-08-11  9:08 UTC (permalink / raw)
  To: Sergej Proskurin, xen-devel; +Cc: Stefano Stabellini

Hello Sergej,

On 06/08/2016 11:36, Sergej Proskurin wrote:
>>> +
>>> +    /* Initialize the new altp2m view. */
>>> +    rc = p2m_init_one(d, p2m);
>>> +    if ( rc )
>>> +        goto err;
>>> +
>>> +    /* Allocate a root table for the altp2m view. */
>>> +    rc = p2m_alloc_table(p2m);
>>> +    if ( rc )
>>> +        goto err;
>>> +
>>> +    p2m->p2m_class = p2m_alternate;
>>> +    p2m->access_required = 1;
>>
>> Please use true here. Although, I am not sure why you want to enable
>> the access by default.
>>
>
> Will do.
>
> p2m->access_required is true by default in the x86 implementation. Also,
> there is currently no way to manually set access_required on altp2m.
> Besides, I do not see a scenario, where it makes sense to run altp2m
> without access_required set to true.

Please add a comment in the code to explain it.

[...]

>>
>>> +
>>> +            /*
>>> +             * The altp2m_active state has been deactivated. It is
>>> now safe to
>>> +             * flush all altp2m views -- including altp2m[0].
>>> +             */
>>> +            if ( ostate )
>>> +                altp2m_flush(d);
>>
>> The function altp2m_flush is defined afterwards (in patch #9). Please
>> make sure that all the patches compile one by one.
>>
>
> The patches compile one by one. Please note that there is an
> altp2m_flush stub inside of this patch.
>
> +/* Flush all the alternate p2m's for a domain */
> +static inline void altp2m_flush(struct domain *d)
> +{
> +    /* Not yet implemented. */
> +}

I don't want to see stubs that are replaced later on within the same 
series. Patch #9 does not seem to depend on patch #8, so I don't see any 
reason why you can't swap the two patches.

Regards,

-- 
Julien Grall


* Re: [PATCH v2 21/25] arm/altp2m: Add HVMOP_altp2m_change_gfn.
  2016-08-06 17:42         ` Sergej Proskurin
@ 2016-08-11  9:21           ` Julien Grall
  0 siblings, 0 replies; 159+ messages in thread
From: Julien Grall @ 2016-08-11  9:21 UTC (permalink / raw)
  To: Sergej Proskurin, xen-devel; +Cc: Stefano Stabellini



On 06/08/2016 19:42, Sergej Proskurin wrote:
> Hi Julien,

Hello Sergej,

> On 08/06/2016 04:34 PM, Julien Grall wrote:
>>
>>
>> On 06/08/2016 14:45, Sergej Proskurin wrote:
>>> Hi Julien,
>>
>> Hello Sergej,
>>
>>> On 08/04/2016 04:04 PM, Julien Grall wrote:
>>>> On 01/08/16 18:10, Sergej Proskurin wrote:
>>>>> +        return rc;
>>>>> +
>>>>> +    hp2m = p2m_get_hostp2m(d);
>>>>> +    ap2m = d->arch.altp2m_p2m[idx];
>>>>> +
>>>>> +    altp2m_lock(d);
>>>>> +
>>>>> +    /*
>>>>> +     * Flip mem_access_enabled to true when a permission is set, as
>>>>> to prevent
>>>>> +     * allocating or inserting super-pages.
>>>>> +     */
>>>>> +    ap2m->mem_access_enabled = true;
>>>>
>>>> Can you give more details about why you need this?
>>>>
>>>
>>> Similar to altp2m_set_mem_access, if we remap a page that is part of a
>>> super page in the hostp2m, we first map the superpage in form of 512
>>> pages into the ap2m and then change only one page. So, we set
>>> mem_access_enabled to true to shatter the superpage on the ap2m side.
>>
>> mem_access_enabled should only be set when mem access is enabled and
>> nothing.
>>
>> I don't understand why you want to avoid superpage in the altp2m. If
>> you copy a host mapping is a superpage, then a altp2m mapping should
>> be a superpage.
>>
>> The code is able to cope with inserting a mapping in the middle of a
>> superpage without mem_access_enabled.
>>
>
> Alright, I will try it out in the next patch. Thank you.
>
>>>
>>>>> +
>>>>> +    mfn = p2m_lookup_attr(ap2m, old_gfn, &p2mt, &level, NULL, NULL);
>>>>> +
>>>>> +    /* Check whether the page needs to be reset. */
>>>>> +    if ( gfn_eq(new_gfn, INVALID_GFN) )
>>>>> +    {
>>>>> +        /* If mfn is mapped by old_gpa, remove old_gpa from the
>>>>> altp2m table. */
>>>>> +        if ( !mfn_eq(mfn, INVALID_MFN) )
>>>>> +        {
>>>>> +            rc = remove_altp2m_entry(d, ap2m, old_gpa,
>>>>> pfn_to_paddr(mfn_x(mfn)), level);
>>>>
>>>> remove_altp2m_entry should take a gfn and mfn in parameter and not an
>>>> address. The latter is a call for misusage of the API.
>>>>
>>>
>>> Ok. This will also remove the need for level_sizes/level_masks in the
>>> associated function.
>>>
>>>>> +            if ( rc )
>>>>> +            {
>>>>> +                rc = -EINVAL;
>>>>> +                goto out;
>>>>> +            }
>>>>> +        }
>>>>> +
>>>>> +        rc = 0;
>>>>> +        goto out;
>>>>> +    }
>>>>> +
>>>>> +    /* Check host p2m if no valid entry in altp2m present. */
>>>>> +    if ( mfn_eq(mfn, INVALID_MFN) )
>>>>> +    {
>>>>> +        mfn = p2m_lookup_attr(hp2m, old_gfn, &p2mt, &level, NULL,
>>>>> &xma);
>>>>> +        if ( mfn_eq(mfn, INVALID_MFN) || (p2mt != p2m_ram_rw) )
>>>>
>>>> Please add a comment to explain why the second check.
>>>>
>>>
>>> Ok, I will. It has the same reason as in patch #19: it is not sufficient
>>> to simply check for invalid MFNs, as the type might be invalid. Also,
>>> the x86 implementation did not allow remapping a gfn to a shared page.
>>
>> Patch #19 has a different check which does not explain this one. (p2mt
>> != p2m_ram_rw) only guest read-write RAM can effectively be remapped
>> which is different than shared page cannot be remapped.
>>
>> BTW, ARM does not support shared page.
>>
>> This also lead to my question, why not allowing p2m_ram_ro?
>>
>
> I don't see a reason why not. Thank you. I will remove the check.

Be careful, I never asked to remove the check. The p2m type covers more 
than two cases, so if you remove the check completely it would be possible 
to change device memory, grant mappings, foreign mappings, ...

The latter would open a security issue, because we require taking a 
reference on any foreign mapping before mapping it in the p2m.

So I think it only makes sense to allow changing a gfn for p2m_ram_ro and 
p2m_ram_rw.

>
>>>
>>>>> +        {
>>>>> +            rc = -EINVAL;
>>>>> +            goto out;
>>>>> +        }
>>>>> +
>>>>> +        /* If this is a superpage, copy that first. */
>>>>> +        if ( level != 3 )
>>>>> +        {
>>>>> +            rc = modify_altp2m_entry(d, ap2m, old_gpa,
>>>>> pfn_to_paddr(mfn_x(mfn)),
>>>>> +                                     level, p2mt, memaccess[xma]);
>>>>> +            if ( rc )
>>>>> +            {
>>>>> +                rc = -EINVAL;
>>>>> +                goto out;
>>>>> +            }
>>>>> +        }
>>>>> +    }
>>>>> +
>>>>> +    mfn = p2m_lookup_attr(ap2m, new_gfn, &p2mt, &level, NULL, &xma);
>>>>> +
>>>>> +    /* If new_gfn is not part of altp2m, get the mapping information
>>>>> from hp2m */
>>>>> +    if ( mfn_eq(mfn, INVALID_MFN) )
>>>>> +        mfn = p2m_lookup_attr(hp2m, new_gfn, &p2mt, &level, NULL,
>>>>> &xma);
>>>>> +
>>>>> +    if ( mfn_eq(mfn, INVALID_MFN) || (p2mt != p2m_ram_rw) )
>>>>
>>>> Please add a comment to explain why the second check.
>>>>
>>>
>>> Same reason as above.
>>
>> Then add a comment in the code.
>
> I will also remove this check.

No. See my answer above.

Regards,

-- 
Julien Grall


* Re: [PATCH v2 01/25] arm/altp2m: Add first altp2m HVMOP stubs.
  2016-08-11  8:17           ` Julien Grall
@ 2016-08-11 14:41             ` Tamas K Lengyel
  2016-08-12  8:10               ` Julien Grall
  0 siblings, 1 reply; 159+ messages in thread
From: Tamas K Lengyel @ 2016-08-11 14:41 UTC (permalink / raw)
  To: Julien Grall; +Cc: Sergej Proskurin, Stefano Stabellini, Xen-devel



On Aug 11, 2016 02:18, "Julien Grall" <julien.grall@arm.com> wrote:
>
> Hello Tamas,
>
>
> On 10/08/2016 16:49, Tamas K Lengyel wrote:
>>
>> On Aug 10, 2016 03:52, "Julien Grall" <julien.grall@arm.com> wrote:
>>>
>>> On 09/08/2016 21:16, Tamas K Lengyel wrote:
>>>>
>>>> On Wed, Aug 3, 2016 at 10:54 AM, Julien Grall <julien.grall@arm.com> wrote:
>>>>
>>>> There is a rcu_lock_domain_by_any_id before we get to this check here,
>>>> so any other CPU looking to disable altp2m would be waiting there for
>>>> the current op to finish up, so there is no race condition AFAICT.
>>>
>>>
>>>
>>> No, rcu_lock_domain_by_any_id only prevents the domain to be fully
>>> destroyed by "locking" the rcu. It does not prevent multiple concurrent
>>> access. You can look at the code if you are not convinced.
>>>
>>>
>>
>> Ah thanks for clarifying. Then indeed there could be concurrency issues
>> if there are multiple tools accessing this interface. Normally that
>> doesn't happen though but probably a good idea to enforce it anyway.
>
>
> Well, you need to think about the worst case scenario when you implement
> an interface. If you don't lock properly, the state in Xen may be
> corrupted. For instance Xen may think altp2m is active whilst it is not
> properly initialized.
>

Sure. We largely followed the x86 implementation here, and there aren't any
hvmops there that do synchronization like that; only the rcu lock is taken.
Adding a domain_lock() should be fine though.

Tamas


* Re: [PATCH v2 23/25] arm/altp2m: Extend libxl to activate altp2m on ARM.
  2016-08-02 14:07     ` Sergej Proskurin
@ 2016-08-11 16:00       ` Wei Liu
  2016-08-15 16:07         ` Sergej Proskurin
  0 siblings, 1 reply; 159+ messages in thread
From: Wei Liu @ 2016-08-11 16:00 UTC (permalink / raw)
  To: Sergej Proskurin; +Cc: wei.liu2, xen-devel

Sorry for the late reply.

On Tue, Aug 02, 2016 at 04:07:53PM +0200, Sergej Proskurin wrote:
> Hi Wei,
[...]
> >> @@ -901,8 +903,8 @@ static void initiate_domain_create(libxl__egc *egc,
> >>  
> >>      if (d_config->c_info.type == LIBXL_DOMAIN_TYPE_HVM &&
> >>          (libxl_defbool_val(d_config->b_info.u.hvm.nested_hvm) &&
> >> -         libxl_defbool_val(d_config->b_info.u.hvm.altp2m))) {
> >> -        LOG(ERROR, "nestedhvm and altp2mhvm cannot be used together");
> >> +         libxl_defbool_val(d_config->b_info.altp2m))) {
> >> +        LOG(ERROR, "nestedhvm and altp2m cannot be used together");
> >>          goto error_out;
> >>      }
> >>  
> >> diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
> >> index ec29060..1550ef8 100644
> >> --- a/tools/libxl/libxl_dom.c
> >> +++ b/tools/libxl/libxl_dom.c
> >> @@ -291,8 +291,6 @@ static void hvm_set_conf_params(xc_interface *handle, uint32_t domid,
> >>                      libxl_defbool_val(info->u.hvm.vpt_align));
> >>      xc_hvm_param_set(handle, domid, HVM_PARAM_NESTEDHVM,
> >>                      libxl_defbool_val(info->u.hvm.nested_hvm));
> >> -    xc_hvm_param_set(handle, domid, HVM_PARAM_ALTP2M,
> >> -                    libxl_defbool_val(info->u.hvm.altp2m));
> >>  }
> >>  
> >>  int libxl__build_pre(libxl__gc *gc, uint32_t domid,
> >> @@ -434,6 +432,8 @@ int libxl__build_pre(libxl__gc *gc, uint32_t domid,
> >>  #endif
> >>      }
> >>  
> >> +    xc_hvm_param_set(ctx->xch, domid, HVM_PARAM_ALTP2M, libxl_defbool_val(info->altp2m));
> >> +
> > And the reason for moving this call to this function is?
> 
> Since this implementation removes the field info->u.hvm.altp2m and
> rather uses the common field info->altp2m, I wanted to set the altp2m
> parameter outside of a function that is associated with an HVM domain
> (as the function hvm_set_conf_params is called only if the field
> info->type == LIBXL_DOMAIN_TYPE_HVM). The idea was to have only one call
> to xc_hvm_param_set independent of the domain type, as we do not
> distinguish between the underlying architecture anymore. If you believe
> that we nevertheless need two calls in the code, I will move the
> function call in question back to hvm_set_conf_params and add an
> additional call to xc_hvm_param_set for the general field info->altp2m.
> Yet, IMHO the architecture would benefit if we would have only one call
> to xc_hvm_param_set.
> 

No problem. I'm fine with your arrangement.

But you do need to wrap the line properly.

> >>      rc = libxl__arch_domain_create(gc, d_config, domid);
> >>  
> >>      return rc;
> >> diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
> >> index ef614be..42e7c95 100644
> >> --- a/tools/libxl/libxl_types.idl
> >> +++ b/tools/libxl/libxl_types.idl
> >> @@ -512,7 +512,6 @@ libxl_domain_build_info = Struct("domain_build_info",[
> >>                                         ("mmio_hole_memkb",  MemKB),
> >>                                         ("timer_mode",       libxl_timer_mode),
> >>                                         ("nested_hvm",       libxl_defbool),
> >> -                                       ("altp2m",           libxl_defbool),
> > No, you can't remove existing field -- that would break old
> > applications which use the old field.
> >
> > And please handle compatibility in libxl with old applications in mind.
> 
> I did not expect other applications using this field outside of libxl
> but of course you are right. My next patch will contain the legacy
> info->u.hvm.altp2m field in addition to the general/common field
> info->altp2m. Thank you for pointing that out.
> 
> >>                                         ("smbios_firmware",  string),
> >>                                         ("acpi_firmware",    string),
> >>                                         ("hdtype",           libxl_hdtype),
> >> @@ -561,6 +560,9 @@ libxl_domain_build_info = Struct("domain_build_info",[
> >>  
> >>      ("arch_arm", Struct(None, [("gic_version", libxl_gic_version),

> >>                                ])),
> >> +    # Alternate p2m is not bound to any architecture or guest type, as it is
> >> +    # supported by x86 HVM and ARM PV guests.
> > Just "ARM guests" would do. ARM doesn't have notion of PV vs HVM.
> 
> I will change that, thank you. I mentioned ARM PV as currently it is the
> type that is registered for guests on ARM.
> 
> >> +    ("altp2m", libxl_defbool),
> >>  
> >>      ], dir=DIR_IN
> >>  )
> >> diff --git a/tools/libxl/xl_cmdimpl.c b/tools/libxl/xl_cmdimpl.c
> >> index 51dc7a0..f4a49ee 100644
> >> --- a/tools/libxl/xl_cmdimpl.c
> >> +++ b/tools/libxl/xl_cmdimpl.c
> >> @@ -1667,7 +1667,12 @@ static void parse_config_data(const char *config_source,
> >>  
> >>          xlu_cfg_get_defbool(config, "nestedhvm", &b_info->u.hvm.nested_hvm, 0);
> >>  
> >> -        xlu_cfg_get_defbool(config, "altp2mhvm", &b_info->u.hvm.altp2m, 0);
> >> +        /* The config parameter "altp2mhvm" is considered deprecated, however
> >> +         * further considered because of legacy reasons. The config parameter
> >> +         * "altp2m" shall be used instead. */
> >> +        if (!xlu_cfg_get_defbool(config, "altp2mhvm", &b_info->altp2m, 0))
> >> +            fprintf(stderr, "WARNING: Specifying \"altp2mhvm\" is deprecated. "
> >> +                    "Please use \"altp2m\" instead.\n");
> > In this case you should:
> >
> >  if both altp2mhvm and altp2m are present, use the latter.
> >  if only altp2mhvm is present, honour it.
> 
> This is exactly the behavior right now (see comment below):
> 
> + /* The config parameter "altp2m" replaces the parameter "altp2mhvm".
> +  * For legacy reasons, both parameters are accepted on x86 HVM guests
> +  * (only "altp2m" is accepted on ARM guests). If both parameters are
> +  * given, it must be considered that the config parameter "altp2m" will
> +  * always have priority over "altp2mhvm". */
> 
> The warning is just displayed at this point; "altp2mhvm" is considered
> as a valid parameter.
> 
> >
> > Note that we have not yet removed the old option. Ideally we would give
> > users a transition period before removing the option.
> >
> > Also you need to patch docs/man/xl.pod.1.in for the new option.
> 
> I cannot find any entry concerning the current "altp2mhvm" option.
> Please correct me if I am wrong, but as far as I understand, this
> document holds information about the "xl" tool. Since altp2m is
> currently not controlled through the xl tool, I am actually not sure
> whether it is the right place for it. I believe you meant the file
> docs/man/xl.cfg.pod.5. If yes, I will gladly extend it, thank you.
> 

Yes, I meant xl.cfg.pod.5.in. Sorry for the typo.

> >>  
> >>          xlu_cfg_replace_string(config, "smbios_firmware",
> >>                                 &b_info->u.hvm.smbios_firmware, 0);
> >> @@ -1727,6 +1732,25 @@ static void parse_config_data(const char *config_source,
> >>          abort();
> >>      }
> >>  
> >> +    bool altp2m_support = false;
> >> +#if defined(__i386__) || defined(__x86_64__)
> >> +    /* Alternate p2m support on x86 is available only for HVM guests. */
> >> +    if (b_info->type == LIBXL_DOMAIN_TYPE_HVM)
> >> +        altp2m_support = true;
> >> +#elif defined(__arm__) || defined(__aarch64__)
> >> +    /* Alternate p2m support on ARM is available for all guests. */
> >> +    altp2m_support = true;
> >> +#endif
> >> +
> > I don't think you need to care too much about dead option here.
> > Xl should be able to set altp2m field all the time. 
> 
> I am actually not sure what you mean by the dead option. Could you be
> more specific, please?
>  
> Also, in the last patch, we came to the agreement to guard the altp2m
> functionality solely through the HVM param (instead of the additional
> Xen cmd-line option altp2m), which is set through libxl. Because of
> this, the entire domU initialization routine depends on this option
> being set at the moment of domain creation -- not after it. That is,
> being able to set the altp2m option at any time would actually break the
> system.
> 

Why is this? It's possible we're talking past each other.

> > And there should be
> > code in libxl to handle situation when altp2m is not available.
> 
> I am not so sure about that either:
> 
> Currently, altp2m is supported on x86 HVM and on ARM.
>  * on x86, altp2m depends on HW to support altp2m. Therefore, the
> cmd-line option "altp2m" is used to activate altp2m. All libxl
> interaction with the altp2m subsystem will be discarded at this point.
>  * on ARM, altp2m is implemented purely in SW. That is, we do not have
> ARM architectures that would not support altp2m -- at least, that is the idea.
>  * All other architectures should not be able to activate altp2m, which
> is why I chose the #defines (__arm__, __x86_64__, ...) to guard the
> upper option.
> 
> >> +    if (altp2m_support) {
> >> +        /* The config parameter "altp2m" replaces the parameter "altp2mhvm".
> >> +         * For legacy reasons, both parameters are accepted on x86 HVM guests
> >> +         * (only "altp2m" is accepted on ARM guests). If both parameters are
> >> +         * given, it must be considered that the config parameter "altp2m" will
> >> +         * always have priority over "altp2mhvm". */
> >> +        xlu_cfg_get_defbool(config, "altp2m", &b_info->altp2m, 0);
> >> +    }
> >> +


What I meant was:

We don't care whether xl (the application) sets altp2m or not. It's
entirely possible that xl sets that field on its own. The validation
should be done inside libxl: if a particular configuration is not
supported by libxl, it should reject it. Libxl shouldn't rely on xl (the
application) to pass in fully sanitised data; it should sanitise the
input itself.

Basically that means, in xl, you only need:

    +     xlu_cfg_get_defbool(config, "altp2m", &b_info->altp2m, 0);

And then in libxl:initiate_domain_create you check if the configuration
is valid. You can see a bunch of other checks are already there.

Does this make sense to you?

Wei.


> > As always, if what I said above doesn't make sense to you, feel free to
> > ask.
> 
> Thank you very much for your review.
> 
> Best regards,
> ~Sergej
> 


* Re: [PATCH v2 20/25] arm/altp2m: Add altp2m paging mechanism.
  2016-08-11  8:47           ` Julien Grall
@ 2016-08-11 17:13             ` Sergej Proskurin
  0 siblings, 0 replies; 159+ messages in thread
From: Sergej Proskurin @ 2016-08-11 17:13 UTC (permalink / raw)
  To: Julien Grall, xen-devel; +Cc: Stefano Stabellini

Hi Julien,


On 08/11/2016 10:47 AM, Julien Grall wrote:
>
>
> On 10/08/2016 11:32, Sergej Proskurin wrote:
>> Hi Julien,
>
> Hello Sergej,
>
>>>>> [...]
>>>>>
>>>>>>      switch ( fsc )
>>>>>>      {
>>>>>> +    case FSC_FLT_TRANS:
>>>>>> +    {
>>>>>> +        if ( altp2m_active(d) )
>>>>>> +        {
>>>>>> +            const struct npfec npfec = {
>>>>>> +                .insn_fetch = 1,
>>>>>> +                .gla_valid = 1,
>>>>>> +                .kind = hsr.iabt.s1ptw ? npfec_kind_in_gpt :
>>>>>> npfec_kind_with_gla
>>>>>> +            };
>>>>>> +
>>>>>> +            /*
>>>>>> +             * Copy the entire page of the failing instruction
>>>>>> into the
>>>>>> +             * currently active altp2m view.
>>>>>> +             */
>>>>>> +            if ( altp2m_lazy_copy(v, gpa, gva, npfec, &p2m) )
>>>>>> +                return;
>>>>>
>>>>> I forgot to mention that I think there is a race condition here. If
>>>>> multiple vCPU (let say A and B) use the same altp2m, they may fault
>>>>> here.
>>>>>
>>>>> If vCPU A already fixed the fault, this function will return false
>>>>> and
>>>>> continue. So this will lead to inject an instruction abort to the
>>>>> guest.
>>>>>
>>
>> I have solved this issue as well:
>>
>> In altp2m_lazy_copy, we check whether the faulting address is already
>> mapped in the current altp2m view. The only reason why the current
>> altp2m should have a valid entry for the apparently faulting address is
>> that it was previously (almost simultaneously) mapped by another vcpu.
>> That is, if the mapping for the faulting address is valid in the altp2m,
>> we return true and hence let the guest retry (without injecting an
>> instruction/data abort exception) to access the address in question.
>
> I am afraid that your description does not match the implementation of
> altp2m_lazy_copy in this version of the patch series.
>
> If you find a valid entry in the altp2m, you will return 0 (i.e
> false). This will lead to inject an abort into the guest.

I was describing the way I have solved it in the new patch. I apologize
if I did not make that clear enough.

Best regards,
~Sergej


* Re: [PATCH v2 01/25] arm/altp2m: Add first altp2m HVMOP stubs.
  2016-08-11 14:41             ` Tamas K Lengyel
@ 2016-08-12  8:10               ` Julien Grall
  0 siblings, 0 replies; 159+ messages in thread
From: Julien Grall @ 2016-08-12  8:10 UTC (permalink / raw)
  To: Tamas K Lengyel; +Cc: Sergej Proskurin, Stefano Stabellini, Xen-devel



On 11/08/2016 16:41, Tamas K Lengyel wrote:
> On Aug 11, 2016 02:18, "Julien Grall" <julien.grall@arm.com> wrote:
>>
>> Hello Tamas,
>>
>>
>> On 10/08/2016 16:49, Tamas K Lengyel wrote:
>>>
>>> On Aug 10, 2016 03:52, "Julien Grall" <julien.grall@arm.com> wrote:
>>>>
>>>> On 09/08/2016 21:16, Tamas K Lengyel wrote:
>>>>>
>>>>> On Wed, Aug 3, 2016 at 10:54 AM, Julien Grall <julien.grall@arm.com> wrote:
>>>>>
>>>>> There is a rcu_lock_domain_by_any_id before we get to this check here,
>>>>> so any other CPU looking to disable altp2m would be waiting there for
>>>>> the current op to finish up, so there is no race condition AFAICT.
>>>>
>>>>
>>>>
>>>> No, rcu_lock_domain_by_any_id only prevents the domain from being
>>>> fully destroyed by "locking" the rcu. It does not prevent multiple
>>>> concurrent access. You can look at the code if you are not convinced.
>>>>
>>>>
>>>
>>> Ah thanks for clarifying. Then indeed there could be concurrency issues
>>> if there are multiple tools accessing this interface. Normally that
>>> doesn't happen, though it's probably a good idea to enforce it anyway.
>>
>>
>> Well, you need to think about the worst case scenario when you
> implement an interface. If you don't lock properly, the state in Xen may
> be corrupted. For instance Xen may think altp2m is active whilst it is
> not properly initialized.
>>
>
> Sure. We largely followed the x86 implementation here and there aren't
> any hvmops there that do synchronization like that; only the rcu lock
> is taken. Adding a domain_lock() should be fine though.

I would be curious to know why the x86 implementation does not need this 
lock.

Cheers,

-- 
Julien Grall


* Re: [PATCH v2 23/25] arm/altp2m: Extend libxl to activate altp2m on ARM.
  2016-08-11 16:00       ` Wei Liu
@ 2016-08-15 16:07         ` Sergej Proskurin
  0 siblings, 0 replies; 159+ messages in thread
From: Sergej Proskurin @ 2016-08-15 16:07 UTC (permalink / raw)
  To: xen-devel, wei.liu2; +Cc: ian.jackson

Hi Wei,

On 08/11/2016 06:00 PM, Wei Liu wrote:
> Sorry for the late reply.
> 

No worries, it's all good :) Thanks for your reply.

[...]

>>>>                                         ("smbios_firmware",  string),
>>>>                                         ("acpi_firmware",    string),
>>>>                                         ("hdtype",           libxl_hdtype),
>>>> @@ -561,6 +560,9 @@ libxl_domain_build_info = Struct("domain_build_info",[
>>>>  
>>>>      ("arch_arm", Struct(None, [("gic_version", libxl_gic_version),
> 
> But you do need to wrap the line properly.

I am not sure whether this comment was intended, as the line above
containing the fields arch_arm and gic_version was not changed by the
patch.

>>>>                                ])),
>>>> +    # Alternate p2m is not bound to any architecture or guest type, as it is
>>>> +    # supported by x86 HVM and ARM PV guests.
>>> Just "ARM guests" would do. ARM doesn't have notion of PV vs HVM.
>>
>> I will change that, thank you. I mentioned ARM PV as currently it is the
>> type that is registered for guests on ARM.
>>
>>>> +    ("altp2m", libxl_defbool),
>>>>  
>>>>      ], dir=DIR_IN
>>>>  )

[...]

>>>>  
>>>>          xlu_cfg_replace_string(config, "smbios_firmware",
>>>>                                 &b_info->u.hvm.smbios_firmware, 0);
>>>> @@ -1727,6 +1732,25 @@ static void parse_config_data(const char *config_source,
>>>>          abort();
>>>>      }
>>>>  
>>>> +    bool altp2m_support = false;
>>>> +#if defined(__i386__) || defined(__x86_64__)
>>>> +    /* Alternate p2m support on x86 is available only for HVM guests. */
>>>> +    if (b_info->type == LIBXL_DOMAIN_TYPE_HVM)
>>>> +        altp2m_support = true;
>>>> +#elif defined(__arm__) || defined(__aarch64__)
>>>> +    /* Alternate p2m support on ARM is available for all guests. */
>>>> +    altp2m_support = true;
>>>> +#endif
>>>> +
>>> I don't think you need to care too much about dead option here.
>>> Xl should be able to set altp2m field all the time. 
>>
>> I am actually not sure what you mean by the dead option. Could you be
>> more specific, please?
>>  
>> Also, in the last patch, we came to the agreement to guard the altp2m
>> functionality solely through the HVM param (instead of the additional
>> Xen cmd-line option altp2m), which is set through libxl. Because of
>> this, the entire domU initialization routine depends on this option
>> being set at the moment of domain creation -- not after it. That is,
>> being able to set the altp2m option at any time would actually break
>> the system.
>>
> 
> Why is this? It's possible we're talking past each other.
> 

I believe I was not entirely clear in my comment above. By "we", I meant
all parties involved in the discussion concerning altp2m on ARM in
general. Sorry for the confusion.

Also, I was wrong: The HVM param can indeed be set at any time without
any troubles.

>>> And there should be
>>> code in libxl to handle situation when altp2m is not available.
>>
>> I am not so sure about that either:
>>
>> Currently, altp2m is supported on x86 HVM and on ARM.
>>  * on x86, altp2m depends on HW support. Therefore, the
>> cmd-line option "altp2m" is used to activate altp2m. All libxl
>> interaction with the altp2m subsystem will be discarded at this point.
>>  * on ARM, altp2m is implemented purely in SW. That is, there are no
>> ARM architectures that would not support altp2m -- at least, that is
>> the idea.
>>  * All other architectures should not be able to activate altp2m, which
>> is why I chose the #defines (__arm__, __x86_64__, ...) to guard the
>> option above.
>>
>>>> +    if (altp2m_support) {
>>>> +        /* The config parameter "altp2m" replaces the parameter "altp2mhvm".
>>>> +         * For legacy reasons, both parameters are accepted on x86 HVM guests
>>>> +         * (only "altp2m" is accepted on ARM guests). If both parameters are
>>>> +         * given, it must be considered that the config parameter "altp2m" will
>>>> +         * always have priority over "altp2mhvm". */
>>>> +        xlu_cfg_get_defbool(config, "altp2m", &b_info->altp2m, 0);
>>>> +    }
>>>> +
> 
> 
> What I meant was:
> 
> We don't care whether xl (the application) sets altp2m or not. It's
> entirely possible that xl on its own sets that field. The validation
> should be done
> inside libxl. If a particular configuration is not supported by libxl,
> it should reject that. Libxl shouldn't rely on xl (the application) to
> pass in fully sanitised data, it should sanitise the input itself.
> 
> Basically that means, in xl, you only need:
> 
>     +     xlu_cfg_get_defbool(config, "altp2m", &b_info->altp2m, 0);
> 
> And then in libxl:initiate_domain_create you check if the configuration
> is valid. You can see a bunch of other checks are already there.
> 
> Does this make sense to you?
> 
> 

I think so. As you said, in my next patch, I will simply set
b_info->u.hvm.altp2m and b_info->altp2m during parsing and check for
sufficient support in initiate_domain_create. Thank you.


>>> As always, if what I said above doesn't make sense to you, feel free to
>>> ask.
>>

Best regards,
~Sergej


end of thread, other threads:[~2016-08-15 16:07 UTC | newest]

Thread overview: 159+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-08-01 17:10 [PATCH v2 00/25] arm/altp2m: Introducing altp2m to ARM Sergej Proskurin
2016-08-01 17:10 ` [PATCH v2 01/25] arm/altp2m: Add first altp2m HVMOP stubs Sergej Proskurin
2016-08-03 16:54   ` Julien Grall
2016-08-04 16:01     ` Sergej Proskurin
2016-08-04 16:04       ` Julien Grall
2016-08-04 16:22         ` Sergej Proskurin
2016-08-04 16:51           ` Julien Grall
2016-08-05  6:55             ` Sergej Proskurin
2016-08-09 19:16     ` Tamas K Lengyel
2016-08-10  9:52       ` Julien Grall
2016-08-10 14:49         ` Tamas K Lengyel
2016-08-11  8:17           ` Julien Grall
2016-08-11 14:41             ` Tamas K Lengyel
2016-08-12  8:10               ` Julien Grall
2016-08-01 17:10 ` [PATCH v2 02/25] arm/altp2m: Add HVMOP_altp2m_get_domain_state Sergej Proskurin
2016-08-01 17:21   ` Andrew Cooper
2016-08-01 17:34     ` Sergej Proskurin
2016-08-01 17:10 ` [PATCH v2 03/25] arm/altp2m: Add struct vttbr Sergej Proskurin
2016-08-03 17:04   ` Julien Grall
2016-08-03 17:05     ` Julien Grall
2016-08-04 16:11       ` Sergej Proskurin
2016-08-04 16:15         ` Julien Grall
2016-08-06  8:54           ` Sergej Proskurin
2016-08-06 13:20             ` Julien Grall
2016-08-06 13:48               ` Sergej Proskurin
2016-08-01 17:10 ` [PATCH v2 04/25] arm/altp2m: Move hostp2m init/teardown to individual functions Sergej Proskurin
2016-08-03 17:40   ` Julien Grall
2016-08-05  7:26     ` Sergej Proskurin
2016-08-05  9:16       ` Julien Grall
2016-08-06  8:43         ` Sergej Proskurin
2016-08-06 13:26           ` Julien Grall
2016-08-06 13:50             ` Sergej Proskurin
2016-08-01 17:10 ` [PATCH v2 05/25] arm/altp2m: Rename and extend p2m_alloc_table Sergej Proskurin
2016-08-03 17:57   ` Julien Grall
2016-08-06  8:57     ` Sergej Proskurin
2016-08-01 17:10 ` [PATCH v2 06/25] arm/altp2m: Cosmetic fixes - function prototypes Sergej Proskurin
2016-08-03 18:02   ` Julien Grall
2016-08-06  9:00     ` Sergej Proskurin
2016-08-01 17:10 ` [PATCH v2 07/25] arm/altp2m: Add altp2m init/teardown routines Sergej Proskurin
2016-08-03 18:12   ` Julien Grall
2016-08-05  6:53     ` Sergej Proskurin
2016-08-05  9:20       ` Julien Grall
2016-08-06  8:30         ` Sergej Proskurin
2016-08-09  9:44       ` Sergej Proskurin
2016-08-01 17:10 ` [PATCH v2 08/25] arm/altp2m: Add HVMOP_altp2m_set_domain_state Sergej Proskurin
2016-08-03 18:41   ` Julien Grall
2016-08-06  9:03     ` Sergej Proskurin
2016-08-06  9:36     ` Sergej Proskurin
2016-08-06 14:18       ` Julien Grall
2016-08-06 14:21       ` Julien Grall
2016-08-11  9:08       ` Julien Grall
2016-08-01 17:10 ` [PATCH v2 09/25] arm/altp2m: Add altp2m table flushing routine Sergej Proskurin
2016-08-03 18:44   ` Julien Grall
2016-08-06  9:45     ` Sergej Proskurin
2016-08-01 17:10 ` [PATCH v2 10/25] arm/altp2m: Add HVMOP_altp2m_create_p2m Sergej Proskurin
2016-08-03 18:48   ` Julien Grall
2016-08-06  9:46     ` Sergej Proskurin
2016-08-01 17:10 ` [PATCH v2 11/25] arm/altp2m: Add HVMOP_altp2m_destroy_p2m Sergej Proskurin
2016-08-04 11:46   ` Julien Grall
2016-08-06  9:54     ` Sergej Proskurin
2016-08-06 13:36       ` Julien Grall
2016-08-06 13:51         ` Sergej Proskurin
2016-08-01 17:10 ` [PATCH v2 12/25] arm/altp2m: Add HVMOP_altp2m_switch_p2m Sergej Proskurin
2016-08-04 11:51   ` Julien Grall
2016-08-06 10:13     ` Sergej Proskurin
2016-08-01 17:10 ` [PATCH v2 13/25] arm/altp2m: Make p2m_restore_state ready for altp2m Sergej Proskurin
2016-08-04 11:55   ` Julien Grall
2016-08-06 10:20     ` Sergej Proskurin
2016-08-01 17:10 ` [PATCH v2 14/25] arm/altp2m: Make get_page_from_gva " Sergej Proskurin
2016-08-04 11:59   ` Julien Grall
2016-08-06 10:38     ` Sergej Proskurin
2016-08-06 13:45       ` Julien Grall
2016-08-06 16:58         ` Sergej Proskurin
2016-08-11  8:33           ` Julien Grall
2016-08-01 17:10 ` [PATCH v2 15/25] arm/altp2m: Extend __p2m_lookup Sergej Proskurin
2016-08-04 12:04   ` Julien Grall
2016-08-06 10:44     ` Sergej Proskurin
2016-08-01 17:10 ` [PATCH v2 16/25] arm/altp2m: Make p2m_mem_access_check ready for altp2m Sergej Proskurin
2016-08-01 17:10 ` [PATCH v2 17/25] arm/altp2m: Cosmetic fixes - function prototypes Sergej Proskurin
2016-08-04 12:06   ` Julien Grall
2016-08-06 10:46     ` Sergej Proskurin
2016-08-01 17:10 ` [PATCH v2 18/25] arm/altp2m: Add HVMOP_altp2m_set_mem_access Sergej Proskurin
2016-08-04 14:19   ` Julien Grall
2016-08-06 11:03     ` Sergej Proskurin
2016-08-06 14:26       ` Julien Grall
2016-08-01 17:10 ` [PATCH v2 19/25] arm/altp2m: Add altp2m_propagate_change Sergej Proskurin
2016-08-04 14:50   ` Julien Grall
2016-08-06 11:26     ` Sergej Proskurin
2016-08-06 13:52       ` Julien Grall
2016-08-06 17:06         ` Sergej Proskurin
2016-08-01 17:10 ` [PATCH v2 20/25] arm/altp2m: Add altp2m paging mechanism Sergej Proskurin
2016-08-04 13:50   ` Julien Grall
2016-08-06 12:51     ` Sergej Proskurin
2016-08-06 14:14       ` Julien Grall
2016-08-06 17:28         ` Sergej Proskurin
2016-08-04 16:59   ` Julien Grall
2016-08-06 12:57     ` Sergej Proskurin
2016-08-06 14:21       ` Julien Grall
2016-08-06 17:35         ` Sergej Proskurin
2016-08-10  9:32         ` Sergej Proskurin
2016-08-11  8:47           ` Julien Grall
2016-08-11 17:13             ` Sergej Proskurin
2016-08-01 17:10 ` [PATCH v2 21/25] arm/altp2m: Add HVMOP_altp2m_change_gfn Sergej Proskurin
2016-08-04 14:04   ` Julien Grall
2016-08-06 13:45     ` Sergej Proskurin
2016-08-06 14:34       ` Julien Grall
2016-08-06 17:42         ` Sergej Proskurin
2016-08-11  9:21           ` Julien Grall
2016-08-01 17:10 ` [PATCH v2 22/25] arm/altp2m: Adjust debug information to altp2m Sergej Proskurin
2016-08-01 17:10 ` [PATCH v2 23/25] arm/altp2m: Extend libxl to activate altp2m on ARM Sergej Proskurin
2016-08-02 11:59   ` Wei Liu
2016-08-02 14:07     ` Sergej Proskurin
2016-08-11 16:00       ` Wei Liu
2016-08-15 16:07         ` Sergej Proskurin
2016-08-01 17:10 ` [PATCH v2 24/25] arm/altp2m: Extend xen-access for " Sergej Proskurin
2016-08-01 17:10 ` [PATCH v2 25/25] arm/altp2m: Add test of xc_altp2m_change_gfn Sergej Proskurin
2016-08-02  9:14   ` Razvan Cojocaru
2016-08-02  9:50     ` Sergej Proskurin
2016-08-01 18:15 ` [PATCH v2 00/25] arm/altp2m: Introducing altp2m to ARM Julien Grall
2016-08-01 19:20   ` Tamas K Lengyel
2016-08-01 19:55     ` Julien Grall
2016-08-01 20:35       ` Sergej Proskurin
2016-08-01 20:41       ` Tamas K Lengyel
2016-08-02  7:38         ` Julien Grall
2016-08-02 11:17           ` George Dunlap
2016-08-02 15:48             ` Tamas K Lengyel
2016-08-02 16:05               ` George Dunlap
2016-08-02 16:09                 ` Tamas K Lengyel
2016-08-02 16:40                 ` Julien Grall
2016-08-02 17:01                   ` Tamas K Lengyel
2016-08-02 17:22                   ` Tamas K Lengyel
2016-08-02 16:00           ` Tamas K Lengyel
2016-08-02 16:11             ` Julien Grall
2016-08-02 16:22               ` Tamas K Lengyel
2016-08-01 23:14   ` Andrew Cooper
2016-08-02  7:34     ` Julien Grall
2016-08-02 16:08       ` Andrew Cooper
2016-08-02 16:30         ` Tamas K Lengyel
2016-08-03 11:53         ` Julien Grall
2016-08-03 12:00           ` Andrew Cooper
2016-08-03 12:13             ` Julien Grall
2016-08-03 12:18               ` Andrew Cooper
2016-08-03 12:45                 ` Sergej Proskurin
2016-08-03 14:08                   ` Julien Grall
2016-08-03 14:17                     ` Sergej Proskurin
2016-08-03 16:01                     ` Tamas K Lengyel
2016-08-03 16:24                       ` Julien Grall
2016-08-03 16:42                         ` Tamas K Lengyel
2016-08-03 16:51                           ` Julien Grall
2016-08-03 17:30                             ` Andrew Cooper
2016-08-03 17:43                               ` Tamas K Lengyel
2016-08-03 17:45                                 ` Julien Grall
2016-08-03 17:51                                   ` Tamas K Lengyel
2016-08-03 17:56                                     ` Julien Grall
2016-08-03 18:11                                       ` Tamas K Lengyel
2016-08-03 18:16                                         ` Julien Grall
2016-08-03 18:21                                           ` Tamas K Lengyel
2016-08-04 11:13                                             ` George Dunlap
2016-08-08  4:44                                               ` Tamas K Lengyel
