* [PATCH 00/22] xen/arm: P2M clean-up and fixes
@ 2016-07-20 16:10 Julien Grall
  2016-07-20 16:10 ` [PATCH 01/22] xen/arm: system: Use the correct parameter name in local_irq_restore Julien Grall
                   ` (22 more replies)
  0 siblings, 23 replies; 80+ messages in thread
From: Julien Grall @ 2016-07-20 16:10 UTC (permalink / raw)
  To: xen-devel; +Cc: proskurin, Julien Grall, sstabellini, wei.chen, steve.capper

Hello all,

This patch series contains a bunch of clean-up and fixes for the P2M code on
ARM. The major changes are:
    - Restrict usage of get_page_from_gva to the current vCPU
    - Deduce the memory attributes from the p2m type
    - Switch to read-write lock to improve performance
    - Simplify the TLB flush for a given domain

I have provided a branch with all the patches applied on my repo:
git://xenbits.xen.org/people/julieng/xen-unstable.git p2m-cleanup-v1

Yours sincerely,

Julien Grall (22):
  xen/arm: system: Use the correct parameter name in local_irq_restore
  xen/arm: p2m: Pass the vCPU in parameter to get_page_from_gva
  xen/arm: p2m: Restrict usage of get_page_from_gva to the current vCPU
  xen/arm: p2m: Fix multi-lines coding style comments
  xen/arm: p2m: Clean-up mfn_to_p2m_entry
  xen/arm: p2m: Use the typesafe MFN in mfn_to_p2m_entry
  xen/arm: p2m: Use p2m_is_foreign in get_page_from_gfn to avoid open
    coding
  xen/arm: p2m: Simplify p2m type check by using bitmask
  xen/arm: p2m: Use a whitelist rather than blacklist in
    get_page_from_gfn
  xen/arm: p2m: Differentiate cacheable vs non-cacheable MMIO
  xen/arm: p2m: Find the memory attributes based on the p2m type
  xen/arm: p2m: Remove unnecessary locking
  xen/arm: p2m: Introduce p2m_{read,write}_{,un}lock helpers
  xen/arm: p2m: Switch the p2m lock from spinlock to rwlock
  xen/arm: Don't call p2m_alloc_table from arch_domain_create
  xen/arm: p2m: Move the vttbr field from arch_domain to p2m_domain
  xen/arm: p2m: Don't need to restore the state for an idle vCPU.
  xen/arm: p2m: Rework the context switch to another VTTBR in
    flush_tlb_domain
  xen/arm: p2m: Inline p2m_load_VTTBR into p2m_restore_state
  xen/arm: Don't export flush_tlb_domain
  xen/arm: p2m: Replace flush_tlb_domain by p2m_flush_tlb
  xen/arm: p2m: Pass the p2m in parameter rather the domain when it is
    possible

 xen/arch/arm/domain.c              |   3 -
 xen/arch/arm/guestcopy.c           |   6 +-
 xen/arch/arm/p2m.c                 | 255 +++++++++++++++++++------------------
 xen/arch/arm/traps.c               |   4 +-
 xen/include/asm-arm/arm32/system.h |   2 +-
 xen/include/asm-arm/arm64/system.h |   2 +-
 xen/include/asm-arm/domain.h       |   1 -
 xen/include/asm-arm/flushtlb.h     |   3 -
 xen/include/asm-arm/mm.h           |   2 +-
 xen/include/asm-arm/p2m.h          |  85 +++++++++----
 10 files changed, 194 insertions(+), 169 deletions(-)

-- 
1.9.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


* [PATCH 01/22] xen/arm: system: Use the correct parameter name in local_irq_restore
  2016-07-20 16:10 [PATCH 00/22] xen/arm: P2M clean-up and fixes Julien Grall
@ 2016-07-20 16:10 ` Julien Grall
  2016-07-22  1:19   ` Stefano Stabellini
  2016-07-20 16:10 ` [PATCH 02/22] xen/arm: p2m: Pass the vCPU in parameter to get_page_from_gva Julien Grall
                   ` (21 subsequent siblings)
  22 siblings, 1 reply; 80+ messages in thread
From: Julien Grall @ 2016-07-20 16:10 UTC (permalink / raw)
  To: xen-devel; +Cc: proskurin, Julien Grall, sstabellini, wei.chen, steve.capper

The parameter used to store the flags is called 'x', not 'flags'.
Thankfully, all users of the macro happen to pass a variable named 'flags'.
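
To illustrate (example caller, not part of the patch): because the macro
body refers to 'flags' rather than to its parameter 'x', it only expands
correctly when the caller's variable happens to be named 'flags':

    unsigned long flags;

    local_irq_save(flags);
    /* ... critical section ... */
    local_irq_restore(flags); /* only builds because the variable is named 'flags' */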

Signed-off-by: Julien Grall <julien.grall@arm.com>

---
    This patch is a candidate to be backported up to Xen 4.5.
---
 xen/include/asm-arm/arm32/system.h | 2 +-
 xen/include/asm-arm/arm64/system.h | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/xen/include/asm-arm/arm32/system.h b/xen/include/asm-arm/arm32/system.h
index b47b942..c617b40 100644
--- a/xen/include/asm-arm/arm32/system.h
+++ b/xen/include/asm-arm/arm32/system.h
@@ -24,7 +24,7 @@
     asm volatile (                                               \
             "msr     cpsr_c, %0      @ local_irq_restore\n"      \
             :                                                    \
-            : "r" (flags)                                        \
+            : "r" (x)                                            \
             : "memory", "cc");                                   \
 })
 
diff --git a/xen/include/asm-arm/arm64/system.h b/xen/include/asm-arm/arm64/system.h
index 6efced3..2e2ee21 100644
--- a/xen/include/asm-arm/arm64/system.h
+++ b/xen/include/asm-arm/arm64/system.h
@@ -40,7 +40,7 @@
     asm volatile (                                               \
         "msr    daif, %0                // local_irq_restore"    \
         :                                                        \
-        : "r" (flags)                                            \
+        : "r" (x)                                                \
         : "memory");                                             \
 })
 
-- 
1.9.1



* [PATCH 02/22] xen/arm: p2m: Pass the vCPU in parameter to get_page_from_gva
  2016-07-20 16:10 [PATCH 00/22] xen/arm: P2M clean-up and fixes Julien Grall
  2016-07-20 16:10 ` [PATCH 01/22] xen/arm: system: Use the correct parameter name in local_irq_restore Julien Grall
@ 2016-07-20 16:10 ` Julien Grall
  2016-07-22  1:22   ` Stefano Stabellini
  2016-07-20 16:10 ` [PATCH 03/22] xen/arm: p2m: Restrict usage of get_page_from_gva to the current vCPU Julien Grall
                   ` (20 subsequent siblings)
  22 siblings, 1 reply; 80+ messages in thread
From: Julien Grall @ 2016-07-20 16:10 UTC (permalink / raw)
  To: xen-devel; +Cc: proskurin, Julien Grall, sstabellini, wei.chen, steve.capper

The function get_page_from_gva translates a guest virtual address to a
machine address. The translation involves the registers VTTBR_EL2,
TTBR0_EL1, TTBR1_EL1 and SCTLR_EL1. Whilst the first register is per
domain (the p2m is common to every vCPU), the last three are per-vCPU.

Therefore, the function should take the vCPU as a parameter rather than
the domain. Fixing the actual code path will be done in a separate patch.
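
For reference, the registers involved in the walk split as follows (a
summary of the above, not new information from the patch):

    /*
     * VTTBR_EL2            - per domain (the p2m is shared by all vCPUs)
     * TTBR0_EL1, TTBR1_EL1 - per vCPU
     * SCTLR_EL1            - per vCPU
     */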

Signed-off-by: Julien Grall <julien.grall@arm.com>
---
 xen/arch/arm/guestcopy.c | 6 +++---
 xen/arch/arm/p2m.c       | 3 ++-
 xen/arch/arm/traps.c     | 2 +-
 xen/include/asm-arm/mm.h | 2 +-
 4 files changed, 7 insertions(+), 6 deletions(-)

diff --git a/xen/arch/arm/guestcopy.c b/xen/arch/arm/guestcopy.c
index ce1c3c3..413125f 100644
--- a/xen/arch/arm/guestcopy.c
+++ b/xen/arch/arm/guestcopy.c
@@ -17,7 +17,7 @@ static unsigned long raw_copy_to_guest_helper(void *to, const void *from,
         unsigned size = min(len, (unsigned)PAGE_SIZE - offset);
         struct page_info *page;
 
-        page = get_page_from_gva(current->domain, (vaddr_t) to, GV2M_WRITE);
+        page = get_page_from_gva(current, (vaddr_t) to, GV2M_WRITE);
         if ( page == NULL )
             return len;
 
@@ -64,7 +64,7 @@ unsigned long raw_clear_guest(void *to, unsigned len)
         unsigned size = min(len, (unsigned)PAGE_SIZE - offset);
         struct page_info *page;
 
-        page = get_page_from_gva(current->domain, (vaddr_t) to, GV2M_WRITE);
+        page = get_page_from_gva(current, (vaddr_t) to, GV2M_WRITE);
         if ( page == NULL )
             return len;
 
@@ -96,7 +96,7 @@ unsigned long raw_copy_from_guest(void *to, const void __user *from, unsigned le
         unsigned size = min(len, (unsigned)(PAGE_SIZE - offset));
         struct page_info *page;
 
-        page = get_page_from_gva(current->domain, (vaddr_t) from, GV2M_READ);
+        page = get_page_from_gva(current, (vaddr_t) from, GV2M_READ);
         if ( page == NULL )
             return len;
 
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index a4bc55a..1111d6f 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1521,9 +1521,10 @@ err:
     return page;
 }
 
-struct page_info *get_page_from_gva(struct domain *d, vaddr_t va,
+struct page_info *get_page_from_gva(struct vcpu *v, vaddr_t va,
                                     unsigned long flags)
 {
+    struct domain *d = v->domain;
     struct p2m_domain *p2m = &d->arch.p2m;
     struct page_info *page = NULL;
     paddr_t maddr = 0;
diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index a2eb1da..06a8ee5 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -955,7 +955,7 @@ static void show_guest_stack(struct vcpu *v, struct cpu_user_regs *regs)
         return;
     }
 
-    page = get_page_from_gva(v->domain, sp, GV2M_READ);
+    page = get_page_from_gva(v, sp, GV2M_READ);
     if ( page == NULL )
     {
         printk("Failed to convert stack to physical address\n");
diff --git a/xen/include/asm-arm/mm.h b/xen/include/asm-arm/mm.h
index 68cf203..19eadd2 100644
--- a/xen/include/asm-arm/mm.h
+++ b/xen/include/asm-arm/mm.h
@@ -281,7 +281,7 @@ static inline void *page_to_virt(const struct page_info *pg)
     return mfn_to_virt(page_to_mfn(pg));
 }
 
-struct page_info *get_page_from_gva(struct domain *d, vaddr_t va,
+struct page_info *get_page_from_gva(struct vcpu *v, vaddr_t va,
                                     unsigned long flags);
 
 /*
-- 
1.9.1



* [PATCH 03/22] xen/arm: p2m: Restrict usage of get_page_from_gva to the current vCPU
  2016-07-20 16:10 [PATCH 00/22] xen/arm: P2M clean-up and fixes Julien Grall
  2016-07-20 16:10 ` [PATCH 01/22] xen/arm: system: Use the correct parameter name in local_irq_restore Julien Grall
  2016-07-20 16:10 ` [PATCH 02/22] xen/arm: p2m: Pass the vCPU in parameter to get_page_from_gva Julien Grall
@ 2016-07-20 16:10 ` Julien Grall
  2016-07-22  1:25   ` Stefano Stabellini
  2016-07-20 16:10 ` [PATCH 04/22] xen/arm: p2m: Fix multi-lines coding style comments Julien Grall
                   ` (19 subsequent siblings)
  22 siblings, 1 reply; 80+ messages in thread
From: Julien Grall @ 2016-07-20 16:10 UTC (permalink / raw)
  To: xen-devel; +Cc: proskurin, Julien Grall, sstabellini, wei.chen, steve.capper

The function get_page_from_gva translates a guest virtual address to a
machine address. The translation involves the registers VTTBR_EL2,
TTBR0_EL1, TTBR1_EL1 and SCTLR_EL1.

Currently, only the first register is context switched when the domain
passed is not the current one. This results in using the wrong
TTBR*_EL1 and SCTLR_EL1 for the translation.

To fix the code properly, we would have to context switch all the
registers mentioned above when the vCPU passed in parameter is not the
current one. Similar work would be needed in the callee
p2m_mem_access_check_and_get_page.

Given that the only caller of this function with a vCPU that may not be
the current one is a guest debugging function (show_guest_stack),
restrict the usage to the current vCPU for the time being.

A proper fix will be sent separately.

Signed-off-by: Julien Grall <julien.grall@arm.com>

---
    This patch is a candidate to be backported up to Xen 4.5.
---
 xen/arch/arm/p2m.c | 24 ++++++++----------------
 1 file changed, 8 insertions(+), 16 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 1111d6f..64d84cc 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1530,24 +1530,16 @@ struct page_info *get_page_from_gva(struct vcpu *v, vaddr_t va,
     paddr_t maddr = 0;
     int rc;
 
-    spin_lock(&p2m->lock);
-
-    if ( unlikely(d != current->domain) )
-    {
-        unsigned long irq_flags;
-
-        local_irq_save(irq_flags);
-        p2m_load_VTTBR(d);
+    /*
+     * XXX: To support a different vCPU, we would need to load the
+     * VTTBR_EL2, TTBR0_EL1, TTBR1_EL1 and SCTLR_EL1
+     */
+    if ( v != current )
+        return NULL;
 
-        rc = gvirt_to_maddr(va, &maddr, flags);
+    spin_lock(&p2m->lock);
 
-        p2m_load_VTTBR(current->domain);
-        local_irq_restore(irq_flags);
-    }
-    else
-    {
-        rc = gvirt_to_maddr(va, &maddr, flags);
-    }
+    rc = gvirt_to_maddr(va, &maddr, flags);
 
     if ( rc )
         goto err;
-- 
1.9.1



* [PATCH 04/22] xen/arm: p2m: Fix multi-lines coding style comments
  2016-07-20 16:10 [PATCH 00/22] xen/arm: P2M clean-up and fixes Julien Grall
                   ` (2 preceding siblings ...)
  2016-07-20 16:10 ` [PATCH 03/22] xen/arm: p2m: Restrict usage of get_page_from_gva to the current vCPU Julien Grall
@ 2016-07-20 16:10 ` Julien Grall
  2016-07-22  1:26   ` Stefano Stabellini
  2016-07-20 16:10 ` [PATCH 05/22] xen/arm: p2m: Clean-up mfn_to_p2m_entry Julien Grall
                   ` (18 subsequent siblings)
  22 siblings, 1 reply; 80+ messages in thread
From: Julien Grall @ 2016-07-20 16:10 UTC (permalink / raw)
  To: xen-devel; +Cc: proskurin, Julien Grall, sstabellini, wei.chen, steve.capper

The start and end markers should be on separate lines.
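
For reference, the expected multi-line comment style (a minimal example,
matching what the diff below converts the existing comments to) is:

    /*
     * Multi-line comments have the opening and closing markers
     * on their own lines.
     */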

Signed-off-by: Julien Grall <julien.grall@arm.com>
---
 xen/arch/arm/p2m.c        | 35 ++++++++++++++++++++++------------
 xen/include/asm-arm/p2m.h | 48 +++++++++++++++++++++++++++++++----------------
 2 files changed, 55 insertions(+), 28 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 64d84cc..79095f1 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -33,9 +33,11 @@ static bool_t p2m_valid(lpae_t pte)
 {
     return pte.p2m.valid;
 }
-/* These two can only be used on L0..L2 ptes because L3 mappings set
+/*
+ * These two can only be used on L0..L2 ptes because L3 mappings set
  * the table bit and therefore these would return the opposite to what
- * you would expect. */
+ * you would expect.
+ */
 static bool_t p2m_table(lpae_t pte)
 {
     return p2m_valid(pte) && pte.p2m.table;
@@ -119,7 +121,8 @@ void flush_tlb_domain(struct domain *d)
 {
     unsigned long flags = 0;
 
-    /* Update the VTTBR if necessary with the domain d. In this case,
+    /*
+     * Update the VTTBR if necessary with the domain d. In this case,
      * it's only necessary to flush TLBs on every CPUs with the current VMID
      * (our domain).
      */
@@ -325,8 +328,10 @@ static lpae_t mfn_to_p2m_entry(unsigned long mfn, unsigned int mattr,
                                p2m_type_t t, p2m_access_t a)
 {
     paddr_t pa = ((paddr_t) mfn) << PAGE_SHIFT;
-    /* sh, xn and write bit will be defined in the following switches
-     * based on mattr and t. */
+    /*
+     * sh, xn and write bit will be defined in the following switches
+     * based on mattr and t.
+     */
     lpae_t e = (lpae_t) {
         .p2m.af = 1,
         .p2m.read = 1,
@@ -552,15 +557,17 @@ enum p2m_operation {
     MEMACCESS,
 };
 
-/* Put any references on the single 4K page referenced by pte.  TODO:
- * Handle superpages, for now we only take special references for leaf
+/*
+ * Put any references on the single 4K page referenced by pte.
+ * TODO: Handle superpages, for now we only take special references for leaf
  * pages (specifically foreign ones, which can't be super mapped today).
  */
 static void p2m_put_l3_page(const lpae_t pte)
 {
     ASSERT(p2m_valid(pte));
 
-    /* TODO: Handle other p2m types
+    /*
+     * TODO: Handle other p2m types
      *
      * It's safe to do the put_page here because page_alloc will
      * flush the TLBs if the page is reallocated before the end of
@@ -932,7 +939,8 @@ static int apply_p2m_changes(struct domain *d,
     PAGE_LIST_HEAD(free_pages);
     struct page_info *pg;
 
-    /* Some IOMMU don't support coherent PT walk. When the p2m is
+    /*
+     * Some IOMMU don't support coherent PT walk. When the p2m is
      * shared with the CPU, Xen has to make sure that the PT changes have
      * reached the memory
      */
@@ -1275,7 +1283,8 @@ int p2m_alloc_table(struct domain *d)
     d->arch.vttbr = page_to_maddr(p2m->root)
         | ((uint64_t)p2m->vmid&0xff)<<48;
 
-    /* Make sure that all TLBs corresponding to the new VMID are flushed
+    /*
+     * Make sure that all TLBs corresponding to the new VMID are flushed
      * before using it
      */
     flush_tlb_domain(d);
@@ -1290,8 +1299,10 @@ int p2m_alloc_table(struct domain *d)
 
 static spinlock_t vmid_alloc_lock = SPIN_LOCK_UNLOCKED;
 
-/* VTTBR_EL2 VMID field is 8 bits. Using a bitmap here limits us to
- * 256 concurrent domains. */
+/*
+ * VTTBR_EL2 VMID field is 8 bits. Using a bitmap here limits us to
+ * 256 concurrent domains.
+ */
 static DECLARE_BITMAP(vmid_mask, MAX_VMID);
 
 void p2m_vmid_allocator_init(void)
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index 34096bc..8fe78c1 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -31,12 +31,14 @@ struct p2m_domain {
     /* Current VMID in use */
     uint8_t vmid;
 
-    /* Highest guest frame that's ever been mapped in the p2m
+    /*
+     * Highest guest frame that's ever been mapped in the p2m
      * Only takes into account ram and foreign mapping
      */
     gfn_t max_mapped_gfn;
 
-    /* Lowest mapped gfn in the p2m. When releasing mapped gfn's in a
+    /*
+     * Lowest mapped gfn in the p2m. When releasing mapped gfn's in a
      * preemptible manner this is update to track recall where to
      * resume the search. Apart from during teardown this can only
      * decrease. */
@@ -51,24 +53,31 @@ struct p2m_domain {
         unsigned long shattered[4];
     } stats;
 
-    /* If true, and an access fault comes in and there is no vm_event listener,
-     * pause domain. Otherwise, remove access restrictions. */
+    /*
+     * If true, and an access fault comes in and there is no vm_event listener,
+     * pause domain. Otherwise, remove access restrictions.
+     */
     bool_t access_required;
 
     /* Defines if mem_access is in use for the domain. */
     bool_t mem_access_enabled;
 
-    /* Default P2M access type for each page in the the domain: new pages,
+    /*
+     * Default P2M access type for each page in the the domain: new pages,
      * swapped in pages, cleared pages, and pages that are ambiguously
-     * retyped get this access type. See definition of p2m_access_t. */
+     * retyped get this access type. See definition of p2m_access_t.
+     */
     p2m_access_t default_access;
 
-    /* Radix tree to store the p2m_access_t settings as the pte's don't have
-     * enough available bits to store this information. */
+    /*
+     * Radix tree to store the p2m_access_t settings as the pte's don't have
+     * enough available bits to store this information.
+     */
     struct radix_tree_root mem_access_settings;
 };
 
-/* List of possible type for each page in the p2m entry.
+/*
+ * List of possible type for each page in the p2m entry.
  * The number of available bit per page in the pte for this purpose is 4 bits.
  * So it's possible to only have 16 fields. If we run out of value in the
  * future, it's possible to use higher value for pseudo-type and don't store
@@ -116,13 +125,15 @@ int p2m_init(struct domain *d);
 /* Return all the p2m resources to Xen. */
 void p2m_teardown(struct domain *d);
 
-/* Remove mapping refcount on each mapping page in the p2m
+/*
+ * Remove mapping refcount on each mapping page in the p2m
  *
  * TODO: For the moment only foreign mappings are handled
  */
 int relinquish_p2m_mapping(struct domain *d);
 
-/* Allocate a new p2m table for a domain.
+/*
+ * Allocate a new p2m table for a domain.
  *
  * Returns 0 for success or -errno.
  */
@@ -181,8 +192,10 @@ mfn_t gfn_to_mfn(struct domain *d, gfn_t gfn);
  * Populate-on-demand
  */
 
-/* Call when decreasing memory reservation to handle PoD entries properly.
- * Will return '1' if all entries were handled and nothing more need be done.*/
+/*
+ * Call when decreasing memory reservation to handle PoD entries properly.
+ * Will return '1' if all entries were handled and nothing more need be done.
+ */
 int
 p2m_pod_decrease_reservation(struct domain *d,
                              xen_pfn_t gpfn,
@@ -210,7 +223,8 @@ static inline struct page_info *get_page_from_gfn(
         return NULL;
     page = mfn_to_page(mfn);
 
-    /* get_page won't work on foreign mapping because the page doesn't
+    /*
+     * get_page won't work on foreign mapping because the page doesn't
      * belong to the current domain.
      */
     if ( p2mt == p2m_map_foreign )
@@ -257,8 +271,10 @@ static inline bool_t p2m_vm_event_sanity_check(struct domain *d)
     return 1;
 }
 
-/* Send mem event based on the access. Boolean return value indicates if trap
- * needs to be injected into guest. */
+/*
+ * Send mem event based on the access. Boolean return value indicates if trap
+ * needs to be injected into guest.
+ */
 bool_t p2m_mem_access_check(paddr_t gpa, vaddr_t gla, const struct npfec npfec);
 
 #endif /* _XEN_P2M_H */
-- 
1.9.1



* [PATCH 05/22] xen/arm: p2m: Clean-up mfn_to_p2m_entry
  2016-07-20 16:10 [PATCH 00/22] xen/arm: P2M clean-up and fixes Julien Grall
                   ` (3 preceding siblings ...)
  2016-07-20 16:10 ` [PATCH 04/22] xen/arm: p2m: Fix multi-lines coding style comments Julien Grall
@ 2016-07-20 16:10 ` Julien Grall
  2016-07-26 22:24   ` Stefano Stabellini
  2016-07-20 16:10 ` [PATCH 06/22] xen/arm: p2m: Use the typesafe MFN in mfn_to_p2m_entry Julien Grall
                   ` (17 subsequent siblings)
  22 siblings, 1 reply; 80+ messages in thread
From: Julien Grall @ 2016-07-20 16:10 UTC (permalink / raw)
  To: xen-devel; +Cc: proskurin, Julien Grall, sstabellini, wei.chen, steve.capper

The physical address is computed from the machine frame number, so
checking if the physical address is page aligned is pointless.

Furthermore, directly assign the MFN to the corresponding field in the
entry rather than converting it to a physical address and ORing the
value. This avoids relying on the field position and makes the code
clearer.
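
To illustrate (a sketch of the two forms, taken from the diff below):

    /* Before: relies on the bit position of p2m.base in the descriptor. */
    e.bits |= ((paddr_t) mfn) << PAGE_SHIFT;

    /* After: the compiler places the value in the right bits. */
    e.p2m.base = mfn;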

Signed-off-by: Julien Grall <julien.grall@arm.com>
---
 xen/arch/arm/p2m.c | 6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 79095f1..d82349c 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -327,7 +327,6 @@ static void p2m_set_permission(lpae_t *e, p2m_type_t t, p2m_access_t a)
 static lpae_t mfn_to_p2m_entry(unsigned long mfn, unsigned int mattr,
                                p2m_type_t t, p2m_access_t a)
 {
-    paddr_t pa = ((paddr_t) mfn) << PAGE_SHIFT;
     /*
      * sh, xn and write bit will be defined in the following switches
      * based on mattr and t.
@@ -359,10 +358,9 @@ static lpae_t mfn_to_p2m_entry(unsigned long mfn, unsigned int mattr,
 
     p2m_set_permission(&e, t, a);
 
-    ASSERT(!(pa & ~PAGE_MASK));
-    ASSERT(!(pa & ~PADDR_MASK));
+    ASSERT(!(pfn_to_paddr(mfn) & ~PADDR_MASK));
 
-    e.bits |= pa;
+    e.p2m.base = mfn;
 
     return e;
 }
-- 
1.9.1



* [PATCH 06/22] xen/arm: p2m: Use the typesafe MFN in mfn_to_p2m_entry
  2016-07-20 16:10 [PATCH 00/22] xen/arm: P2M clean-up and fixes Julien Grall
                   ` (4 preceding siblings ...)
  2016-07-20 16:10 ` [PATCH 05/22] xen/arm: p2m: Clean-up mfn_to_p2m_entry Julien Grall
@ 2016-07-20 16:10 ` Julien Grall
  2016-07-26 22:28   ` Stefano Stabellini
  2016-07-20 16:10 ` [PATCH 07/22] xen/arm: p2m: Use p2m_is_foreign in get_page_from_gfn to avoid open coding Julien Grall
                   ` (16 subsequent siblings)
  22 siblings, 1 reply; 80+ messages in thread
From: Julien Grall @ 2016-07-20 16:10 UTC (permalink / raw)
  To: xen-devel; +Cc: proskurin, Julien Grall, sstabellini, wei.chen, steve.capper

Signed-off-by: Julien Grall <julien.grall@arm.com>
---
 xen/arch/arm/p2m.c | 17 +++++++++--------
 1 file changed, 9 insertions(+), 8 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index d82349c..99be9be 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -324,7 +324,7 @@ static void p2m_set_permission(lpae_t *e, p2m_type_t t, p2m_access_t a)
     }
 }
 
-static lpae_t mfn_to_p2m_entry(unsigned long mfn, unsigned int mattr,
+static lpae_t mfn_to_p2m_entry(mfn_t mfn, unsigned int mattr,
                                p2m_type_t t, p2m_access_t a)
 {
     /*
@@ -358,9 +358,9 @@ static lpae_t mfn_to_p2m_entry(unsigned long mfn, unsigned int mattr,
 
     p2m_set_permission(&e, t, a);
 
-    ASSERT(!(pfn_to_paddr(mfn) & ~PADDR_MASK));
+    ASSERT(!(pfn_to_paddr(mfn_x(mfn)) & ~PADDR_MASK));
 
-    e.p2m.base = mfn;
+    e.p2m.base = mfn_x(mfn);
 
     return e;
 }
@@ -411,7 +411,7 @@ static int p2m_create_table(struct domain *d, lpae_t *entry,
     if ( splitting )
     {
         p2m_type_t t = entry->p2m.type;
-        unsigned long base_pfn = entry->p2m.base;
+        mfn_t mfn = _mfn(entry->p2m.base);
         int i;
 
         /*
@@ -420,8 +420,9 @@ static int p2m_create_table(struct domain *d, lpae_t *entry,
          */
          for ( i=0 ; i < LPAE_ENTRIES; i++ )
          {
-             pte = mfn_to_p2m_entry(base_pfn + (i<<(level_shift-LPAE_SHIFT)),
-                                    MATTR_MEM, t, p2m->default_access);
+             pte = mfn_to_p2m_entry(mfn, MATTR_MEM, t, p2m->default_access);
+
+             mfn = mfn_add(mfn, 1UL << (level_shift - LPAE_SHIFT));
 
              /*
               * First and second level super pages set p2m.table = 0, but
@@ -443,7 +444,7 @@ static int p2m_create_table(struct domain *d, lpae_t *entry,
 
     unmap_domain_page(p);
 
-    pte = mfn_to_p2m_entry(page_to_mfn(page), MATTR_MEM, p2m_invalid,
+    pte = mfn_to_p2m_entry(_mfn(page_to_mfn(page)), MATTR_MEM, p2m_invalid,
                            p2m->default_access);
 
     p2m_write_pte(entry, pte, flush_cache);
@@ -693,7 +694,7 @@ static int apply_one_level(struct domain *d,
                 return rc;
 
             /* New mapping is superpage aligned, make it */
-            pte = mfn_to_p2m_entry(*maddr >> PAGE_SHIFT, mattr, t, a);
+            pte = mfn_to_p2m_entry(_mfn(*maddr >> PAGE_SHIFT), mattr, t, a);
             if ( level < 3 )
                 pte.p2m.table = 0; /* Superpage entry */
 
-- 
1.9.1



* [PATCH 07/22] xen/arm: p2m: Use p2m_is_foreign in get_page_from_gfn to avoid open coding
  2016-07-20 16:10 [PATCH 00/22] xen/arm: P2M clean-up and fixes Julien Grall
                   ` (5 preceding siblings ...)
  2016-07-20 16:10 ` [PATCH 06/22] xen/arm: p2m: Use the typesafe MFN in mfn_to_p2m_entry Julien Grall
@ 2016-07-20 16:10 ` Julien Grall
  2016-07-26 22:33   ` Stefano Stabellini
  2016-07-20 16:10 ` [PATCH 08/22] xen/arm: p2m: Simplify p2m type check by using bitmask Julien Grall
                   ` (15 subsequent siblings)
  22 siblings, 1 reply; 80+ messages in thread
From: Julien Grall @ 2016-07-20 16:10 UTC (permalink / raw)
  To: xen-devel; +Cc: proskurin, Julien Grall, sstabellini, wei.chen, steve.capper

No functional change.

Signed-off-by: Julien Grall <julien.grall@arm.com>
---
 xen/include/asm-arm/p2m.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index 8fe78c1..dbbcefe 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -227,7 +227,7 @@ static inline struct page_info *get_page_from_gfn(
      * get_page won't work on foreign mapping because the page doesn't
      * belong to the current domain.
      */
-    if ( p2mt == p2m_map_foreign )
+    if ( p2m_is_foreign(p2mt) )
     {
         struct domain *fdom = page_get_owner_and_reference(page);
         ASSERT(fdom != NULL);
-- 
1.9.1



* [PATCH 08/22] xen/arm: p2m: Simplify p2m type check by using bitmask
  2016-07-20 16:10 [PATCH 00/22] xen/arm: P2M clean-up and fixes Julien Grall
                   ` (6 preceding siblings ...)
  2016-07-20 16:10 ` [PATCH 07/22] xen/arm: p2m: Use p2m_is_foreign in get_page_from_gfn to avoid open coding Julien Grall
@ 2016-07-20 16:10 ` Julien Grall
  2016-07-26 22:36   ` Stefano Stabellini
  2016-07-20 16:10 ` [PATCH 09/22] xen/arm: p2m: Use a whitelist rather than blacklist in get_page_from_gfn Julien Grall
                   ` (14 subsequent siblings)
  22 siblings, 1 reply; 80+ messages in thread
From: Julien Grall @ 2016-07-20 16:10 UTC (permalink / raw)
  To: xen-devel; +Cc: proskurin, Julien Grall, sstabellini, wei.chen, steve.capper

The resulting assembly code for the macros is much simpler and will
never contain more than one branch instruction.

The idea is taken from x86 (see include/asm-x86/p2m.h). Also move the
two helpers earlier to keep all the p2m type definitions together.
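
To illustrate (a sketch, not taken from the patch; do_something() is a
placeholder): a predicate such as p2m_is_ram() now reduces to a single
mask test instead of a chain of comparisons:

    /* Before: two comparisons, potentially two branches. */
    if ( t == p2m_ram_rw || t == p2m_ram_ro )
        do_something();

    /* After: one shift and one AND against a compile-time constant mask. */
    if ( (1UL << t) & (p2m_to_mask(p2m_ram_rw) | p2m_to_mask(p2m_ram_ro)) )
        do_something();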

Signed-off-by: Julien Grall <julien.grall@arm.com>
---
 xen/include/asm-arm/p2m.h | 14 +++++++++++---
 1 file changed, 11 insertions(+), 3 deletions(-)

diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index dbbcefe..3091c04 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -97,6 +97,17 @@ typedef enum {
     p2m_max_real_type,  /* Types after this won't be store in the p2m */
 } p2m_type_t;
 
+/* We use bitmaps and mask to handle groups of types */
+#define p2m_to_mask(_t) (1UL << (_t))
+
+/* RAM types, which map to real machine frames */
+#define P2M_RAM_TYPES (p2m_to_mask(p2m_ram_rw) |        \
+                       p2m_to_mask(p2m_ram_ro))
+
+/* Useful predicates */
+#define p2m_is_ram(_t) (p2m_to_mask(_t) & P2M_RAM_TYPES)
+#define p2m_is_foreign(_t) (p2m_to_mask(_t) & p2m_to_mask(p2m_map_foreign))
+
 static inline
 void p2m_mem_access_emulate_check(struct vcpu *v,
                                   const vm_event_response_t *rsp)
@@ -110,9 +121,6 @@ void p2m_altp2m_check(struct vcpu *v, uint16_t idx)
     /* Not supported on ARM. */
 }
 
-#define p2m_is_foreign(_t)  ((_t) == p2m_map_foreign)
-#define p2m_is_ram(_t)      ((_t) == p2m_ram_rw || (_t) == p2m_ram_ro)
-
 /* Initialise vmid allocator */
 void p2m_vmid_allocator_init(void);
 
-- 
1.9.1



* [PATCH 09/22] xen/arm: p2m: Use a whitelist rather than blacklist in get_page_from_gfn
  2016-07-20 16:10 [PATCH 00/22] xen/arm: P2M clean-up and fixes Julien Grall
                   ` (7 preceding siblings ...)
  2016-07-20 16:10 ` [PATCH 08/22] xen/arm: p2m: Simplify p2m type check by using bitmask Julien Grall
@ 2016-07-20 16:10 ` Julien Grall
  2016-07-26 22:44   ` Stefano Stabellini
  2016-07-20 16:10 ` [PATCH 10/22] xen/arm: p2m: Differentiate cacheable vs non-cacheable MMIO Julien Grall
                   ` (13 subsequent siblings)
  22 siblings, 1 reply; 80+ messages in thread
From: Julien Grall @ 2016-07-20 16:10 UTC (permalink / raw)
  To: xen-devel; +Cc: proskurin, Julien Grall, sstabellini, wei.chen, steve.capper

Currently, the check in get_page_from_gfn is using a blacklist. This is
very fragile because we may forget to update the check when a new p2m
type is added.

To avoid any possible issue, use a whitelist. All types backed by a RAM
page could potentially be valid. The check is borrowed from x86.
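
To illustrate the difference (p2m_new_type below is a hypothetical type,
not part of the series):

    /* Blacklist: a newly added p2m_new_type is silently accepted because
     * the check only rejects the types it already knows about. */
    if ( p2mt == p2m_invalid || p2mt == p2m_mmio_direct )
        return NULL;

    /* Whitelist: anything outside the known RAM/grant/foreign groups is
     * rejected by default. */
    if ( !p2m_is_any_ram(p2mt) )
        return NULL;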

Signed-off-by: Julien Grall <julien.grall@arm.com>
---
 xen/include/asm-arm/p2m.h | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index 3091c04..78d37ab 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -104,9 +104,16 @@ typedef enum {
 #define P2M_RAM_TYPES (p2m_to_mask(p2m_ram_rw) |        \
                        p2m_to_mask(p2m_ram_ro))
 
+/* Grant mapping types, which map to a real frame in another VM */
+#define P2M_GRANT_TYPES (p2m_to_mask(p2m_grant_map_rw) |  \
+                         p2m_to_mask(p2m_grant_map_ro))
+
 /* Useful predicates */
 #define p2m_is_ram(_t) (p2m_to_mask(_t) & P2M_RAM_TYPES)
 #define p2m_is_foreign(_t) (p2m_to_mask(_t) & p2m_to_mask(p2m_map_foreign))
+#define p2m_is_any_ram(_t) (p2m_to_mask(_t) &                   \
+                            (P2M_RAM_TYPES | P2M_GRANT_TYPES |  \
+                             p2m_to_mask(p2m_map_foreign)))
 
 static inline
 void p2m_mem_access_emulate_check(struct vcpu *v,
@@ -224,7 +231,7 @@ static inline struct page_info *get_page_from_gfn(
     if (t)
         *t = p2mt;
 
-    if ( p2mt == p2m_invalid || p2mt == p2m_mmio_direct )
+    if ( !p2m_is_any_ram(p2mt) )
         return NULL;
 
     if ( !mfn_valid(mfn) )
-- 
1.9.1



* [PATCH 10/22] xen/arm: p2m: Differentiate cacheable vs non-cacheable MMIO
  2016-07-20 16:10 [PATCH 00/22] xen/arm: P2M clean-up and fixes Julien Grall
                   ` (8 preceding siblings ...)
  2016-07-20 16:10 ` [PATCH 09/22] xen/arm: p2m: Use a whitelist rather than blacklist in get_page_from_gfn Julien Grall
@ 2016-07-20 16:10 ` Julien Grall
  2016-07-26 22:47   ` Stefano Stabellini
  2016-07-20 16:10 ` [PATCH 11/22] xen/arm: p2m: Find the memory attributes based on the p2m type Julien Grall
                   ` (12 subsequent siblings)
  22 siblings, 1 reply; 80+ messages in thread
From: Julien Grall @ 2016-07-20 16:10 UTC (permalink / raw)
  To: xen-devel; +Cc: proskurin, Julien Grall, sstabellini, wei.chen, steve.capper

Currently, the p2m type p2m_mmio_direct is used to map both cacheable
MMIO (via map_regions_rw_cache) and non-cacheable MMIO (via
map_mmio_regions) in stage-2. The p2m code relies on the caller to
provide the correct memory attribute.

In a follow-up patch, the p2m code will rely on the p2m type to find the
correct memory attribute. In preparation for this, introduce
p2m_mmio_direct_nc and p2m_mmio_direct_c to differentiate the
cacheability of the MMIO.
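
In practice (as the diff below shows), the two mapping helpers now select
different p2m types:

    map_regions_rw_cache(d, gfn, nr, mfn);   /* p2m_mmio_direct_c: cacheable      */
    map_mmio_regions(d, start_gfn, nr, mfn); /* p2m_mmio_direct_nc: non-cacheable */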

Signed-off-by: Julien Grall <julien.grall@arm.com>
---
 xen/arch/arm/p2m.c        | 7 ++++---
 xen/include/asm-arm/p2m.h | 3 ++-
 2 files changed, 6 insertions(+), 4 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 99be9be..999de2b 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -272,7 +272,8 @@ static void p2m_set_permission(lpae_t *e, p2m_type_t t, p2m_access_t a)
     case p2m_iommu_map_rw:
     case p2m_map_foreign:
     case p2m_grant_map_rw:
-    case p2m_mmio_direct:
+    case p2m_mmio_direct_nc:
+    case p2m_mmio_direct_c:
         e->p2m.xn = 1;
         e->p2m.write = 1;
         break;
@@ -1195,7 +1196,7 @@ int map_regions_rw_cache(struct domain *d,
                          mfn_t mfn)
 {
     return p2m_insert_mapping(d, gfn, nr, mfn,
-                              MATTR_MEM, p2m_mmio_direct);
+                              MATTR_MEM, p2m_mmio_direct_c);
 }
 
 int unmap_regions_rw_cache(struct domain *d,
@@ -1212,7 +1213,7 @@ int map_mmio_regions(struct domain *d,
                      mfn_t mfn)
 {
     return p2m_insert_mapping(d, start_gfn, nr, mfn,
-                              MATTR_DEV, p2m_mmio_direct);
+                              MATTR_DEV, p2m_mmio_direct_nc);
 }
 
 int unmap_mmio_regions(struct domain *d,
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index 78d37ab..20a220ea 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -87,7 +87,8 @@ typedef enum {
     p2m_invalid = 0,    /* Nothing mapped here */
     p2m_ram_rw,         /* Normal read/write guest RAM */
     p2m_ram_ro,         /* Read-only; writes are silently dropped */
-    p2m_mmio_direct,    /* Read/write mapping of genuine MMIO area */
+    p2m_mmio_direct_nc, /* Read/write mapping of genuine MMIO area non-cacheable */
+    p2m_mmio_direct_c,  /* Read/write mapping of genuine MMIO area cacheable */
     p2m_map_foreign,    /* Ram pages from foreign domain */
     p2m_grant_map_rw,   /* Read/write grant mapping */
     p2m_grant_map_ro,   /* Read-only grant mapping */
-- 
1.9.1



* [PATCH 11/22] xen/arm: p2m: Find the memory attributes based on the p2m type
  2016-07-20 16:10 [PATCH 00/22] xen/arm: P2M clean-up and fixes Julien Grall
                   ` (9 preceding siblings ...)
  2016-07-20 16:10 ` [PATCH 10/22] xen/arm: p2m: Differentiate cacheable vs non-cacheable MMIO Julien Grall
@ 2016-07-20 16:10 ` Julien Grall
  2016-07-27  0:41   ` Stefano Stabellini
  2016-07-27 17:15   ` Julien Grall
  2016-07-20 16:10 ` [PATCH 12/22] xen/arm: p2m: Remove unnecessary locking Julien Grall
                   ` (11 subsequent siblings)
  22 siblings, 2 replies; 80+ messages in thread
From: Julien Grall @ 2016-07-20 16:10 UTC (permalink / raw)
  To: xen-devel; +Cc: proskurin, Julien Grall, sstabellini, wei.chen, steve.capper

Currently, mfn_to_p2m_entry relies on the caller to provide the correct
memory attribute and deduces the shareability from it.

Some of the callers, such as p2m_create_table, use the same memory
attribute regardless of the underlying p2m type. For instance, this will
change the memory attribute from MATTR_DEV to MATTR_MEM when an MMIO
superpage is shattered.

Furthermore, it makes it more difficult to support different shareability
with the same memory attribute.

All the memory attributes can be deduced from the p2m type. This
simplifies the code by dropping one parameter.
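
For quick reference, the deduction implemented in mfn_to_p2m_entry by the
diff below is:

    /*
     * p2m_mmio_direct_nc -> MATTR_DEV, LPAE_SH_OUTER
     * p2m_mmio_direct_c  -> MATTR_MEM, LPAE_SH_OUTER (shareability still
     *                       in question, see the note below)
     * all other types    -> MATTR_MEM, LPAE_SH_INNER
     */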

---
    I am not sure whether p2m_mmio_direct_c (cacheable MMIO) should use
    outer-shareability or inner-shareability. Any opinions?
---
 xen/arch/arm/p2m.c | 55 ++++++++++++++++++++++++------------------------------
 1 file changed, 24 insertions(+), 31 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 999de2b..2f50b4f 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -325,8 +325,7 @@ static void p2m_set_permission(lpae_t *e, p2m_type_t t, p2m_access_t a)
     }
 }
 
-static lpae_t mfn_to_p2m_entry(mfn_t mfn, unsigned int mattr,
-                               p2m_type_t t, p2m_access_t a)
+static lpae_t mfn_to_p2m_entry(mfn_t mfn, p2m_type_t t, p2m_access_t a)
 {
     /*
      * sh, xn and write bit will be defined in the following switches
@@ -335,7 +334,6 @@ static lpae_t mfn_to_p2m_entry(mfn_t mfn, unsigned int mattr,
     lpae_t e = (lpae_t) {
         .p2m.af = 1,
         .p2m.read = 1,
-        .p2m.mattr = mattr,
         .p2m.table = 1,
         .p2m.valid = 1,
         .p2m.type = t,
@@ -343,18 +341,21 @@ static lpae_t mfn_to_p2m_entry(mfn_t mfn, unsigned int mattr,
 
     BUILD_BUG_ON(p2m_max_real_type > (1 << 4));
 
-    switch (mattr)
+    switch ( t )
     {
-    case MATTR_MEM:
-        e.p2m.sh = LPAE_SH_INNER;
+    case p2m_mmio_direct_nc:
+        e.p2m.mattr = MATTR_DEV;
+        e.p2m.sh = LPAE_SH_OUTER;
         break;
 
-    case MATTR_DEV:
+    case p2m_mmio_direct_c:
+        e.p2m.mattr = MATTR_MEM;
         e.p2m.sh = LPAE_SH_OUTER;
         break;
+
     default:
-        BUG();
-        break;
+        e.p2m.mattr = MATTR_MEM;
+        e.p2m.sh = LPAE_SH_INNER;
     }
 
     p2m_set_permission(&e, t, a);
@@ -421,7 +422,7 @@ static int p2m_create_table(struct domain *d, lpae_t *entry,
          */
          for ( i=0 ; i < LPAE_ENTRIES; i++ )
          {
-             pte = mfn_to_p2m_entry(mfn, MATTR_MEM, t, p2m->default_access);
+             pte = mfn_to_p2m_entry(mfn, t, p2m->default_access);
 
              mfn = mfn_add(mfn, 1UL << (level_shift - LPAE_SHIFT));
 
@@ -445,7 +446,7 @@ static int p2m_create_table(struct domain *d, lpae_t *entry,
 
     unmap_domain_page(p);
 
-    pte = mfn_to_p2m_entry(_mfn(page_to_mfn(page)), MATTR_MEM, p2m_invalid,
+    pte = mfn_to_p2m_entry(_mfn(page_to_mfn(page)), p2m_invalid,
                            p2m->default_access);
 
     p2m_write_pte(entry, pte, flush_cache);
@@ -666,7 +667,6 @@ static int apply_one_level(struct domain *d,
                            paddr_t *addr,
                            paddr_t *maddr,
                            bool_t *flush,
-                           int mattr,
                            p2m_type_t t,
                            p2m_access_t a)
 {
@@ -695,7 +695,7 @@ static int apply_one_level(struct domain *d,
                 return rc;
 
             /* New mapping is superpage aligned, make it */
-            pte = mfn_to_p2m_entry(_mfn(*maddr >> PAGE_SHIFT), mattr, t, a);
+            pte = mfn_to_p2m_entry(_mfn(*maddr >> PAGE_SHIFT), t, a);
             if ( level < 3 )
                 pte.p2m.table = 0; /* Superpage entry */
 
@@ -915,7 +915,6 @@ static int apply_p2m_changes(struct domain *d,
                      gfn_t sgfn,
                      unsigned long nr,
                      mfn_t smfn,
-                     int mattr,
                      uint32_t mask,
                      p2m_type_t t,
                      p2m_access_t a)
@@ -1054,7 +1053,7 @@ static int apply_p2m_changes(struct domain *d,
                                   level, flush_pt, op,
                                   start_gpaddr, end_gpaddr,
                                   &addr, &maddr, &flush,
-                                  mattr, t, a);
+                                  t, a);
             if ( ret < 0 ) { rc = ret ; goto out; }
             count += ret;
 
@@ -1163,7 +1162,7 @@ out:
          * mapping.
          */
         apply_p2m_changes(d, REMOVE, sgfn, gfn - gfn_x(sgfn), smfn,
-                          mattr, 0, p2m_invalid, d->arch.p2m.default_access);
+                          0, p2m_invalid, d->arch.p2m.default_access);
     }
 
     return rc;
@@ -1173,10 +1172,10 @@ static inline int p2m_insert_mapping(struct domain *d,
                                      gfn_t start_gfn,
                                      unsigned long nr,
                                      mfn_t mfn,
-                                     int mattr, p2m_type_t t)
+                                     p2m_type_t t)
 {
     return apply_p2m_changes(d, INSERT, start_gfn, nr, mfn,
-                             mattr, 0, t, d->arch.p2m.default_access);
+                             0, t, d->arch.p2m.default_access);
 }
 
 static inline int p2m_remove_mapping(struct domain *d,
@@ -1186,8 +1185,7 @@ static inline int p2m_remove_mapping(struct domain *d,
 {
     return apply_p2m_changes(d, REMOVE, start_gfn, nr, mfn,
                              /* arguments below not used when removing mapping */
-                             MATTR_MEM, 0, p2m_invalid,
-                             d->arch.p2m.default_access);
+                             0, p2m_invalid, d->arch.p2m.default_access);
 }
 
 int map_regions_rw_cache(struct domain *d,
@@ -1195,8 +1193,7 @@ int map_regions_rw_cache(struct domain *d,
                          unsigned long nr,
                          mfn_t mfn)
 {
-    return p2m_insert_mapping(d, gfn, nr, mfn,
-                              MATTR_MEM, p2m_mmio_direct_c);
+    return p2m_insert_mapping(d, gfn, nr, mfn, p2m_mmio_direct_c);
 }
 
 int unmap_regions_rw_cache(struct domain *d,
@@ -1212,8 +1209,7 @@ int map_mmio_regions(struct domain *d,
                      unsigned long nr,
                      mfn_t mfn)
 {
-    return p2m_insert_mapping(d, start_gfn, nr, mfn,
-                              MATTR_DEV, p2m_mmio_direct_nc);
+    return p2m_insert_mapping(d, start_gfn, nr, mfn, p2m_mmio_direct_nc);
 }
 
 int unmap_mmio_regions(struct domain *d,
@@ -1251,8 +1247,7 @@ int guest_physmap_add_entry(struct domain *d,
                             unsigned long page_order,
                             p2m_type_t t)
 {
-    return p2m_insert_mapping(d, gfn, (1 << page_order), mfn,
-                              MATTR_MEM, t);
+    return p2m_insert_mapping(d, gfn, (1 << page_order), mfn, t);
 }
 
 void guest_physmap_remove_page(struct domain *d,
@@ -1412,7 +1407,7 @@ int relinquish_p2m_mapping(struct domain *d)
     nr = gfn_x(p2m->max_mapped_gfn) - gfn_x(p2m->lowest_mapped_gfn);
 
     return apply_p2m_changes(d, RELINQUISH, p2m->lowest_mapped_gfn, nr,
-                             INVALID_MFN, MATTR_MEM, 0, p2m_invalid,
+                             INVALID_MFN, 0, p2m_invalid,
                              d->arch.p2m.default_access);
 }
 
@@ -1425,8 +1420,7 @@ int p2m_cache_flush(struct domain *d, gfn_t start, unsigned long nr)
     end = gfn_min(end, p2m->max_mapped_gfn);
 
     return apply_p2m_changes(d, CACHEFLUSH, start, nr, INVALID_MFN,
-                             MATTR_MEM, 0, p2m_invalid,
-                             d->arch.p2m.default_access);
+                             0, p2m_invalid, d->arch.p2m.default_access);
 }
 
 mfn_t gfn_to_mfn(struct domain *d, gfn_t gfn)
@@ -1827,8 +1821,7 @@ long p2m_set_mem_access(struct domain *d, gfn_t gfn, uint32_t nr,
     }
 
     rc = apply_p2m_changes(d, MEMACCESS, gfn_add(gfn, start),
-                           (nr - start), INVALID_MFN,
-                           MATTR_MEM, mask, 0, a);
+                           (nr - start), INVALID_MFN, mask, 0, a);
     if ( rc < 0 )
         return rc;
     else if ( rc > 0 )
-- 
1.9.1



* [PATCH 12/22] xen/arm: p2m: Remove unnecessary locking
  2016-07-20 16:10 [PATCH 00/22] xen/arm: P2M clean-up and fixes Julien Grall
                   ` (10 preceding siblings ...)
  2016-07-20 16:10 ` [PATCH 11/22] xen/arm: p2m: Find the memory attributes based on the p2m type Julien Grall
@ 2016-07-20 16:10 ` Julien Grall
  2016-07-27  0:47   ` Stefano Stabellini
  2016-07-20 16:10 ` [PATCH 13/22] xen/arm: p2m: Introduce p2m_{read, write}_{, un}lock helpers Julien Grall
                   ` (10 subsequent siblings)
  22 siblings, 1 reply; 80+ messages in thread
From: Julien Grall @ 2016-07-20 16:10 UTC (permalink / raw)
  To: xen-devel; +Cc: proskurin, Julien Grall, sstabellini, wei.chen, steve.capper

The p2m is not yet in use when p2m_init and p2m_alloc_table are
called. Furthermore, the p2m is no longer used when p2m_teardown is
called. So taking the p2m lock is not necessary.

Signed-off-by: Julien Grall <julien.grall@arm.com>
---
 xen/arch/arm/p2m.c | 14 +-------------
 1 file changed, 1 insertion(+), 13 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 2f50b4f..4c279dc 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1267,8 +1267,6 @@ int p2m_alloc_table(struct domain *d)
     if ( page == NULL )
         return -ENOMEM;
 
-    spin_lock(&p2m->lock);
-
     /* Clear both first level pages */
     for ( i = 0; i < P2M_ROOT_PAGES; i++ )
         clear_and_clean_page(page + i);
@@ -1284,8 +1282,6 @@ int p2m_alloc_table(struct domain *d)
      */
     flush_tlb_domain(d);
 
-    spin_unlock(&p2m->lock);
-
     return 0;
 }
 
@@ -1350,8 +1346,6 @@ void p2m_teardown(struct domain *d)
     struct p2m_domain *p2m = &d->arch.p2m;
     struct page_info *pg;
 
-    spin_lock(&p2m->lock);
-
     while ( (pg = page_list_remove_head(&p2m->pages)) )
         free_domheap_page(pg);
 
@@ -1363,8 +1357,6 @@ void p2m_teardown(struct domain *d)
     p2m_free_vmid(d);
 
     radix_tree_destroy(&p2m->mem_access_settings, NULL);
-
-    spin_unlock(&p2m->lock);
 }
 
 int p2m_init(struct domain *d)
@@ -1375,12 +1367,11 @@ int p2m_init(struct domain *d)
     spin_lock_init(&p2m->lock);
     INIT_PAGE_LIST_HEAD(&p2m->pages);
 
-    spin_lock(&p2m->lock);
     p2m->vmid = INVALID_VMID;
 
     rc = p2m_alloc_vmid(d);
     if ( rc != 0 )
-        goto err;
+        return rc;
 
     d->arch.vttbr = 0;
 
@@ -1393,9 +1384,6 @@ int p2m_init(struct domain *d)
     p2m->mem_access_enabled = false;
     radix_tree_init(&p2m->mem_access_settings);
 
-err:
-    spin_unlock(&p2m->lock);
-
     return rc;
 }
 
-- 
1.9.1



* [PATCH 13/22] xen/arm: p2m: Introduce p2m_{read, write}_{, un}lock helpers
  2016-07-20 16:10 [PATCH 00/22] xen/arm: P2M clean-up and fixes Julien Grall
                   ` (11 preceding siblings ...)
  2016-07-20 16:10 ` [PATCH 12/22] xen/arm: p2m: Remove unnecessary locking Julien Grall
@ 2016-07-20 16:10 ` Julien Grall
  2016-07-27  0:50   ` Stefano Stabellini
  2016-07-20 16:10 ` [PATCH 14/22] xen/arm: p2m: Switch the p2m lock from spinlock to rwlock Julien Grall
                   ` (9 subsequent siblings)
  22 siblings, 1 reply; 80+ messages in thread
From: Julien Grall @ 2016-07-20 16:10 UTC (permalink / raw)
  To: xen-devel; +Cc: proskurin, Julien Grall, sstabellini, wei.chen, steve.capper

Some functions in the p2m code do not need to modify the p2m.
Document this by introducing separate helpers to lock the p2m for
reading or for writing.

This patch does not change the lock. This will be done in a subsequent
patch.
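
Typical usage after this patch (taken from the diff below):

    p2m_read_lock(p2m);
    ret = __p2m_lookup(d, gfn, t);   /* read-only walk of the p2m */
    p2m_read_unlock(p2m);

    p2m_write_lock(p2m);
    /* ... apply_p2m_changes() modifies the page tables ... */
    p2m_write_unlock(p2m);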

Signed-off-by: Julien Grall <julien.grall@arm.com>
---
 xen/arch/arm/p2m.c | 49 +++++++++++++++++++++++++++++++++++++------------
 1 file changed, 37 insertions(+), 12 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 4c279dc..d74c249 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -47,11 +47,36 @@ static bool_t p2m_mapping(lpae_t pte)
     return p2m_valid(pte) && !pte.p2m.table;
 }
 
+static inline void p2m_write_lock(struct p2m_domain *p2m)
+{
+    spin_lock(&p2m->lock);
+}
+
+static inline void p2m_write_unlock(struct p2m_domain *p2m)
+{
+    spin_unlock(&p2m->lock);
+}
+
+static inline void p2m_read_lock(struct p2m_domain *p2m)
+{
+    spin_lock(&p2m->lock);
+}
+
+static inline void p2m_read_unlock(struct p2m_domain *p2m)
+{
+    spin_unlock(&p2m->lock);
+}
+
+static inline int p2m_is_locked(struct p2m_domain *p2m)
+{
+    return spin_is_locked(&p2m->lock);
+}
+
 void p2m_dump_info(struct domain *d)
 {
     struct p2m_domain *p2m = &d->arch.p2m;
 
-    spin_lock(&p2m->lock);
+    p2m_read_lock(p2m);
     printk("p2m mappings for domain %d (vmid %d):\n",
            d->domain_id, p2m->vmid);
     BUG_ON(p2m->stats.mappings[0] || p2m->stats.shattered[0]);
@@ -60,7 +85,7 @@ void p2m_dump_info(struct domain *d)
     printk("  2M mappings: %ld (shattered %ld)\n",
            p2m->stats.mappings[2], p2m->stats.shattered[2]);
     printk("  4K mappings: %ld\n", p2m->stats.mappings[3]);
-    spin_unlock(&p2m->lock);
+    p2m_read_unlock(p2m);
 }
 
 void memory_type_changed(struct domain *d)
@@ -166,7 +191,7 @@ static mfn_t __p2m_lookup(struct domain *d, gfn_t gfn, p2m_type_t *t)
     p2m_type_t _t;
     unsigned int level, root_table;
 
-    ASSERT(spin_is_locked(&p2m->lock));
+    ASSERT(p2m_is_locked(p2m));
     BUILD_BUG_ON(THIRD_MASK != PAGE_MASK);
 
     /* Allow t to be NULL */
@@ -233,9 +258,9 @@ mfn_t p2m_lookup(struct domain *d, gfn_t gfn, p2m_type_t *t)
     mfn_t ret;
     struct p2m_domain *p2m = &d->arch.p2m;
 
-    spin_lock(&p2m->lock);
+    p2m_read_lock(p2m);
     ret = __p2m_lookup(d, gfn, t);
-    spin_unlock(&p2m->lock);
+    p2m_read_unlock(p2m);
 
     return ret;
 }
@@ -476,7 +501,7 @@ static int __p2m_get_mem_access(struct domain *d, gfn_t gfn,
 #undef ACCESS
     };
 
-    ASSERT(spin_is_locked(&p2m->lock));
+    ASSERT(p2m_is_locked(p2m));
 
     /* If no setting was ever set, just return rwx. */
     if ( !p2m->mem_access_enabled )
@@ -945,7 +970,7 @@ static int apply_p2m_changes(struct domain *d,
      */
     flush_pt = iommu_enabled && !iommu_has_feature(d, IOMMU_FEAT_COHERENT_WALK);
 
-    spin_lock(&p2m->lock);
+    p2m_write_lock(p2m);
 
     /* Static mapping. P2M_ROOT_PAGES > 1 are handled below */
     if ( P2M_ROOT_PAGES == 1 )
@@ -1149,7 +1174,7 @@ out:
             unmap_domain_page(mappings[level]);
     }
 
-    spin_unlock(&p2m->lock);
+    p2m_write_unlock(p2m);
 
     if ( rc < 0 && ( op == INSERT ) &&
          addr != start_gpaddr )
@@ -1530,7 +1555,7 @@ struct page_info *get_page_from_gva(struct vcpu *v, vaddr_t va,
     if ( v != current )
         return NULL;
 
-    spin_lock(&p2m->lock);
+    p2m_read_lock(p2m);
 
     rc = gvirt_to_maddr(va, &maddr, flags);
 
@@ -1550,7 +1575,7 @@ err:
     if ( !page && p2m->mem_access_enabled )
         page = p2m_mem_access_check_and_get_page(va, flags);
 
-    spin_unlock(&p2m->lock);
+    p2m_read_unlock(p2m);
 
     return page;
 }
@@ -1824,9 +1849,9 @@ int p2m_get_mem_access(struct domain *d, gfn_t gfn,
     int ret;
     struct p2m_domain *p2m = p2m_get_hostp2m(d);
 
-    spin_lock(&p2m->lock);
+    p2m_read_lock(p2m);
     ret = __p2m_get_mem_access(d, gfn, access);
-    spin_unlock(&p2m->lock);
+    p2m_read_unlock(p2m);
 
     return ret;
 }
-- 
1.9.1



* [PATCH 14/22] xen/arm: p2m: Switch the p2m lock from spinlock to rwlock
  2016-07-20 16:10 [PATCH 00/22] xen/arm: P2M clean-up and fixes Julien Grall
                   ` (12 preceding siblings ...)
  2016-07-20 16:10 ` [PATCH 13/22] xen/arm: p2m: Introduce p2m_{read, write}_{, un}lock helpers Julien Grall
@ 2016-07-20 16:10 ` Julien Grall
  2016-07-27  0:51   ` Stefano Stabellini
  2016-07-20 16:10 ` [PATCH 15/22] xen/arm: Don't call p2m_alloc_table from arch_domain_create Julien Grall
                   ` (8 subsequent siblings)
  22 siblings, 1 reply; 80+ messages in thread
From: Julien Grall @ 2016-07-20 16:10 UTC (permalink / raw)
  To: xen-devel; +Cc: proskurin, Julien Grall, sstabellini, wei.chen, steve.capper

P2M reads do not need to be serialized. Serializing them adds contention
when PV drivers are using multi-queue, because parallel grant
map/unmap/copy operations will happen on the DomU's p2m.

Signed-off-by: Julien Grall <julien.grall@arm.com>

---
    I have not run benchmarks to verify the performance; however, a rwlock
    is always an improvement compared to a spinlock when most of the
    accesses only read data.

    It might be possible to convert the rwlock to a per-cpu rwlock, which
    has shown some improvement on x86.
---
 xen/arch/arm/p2m.c        | 12 ++++++------
 xen/include/asm-arm/p2m.h |  3 ++-
 2 files changed, 8 insertions(+), 7 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index d74c249..6136767 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -49,27 +49,27 @@ static bool_t p2m_mapping(lpae_t pte)
 
 static inline void p2m_write_lock(struct p2m_domain *p2m)
 {
-    spin_lock(&p2m->lock);
+    write_lock(&p2m->lock);
 }
 
 static inline void p2m_write_unlock(struct p2m_domain *p2m)
 {
-    spin_unlock(&p2m->lock);
+    write_unlock(&p2m->lock);
 }
 
 static inline void p2m_read_lock(struct p2m_domain *p2m)
 {
-    spin_lock(&p2m->lock);
+    read_lock(&p2m->lock);
 }
 
 static inline void p2m_read_unlock(struct p2m_domain *p2m)
 {
-    spin_unlock(&p2m->lock);
+    read_unlock(&p2m->lock);
 }
 
 static inline int p2m_is_locked(struct p2m_domain *p2m)
 {
-    return spin_is_locked(&p2m->lock);
+    return rw_is_locked(&p2m->lock);
 }
 
 void p2m_dump_info(struct domain *d)
@@ -1389,7 +1389,7 @@ int p2m_init(struct domain *d)
     struct p2m_domain *p2m = &d->arch.p2m;
     int rc = 0;
 
-    spin_lock_init(&p2m->lock);
+    rwlock_init(&p2m->lock);
     INIT_PAGE_LIST_HEAD(&p2m->pages);
 
     p2m->vmid = INVALID_VMID;
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index 20a220ea..abda70c 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -3,6 +3,7 @@
 
 #include <xen/mm.h>
 #include <xen/radix-tree.h>
+#include <xen/rwlock.h>
 #include <public/vm_event.h> /* for vm_event_response_t */
 #include <public/memory.h>
 #include <xen/p2m-common.h>
@@ -20,7 +21,7 @@ extern void memory_type_changed(struct domain *);
 /* Per-p2m-table state */
 struct p2m_domain {
     /* Lock that protects updates to the p2m */
-    spinlock_t lock;
+    rwlock_t lock;
 
     /* Pages used to construct the p2m */
     struct page_list_head pages;
-- 
1.9.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply related	[flat|nested] 80+ messages in thread

* [PATCH 15/22] xen/arm: Don't call p2m_alloc_table from arch_domain_create
  2016-07-20 16:10 [PATCH 00/22] xen/arm: P2M clean-up and fixes Julien Grall
                   ` (13 preceding siblings ...)
  2016-07-20 16:10 ` [PATCH 14/22] xen/arm: p2m: Switch the p2m lock from spinlock to rwlock Julien Grall
@ 2016-07-20 16:10 ` Julien Grall
  2016-07-22  8:32   ` Sergej Proskurin
  2016-07-27  0:54   ` Stefano Stabellini
  2016-07-20 16:10 ` [PATCH 16/22] xen/arm: p2m: Move the vttbr field from arch_domain to p2m_domain Julien Grall
                   ` (7 subsequent siblings)
  22 siblings, 2 replies; 80+ messages in thread
From: Julien Grall @ 2016-07-20 16:10 UTC (permalink / raw)
  To: xen-devel; +Cc: proskurin, Julien Grall, sstabellini, wei.chen, steve.capper

The p2m root table does not need to be allocated in a separate step
from arch_domain_create: it can be allocated directly by p2m_init.

Also remove unnecessary field initialization, as the structure is
already memset to 0 and the fields will be overridden by p2m_alloc_table.

Signed-off-by: Julien Grall <julien.grall@arm.com>
---
 xen/arch/arm/domain.c     | 3 ---
 xen/arch/arm/p2m.c        | 8 +++-----
 xen/include/asm-arm/p2m.h | 7 -------
 3 files changed, 3 insertions(+), 15 deletions(-)

diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 61fc08e..688adec 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -572,9 +572,6 @@ int arch_domain_create(struct domain *d, unsigned int domcr_flags,
     if ( (rc = domain_io_init(d)) != 0 )
         goto fail;
 
-    if ( (rc = p2m_alloc_table(d)) != 0 )
-        goto fail;
-
     switch ( config->gic_version )
     {
     case XEN_DOMCTL_CONFIG_GIC_NATIVE:
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 6136767..c407e6a 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1282,7 +1282,7 @@ void guest_physmap_remove_page(struct domain *d,
     p2m_remove_mapping(d, gfn, (1 << page_order), mfn);
 }
 
-int p2m_alloc_table(struct domain *d)
+static int p2m_alloc_table(struct domain *d)
 {
     struct p2m_domain *p2m = &d->arch.p2m;
     struct page_info *page;
@@ -1398,10 +1398,6 @@ int p2m_init(struct domain *d)
     if ( rc != 0 )
         return rc;
 
-    d->arch.vttbr = 0;
-
-    p2m->root = NULL;
-
     p2m->max_mapped_gfn = _gfn(0);
     p2m->lowest_mapped_gfn = _gfn(ULONG_MAX);
 
@@ -1409,6 +1405,8 @@ int p2m_init(struct domain *d)
     p2m->mem_access_enabled = false;
     radix_tree_init(&p2m->mem_access_settings);
 
+    rc = p2m_alloc_table(d);
+
     return rc;
 }
 
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index abda70c..ce28e8a 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -149,13 +149,6 @@ void p2m_teardown(struct domain *d);
  */
 int relinquish_p2m_mapping(struct domain *d);
 
-/*
- * Allocate a new p2m table for a domain.
- *
- * Returns 0 for success or -errno.
- */
-int p2m_alloc_table(struct domain *d);
-
 /* Context switch */
 void p2m_save_state(struct vcpu *p);
 void p2m_restore_state(struct vcpu *n);
-- 
1.9.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply related	[flat|nested] 80+ messages in thread

* [PATCH 16/22] xen/arm: p2m: Move the vttbr field from arch_domain to p2m_domain
  2016-07-20 16:10 [PATCH 00/22] xen/arm: P2M clean-up and fixes Julien Grall
                   ` (14 preceding siblings ...)
  2016-07-20 16:10 ` [PATCH 15/22] xen/arm: Don't call p2m_alloc_table from arch_domain_create Julien Grall
@ 2016-07-20 16:10 ` Julien Grall
  2016-07-22  7:46   ` Sergej Proskurin
                     ` (2 more replies)
  2016-07-20 16:10 ` [PATCH 17/22] xen/arm: p2m: Don't need to restore the state for an idle vCPU Julien Grall
                   ` (6 subsequent siblings)
  22 siblings, 3 replies; 80+ messages in thread
From: Julien Grall @ 2016-07-20 16:10 UTC (permalink / raw)
  To: xen-devel; +Cc: proskurin, Julien Grall, sstabellini, wei.chen, steve.capper

The field vttbr holds the base address of the stage-2 translation table
for the guest. Its value depends on how the p2m has been initialized and
is only used by the p2m code.

So move the field from arch_domain to p2m_domain. This will also ease
the implementation of altp2m.
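
For reference, the value written to VTTBR_EL2 is composed as below (a
sketch only; the helper name is made up for illustration). The 8-bit
VMID lives in bits [55:48] of VTTBR_EL2 and the root table physical
address sits in the lower bits, which matches the expression used in
p2m_alloc_table():

    #include <stdint.h>

    /* Illustration of the p2m->vttbr encoding used in this patch. */
    static uint64_t p2m_vttbr_value(uint64_t root_maddr, uint8_t vmid)
    {
        return root_maddr | ((uint64_t)vmid & 0xff) << 48;
    }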
---
 xen/arch/arm/p2m.c           | 11 +++++++----
 xen/arch/arm/traps.c         |  2 +-
 xen/include/asm-arm/domain.h |  1 -
 xen/include/asm-arm/p2m.h    |  3 +++
 4 files changed, 11 insertions(+), 6 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index c407e6a..c52081a 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -107,10 +107,14 @@ void dump_p2m_lookup(struct domain *d, paddr_t addr)
 
 static void p2m_load_VTTBR(struct domain *d)
 {
+    struct p2m_domain *p2m = &d->arch.p2m;
+
     if ( is_idle_domain(d) )
         return;
-    BUG_ON(!d->arch.vttbr);
-    WRITE_SYSREG64(d->arch.vttbr, VTTBR_EL2);
+
+    ASSERT(p2m->vttbr);
+
+    WRITE_SYSREG64(p2m->vttbr, VTTBR_EL2);
     isb(); /* Ensure update is visible */
 }
 
@@ -1298,8 +1302,7 @@ static int p2m_alloc_table(struct domain *d)
 
     p2m->root = page;
 
-    d->arch.vttbr = page_to_maddr(p2m->root)
-        | ((uint64_t)p2m->vmid&0xff)<<48;
+    p2m->vttbr = page_to_maddr(p2m->root) | ((uint64_t)p2m->vmid & 0xff) << 48;
 
     /*
      * Make sure that all TLBs corresponding to the new VMID are flushed
diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index 06a8ee5..65c6fb4 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -880,7 +880,7 @@ void vcpu_show_registers(const struct vcpu *v)
     ctxt.ifsr32_el2 = v->arch.ifsr;
 #endif
 
-    ctxt.vttbr_el2 = v->domain->arch.vttbr;
+    ctxt.vttbr_el2 = v->domain->arch.p2m.vttbr;
 
     _show_registers(&v->arch.cpu_info->guest_cpu_user_regs, &ctxt, 1, v);
 }
diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
index 4e9d8bf..9452fcd 100644
--- a/xen/include/asm-arm/domain.h
+++ b/xen/include/asm-arm/domain.h
@@ -48,7 +48,6 @@ struct arch_domain
 
     /* Virtual MMU */
     struct p2m_domain p2m;
-    uint64_t vttbr;
 
     struct hvm_domain hvm_domain;
     gfn_t *grant_table_gfn;
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index ce28e8a..53c4d78 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -32,6 +32,9 @@ struct p2m_domain {
     /* Current VMID in use */
     uint8_t vmid;
 
+    /* Current Translation Table Base Register for the p2m */
+    uint64_t vttbr;
+
     /*
      * Highest guest frame that's ever been mapped in the p2m
      * Only takes into account ram and foreign mapping
-- 
1.9.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply related	[flat|nested] 80+ messages in thread

* [PATCH 17/22] xen/arm: p2m: Don't need to restore the state for an idle vCPU.
  2016-07-20 16:10 [PATCH 00/22] xen/arm: P2M clean-up and fixes Julien Grall
                   ` (15 preceding siblings ...)
  2016-07-20 16:10 ` [PATCH 16/22] xen/arm: p2m: Move the vttbr field from arch_domain to p2m_domain Julien Grall
@ 2016-07-20 16:10 ` Julien Grall
  2016-07-22  7:37   ` Sergej Proskurin
  2016-07-27  1:05   ` Stefano Stabellini
  2016-07-20 16:11 ` [PATCH 18/22] xen/arm: p2m: Rework the context switch to another VTTBR in flush_tlb_domain Julien Grall
                   ` (5 subsequent siblings)
  22 siblings, 2 replies; 80+ messages in thread
From: Julien Grall @ 2016-07-20 16:10 UTC (permalink / raw)
  To: xen-devel; +Cc: proskurin, Julien Grall, sstabellini, wei.chen, steve.capper

The function p2m_restore_state can be called with an idle vCPU as
argument (when called by construct_dom0). However, we will never return
to EL0/EL1 in this case, so it is not necessary to restore the p2m
registers.

Signed-off-by: Julien Grall <julien.grall@arm.com>
---
 xen/arch/arm/p2m.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index c52081a..d1b6009 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -127,6 +127,9 @@ void p2m_restore_state(struct vcpu *n)
 {
     register_t hcr;
 
+    if ( is_idle_vcpu(n) )
+        return;
+
     hcr = READ_SYSREG(HCR_EL2);
     WRITE_SYSREG(hcr & ~HCR_VM, HCR_EL2);
     isb();
-- 
1.9.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply related	[flat|nested] 80+ messages in thread

* [PATCH 18/22] xen/arm: p2m: Rework the context switch to another VTTBR in flush_tlb_domain
  2016-07-20 16:10 [PATCH 00/22] xen/arm: P2M clean-up and fixes Julien Grall
                   ` (16 preceding siblings ...)
  2016-07-20 16:10 ` [PATCH 17/22] xen/arm: p2m: Don't need to restore the state for an idle vCPU Julien Grall
@ 2016-07-20 16:11 ` Julien Grall
  2016-07-22  7:51   ` Sergej Proskurin
  2016-07-27  1:12   ` Stefano Stabellini
  2016-07-20 16:11 ` [PATCH 19/22] xen/arm: p2m: Inline p2m_load_VTTBR into p2m_restore_state Julien Grall
                   ` (4 subsequent siblings)
  22 siblings, 2 replies; 80+ messages in thread
From: Julien Grall @ 2016-07-20 16:11 UTC (permalink / raw)
  To: xen-devel; +Cc: proskurin, Julien Grall, sstabellini, wei.chen, steve.capper

The current implementation of flush_tlb_domain relies on the domain
having a single p2m. With the upcoming altp2m feature, a single domain
may have several p2ms, so we need to switch to the correct p2m in order
to flush the TLBs.

Rather than checking whether the domain is not the current domain, check
whether the VTTBR is different. The resulting assembly code is much
smaller: from 38 instructions (+ 2 function calls) to 22 instructions.

Signed-off-by: Julien Grall <julien.grall@arm.com>
---
 xen/arch/arm/p2m.c | 18 +++++++++++-------
 1 file changed, 11 insertions(+), 7 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index d1b6009..015c1e8 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -151,24 +151,28 @@ void p2m_restore_state(struct vcpu *n)
 
 void flush_tlb_domain(struct domain *d)
 {
+    struct p2m_domain *p2m = &d->arch.p2m;
     unsigned long flags = 0;
+    uint64_t ovttbr;
 
     /*
-     * Update the VTTBR if necessary with the domain d. In this case,
-     * it's only necessary to flush TLBs on every CPUs with the current VMID
-     * (our domain).
+     * ARM only provides an instruction to flush TLBs for the current
+     * VMID. So switch to the VTTBR of a given P2M if different.
      */
-    if ( d != current->domain )
+    ovttbr = READ_SYSREG64(VTTBR_EL2);
+    if ( ovttbr != p2m->vttbr )
     {
         local_irq_save(flags);
-        p2m_load_VTTBR(d);
+        WRITE_SYSREG64(p2m->vttbr, VTTBR_EL2);
+        isb();
     }
 
     flush_tlb();
 
-    if ( d != current->domain )
+    if ( ovttbr != READ_SYSREG64(VTTBR_EL2) )
     {
-        p2m_load_VTTBR(current->domain);
+        WRITE_SYSREG64(ovttbr, VTTBR_EL2);
+        isb();
         local_irq_restore(flags);
     }
 }
-- 
1.9.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply related	[flat|nested] 80+ messages in thread

* [PATCH 19/22] xen/arm: p2m: Inline p2m_load_VTTBR into p2m_restore_state
  2016-07-20 16:10 [PATCH 00/22] xen/arm: P2M clean-up and fixes Julien Grall
                   ` (17 preceding siblings ...)
  2016-07-20 16:11 ` [PATCH 18/22] xen/arm: p2m: Rework the context switch to another VTTBR in flush_tlb_domain Julien Grall
@ 2016-07-20 16:11 ` Julien Grall
  2016-07-22  8:07   ` Sergej Proskurin
  2016-07-27  1:13   ` Stefano Stabellini
  2016-07-20 16:11 ` [PATCH 20/22] xen/arm: Don't export flush_tlb_domain Julien Grall
                   ` (3 subsequent siblings)
  22 siblings, 2 replies; 80+ messages in thread
From: Julien Grall @ 2016-07-20 16:11 UTC (permalink / raw)
  To: xen-devel; +Cc: proskurin, Julien Grall, sstabellini, wei.chen, steve.capper

p2m_restore_state is the last caller of p2m_load_VTTBR and already
checks that the vCPU does not belong to the idle domain.

Note that it is likely possible to remove some of the isb instructions
in p2m_restore_state; however, that is not the purpose of this patch, so
the numerous isb have been left in place.

Signed-off-by: Julien Grall <julien.grall@arm.com>
---
 xen/arch/arm/p2m.c | 16 ++--------------
 1 file changed, 2 insertions(+), 14 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 015c1e8..c756e0c 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -105,19 +105,6 @@ void dump_p2m_lookup(struct domain *d, paddr_t addr)
                  P2M_ROOT_LEVEL, P2M_ROOT_PAGES);
 }
 
-static void p2m_load_VTTBR(struct domain *d)
-{
-    struct p2m_domain *p2m = &d->arch.p2m;
-
-    if ( is_idle_domain(d) )
-        return;
-
-    ASSERT(p2m->vttbr);
-
-    WRITE_SYSREG64(p2m->vttbr, VTTBR_EL2);
-    isb(); /* Ensure update is visible */
-}
-
 void p2m_save_state(struct vcpu *p)
 {
     p->arch.sctlr = READ_SYSREG(SCTLR_EL1);
@@ -126,6 +113,7 @@ void p2m_save_state(struct vcpu *p)
 void p2m_restore_state(struct vcpu *n)
 {
     register_t hcr;
+    struct p2m_domain *p2m = &n->domain->arch.p2m;
 
     if ( is_idle_vcpu(n) )
         return;
@@ -134,7 +122,7 @@ void p2m_restore_state(struct vcpu *n)
     WRITE_SYSREG(hcr & ~HCR_VM, HCR_EL2);
     isb();
 
-    p2m_load_VTTBR(n->domain);
+    WRITE_SYSREG64(p2m->vttbr, VTTBR_EL2);
     isb();
 
     if ( is_32bit_domain(n->domain) )
-- 
1.9.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply related	[flat|nested] 80+ messages in thread

* [PATCH 20/22] xen/arm: Don't export flush_tlb_domain
  2016-07-20 16:10 [PATCH 00/22] xen/arm: P2M clean-up and fixes Julien Grall
                   ` (18 preceding siblings ...)
  2016-07-20 16:11 ` [PATCH 19/22] xen/arm: p2m: Inline p2m_load_VTTBR into p2m_restore_state Julien Grall
@ 2016-07-20 16:11 ` Julien Grall
  2016-07-22  8:54   ` Sergej Proskurin
  2016-07-27  1:14   ` Stefano Stabellini
  2016-07-20 16:11 ` [PATCH 21/22] xen/arm: p2m: Replace flush_tlb_domain by p2m_flush_tlb Julien Grall
                   ` (2 subsequent siblings)
  22 siblings, 2 replies; 80+ messages in thread
From: Julien Grall @ 2016-07-20 16:11 UTC (permalink / raw)
  To: xen-devel; +Cc: proskurin, Julien Grall, sstabellini, wei.chen, steve.capper

The function flush_tlb_domain is not used outside of the file where it
is defined.

Signed-off-by: Julien Grall <julien.grall@arm.com>
---
 xen/arch/arm/p2m.c             | 2 +-
 xen/include/asm-arm/flushtlb.h | 3 ---
 2 files changed, 1 insertion(+), 4 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index c756e0c..8541171 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -137,7 +137,7 @@ void p2m_restore_state(struct vcpu *n)
     isb();
 }
 
-void flush_tlb_domain(struct domain *d)
+static void flush_tlb_domain(struct domain *d)
 {
     struct p2m_domain *p2m = &d->arch.p2m;
     unsigned long flags = 0;
diff --git a/xen/include/asm-arm/flushtlb.h b/xen/include/asm-arm/flushtlb.h
index c986b3f..329fbb4 100644
--- a/xen/include/asm-arm/flushtlb.h
+++ b/xen/include/asm-arm/flushtlb.h
@@ -25,9 +25,6 @@ do {                                                                    \
 /* Flush specified CPUs' TLBs */
 void flush_tlb_mask(const cpumask_t *mask);
 
-/* Flush CPU's TLBs for the specified domain */
-void flush_tlb_domain(struct domain *d);
-
 #endif /* __ASM_ARM_FLUSHTLB_H__ */
 /*
  * Local variables:
-- 
1.9.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply related	[flat|nested] 80+ messages in thread

* [PATCH 21/22] xen/arm: p2m: Replace flush_tlb_domain by p2m_flush_tlb
  2016-07-20 16:10 [PATCH 00/22] xen/arm: P2M clean-up and fixes Julien Grall
                   ` (19 preceding siblings ...)
  2016-07-20 16:11 ` [PATCH 20/22] xen/arm: Don't export flush_tlb_domain Julien Grall
@ 2016-07-20 16:11 ` Julien Grall
  2016-07-27  1:15   ` Stefano Stabellini
  2016-07-20 16:11 ` [PATCH 22/22] xen/arm: p2m: Pass the p2m in parameter rather the domain when it is possible Julien Grall
  2016-07-22  1:31 ` [PATCH 00/22] xen/arm: P2M clean-up and fixes Stefano Stabellini
  22 siblings, 1 reply; 80+ messages in thread
From: Julien Grall @ 2016-07-20 16:11 UTC (permalink / raw)
  To: xen-devel; +Cc: proskurin, Julien Grall, sstabellini, wei.chen, steve.capper

The function to flush the TLBs for a given p2m does not need to know
about the domain, so pass the p2m directly as parameter.

At the same time, rename the function to p2m_flush_tlb to match the
parameter change.

Signed-off-by: Julien Grall <julien.grall@arm.com>
---
 xen/arch/arm/p2m.c | 7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 8541171..5511d25 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -137,9 +137,8 @@ void p2m_restore_state(struct vcpu *n)
     isb();
 }
 
-static void flush_tlb_domain(struct domain *d)
+static void p2m_flush_tlb(struct p2m_domain *p2m)
 {
-    struct p2m_domain *p2m = &d->arch.p2m;
     unsigned long flags = 0;
     uint64_t ovttbr;
 
@@ -1158,7 +1157,7 @@ static int apply_p2m_changes(struct domain *d,
 out:
     if ( flush )
     {
-        flush_tlb_domain(d);
+        p2m_flush_tlb(&d->arch.p2m);
         ret = iommu_iotlb_flush(d, gfn_x(sgfn), nr);
         if ( !rc )
             rc = ret;
@@ -1303,7 +1302,7 @@ static int p2m_alloc_table(struct domain *d)
      * Make sure that all TLBs corresponding to the new VMID are flushed
      * before using it
      */
-    flush_tlb_domain(d);
+    p2m_flush_tlb(p2m);
 
     return 0;
 }
-- 
1.9.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply related	[flat|nested] 80+ messages in thread

* [PATCH 22/22] xen/arm: p2m: Pass the p2m in parameter rather the domain when it is possible
  2016-07-20 16:10 [PATCH 00/22] xen/arm: P2M clean-up and fixes Julien Grall
                   ` (20 preceding siblings ...)
  2016-07-20 16:11 ` [PATCH 21/22] xen/arm: p2m: Replace flush_tlb_domain by p2m_flush_tlb Julien Grall
@ 2016-07-20 16:11 ` Julien Grall
  2016-07-27  1:15   ` Stefano Stabellini
  2016-07-22  1:31 ` [PATCH 00/22] xen/arm: P2M clean-up and fixes Stefano Stabellini
  22 siblings, 1 reply; 80+ messages in thread
From: Julien Grall @ 2016-07-20 16:11 UTC (permalink / raw)
  To: xen-devel; +Cc: proskurin, Julien Grall, sstabellini, wei.chen, steve.capper

Some p2m functions do not care about the domain except to get the
associated p2m.

Signed-off-by: Julien Grall <julien.grall@arm.com>
---
 xen/arch/arm/p2m.c | 16 +++++++---------
 1 file changed, 7 insertions(+), 9 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 5511d25..aceafc2 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -415,10 +415,9 @@ static inline void p2m_remove_pte(lpae_t *p, bool_t flush_cache)
  *
  * level_shift is the number of bits at the level we want to create.
  */
-static int p2m_create_table(struct domain *d, lpae_t *entry,
+static int p2m_create_table(struct p2m_domain *p2m, lpae_t *entry,
                             int level_shift, bool_t flush_cache)
 {
-    struct p2m_domain *p2m = &d->arch.p2m;
     struct page_info *page;
     lpae_t *p;
     lpae_t pte;
@@ -653,18 +652,17 @@ static const paddr_t level_masks[] =
 static const paddr_t level_shifts[] =
     { ZEROETH_SHIFT, FIRST_SHIFT, SECOND_SHIFT, THIRD_SHIFT };
 
-static int p2m_shatter_page(struct domain *d,
+static int p2m_shatter_page(struct p2m_domain *p2m,
                             lpae_t *entry,
                             unsigned int level,
                             bool_t flush_cache)
 {
     const paddr_t level_shift = level_shifts[level];
-    int rc = p2m_create_table(d, entry,
+    int rc = p2m_create_table(p2m, entry,
                               level_shift - PAGE_SHIFT, flush_cache);
 
     if ( !rc )
     {
-        struct p2m_domain *p2m = &d->arch.p2m;
         p2m->stats.shattered[level]++;
         p2m->stats.mappings[level]--;
         p2m->stats.mappings[level+1] += LPAE_ENTRIES;
@@ -757,7 +755,7 @@ static int apply_one_level(struct domain *d,
             /* Not present -> create table entry and descend */
             if ( !p2m_valid(orig_pte) )
             {
-                rc = p2m_create_table(d, entry, 0, flush_cache);
+                rc = p2m_create_table(p2m, entry, 0, flush_cache);
                 if ( rc < 0 )
                     return rc;
                 return P2M_ONE_DESCEND;
@@ -767,7 +765,7 @@ static int apply_one_level(struct domain *d,
             if ( p2m_mapping(orig_pte) )
             {
                 *flush = true;
-                rc = p2m_shatter_page(d, entry, level, flush_cache);
+                rc = p2m_shatter_page(p2m, entry, level, flush_cache);
                 if ( rc < 0 )
                     return rc;
             } /* else: an existing table mapping -> descend */
@@ -804,7 +802,7 @@ static int apply_one_level(struct domain *d,
                  * and descend.
                  */
                 *flush = true;
-                rc = p2m_shatter_page(d, entry, level, flush_cache);
+                rc = p2m_shatter_page(p2m, entry, level, flush_cache);
                 if ( rc < 0 )
                     return rc;
 
@@ -889,7 +887,7 @@ static int apply_one_level(struct domain *d,
             /* Shatter large pages as we descend */
             if ( p2m_mapping(orig_pte) )
             {
-                rc = p2m_shatter_page(d, entry, level, flush_cache);
+                rc = p2m_shatter_page(p2m, entry, level, flush_cache);
                 if ( rc < 0 )
                     return rc;
             } /* else: an existing table mapping -> descend */
-- 
1.9.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply related	[flat|nested] 80+ messages in thread

* Re: [PATCH 01/22] xen/arm: system: Use the correct parameter name in local_irq_restore
  2016-07-20 16:10 ` [PATCH 01/22] xen/arm: system: Use the correct parameter name in local_irq_restore Julien Grall
@ 2016-07-22  1:19   ` Stefano Stabellini
  0 siblings, 0 replies; 80+ messages in thread
From: Stefano Stabellini @ 2016-07-22  1:19 UTC (permalink / raw)
  To: Julien Grall; +Cc: proskurin, sstabellini, steve.capper, wei.chen, xen-devel

On Wed, 20 Jul 2016, Julien Grall wrote:
> The parameter to store the flags is called 'x' and not 'flags'.
> Thankfully all users of the macro pass 'flags'.
> 
> Signed-off-by: Julien Grall <julien.grall@arm.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
>     This patch is candidate to be backported up to Xen 4.5.
> ---
>  xen/include/asm-arm/arm32/system.h | 2 +-
>  xen/include/asm-arm/arm64/system.h | 2 +-
>  2 files changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/xen/include/asm-arm/arm32/system.h b/xen/include/asm-arm/arm32/system.h
> index b47b942..c617b40 100644
> --- a/xen/include/asm-arm/arm32/system.h
> +++ b/xen/include/asm-arm/arm32/system.h
> @@ -24,7 +24,7 @@
>      asm volatile (                                               \
>              "msr     cpsr_c, %0      @ local_irq_restore\n"      \
>              :                                                    \
> -            : "r" (flags)                                        \
> +            : "r" (x)                                            \
>              : "memory", "cc");                                   \
>  })
>  
> diff --git a/xen/include/asm-arm/arm64/system.h b/xen/include/asm-arm/arm64/system.h
> index 6efced3..2e2ee21 100644
> --- a/xen/include/asm-arm/arm64/system.h
> +++ b/xen/include/asm-arm/arm64/system.h
> @@ -40,7 +40,7 @@
>      asm volatile (                                               \
>          "msr    daif, %0                // local_irq_restore"    \
>          :                                                        \
> -        : "r" (flags)                                            \
> +        : "r" (x)                                                \
>          : "memory");                                             \
>  })
>  
> -- 
> 1.9.1
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 80+ messages in thread

* Re: [PATCH 02/22] xen/arm: p2m: Pass the vCPU in parameter to get_page_from_gva
  2016-07-20 16:10 ` [PATCH 02/22] xen/arm: p2m: Pass the vCPU in parameter to get_page_from_gva Julien Grall
@ 2016-07-22  1:22   ` Stefano Stabellini
  0 siblings, 0 replies; 80+ messages in thread
From: Stefano Stabellini @ 2016-07-22  1:22 UTC (permalink / raw)
  To: Julien Grall; +Cc: proskurin, sstabellini, steve.capper, wei.chen, xen-devel

On Wed, 20 Jul 2016, Julien Grall wrote:
> The function get_page_from_gva translates a guest virtual address to a
> machine address. The translation involves the register VTTBR_EL2,
> TTBR0_EL1, TTBR1_EL1 and SCTLR_EL1. Whilst the first register is per
> domain (the p2m is common to every vCPUs), the last 3 are per-vCPU.
> 
> Therefore, the function should take the vCPU in parameter and not the
> domain. Fixing the actual code path will be done a separate patch.
> 
> Signed-off-by: Julien Grall <julien.grall@arm.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


>  xen/arch/arm/guestcopy.c | 6 +++---
>  xen/arch/arm/p2m.c       | 3 ++-
>  xen/arch/arm/traps.c     | 2 +-
>  xen/include/asm-arm/mm.h | 2 +-
>  4 files changed, 7 insertions(+), 6 deletions(-)
> 
> diff --git a/xen/arch/arm/guestcopy.c b/xen/arch/arm/guestcopy.c
> index ce1c3c3..413125f 100644
> --- a/xen/arch/arm/guestcopy.c
> +++ b/xen/arch/arm/guestcopy.c
> @@ -17,7 +17,7 @@ static unsigned long raw_copy_to_guest_helper(void *to, const void *from,
>          unsigned size = min(len, (unsigned)PAGE_SIZE - offset);
>          struct page_info *page;
>  
> -        page = get_page_from_gva(current->domain, (vaddr_t) to, GV2M_WRITE);
> +        page = get_page_from_gva(current, (vaddr_t) to, GV2M_WRITE);
>          if ( page == NULL )
>              return len;
>  
> @@ -64,7 +64,7 @@ unsigned long raw_clear_guest(void *to, unsigned len)
>          unsigned size = min(len, (unsigned)PAGE_SIZE - offset);
>          struct page_info *page;
>  
> -        page = get_page_from_gva(current->domain, (vaddr_t) to, GV2M_WRITE);
> +        page = get_page_from_gva(current, (vaddr_t) to, GV2M_WRITE);
>          if ( page == NULL )
>              return len;
>  
> @@ -96,7 +96,7 @@ unsigned long raw_copy_from_guest(void *to, const void __user *from, unsigned le
>          unsigned size = min(len, (unsigned)(PAGE_SIZE - offset));
>          struct page_info *page;
>  
> -        page = get_page_from_gva(current->domain, (vaddr_t) from, GV2M_READ);
> +        page = get_page_from_gva(current, (vaddr_t) from, GV2M_READ);
>          if ( page == NULL )
>              return len;
>  
> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> index a4bc55a..1111d6f 100644
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -1521,9 +1521,10 @@ err:
>      return page;
>  }
>  
> -struct page_info *get_page_from_gva(struct domain *d, vaddr_t va,
> +struct page_info *get_page_from_gva(struct vcpu *v, vaddr_t va,
>                                      unsigned long flags)
>  {
> +    struct domain *d = v->domain;
>      struct p2m_domain *p2m = &d->arch.p2m;
>      struct page_info *page = NULL;
>      paddr_t maddr = 0;
> diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
> index a2eb1da..06a8ee5 100644
> --- a/xen/arch/arm/traps.c
> +++ b/xen/arch/arm/traps.c
> @@ -955,7 +955,7 @@ static void show_guest_stack(struct vcpu *v, struct cpu_user_regs *regs)
>          return;
>      }
>  
> -    page = get_page_from_gva(v->domain, sp, GV2M_READ);
> +    page = get_page_from_gva(v, sp, GV2M_READ);
>      if ( page == NULL )
>      {
>          printk("Failed to convert stack to physical address\n");
> diff --git a/xen/include/asm-arm/mm.h b/xen/include/asm-arm/mm.h
> index 68cf203..19eadd2 100644
> --- a/xen/include/asm-arm/mm.h
> +++ b/xen/include/asm-arm/mm.h
> @@ -281,7 +281,7 @@ static inline void *page_to_virt(const struct page_info *pg)
>      return mfn_to_virt(page_to_mfn(pg));
>  }
>  
> -struct page_info *get_page_from_gva(struct domain *d, vaddr_t va,
> +struct page_info *get_page_from_gva(struct vcpu *v, vaddr_t va,
>                                      unsigned long flags);
>  
>  /*
> -- 
> 1.9.1
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 80+ messages in thread

* Re: [PATCH 03/22] xen/arm: p2m: Restrict usage of get_page_from_gva to the current vCPU
  2016-07-20 16:10 ` [PATCH 03/22] xen/arm: p2m: Restrict usage of get_page_from_gva to the current vCPU Julien Grall
@ 2016-07-22  1:25   ` Stefano Stabellini
  0 siblings, 0 replies; 80+ messages in thread
From: Stefano Stabellini @ 2016-07-22  1:25 UTC (permalink / raw)
  To: Julien Grall; +Cc: proskurin, sstabellini, steve.capper, wei.chen, xen-devel

On Wed, 20 Jul 2016, Julien Grall wrote:
> The function get_page_from_gva translates a guest virtual address to a
> machine address. The translation involves the register VTTBR_EL2,
> TTBR0_EL1, TTBR1_EL1 and SCTLR_EL1.
> 
> Currently, only the first register is context switched if the current
> domain is not the same. This results in using the wrong TTBR*_EL1 and
> SCTLR_EL1 for the translation.
> 
> To fix the code properly, we would have to context switch all the
> registers mentioned above when the vCPU in parameter is not the current
> one. Similar things would need to be done in the callee
> p2m_mem_check_and_get_page.
> 
> Given that the only caller of this function with the vCPU that may not
> be current is a guest debugging function (show_guest_stack), restrict
> the usage to the current vCPU for the time being.
> 
> A proper fix will be send separately.
> 
> Signed-off-by: Julien Grall <julien.grall@arm.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
>     This patch is candidate to be backported up to Xen 4.5.
> ---
>  xen/arch/arm/p2m.c | 24 ++++++++----------------
>  1 file changed, 8 insertions(+), 16 deletions(-)
> 
> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> index 1111d6f..64d84cc 100644
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -1530,24 +1530,16 @@ struct page_info *get_page_from_gva(struct vcpu *v, vaddr_t va,
>      paddr_t maddr = 0;
>      int rc;
>  
> -    spin_lock(&p2m->lock);
> -
> -    if ( unlikely(d != current->domain) )
> -    {
> -        unsigned long irq_flags;
> -
> -        local_irq_save(irq_flags);
> -        p2m_load_VTTBR(d);
> +    /*
> +     * XXX: To support a different vCPU, we would need to load the
> +     * VTTBR_EL2, TTBR0_EL1, TTBR1_EL1 and SCTLR_EL1
> +     */
> +    if ( v != current )
> +        return NULL;
>  
> -        rc = gvirt_to_maddr(va, &maddr, flags);
> +    spin_lock(&p2m->lock);
>  
> -        p2m_load_VTTBR(current->domain);
> -        local_irq_restore(irq_flags);
> -    }
> -    else
> -    {
> -        rc = gvirt_to_maddr(va, &maddr, flags);
> -    }
> +    rc = gvirt_to_maddr(va, &maddr, flags);
>  
>      if ( rc )
>          goto err;
> -- 
> 1.9.1
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 80+ messages in thread

* Re: [PATCH 04/22] xen/arm: p2m: Fix multi-lines coding style comments
  2016-07-20 16:10 ` [PATCH 04/22] xen/arm: p2m: Fix multi-lines coding style comments Julien Grall
@ 2016-07-22  1:26   ` Stefano Stabellini
  0 siblings, 0 replies; 80+ messages in thread
From: Stefano Stabellini @ 2016-07-22  1:26 UTC (permalink / raw)
  To: Julien Grall; +Cc: proskurin, sstabellini, steve.capper, wei.chen, xen-devel

On Wed, 20 Jul 2016, Julien Grall wrote:
> The start and end markers should be on separate lines.
> 
> Signed-off-by: Julien Grall <julien.grall@arm.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


>  xen/arch/arm/p2m.c        | 35 ++++++++++++++++++++++------------
>  xen/include/asm-arm/p2m.h | 48 +++++++++++++++++++++++++++++++----------------
>  2 files changed, 55 insertions(+), 28 deletions(-)
> 
> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> index 64d84cc..79095f1 100644
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -33,9 +33,11 @@ static bool_t p2m_valid(lpae_t pte)
>  {
>      return pte.p2m.valid;
>  }
> -/* These two can only be used on L0..L2 ptes because L3 mappings set
> +/*
> + * These two can only be used on L0..L2 ptes because L3 mappings set
>   * the table bit and therefore these would return the opposite to what
> - * you would expect. */
> + * you would expect.
> + */
>  static bool_t p2m_table(lpae_t pte)
>  {
>      return p2m_valid(pte) && pte.p2m.table;
> @@ -119,7 +121,8 @@ void flush_tlb_domain(struct domain *d)
>  {
>      unsigned long flags = 0;
>  
> -    /* Update the VTTBR if necessary with the domain d. In this case,
> +    /*
> +     * Update the VTTBR if necessary with the domain d. In this case,
>       * it's only necessary to flush TLBs on every CPUs with the current VMID
>       * (our domain).
>       */
> @@ -325,8 +328,10 @@ static lpae_t mfn_to_p2m_entry(unsigned long mfn, unsigned int mattr,
>                                 p2m_type_t t, p2m_access_t a)
>  {
>      paddr_t pa = ((paddr_t) mfn) << PAGE_SHIFT;
> -    /* sh, xn and write bit will be defined in the following switches
> -     * based on mattr and t. */
> +    /*
> +     * sh, xn and write bit will be defined in the following switches
> +     * based on mattr and t.
> +     */
>      lpae_t e = (lpae_t) {
>          .p2m.af = 1,
>          .p2m.read = 1,
> @@ -552,15 +557,17 @@ enum p2m_operation {
>      MEMACCESS,
>  };
>  
> -/* Put any references on the single 4K page referenced by pte.  TODO:
> - * Handle superpages, for now we only take special references for leaf
> +/*
> + * Put any references on the single 4K page referenced by pte.
> + * TODO: Handle superpages, for now we only take special references for leaf
>   * pages (specifically foreign ones, which can't be super mapped today).
>   */
>  static void p2m_put_l3_page(const lpae_t pte)
>  {
>      ASSERT(p2m_valid(pte));
>  
> -    /* TODO: Handle other p2m types
> +    /*
> +     * TODO: Handle other p2m types
>       *
>       * It's safe to do the put_page here because page_alloc will
>       * flush the TLBs if the page is reallocated before the end of
> @@ -932,7 +939,8 @@ static int apply_p2m_changes(struct domain *d,
>      PAGE_LIST_HEAD(free_pages);
>      struct page_info *pg;
>  
> -    /* Some IOMMU don't support coherent PT walk. When the p2m is
> +    /*
> +     * Some IOMMU don't support coherent PT walk. When the p2m is
>       * shared with the CPU, Xen has to make sure that the PT changes have
>       * reached the memory
>       */
> @@ -1275,7 +1283,8 @@ int p2m_alloc_table(struct domain *d)
>      d->arch.vttbr = page_to_maddr(p2m->root)
>          | ((uint64_t)p2m->vmid&0xff)<<48;
>  
> -    /* Make sure that all TLBs corresponding to the new VMID are flushed
> +    /*
> +     * Make sure that all TLBs corresponding to the new VMID are flushed
>       * before using it
>       */
>      flush_tlb_domain(d);
> @@ -1290,8 +1299,10 @@ int p2m_alloc_table(struct domain *d)
>  
>  static spinlock_t vmid_alloc_lock = SPIN_LOCK_UNLOCKED;
>  
> -/* VTTBR_EL2 VMID field is 8 bits. Using a bitmap here limits us to
> - * 256 concurrent domains. */
> +/*
> + * VTTBR_EL2 VMID field is 8 bits. Using a bitmap here limits us to
> + * 256 concurrent domains.
> + */
>  static DECLARE_BITMAP(vmid_mask, MAX_VMID);
>  
>  void p2m_vmid_allocator_init(void)
> diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
> index 34096bc..8fe78c1 100644
> --- a/xen/include/asm-arm/p2m.h
> +++ b/xen/include/asm-arm/p2m.h
> @@ -31,12 +31,14 @@ struct p2m_domain {
>      /* Current VMID in use */
>      uint8_t vmid;
>  
> -    /* Highest guest frame that's ever been mapped in the p2m
> +    /*
> +     * Highest guest frame that's ever been mapped in the p2m
>       * Only takes into account ram and foreign mapping
>       */
>      gfn_t max_mapped_gfn;
>  
> -    /* Lowest mapped gfn in the p2m. When releasing mapped gfn's in a
> +    /*
> +     * Lowest mapped gfn in the p2m. When releasing mapped gfn's in a
>       * preemptible manner this is update to track recall where to
>       * resume the search. Apart from during teardown this can only
>       * decrease. */
> @@ -51,24 +53,31 @@ struct p2m_domain {
>          unsigned long shattered[4];
>      } stats;
>  
> -    /* If true, and an access fault comes in and there is no vm_event listener,
> -     * pause domain. Otherwise, remove access restrictions. */
> +    /*
> +     * If true, and an access fault comes in and there is no vm_event listener,
> +     * pause domain. Otherwise, remove access restrictions.
> +     */
>      bool_t access_required;
>  
>      /* Defines if mem_access is in use for the domain. */
>      bool_t mem_access_enabled;
>  
> -    /* Default P2M access type for each page in the the domain: new pages,
> +    /*
> +     * Default P2M access type for each page in the the domain: new pages,
>       * swapped in pages, cleared pages, and pages that are ambiguously
> -     * retyped get this access type. See definition of p2m_access_t. */
> +     * retyped get this access type. See definition of p2m_access_t.
> +     */
>      p2m_access_t default_access;
>  
> -    /* Radix tree to store the p2m_access_t settings as the pte's don't have
> -     * enough available bits to store this information. */
> +    /*
> +     * Radix tree to store the p2m_access_t settings as the pte's don't have
> +     * enough available bits to store this information.
> +     */
>      struct radix_tree_root mem_access_settings;
>  };
>  
> -/* List of possible type for each page in the p2m entry.
> +/*
> + * List of possible type for each page in the p2m entry.
>   * The number of available bit per page in the pte for this purpose is 4 bits.
>   * So it's possible to only have 16 fields. If we run out of value in the
>   * future, it's possible to use higher value for pseudo-type and don't store
> @@ -116,13 +125,15 @@ int p2m_init(struct domain *d);
>  /* Return all the p2m resources to Xen. */
>  void p2m_teardown(struct domain *d);
>  
> -/* Remove mapping refcount on each mapping page in the p2m
> +/*
> + * Remove mapping refcount on each mapping page in the p2m
>   *
>   * TODO: For the moment only foreign mappings are handled
>   */
>  int relinquish_p2m_mapping(struct domain *d);
>  
> -/* Allocate a new p2m table for a domain.
> +/*
> + * Allocate a new p2m table for a domain.
>   *
>   * Returns 0 for success or -errno.
>   */
> @@ -181,8 +192,10 @@ mfn_t gfn_to_mfn(struct domain *d, gfn_t gfn);
>   * Populate-on-demand
>   */
>  
> -/* Call when decreasing memory reservation to handle PoD entries properly.
> - * Will return '1' if all entries were handled and nothing more need be done.*/
> +/*
> + * Call when decreasing memory reservation to handle PoD entries properly.
> + * Will return '1' if all entries were handled and nothing more need be done.
> + */
>  int
>  p2m_pod_decrease_reservation(struct domain *d,
>                               xen_pfn_t gpfn,
> @@ -210,7 +223,8 @@ static inline struct page_info *get_page_from_gfn(
>          return NULL;
>      page = mfn_to_page(mfn);
>  
> -    /* get_page won't work on foreign mapping because the page doesn't
> +    /*
> +     * get_page won't work on foreign mapping because the page doesn't
>       * belong to the current domain.
>       */
>      if ( p2mt == p2m_map_foreign )
> @@ -257,8 +271,10 @@ static inline bool_t p2m_vm_event_sanity_check(struct domain *d)
>      return 1;
>  }
>  
> -/* Send mem event based on the access. Boolean return value indicates if trap
> - * needs to be injected into guest. */
> +/*
> + * Send mem event based on the access. Boolean return value indicates if trap
> + * needs to be injected into guest.
> + */
>  bool_t p2m_mem_access_check(paddr_t gpa, vaddr_t gla, const struct npfec npfec);
>  
>  #endif /* _XEN_P2M_H */
> -- 
> 1.9.1
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 80+ messages in thread

* Re: [PATCH 00/22] xen/arm: P2M clean-up and fixes
  2016-07-20 16:10 [PATCH 00/22] xen/arm: P2M clean-up and fixes Julien Grall
                   ` (21 preceding siblings ...)
  2016-07-20 16:11 ` [PATCH 22/22] xen/arm: p2m: Pass the p2m in parameter rather the domain when it is possible Julien Grall
@ 2016-07-22  1:31 ` Stefano Stabellini
  22 siblings, 0 replies; 80+ messages in thread
From: Stefano Stabellini @ 2016-07-22  1:31 UTC (permalink / raw)
  To: Julien Grall; +Cc: proskurin, sstabellini, steve.capper, wei.chen, xen-devel

On Wed, 20 Jul 2016, Julien Grall wrote:
> Hello all,
> 
> This patch series contains a bunch of clean-up and fixes for the P2M code on
> ARM. The major changes are:
>     - Restrict usage of get_page_from_gva to the current vCPU
>     - Deduce the memory attributes from the p2m type
>     - Switch to read-write lock to improve performance
>     - Simplify the TLB flush for a given domain
> 
> I have provided a branch will all the patches applied on my repo:
> git://xenbits/xenbits.xen.org/people/julieng/xen-unstable.git p2m-cleanup-v1

I have committed the first 4 patches to staging. I agree that patch #1
and #3 should be backported to the stable trees. I'll do so when they
pass the gate.


> Yours sincerely,
> 
> Julien Grall (22):
>   xen/arm: system: Use the correct parameter name in local_irq_restore
>   xen/arm: p2m: Pass the vCPU in parameter to get_page_from_gva
>   xen/arm: p2m: Restrict usage of get_page_from_gva to the current vCPU
>   xen/arm: p2m: Fix multi-lines coding style comments
>   xen/arm: p2m: Clean-up mfn_to_p2m_entry
>   xen/arm: p2m: Use the typesafe MFN in mfn_to_p2m_entry
>   xen/arm: p2m: Use p2m_is_foreign in get_page_from_gfn to avoid open
>     coding
>   xen/arm: p2m: Simplify p2m type check by using bitmask
>   xen/arm: p2m: Use a whitelist rather than blacklist in
>     get_page_from_gfn
>   xen/arm: p2m: Differentiate cacheable vs non-cacheable MMIO
>   xen/arm: p2m: Find the memory attributes based on the p2m type
>   xen/arm: p2m: Remove unnecessary locking
>   xen/arm: p2m: Introduce p2m_{read,write}_{,un}lock helpers
>   xen/arm: p2m: Switch the p2m lock from spinlock to rwlock
>   xen/arm: Don't call p2m_alloc_table from arch_domain_create
>   xen/arm: p2m: Move the vttbr field from arch_domain to p2m_domain
>   xen/arm: p2m: Don't need to restore the state for an idle vCPU.
>   xen/arm: p2m: Rework the context switch to another VTTBR in
>     flush_tlb_domain
>   xen/arm: p2m: Inline p2m_load_VTTBR into p2m_restore_state
>   xen/arm: Don't export flush_tlb_domain
>   xen/arm: p2m: Replace flush_tlb_domain by p2m_flush_tlb
>   xen/arm: p2m: Pass the p2m in parameter rather the domain when it is
>     possible
> 
>  xen/arch/arm/domain.c              |   3 -
>  xen/arch/arm/guestcopy.c           |   6 +-
>  xen/arch/arm/p2m.c                 | 255 +++++++++++++++++++------------------
>  xen/arch/arm/traps.c               |   4 +-
>  xen/include/asm-arm/arm32/system.h |   2 +-
>  xen/include/asm-arm/arm64/system.h |   2 +-
>  xen/include/asm-arm/domain.h       |   1 -
>  xen/include/asm-arm/flushtlb.h     |   3 -
>  xen/include/asm-arm/mm.h           |   2 +-
>  xen/include/asm-arm/p2m.h          |  85 +++++++++----
>  10 files changed, 194 insertions(+), 169 deletions(-)
> 
> -- 
> 1.9.1
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 80+ messages in thread

* Re: [PATCH 17/22] xen/arm: p2m: Don't need to restore the state for an idle vCPU.
  2016-07-20 16:10 ` [PATCH 17/22] xen/arm: p2m: Don't need to restore the state for an idle vCPU Julien Grall
@ 2016-07-22  7:37   ` Sergej Proskurin
  2016-07-27  1:05   ` Stefano Stabellini
  1 sibling, 0 replies; 80+ messages in thread
From: Sergej Proskurin @ 2016-07-22  7:37 UTC (permalink / raw)
  To: Julien Grall, xen-devel; +Cc: sstabellini, wei.chen, steve.capper



On 07/20/2016 06:10 PM, Julien Grall wrote:
> The function p2m_restore_state could be called with an idle vCPU in
> arguments (when called by construct_dom0). However, we will never return
> to EL0/EL1 in this case, so it is not necessary to restore the p2m
> registers.
> 

I absolutely agree.

Cheers,
~Sergej

> Signed-off-by: Julien Grall <julien.grall@arm.com>
> ---
>  xen/arch/arm/p2m.c | 3 +++
>  1 file changed, 3 insertions(+)
> 
> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> index c52081a..d1b6009 100644
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -127,6 +127,9 @@ void p2m_restore_state(struct vcpu *n)
>  {
>      register_t hcr;
>  
> +    if ( is_idle_vcpu(n) )
> +        return;
> +
>      hcr = READ_SYSREG(HCR_EL2);
>      WRITE_SYSREG(hcr & ~HCR_VM, HCR_EL2);
>      isb();
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 80+ messages in thread

* Re: [PATCH 16/22] xen/arm: p2m: Move the vttbr field from arch_domain to p2m_domain
  2016-07-20 16:10 ` [PATCH 16/22] xen/arm: p2m: Move the vttbr field from arch_domain to p2m_domain Julien Grall
@ 2016-07-22  7:46   ` Sergej Proskurin
  2016-07-22  9:23     ` Julien Grall
  2016-07-27  0:57   ` Stefano Stabellini
  2016-07-27 17:19   ` Julien Grall
  2 siblings, 1 reply; 80+ messages in thread
From: Sergej Proskurin @ 2016-07-22  7:46 UTC (permalink / raw)
  To: Julien Grall, xen-devel; +Cc: sstabellini, wei.chen, steve.capper

Hi Julien,

On 07/20/2016 06:10 PM, Julien Grall wrote:
> The field vttbr holds the base address of the translation table for
> guest. Its value will depends on how the p2m has been initialized and
> will only be used by the code code.
> 
> So move the field from arch_domain to p2m_domain. This will also ease
> the implementation of altp2m.
> ---
>  xen/arch/arm/p2m.c           | 11 +++++++----
>  xen/arch/arm/traps.c         |  2 +-
>  xen/include/asm-arm/domain.h |  1 -
>  xen/include/asm-arm/p2m.h    |  3 +++
>  4 files changed, 11 insertions(+), 6 deletions(-)
> 
> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> index c407e6a..c52081a 100644
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -107,10 +107,14 @@ void dump_p2m_lookup(struct domain *d, paddr_t addr)
>  
>  static void p2m_load_VTTBR(struct domain *d)
>  {
> +    struct p2m_domain *p2m = &d->arch.p2m;
> +

This is fine with me. The further altp2m implementation can easily
extend this code base.

Also, since your patch #17 eliminates the possibility that the idle
domain reaches this function, the following check could potentially be
removed (as far as I know, p2m_load_VTTBR is reached only through
p2m_restore_state).

>      if ( is_idle_domain(d) )
>          return;
> -    BUG_ON(!d->arch.vttbr);
> -    WRITE_SYSREG64(d->arch.vttbr, VTTBR_EL2);
> +
> +    ASSERT(p2m->vttbr);
> +
> +    WRITE_SYSREG64(p2m->vttbr, VTTBR_EL2);
>      isb(); /* Ensure update is visible */
>  }
>  
> @@ -1298,8 +1302,7 @@ static int p2m_alloc_table(struct domain *d)
>  
>      p2m->root = page;
>  
> -    d->arch.vttbr = page_to_maddr(p2m->root)
> -        | ((uint64_t)p2m->vmid&0xff)<<48;
> +    p2m->vttbr = page_to_maddr(p2m->root) | ((uint64_t)p2m->vmid & 0xff) << 48;
>  
>      /*
>       * Make sure that all TLBs corresponding to the new VMID are flushed
> diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
> index 06a8ee5..65c6fb4 100644
> --- a/xen/arch/arm/traps.c
> +++ b/xen/arch/arm/traps.c
> @@ -880,7 +880,7 @@ void vcpu_show_registers(const struct vcpu *v)
>      ctxt.ifsr32_el2 = v->arch.ifsr;
>  #endif
>  
> -    ctxt.vttbr_el2 = v->domain->arch.vttbr;
> +    ctxt.vttbr_el2 = v->domain->arch.p2m.vttbr;
>  
>      _show_registers(&v->arch.cpu_info->guest_cpu_user_regs, &ctxt, 1, v);
>  }
> diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
> index 4e9d8bf..9452fcd 100644
> --- a/xen/include/asm-arm/domain.h
> +++ b/xen/include/asm-arm/domain.h
> @@ -48,7 +48,6 @@ struct arch_domain
>  
>      /* Virtual MMU */
>      struct p2m_domain p2m;
> -    uint64_t vttbr;
>  
>      struct hvm_domain hvm_domain;
>      gfn_t *grant_table_gfn;
> diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
> index ce28e8a..53c4d78 100644
> --- a/xen/include/asm-arm/p2m.h
> +++ b/xen/include/asm-arm/p2m.h
> @@ -32,6 +32,9 @@ struct p2m_domain {
>      /* Current VMID in use */
>      uint8_t vmid;
>  
> +    /* Current Translation Table Base Register for the p2m */
> +    uint64_t vttbr;
> +
>      /*
>       * Highest guest frame that's ever been mapped in the p2m
>       * Only takes into account ram and foreign mapping
> 

Cheers,
~Sergej

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 80+ messages in thread

* Re: [PATCH 18/22] xen/arm: p2m: Rework the context switch to another VTTBR in flush_tlb_domain
  2016-07-20 16:11 ` [PATCH 18/22] xen/arm: p2m: Rework the context switch to another VTTBR in flush_tlb_domain Julien Grall
@ 2016-07-22  7:51   ` Sergej Proskurin
  2016-07-27  1:12   ` Stefano Stabellini
  1 sibling, 0 replies; 80+ messages in thread
From: Sergej Proskurin @ 2016-07-22  7:51 UTC (permalink / raw)
  To: Julien Grall, xen-devel; +Cc: sstabellini, wei.chen, steve.capper

Hi Julien,

On 07/20/2016 06:11 PM, Julien Grall wrote:
> The current implementation of flush_tlb_domain is relying on the domain
> to have a single p2m. With the upcoming feature altp2m, a single domain
> may have different p2m. So we would need to switch to the correct p2m in
> order to flush the TLBs.
> 
> Rather than checking whether the domain is not the current domain, check
> whether the VTTBR is different. The resulting assembly code is much
> smaller: from 38 instructions (+ 2 functions call) to 22 instructions.
> 
> Signed-off-by: Julien Grall <julien.grall@arm.com>
> ---
>  xen/arch/arm/p2m.c | 18 +++++++++++-------
>  1 file changed, 11 insertions(+), 7 deletions(-)
> 
> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> index d1b6009..015c1e8 100644
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -151,24 +151,28 @@ void p2m_restore_state(struct vcpu *n)
>  
>  void flush_tlb_domain(struct domain *d)
>  {
> +    struct p2m_domain *p2m = &d->arch.p2m;
>      unsigned long flags = 0;
> +    uint64_t ovttbr;
>  
>      /*
> -     * Update the VTTBR if necessary with the domain d. In this case,
> -     * it's only necessary to flush TLBs on every CPUs with the current VMID
> -     * (our domain).
> +     * ARM only provides an instruction to flush TLBs for the current
> +     * VMID. So switch to the VTTBR of a given P2M if different.
>       */
> -    if ( d != current->domain )
> +    ovttbr = READ_SYSREG64(VTTBR_EL2);
> +    if ( ovttbr != p2m->vttbr )
>      {
>          local_irq_save(flags);
> -        p2m_load_VTTBR(d);
> +        WRITE_SYSREG64(p2m->vttbr, VTTBR_EL2);
> +        isb();
>      }
>  
>      flush_tlb();
>  
> -    if ( d != current->domain )
> +    if ( ovttbr != READ_SYSREG64(VTTBR_EL2) )
>      {
> -        p2m_load_VTTBR(current->domain);
> +        WRITE_SYSREG64(ovttbr, VTTBR_EL2);
> +        isb();
>          local_irq_restore(flags);
>      }
>  }
> 

Thank you for this implementation change. Since the upcoming altp2m
feature uses a VMID per p2m view, it makes absolute sense to check
directly for differing VTTBRs.

Cheers,
~Sergej

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 80+ messages in thread

* Re: [PATCH 19/22] xen/arm: p2m: Inline p2m_load_VTTBR into p2m_restore_state
  2016-07-20 16:11 ` [PATCH 19/22] xen/arm: p2m: Inline p2m_load_VTTBR into p2m_restore_state Julien Grall
@ 2016-07-22  8:07   ` Sergej Proskurin
  2016-07-22  9:29     ` Julien Grall
  2016-07-27  1:13   ` Stefano Stabellini
  1 sibling, 1 reply; 80+ messages in thread
From: Sergej Proskurin @ 2016-07-22  8:07 UTC (permalink / raw)
  To: Julien Grall, xen-devel; +Cc: sstabellini, wei.chen, steve.capper

Hi Julien,

On 07/20/2016 06:11 PM, Julien Grall wrote:
> p2m_restore_state is the last caller of p2m_load_VTTBR and already check
> if the vCPU does not belong to the idle domain.
> 
> Note that it is likely possible to remove some isb in the function
> p2m_restore_state, however this is not the purpose of this patch. So the
> numerous isb have been left.
> 

Right now, I don't see any issues with removing the p2m_load_VTTBR
function in combination with the changes applied to flush_tlb_domain in
your patches #17 and #18. However, I am not entirely sure whether it
makes sense to remove the function entirely and replicate the VTTBR
loading functionality across multiple functions. Why don't we just pass
a struct p2m_domain* to p2m_load_VTTBR (potentially with a backpointer
to the associated domain, as shown in the arm/altp2m patch) and keep
using the function (possibly marked inline)?
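
Something along these lines is what I have in mind (only a rough sketch;
whether an idle check is still needed depends on patch #17):

    static void p2m_load_VTTBR(struct p2m_domain *p2m)
    {
        /* One place that loads VTTBR_EL2 from a given p2m, so that
         * altp2m views can reuse it later on. */
        ASSERT(p2m->vttbr);

        WRITE_SYSREG64(p2m->vttbr, VTTBR_EL2);
        isb(); /* Ensure the update is visible */
    }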

Cheers,
~Sergej


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 80+ messages in thread

* Re: [PATCH 15/22] xen/arm: Don't call p2m_alloc_table from arch_domain_create
  2016-07-20 16:10 ` [PATCH 15/22] xen/arm: Don't call p2m_alloc_table from arch_domain_create Julien Grall
@ 2016-07-22  8:32   ` Sergej Proskurin
  2016-07-22  9:18     ` Julien Grall
  2016-07-27  0:54   ` Stefano Stabellini
  1 sibling, 1 reply; 80+ messages in thread
From: Sergej Proskurin @ 2016-07-22  8:32 UTC (permalink / raw)
  To: Julien Grall, xen-devel; +Cc: sstabellini, wei.chen, steve.capper

Hi Julien,


> -int p2m_alloc_table(struct domain *d)
> +static int p2m_alloc_table(struct domain *d)

While moving parts of the altp2m code out of ./xen/arch/arm/p2m.c, the
function p2m_alloc_table needs to be called from ./xen/arch/arm/altp2m.c
to allocate the individual altp2m views. Hence it should not be static.

However, this requirement is clearly part of an entirely different
patch, which will be introduced in the near future and hence can be
discussed there.

Cheers,
~Sergej

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 80+ messages in thread

* Re: [PATCH 20/22] xen/arm: Don't export flush_tlb_domain
  2016-07-20 16:11 ` [PATCH 20/22] xen/arm: Don't export flush_tlb_domain Julien Grall
@ 2016-07-22  8:54   ` Sergej Proskurin
  2016-07-22  9:30     ` Julien Grall
  2016-07-27  1:14   ` Stefano Stabellini
  1 sibling, 1 reply; 80+ messages in thread
From: Sergej Proskurin @ 2016-07-22  8:54 UTC (permalink / raw)
  To: Julien Grall, xen-devel; +Cc: sstabellini, wei.chen, steve.capper

Hi Julien,

On 07/20/2016 06:11 PM, Julien Grall wrote:
> The function flush_tlb_domain is not used outside of the file where it
> has been declared.
> 

As for patch #15, the same applies here too:
For altp2m, flush_tlb_domain/p2m_flush_tlb should be made available to
./xen/arch/arm/altp2m.c.

Cheers,
~Sergej


* Re: [PATCH 15/22] xen/arm: Don't call p2m_alloc_table from arch_domain_create
  2016-07-22  8:32   ` Sergej Proskurin
@ 2016-07-22  9:18     ` Julien Grall
  2016-07-22 10:16       ` Sergej Proskurin
  0 siblings, 1 reply; 80+ messages in thread
From: Julien Grall @ 2016-07-22  9:18 UTC (permalink / raw)
  To: Sergej Proskurin, xen-devel; +Cc: sstabellini, wei.chen, steve.capper



On 22/07/16 09:32, Sergej Proskurin wrote:
> Hi Julien,

Hello Sergej,

>> -int p2m_alloc_table(struct domain *d)
>> +static int p2m_alloc_table(struct domain *d)
>
> While moving parts of the altp2m code out of ./xen/arch/arm/p2m.c, the
> function p2m_alloc_table needs to be called from ./xen/arch/arm/altp2m.c
> to allocate the individual altp2m views. Hence it should not be static.

No, this function should not be called outside p2m.c as it will not
fully initialize the p2m. You need to provide a function to
initialize a p2m (such as p2m_init).

Regards.

-- 
Julien Grall


* Re: [PATCH 16/22] xen/arm: p2m: Move the vttbr field from arch_domain to p2m_domain
  2016-07-22  7:46   ` Sergej Proskurin
@ 2016-07-22  9:23     ` Julien Grall
  0 siblings, 0 replies; 80+ messages in thread
From: Julien Grall @ 2016-07-22  9:23 UTC (permalink / raw)
  To: Sergej Proskurin, xen-devel; +Cc: sstabellini, wei.chen, steve.capper



On 22/07/16 08:46, Sergej Proskurin wrote:
> Hi Julien,

Hello Sergej,

> On 07/20/2016 06:10 PM, Julien Grall wrote:
>> The field vttbr holds the base address of the translation table for
>> the guest. Its value will depend on how the p2m has been initialized and
>> will only be used by the p2m code.
>>
>> So move the field from arch_domain to p2m_domain. This will also ease
>> the implementation of altp2m.
>> ---
>>  xen/arch/arm/p2m.c           | 11 +++++++----
>>  xen/arch/arm/traps.c         |  2 +-
>>  xen/include/asm-arm/domain.h |  1 -
>>  xen/include/asm-arm/p2m.h    |  3 +++
>>  4 files changed, 11 insertions(+), 6 deletions(-)
>>
>> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
>> index c407e6a..c52081a 100644
>> --- a/xen/arch/arm/p2m.c
>> +++ b/xen/arch/arm/p2m.c
>> @@ -107,10 +107,14 @@ void dump_p2m_lookup(struct domain *d, paddr_t addr)
>>
>>  static void p2m_load_VTTBR(struct domain *d)
>>  {
>> +    struct p2m_domain *p2m = &d->arch.p2m;
>> +
>
> This is ok to me. Further altp2m implementation can easily extend this
> code base.
>
> Also, as your patch (patch #17) as well eliminates the possibility that
> the idle domain reaches this function the following check could be
> potentially removed (as far as I know, p2m_load_VTTBR is reached only
> through p2m_restore_state).

I will not remove this check because this is not the goal of this patch
and each patch should boot Xen without any dependencies on a follow-up
patch: p2m_load_VTTBR is used by flush_tlb_domain until patch #18 and
p2m_restore_state does not yet have the is_idle_* check (it will be
added in patch #17).

Regards,

-- 
Julien Grall


* Re: [PATCH 19/22] xen/arm: p2m: Inline p2m_load_VTTBR into p2m_restore_state
  2016-07-22  8:07   ` Sergej Proskurin
@ 2016-07-22  9:29     ` Julien Grall
  0 siblings, 0 replies; 80+ messages in thread
From: Julien Grall @ 2016-07-22  9:29 UTC (permalink / raw)
  To: Sergej Proskurin, xen-devel; +Cc: sstabellini, wei.chen, steve.capper



On 22/07/16 09:07, Sergej Proskurin wrote:
> Hi Julien,

Hello Sergej,

> On 07/20/2016 06:11 PM, Julien Grall wrote:
>> p2m_restore_state is the last caller of p2m_load_VTTBR and already check
>> if the vCPU does not belong to the idle domain.
>>
>> Note that it is likely possible to remove some isb in the function
>> p2m_restore_state, however this is not the purpose of this patch. So the
>> numerous isb have been left.
>>
>
> Right now, I don't see any issues with removing the p2m_load_VTTBR
> function in combination with changes applied to flush_tlb_domain in your
> patch #18 and #17. However, I am not entirely sure whether it makes
> sense to entirely remove the function and replicate the VTTBR loading
> functionality across multiple functions. Why don't we just provide a
> struct p2m_domain* to p2m_load_VTTBR (potentially with a backpointer to
> the associated domain, as it is shown in the arm/altp2m patch) and use
> the function inline?

Because ideally this function should take a p2m as parameter, and a p2m
cannot belong to an idle domain. So the function would be:

WRITE_SYSREG64(p2m->vttbr, VTTBR_EL2);
isb();

However, in the case of p2m_restore_state the isb() is not necessary and
would impact the performance. Yes, I know the function contains a lot of
pointless isb() calls; this needs to be fixed at some point.

So overall, this function is not necessary.
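
For illustration, a minimal sketch of the inlined form (the elided
register handling and the exact barrier placement are assumptions, not
the actual patch):

static void p2m_restore_state(struct vcpu *n)
{
    struct p2m_domain *p2m = &n->domain->arch.p2m;

    if ( is_idle_vcpu(n) )
        return;

    /* ... save/tweak HCR_EL2 and SCTLR_EL1 as done today ... */

    WRITE_SYSREG64(p2m->vttbr, VTTBR_EL2);
    /* No isb() here: a later barrier in the context switch is enough. */

    /* ... restore the registers and issue the final isb() ... */
}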

Regards,

-- 
Julien Grall


* Re: [PATCH 20/22] xen/arm: Don't export flush_tlb_domain
  2016-07-22  8:54   ` Sergej Proskurin
@ 2016-07-22  9:30     ` Julien Grall
  2016-07-22 10:25       ` Sergej Proskurin
  0 siblings, 1 reply; 80+ messages in thread
From: Julien Grall @ 2016-07-22  9:30 UTC (permalink / raw)
  To: Sergej Proskurin, xen-devel; +Cc: sstabellini, wei.chen, steve.capper



On 22/07/16 09:54, Sergej Proskurin wrote:
> Hi Julien,

Hello Sergej,

> On 07/20/2016 06:11 PM, Julien Grall wrote:
>> The function flush_tlb_domain is not used outside of the file where it
>> has been declared.
>>
>
> As for patch #15, the same applies here too:
> For altp2m, flush_tlb_domain/p2m_flush_tlb should be made available to
> ./xen/arch/arm/altp2m.c.

Based on your previous version, I don't see any reason to call
flush_tlb_domain/p2m_flush_tlb in altp2m.

Please justify why you would need it.

Regards,

-- 
Julien Grall


* Re: [PATCH 15/22] xen/arm: Don't call p2m_alloc_table from arch_domain_create
  2016-07-22  9:18     ` Julien Grall
@ 2016-07-22 10:16       ` Sergej Proskurin
  2016-07-22 10:26         ` Julien Grall
  0 siblings, 1 reply; 80+ messages in thread
From: Sergej Proskurin @ 2016-07-22 10:16 UTC (permalink / raw)
  To: Julien Grall, xen-devel; +Cc: sstabellini, wei.chen, steve.capper

Hi Julien,

On 07/22/2016 11:18 AM, Julien Grall wrote:
> 
> 
> On 22/07/16 09:32, Sergej Proskurin wrote:
>> Hi Julien,
> 
> Hello Sergej,
> 
>>> -int p2m_alloc_table(struct domain *d)
>>> +static int p2m_alloc_table(struct domain *d)
>>
>> While moving parts of the altp2m code out of ./xen/arch/arm/p2m.c, the
>> function p2m_alloc_table needs to be called from ./xen/arch/arm/altp2m.c
>> to allocate the individual altp2m views. Hence it should not be static.
> 
> No, this function should not be called outside p2m.c as it will not
> fully initialize the p2m. You need to need to provide a function to
> initialize a p2m (such as p2m_init).
> 

The last time, we discussed reusing existing code, among other things, for
individual struct p2m_domain initialization routines. Also, we have
agreed to move altp2m-related parts out of p2m.c into altp2m.c, which
makes it hard not to access parts required for initialization/teardown
(that are equal for both p2m and altp2m).

I agree that functions that, e.g., do not entirely initialize a specific
data structure should not be accessed from elsewhere. But then, we
should not have moved altp2m-related information out of p2m.c as they
simply need the same functionality when it comes to initialization/teardown.

Best regards,
~Sergej


* Re: [PATCH 20/22] xen/arm: Don't export flush_tlb_domain
  2016-07-22  9:30     ` Julien Grall
@ 2016-07-22 10:25       ` Sergej Proskurin
  2016-07-22 10:34         ` Julien Grall
  0 siblings, 1 reply; 80+ messages in thread
From: Sergej Proskurin @ 2016-07-22 10:25 UTC (permalink / raw)
  To: Julien Grall, xen-devel; +Cc: sstabellini, wei.chen, steve.capper

Hi Julien,

On 07/22/2016 11:30 AM, Julien Grall wrote:
> 
> 
> On 22/07/16 09:54, Sergej Proskurin wrote:
>> Hi Julien,
> 
> Hello Sergej,
> 
>> On 07/20/2016 06:11 PM, Julien Grall wrote:
>>> The function flush_tlb_domain is not used outside of the file where it
>>> has been declared.
>>>
>>
>> As for patch #15, the same applies here too:
>> For altp2m, flush_tlb_domain/p2m_flush_tlb should be made available to
>> ./xen/arch/arm/altp2m.c.
> 
> Based on your previous version, I don't see any reason to flush call
> flush_tlb_domain/p2m_flush_tlb in altp2m.
> 
> Please justify why you would need it.
> 

The new version considers changes that are made to the hostp2m and
propagates them to all affected altp2m views by either changing
individual altp2m entries or even flushing (but not destroying) the
entire altp2m-tables. This idea has been borrowed from the x86 altp2m
implementation.

To prevent access to old/invalid GPAs, the current implementation
flushes the TLBs associated with the affected altp2m view after such
propagation.
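
For reference, a rough sketch of that propagation path (the body is an
assumption; only the names are borrowed from the current series):

static void p2m_propagate_change(struct domain *d, gfn_t sgfn,
                                 unsigned long nr)
{
    unsigned int i;

    altp2m_lock(d);

    for ( i = 0; i < MAX_ALTP2M; i++ )
    {
        struct p2m_domain *ap2m = d->arch.altp2m_p2m[i];

        if ( ap2m == NULL )
            continue;

        /* Update or drop the entries covering [sgfn, sgfn + nr) ... */

        /* ... then flush the TLBs tagged with the view's VMID. */
        flush_tlb_p2m(d, ap2m);
    }

    altp2m_unlock(d);
}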

Best regards,
~Sergej


* Re: [PATCH 15/22] xen/arm: Don't call p2m_alloc_table from arch_domain_create
  2016-07-22 10:16       ` Sergej Proskurin
@ 2016-07-22 10:26         ` Julien Grall
  2016-07-22 10:39           ` Sergej Proskurin
  0 siblings, 1 reply; 80+ messages in thread
From: Julien Grall @ 2016-07-22 10:26 UTC (permalink / raw)
  To: Sergej Proskurin, xen-devel; +Cc: sstabellini, wei.chen, steve.capper



On 22/07/16 11:16, Sergej Proskurin wrote:
> Hi Julien,

Hello,

> On 07/22/2016 11:18 AM, Julien Grall wrote:
>>
>>
>> On 22/07/16 09:32, Sergej Proskurin wrote:
>>> Hi Julien,
>>
>> Hello Sergej,
>>
>>>> -int p2m_alloc_table(struct domain *d)
>>>> +static int p2m_alloc_table(struct domain *d)
>>>
>>> While moving parts of the altp2m code out of ./xen/arch/arm/p2m.c, the
>>> function p2m_alloc_table needs to be called from ./xen/arch/arm/altp2m.c
>>> to allocate the individual altp2m views. Hence it should not be static.
>>
>> No, this function should not be called outside p2m.c as it will not
>> fully initialize the p2m. You need to need to provide a function to
>> initialize a p2m (such as p2m_init).
>>
>
> The last time we have discussed reusing existing code, among others, for
> individual struct p2m_domain initialization routines. Also, we have
> agreed to move altp2m-related parts out of p2m.c into altp2m.c, which
> makes it hard not to access parts required for initialization/teardown
> (that are equal for both p2m and altp2m).

I remember this discussion. However, the p2m initialization/teardown should
be exactly the same for the hostp2m and altp2m (except for the type of
the p2m). So, a function should be provided to initialize a full p2m to
avoid code duplication.

Regards,

-- 
Julien Grall


* Re: [PATCH 20/22] xen/arm: Don't export flush_tlb_domain
  2016-07-22 10:25       ` Sergej Proskurin
@ 2016-07-22 10:34         ` Julien Grall
  2016-07-22 10:46           ` Sergej Proskurin
  0 siblings, 1 reply; 80+ messages in thread
From: Julien Grall @ 2016-07-22 10:34 UTC (permalink / raw)
  To: Sergej Proskurin, xen-devel; +Cc: sstabellini, wei.chen, steve.capper



On 22/07/16 11:25, Sergej Proskurin wrote:
> Hi Julien,
>
> On 07/22/2016 11:30 AM, Julien Grall wrote:
>>
>>
>> On 22/07/16 09:54, Sergej Proskurin wrote:
>>> Hi Julien,
>>
>> Hello Sergej,
>>
>>> On 07/20/2016 06:11 PM, Julien Grall wrote:
>>>> The function flush_tlb_domain is not used outside of the file where it
>>>> has been declared.
>>>>
>>>
>>> As for patch #15, the same applies here too:
>>> For altp2m, flush_tlb_domain/p2m_flush_tlb should be made available to
>>> ./xen/arch/arm/altp2m.c.
>>
>> Based on your previous version, I don't see any reason to flush call
>> flush_tlb_domain/p2m_flush_tlb in altp2m.
>>
>> Please justify why you would need it.
>>
>
> The new version considers changes that are made to the hostp2m and
> propagates them to all affected altp2m views by either changing
> individual altp2m entries or even flushing (but not destroying) the
> entire altp2m-tables. This idea has been borrowed from the x86 altp2m
> implementation.
>
> To prevent access to old/invalid GPAs, the current implementation
> flushes the TLBs associated with the affected altp2m view after such
> propagation.

There is already a flush in apply_p2m_changes and removing all the
mappings in a p2m could be implemented in p2m.c. So I still don't see why
you need the flush outside.

I looked at the x86 version of the propagation and I was not able to 
spot any explicit flush. Maybe you can provide some code to show what 
you mean.

Regards,

-- 
Julien Grall


* Re: [PATCH 15/22] xen/arm: Don't call p2m_alloc_table from arch_domain_create
  2016-07-22 10:39           ` Sergej Proskurin
@ 2016-07-22 10:38             ` Julien Grall
  2016-07-22 11:05               ` Sergej Proskurin
  0 siblings, 1 reply; 80+ messages in thread
From: Julien Grall @ 2016-07-22 10:38 UTC (permalink / raw)
  To: Sergej Proskurin, xen-devel; +Cc: sstabellini, wei.chen, steve.capper



On 22/07/16 11:39, Sergej Proskurin wrote:
>
>
> On 07/22/2016 12:26 PM, Julien Grall wrote:
>>
>>
>> On 22/07/16 11:16, Sergej Proskurin wrote:
>>> Hi Julien,
>>
>> Hello,
>>
>>> On 07/22/2016 11:18 AM, Julien Grall wrote:
>>>>
>>>>
>>>> On 22/07/16 09:32, Sergej Proskurin wrote:
>>>>> Hi Julien,
>>>>
>>>> Hello Sergej,
>>>>
>>>>>> -int p2m_alloc_table(struct domain *d)
>>>>>> +static int p2m_alloc_table(struct domain *d)
>>>>>
>>>>> While moving parts of the altp2m code out of ./xen/arch/arm/p2m.c, the
>>>>> function p2m_alloc_table needs to be called from
>>>>> ./xen/arch/arm/altp2m.c
>>>>> to allocate the individual altp2m views. Hence it should not be static.
>>>>
>>>> No, this function should not be called outside p2m.c as it will not
>>>> fully initialize the p2m. You need to need to provide a function to
>>>> initialize a p2m (such as p2m_init).
>>>>
>>>
>>> The last time we have discussed reusing existing code, among others, for
>>> individual struct p2m_domain initialization routines. Also, we have
>>> agreed to move altp2m-related parts out of p2m.c into altp2m.c, which
>>> makes it hard not to access parts required for initialization/teardown
>>> (that are equal for both p2m and altp2m).
>>
>> I remember this discussion. However, the p2m initialize/teardown should
>> exactly be the same for the hostp2m and altp2m (except for the type of
>> the p2m). So, a function should be provided to initialize a full p2m to
>> avoid code duplication.
>>
>
> This is exactly what has been done. Nevertheless, altp2m views are
> somewhat more dynamic than hostp2m and hence require frequent
> initialization/teardown of individual views. That is, by moving altp2m
> parts out of p2m.c we simply need to access this shared code multiple
> times at runtime from different altp2m-related functions. This applies
> to more functions that just to p2m_alloc_table.

I am not convinced that you need to reallocate the root pages every time
rather than clearing them. Anyway, I will need to see the code to
understand what is done.
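
Just to illustrate the alternative, a minimal sketch that clears the
existing root pages instead of reallocating them (an assumption, reusing
clear_and_clean_page/P2M_ROOT_PAGES):

static void p2m_clear_root_pages(struct p2m_domain *p2m)
{
    unsigned int i;

    for ( i = 0; i < P2M_ROOT_PAGES; i++ )
        clear_and_clean_page(p2m->root + i);
}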

Regards,

-- 
Julien Grall


* Re: [PATCH 15/22] xen/arm: Don't call p2m_alloc_table from arch_domain_create
  2016-07-22 10:26         ` Julien Grall
@ 2016-07-22 10:39           ` Sergej Proskurin
  2016-07-22 10:38             ` Julien Grall
  0 siblings, 1 reply; 80+ messages in thread
From: Sergej Proskurin @ 2016-07-22 10:39 UTC (permalink / raw)
  To: Julien Grall, xen-devel; +Cc: sstabellini, wei.chen, steve.capper



On 07/22/2016 12:26 PM, Julien Grall wrote:
> 
> 
> On 22/07/16 11:16, Sergej Proskurin wrote:
>> Hi Julien,
> 
> Hello,
> 
>> On 07/22/2016 11:18 AM, Julien Grall wrote:
>>>
>>>
>>> On 22/07/16 09:32, Sergej Proskurin wrote:
>>>> Hi Julien,
>>>
>>> Hello Sergej,
>>>
>>>>> -int p2m_alloc_table(struct domain *d)
>>>>> +static int p2m_alloc_table(struct domain *d)
>>>>
>>>> While moving parts of the altp2m code out of ./xen/arch/arm/p2m.c, the
>>>> function p2m_alloc_table needs to be called from
>>>> ./xen/arch/arm/altp2m.c
>>>> to allocate the individual altp2m views. Hence it should not be static.
>>>
>>> No, this function should not be called outside p2m.c as it will not
>>> fully initialize the p2m. You need to need to provide a function to
>>> initialize a p2m (such as p2m_init).
>>>
>>
>> The last time we have discussed reusing existing code, among others, for
>> individual struct p2m_domain initialization routines. Also, we have
>> agreed to move altp2m-related parts out of p2m.c into altp2m.c, which
>> makes it hard not to access parts required for initialization/teardown
>> (that are equal for both p2m and altp2m).
> 
> I remember this discussion. However, the p2m initialize/teardown should
> exactly be the same for the hostp2m and altp2m (except for the type of
> the p2m). So, a function should be provided to initialize a full p2m to
> avoid code duplication.
> 

This is exactly what has been done. Nevertheless, altp2m views are
somewhat more dynamic than hostp2m and hence require frequent
initialization/teardown of individual views. That is, by moving altp2m
parts out of p2m.c we simply need to access this shared code multiple
times at runtime from different altp2m-related functions. This applies
to more functions than just p2m_alloc_table.

Best regards,
~Sergej


* Re: [PATCH 20/22] xen/arm: Don't export flush_tlb_domain
  2016-07-22 10:34         ` Julien Grall
@ 2016-07-22 10:46           ` Sergej Proskurin
  2016-07-22 10:57             ` Julien Grall
  0 siblings, 1 reply; 80+ messages in thread
From: Sergej Proskurin @ 2016-07-22 10:46 UTC (permalink / raw)
  To: Julien Grall, xen-devel; +Cc: sstabellini, wei.chen, steve.capper



On 07/22/2016 12:34 PM, Julien Grall wrote:
> 
> 
> On 22/07/16 11:25, Sergej Proskurin wrote:
>> Hi Julien,
>>
>> On 07/22/2016 11:30 AM, Julien Grall wrote:
>>>
>>>
>>> On 22/07/16 09:54, Sergej Proskurin wrote:
>>>> Hi Julien,
>>>
>>> Hello Sergej,
>>>
>>>> On 07/20/2016 06:11 PM, Julien Grall wrote:
>>>>> The function flush_tlb_domain is not used outside of the file where it
>>>>> has been declared.
>>>>>
>>>>
>>>> As for patch #15, the same applies here too:
>>>> For altp2m, flush_tlb_domain/p2m_flush_tlb should be made available to
>>>> ./xen/arch/arm/altp2m.c.
>>>
>>> Based on your previous version, I don't see any reason to flush call
>>> flush_tlb_domain/p2m_flush_tlb in altp2m.
>>>
>>> Please justify why you would need it.
>>>
>>
>> The new version considers changes that are made to the hostp2m and
>> propagates them to all affected altp2m views by either changing
>> individual altp2m entries or even flushing (but not destroying) the
>> entire altp2m-tables. This idea has been borrowed from the x86 altp2m
>> implementation.
>>
>> To prevent access to old/invalid GPAs, the current implementation
>> flushes the TLBs associated with the affected altp2m view after such
>> propagation.
> 
> There is already a flush in apply_p2m_changes and removing all the
> mapping in a p2m could be implemented in p2m.c. So I still don't see why
> you need the flush outside.
> 

Yes, the flush you are referring to flushes the hostp2m - not the
individual altp2m views.

> I looked at the x86 version of the propagation and I was not able to
> spot any explicit flush. Maybe you can provide some code to show what
> you mean.
>  

Sure thing:

...

static void p2m_reset_altp2m(struct p2m_domain *p2m)
{
    p2m_flush_table(p2m);
    /* Uninit and reinit ept to force TLB shootdown */
    ept_p2m_uninit(p2m);
    ept_p2m_init(p2m);
    p2m->min_remapped_gfn = INVALID_GFN;
    p2m->max_remapped_gfn = 0;
}

...

On x86, the uninit- and re-initialization of the EPTs forces a flush of
the TLBs associated with the VMID configured for the EPTs.

Regards,
~Sergej



* Re: [PATCH 20/22] xen/arm: Don't export flush_tlb_domain
  2016-07-22 10:46           ` Sergej Proskurin
@ 2016-07-22 10:57             ` Julien Grall
  2016-07-22 11:22               ` Sergej Proskurin
  0 siblings, 1 reply; 80+ messages in thread
From: Julien Grall @ 2016-07-22 10:57 UTC (permalink / raw)
  To: Sergej Proskurin, xen-devel; +Cc: sstabellini, wei.chen, steve.capper



On 22/07/16 11:46, Sergej Proskurin wrote:
>
>
> On 07/22/2016 12:34 PM, Julien Grall wrote:
>>
>>
>> On 22/07/16 11:25, Sergej Proskurin wrote:
>>> Hi Julien,
>>>
>>> On 07/22/2016 11:30 AM, Julien Grall wrote:
>>>>
>>>>
>>>> On 22/07/16 09:54, Sergej Proskurin wrote:
>>>>> Hi Julien,
>>>>
>>>> Hello Sergej,
>>>>
>>>>> On 07/20/2016 06:11 PM, Julien Grall wrote:
>>>>>> The function flush_tlb_domain is not used outside of the file where it
>>>>>> has been declared.
>>>>>>
>>>>>
>>>>> As for patch #15, the same applies here too:
>>>>> For altp2m, flush_tlb_domain/p2m_flush_tlb should be made available to
>>>>> ./xen/arch/arm/altp2m.c.
>>>>
>>>> Based on your previous version, I don't see any reason to flush call
>>>> flush_tlb_domain/p2m_flush_tlb in altp2m.
>>>>
>>>> Please justify why you would need it.
>>>>
>>>
>>> The new version considers changes that are made to the hostp2m and
>>> propagates them to all affected altp2m views by either changing
>>> individual altp2m entries or even flushing (but not destroying) the
>>> entire altp2m-tables. This idea has been borrowed from the x86 altp2m
>>> implementation.
>>>
>>> To prevent access to old/invalid GPAs, the current implementation
>>> flushes the TLBs associated with the affected altp2m view after such
>>> propagation.
>>
>> There is already a flush in apply_p2m_changes and removing all the
>> mapping in a p2m could be implemented in p2m.c. So I still don't see why
>> you need the flush outside.
>>
>
> Yes, the flush you are referring to flushes the hostp2m - not the
> individual altp2m views.

apply_p2m_changes is *not* hostp2m specific. It should work on any p2m
regardless of the type.

The ARM P2M interface is not set in stone, so if it does not fit it will
need to be changed. We should avoid hacking the code in order to add a
new feature.

It might be time to mention that I am reworking the whole p2m code as it
does not respect the ARM spec (such as the break-before-make semantics) and
I believe it does not fit the altp2m model. It is very difficult to
implement the former with the current implementation without a big
performance impact.

Rather than having a single function which implements all the operations,
I am planning to have a simple set of functions that can be used to
re-implement the operations:
    - p2m_set_entry: Set an entry in the P2M
    - p2m_get_entry: Retrieve the information of an entry

This is very similar to x86 and makes it more straightforward to implement
new operations and comply with the ARM spec.

I already have a prototype and I am hoping to send it soon.
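
To give an idea of the shape (the exact prototypes below are an
assumption, not the final interface):

int p2m_set_entry(struct p2m_domain *p2m, gfn_t sgfn, unsigned long nr,
                  mfn_t smfn, p2m_type_t t, p2m_access_t a);

mfn_t p2m_get_entry(struct p2m_domain *p2m, gfn_t gfn,
                    p2m_type_t *t, p2m_access_t *a, unsigned int *order);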

>
>> I looked at the x86 version of the propagation and I was not able to
>> spot any explicit flush. Maybe you can provide some code to show what
>> you mean.
>>
>
> Sure thing:
>
> ...
>
> static void p2m_reset_altp2m(struct p2m_domain *p2m)
> {
>     p2m_flush_table(p2m);
>     /* Uninit and reinit ept to force TLB shootdown */
>     ept_p2m_uninit(p2m);
>     ept_p2m_init(p2m);
>     p2m->min_remapped_gfn = INVALID_GFN;
>     p2m->max_remapped_gfn = 0;
> }
>
> ...
>
> On x86, the uninit- and re-initialization of the EPTs force the TLBs
> associated with the configured VMID of the EPTs to flush.

As mentioned in my previous mail, p2m_reset can be implemented in p2m.c
as this is not altp2m.c specific.

When I asked to move altp2m specific code from p2m.c to altp2m.c, it was
to avoid making p2m.c too big. However, if the function is not
altp2m specific, there is little reason to move it outside.
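
As a sketch of what I mean (p2m_remove_mapping_all is hypothetical here;
p2m_flush_tlb is the helper introduced in patch #21):

static void p2m_reset(struct p2m_domain *p2m)
{
    /* Remove every mapping from this view... */
    p2m_remove_mapping_all(p2m);

    /* ...and flush the stale translations for its VMID. */
    p2m_flush_tlb(p2m);
}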

Regards,

-- 
Julien Grall


* Re: [PATCH 15/22] xen/arm: Don't call p2m_alloc_table from arch_domain_create
  2016-07-22 10:38             ` Julien Grall
@ 2016-07-22 11:05               ` Sergej Proskurin
  2016-07-22 13:00                 ` Julien Grall
  0 siblings, 1 reply; 80+ messages in thread
From: Sergej Proskurin @ 2016-07-22 11:05 UTC (permalink / raw)
  To: Julien Grall, xen-devel; +Cc: sstabellini, wei.chen, steve.capper



On 07/22/2016 12:38 PM, Julien Grall wrote:
> 
> 
> On 22/07/16 11:39, Sergej Proskurin wrote:
>>
>>
>> On 07/22/2016 12:26 PM, Julien Grall wrote:
>>>
>>>
>>> On 22/07/16 11:16, Sergej Proskurin wrote:
>>>> Hi Julien,
>>>
>>> Hello,
>>>
>>>> On 07/22/2016 11:18 AM, Julien Grall wrote:
>>>>>
>>>>>
>>>>> On 22/07/16 09:32, Sergej Proskurin wrote:
>>>>>> Hi Julien,
>>>>>
>>>>> Hello Sergej,
>>>>>
>>>>>>> -int p2m_alloc_table(struct domain *d)
>>>>>>> +static int p2m_alloc_table(struct domain *d)
>>>>>>
>>>>>> While moving parts of the altp2m code out of ./xen/arch/arm/p2m.c,
>>>>>> the
>>>>>> function p2m_alloc_table needs to be called from
>>>>>> ./xen/arch/arm/altp2m.c
>>>>>> to allocate the individual altp2m views. Hence it should not be
>>>>>> static.
>>>>>
>>>>> No, this function should not be called outside p2m.c as it will not
>>>>> fully initialize the p2m. You need to need to provide a function to
>>>>> initialize a p2m (such as p2m_init).
>>>>>
>>>>
>>>> The last time we have discussed reusing existing code, among others,
>>>> for
>>>> individual struct p2m_domain initialization routines. Also, we have
>>>> agreed to move altp2m-related parts out of p2m.c into altp2m.c, which
>>>> makes it hard not to access parts required for initialization/teardown
>>>> (that are equal for both p2m and altp2m).
>>>
>>> I remember this discussion. However, the p2m initialize/teardown should
>>> exactly be the same for the hostp2m and altp2m (except for the type of
>>> the p2m). So, a function should be provided to initialize a full p2m to
>>> avoid code duplication.
>>>
>>
>> This is exactly what has been done. Nevertheless, altp2m views are
>> somewhat more dynamic than hostp2m and hence require frequent
>> initialization/teardown of individual views. That is, by moving altp2m
>> parts out of p2m.c we simply need to access this shared code multiple
>> times at runtime from different altp2m-related functions. This applies
>> to more functions that just to p2m_alloc_table.
> 
> I am not convinced that you need to reallocate the root page every time
> rather than clearing them. Anyway, I will need to see the code to
> understand what is done.
> 
> Regards,
> 

In the following, you will find an excerpt of both p2m.c and
altp2m.c concerning the (alt)p2m initialization. Please note that
altp2m_init_helper is called from different initialization routines in
altp2m.c. Also, please consider that this code has not yet been ported
to your recent patches. Please let me know if you need more information.

---
./xen/arch/arm/p2m.c:

...

int p2m_alloc_table(struct p2m_domain *p2m)
{
    unsigned int i;
    struct page_info *page;
    struct vttbr *vttbr = &p2m->vttbr;

    page = alloc_domheap_pages(NULL, P2M_ROOT_ORDER, 0);
    if ( page == NULL )
        return -ENOMEM;

    /* Clear all first level pages */
    for ( i = 0; i < P2M_ROOT_PAGES; i++ )
        clear_and_clean_page(page + i);

    p2m->root = page;

    /* Initialize the VTTBR associated with the allocated p2m table. */
    vttbr->vttbr = 0;
    vttbr->vmid = p2m->vmid & 0xff;
    vttbr->baddr = page_to_maddr(p2m->root);

    return 0;
}

int p2m_init_one(struct domain *d, struct p2m_domain *p2m)
{
    int rc = 0;

    spin_lock_init(&p2m->lock);
    INIT_PAGE_LIST_HEAD(&p2m->pages);

    spin_lock(&p2m->lock);

    p2m->vmid = INVALID_VMID;
    rc = p2m_alloc_vmid(d, p2m);
    if ( rc != 0 )
        goto err;

    p2m->domain = d;
    p2m->access_required = false;
    p2m->mem_access_enabled = false;
    p2m->default_access = p2m_access_rwx;
    p2m->root = NULL;

    p2m->vttbr.vttbr = INVALID_VTTBR;

    p2m->max_mapped_gfn = 0;
    p2m->lowest_mapped_gfn = INVALID_GFN;
    radix_tree_init(&p2m->mem_access_settings);

err:
    spin_unlock(&p2m->lock);

    return rc;
}

static int p2m_init_hostp2m(struct domain *d)
{
    struct p2m_domain *p2m = p2m_get_hostp2m(d);

    d->arch.vttbr = INVALID_VTTBR;
    p2m->p2m_class = p2m_host;

    return p2m_init_one(d, p2m);
}

int p2m_init(struct domain *d)
{
    int rc;

    rc = p2m_init_hostp2m(d);
    if ( rc )
        return rc;

    return altp2m_init(d);
}

...

---
./xen/arch/arm/altp2m.c:

...

static int altp2m_init_helper(struct domain *d, unsigned int idx)
{
    int rc;
    struct p2m_domain *p2m = d->arch.altp2m_p2m[idx];

    if ( p2m == NULL )
    {
        /* Allocate a new altp2m view. */
        p2m = xzalloc(struct p2m_domain);
        if ( p2m == NULL)
        {
            rc = -ENOMEM;
            goto err;
        }
        memset(p2m, 0, sizeof(struct p2m_domain));
    }

    /* Initialize the new altp2m view. */
    rc = p2m_init_one(d, p2m);
    if ( rc )
        goto err;

    /* Allocate a root table for the altp2m view. */
    rc = p2m_alloc_table(p2m);
    if ( rc )
        goto err;

    p2m->p2m_class = p2m_alternate;
    p2m->access_required = 1;
    _atomic_set(&p2m->active_vcpus, 0);

    d->arch.altp2m_p2m[idx] = p2m;
    d->arch.altp2m_vttbr[idx] = p2m->vttbr.vttbr;

    /*
     * Make sure that all TLBs corresponding to the new VMID are flushed
     * before using it.
     */
    flush_tlb_p2m(d, p2m);

    return rc;

err:
    if ( p2m )
        xfree(p2m);

    d->arch.altp2m_p2m[idx] = NULL;

    return rc;
}

int altp2m_init_by_id(struct domain *d, unsigned int idx)
{
    int rc = -EINVAL;

    if ( idx >= MAX_ALTP2M )
        return rc;

    altp2m_lock(d);

    if ( d->arch.altp2m_vttbr[idx] == INVALID_VTTBR )
        rc = altp2m_init_helper(d, idx);

    altp2m_unlock(d);

    return rc;
}

int altp2m_init(struct domain *d)
{
    unsigned int i;

    spin_lock_init(&d->arch.altp2m_lock);

    for ( i = 0; i < MAX_ALTP2M; i++ )
    {
        d->arch.altp2m_p2m[i] = NULL;
        d->arch.altp2m_vttbr[i] = INVALID_VTTBR;
    }

    return 0;
}

...

Regards,
~Sergej


* Re: [PATCH 20/22] xen/arm: Don't export flush_tlb_domain
  2016-07-22 10:57             ` Julien Grall
@ 2016-07-22 11:22               ` Sergej Proskurin
  0 siblings, 0 replies; 80+ messages in thread
From: Sergej Proskurin @ 2016-07-22 11:22 UTC (permalink / raw)
  To: Julien Grall, xen-devel; +Cc: sstabellini, wei.chen, steve.capper



On 07/22/2016 12:57 PM, Julien Grall wrote:
> 
> 
> On 22/07/16 11:46, Sergej Proskurin wrote:
>>
>>
>> On 07/22/2016 12:34 PM, Julien Grall wrote:
>>>
>>>
>>> On 22/07/16 11:25, Sergej Proskurin wrote:
>>>> Hi Julien,
>>>>
>>>> On 07/22/2016 11:30 AM, Julien Grall wrote:
>>>>>
>>>>>
>>>>> On 22/07/16 09:54, Sergej Proskurin wrote:
>>>>>> Hi Julien,
>>>>>
>>>>> Hello Sergej,
>>>>>
>>>>>> On 07/20/2016 06:11 PM, Julien Grall wrote:
>>>>>>> The function flush_tlb_domain is not used outside of the file
>>>>>>> where it
>>>>>>> has been declared.
>>>>>>>
>>>>>>
>>>>>> As for patch #15, the same applies here too:
>>>>>> For altp2m, flush_tlb_domain/p2m_flush_tlb should be made
>>>>>> available to
>>>>>> ./xen/arch/arm/altp2m.c.
>>>>>
>>>>> Based on your previous version, I don't see any reason to flush call
>>>>> flush_tlb_domain/p2m_flush_tlb in altp2m.
>>>>>
>>>>> Please justify why you would need it.
>>>>>
>>>>
>>>> The new version considers changes that are made to the hostp2m and
>>>> propagates them to all affected altp2m views by either changing
>>>> individual altp2m entries or even flushing (but not destroying) the
>>>> entire altp2m-tables. This idea has been borrowed from the x86 altp2m
>>>> implementation.
>>>>
>>>> To prevent access to old/invalid GPAs, the current implementation
>>>> flushes the TLBs associated with the affected altp2m view after such
>>>> propagation.
>>>
>>> There is already a flush in apply_p2m_changes and removing all the
>>> mapping in a p2m could be implemented in p2m.c. So I still don't see why
>>> you need the flush outside.
>>>
>>
>> Yes, the flush you are referring to flushes the hostp2m - not the
>> individual altp2m views.
> 
> apply_p2m_changes is *not* hostp2m specific. It should work on any p2m
> regardless the type.
> 

This is true. However, p2m_propagate_change is invoked (from
apply_p2m_changes) only if the p2m that was modified is the
hostp2m.

> The ARM P2M interface is not set in stone, so if it does not fit it will
> need to be changed. We should avoid to hack the code in order to add a
> new feature.
> 
> It might be time to mention that I am reworking the whole p2m code it
> does not respect the ARM spec (such as break-before-make semantics) and
> I believe it does not fit the altp2m model. It is very difficult to
> implement the former with the current implementation and without have a
> big performance impact.
> 
> Rather than having a function which implement all the operations, I am
> planning to have a simple set of functions that can be used to
> re-implement the operations:
>     - p2m_set_entry: Set an entry in the P2M
>     - p2m_get_entry: Retrieve the informations of an entry
> 
> This is very similar to x86 and make more straight forward to implement
> new operations and co-op with the ARM spec.
> 
> I have already a prototype and I am hoping to send it soon.
> 
>>
>>> I looked at the x86 version of the propagation and I was not able to
>>> spot any explicit flush. Maybe you can provide some code to show what
>>> you mean.
>>>
>>
>> Sure thing:
>>
>> ...
>>
>> static void p2m_reset_altp2m(struct p2m_domain *p2m)
>> {
>>     p2m_flush_table(p2m);
>>     /* Uninit and reinit ept to force TLB shootdown */
>>     ept_p2m_uninit(p2m);
>>     ept_p2m_init(p2m);
>>     p2m->min_remapped_gfn = INVALID_GFN;
>>     p2m->max_remapped_gfn = 0;
>> }
>>
>> ...
>>
>> On x86, the uninit- and re-initialization of the EPTs force the TLBs
>> associated with the configured VMID of the EPTs to flush.
> 
> As mentioned in my previous mail, p2m_reset can be implemented in p2m.c
> as this is not altp2m.c specific.
> 

Well yes. However, it is not used for the hostp2m, which makes it
automatically altp2m specific - but I know what you mean. Yet, I believe
it is cleaner to separate the entire altp2m code and maintain it in
altp2m.c. Nevertheless, I will need to pull parts of the altp2m code back
into p2m.c if we do not share some of the initialization/teardown
functions between both files.

Regards,
~Sergej



* Re: [PATCH 15/22] xen/arm: Don't call p2m_alloc_table from arch_domain_create
  2016-07-22 11:05               ` Sergej Proskurin
@ 2016-07-22 13:00                 ` Julien Grall
  2016-07-23 17:59                   ` Sergej Proskurin
  0 siblings, 1 reply; 80+ messages in thread
From: Julien Grall @ 2016-07-22 13:00 UTC (permalink / raw)
  To: Sergej Proskurin, xen-devel; +Cc: sstabellini, wei.chen, steve.capper



On 22/07/16 12:05, Sergej Proskurin wrote:
>
>
> On 07/22/2016 12:38 PM, Julien Grall wrote:
>>
>>
>> On 22/07/16 11:39, Sergej Proskurin wrote:
>>>
>>>
>>> On 07/22/2016 12:26 PM, Julien Grall wrote:
>>>>
>>>>
>>>> On 22/07/16 11:16, Sergej Proskurin wrote:
>>>>> Hi Julien,
>>>>
>>>> Hello,
>>>>
>>>>> On 07/22/2016 11:18 AM, Julien Grall wrote:
>>>>>>
>>>>>>
>>>>>> On 22/07/16 09:32, Sergej Proskurin wrote:
>>>>>>> Hi Julien,
>>>>>>
>>>>>> Hello Sergej,
>>>>>>
>>>>>>>> -int p2m_alloc_table(struct domain *d)
>>>>>>>> +static int p2m_alloc_table(struct domain *d)
>>>>>>>
>>>>>>> While moving parts of the altp2m code out of ./xen/arch/arm/p2m.c,
>>>>>>> the
>>>>>>> function p2m_alloc_table needs to be called from
>>>>>>> ./xen/arch/arm/altp2m.c
>>>>>>> to allocate the individual altp2m views. Hence it should not be
>>>>>>> static.
>>>>>>
>>>>>> No, this function should not be called outside p2m.c as it will not
>>>>>> fully initialize the p2m. You need to need to provide a function to
>>>>>> initialize a p2m (such as p2m_init).
>>>>>>
>>>>>
>>>>> The last time we have discussed reusing existing code, among others,
>>>>> for
>>>>> individual struct p2m_domain initialization routines. Also, we have
>>>>> agreed to move altp2m-related parts out of p2m.c into altp2m.c, which
>>>>> makes it hard not to access parts required for initialization/teardown
>>>>> (that are equal for both p2m and altp2m).
>>>>
>>>> I remember this discussion. However, the p2m initialize/teardown should
>>>> exactly be the same for the hostp2m and altp2m (except for the type of
>>>> the p2m). So, a function should be provided to initialize a full p2m to
>>>> avoid code duplication.
>>>>
>>>
>>> This is exactly what has been done. Nevertheless, altp2m views are
>>> somewhat more dynamic than hostp2m and hence require frequent
>>> initialization/teardown of individual views. That is, by moving altp2m
>>> parts out of p2m.c we simply need to access this shared code multiple
>>> times at runtime from different altp2m-related functions. This applies
>>> to more functions that just to p2m_alloc_table.
>>
>> I am not convinced that you need to reallocate the root page every time
>> rather than clearing them. Anyway, I will need to see the code to
>> understand what is done.
>>
>> Regards,
>>
>
> In the following, you will find an excerpt of both the p2m.c and
> altp2m.c concerning the (alt)p2m initialization. Please note that
> altp2m_init_helper is called from different initialization routines in
> altp2m.c. Also, please consider that this code has not yet been ported
> to your recent patches. Please let me know if you need more information.

>
> static int altp2m_init_helper(struct domain *d, unsigned int idx)
> {
>     int rc;
>     struct p2m_domain *p2m = d->arch.altp2m_p2m[idx];
>
>     if ( p2m == NULL )
>     {
>         /* Allocate a new altp2m view. */
>         p2m = xzalloc(struct p2m_domain);
>         if ( p2m == NULL)
>         {
>             rc = -ENOMEM;
>             goto err;
>         }
>         memset(p2m, 0, sizeof(struct p2m_domain));
>     }
>
>     /* Initialize the new altp2m view. */
>     rc = p2m_init_one(d, p2m);
>     if ( rc )
>         goto err;
>
>     /* Allocate a root table for the altp2m view. */
>     rc = p2m_alloc_table(p2m);

This patch moved the p2m_alloc_table call into p2m_init (i.e. your
p2m_init_one). You complained that the function was not exported
anymore, but you did not look at how it is called in this patch.

>     if ( rc )
>         goto err;
>
>     p2m->p2m_class = p2m_alternate;
>     p2m->access_required = 1;
>     _atomic_set(&p2m->active_vcpus, 0);
>
>     d->arch.altp2m_p2m[idx] = p2m;
>     d->arch.altp2m_vttbr[idx] = p2m->vttbr.vttbr;

Regards,

-- 
Julien Grall


* Re: [PATCH 15/22] xen/arm: Don't call p2m_alloc_table from arch_domain_create
  2016-07-22 13:00                 ` Julien Grall
@ 2016-07-23 17:59                   ` Sergej Proskurin
  0 siblings, 0 replies; 80+ messages in thread
From: Sergej Proskurin @ 2016-07-23 17:59 UTC (permalink / raw)
  To: Julien Grall, xen-devel; +Cc: sstabellini, wei.chen, steve.capper

Hi Julien,

> 
> This patch moved p2m_alloc_table call into p2m_init (i.e your
> p2m_init_one). You complained that the function was not exported
> anymore, but you did not look how it was called in this patch.
> 

Ok. At this point, our patches have indeed diverged too much from each other.
We should continue our discussion during my next patch series. Thank you.

Best regards,
~Sergej


* Re: [PATCH 05/22] xen/arm: p2m: Clean-up mfn_to_p2m_entry
  2016-07-20 16:10 ` [PATCH 05/22] xen/arm: p2m: Clean-up mfn_to_p2m_entry Julien Grall
@ 2016-07-26 22:24   ` Stefano Stabellini
  0 siblings, 0 replies; 80+ messages in thread
From: Stefano Stabellini @ 2016-07-26 22:24 UTC (permalink / raw)
  To: Julien Grall; +Cc: proskurin, sstabellini, steve.capper, wei.chen, xen-devel

On Wed, 20 Jul 2016, Julien Grall wrote:
> The physical address is computed from the machine frame number, so
> checking if the physical address is page aligned is pointless.
> 
> Furthermore, directly assign the MFN to the corresponding field in the
> entry rather than converting to a physical address and ORing the value.
> This avoids relying on the field position and makes the code clearer.
> 
> Signed-off-by: Julien Grall <julien.grall@arm.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


>  xen/arch/arm/p2m.c | 6 ++----
>  1 file changed, 2 insertions(+), 4 deletions(-)
> 
> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> index 79095f1..d82349c 100644
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -327,7 +327,6 @@ static void p2m_set_permission(lpae_t *e, p2m_type_t t, p2m_access_t a)
>  static lpae_t mfn_to_p2m_entry(unsigned long mfn, unsigned int mattr,
>                                 p2m_type_t t, p2m_access_t a)
>  {
> -    paddr_t pa = ((paddr_t) mfn) << PAGE_SHIFT;
>      /*
>       * sh, xn and write bit will be defined in the following switches
>       * based on mattr and t.
> @@ -359,10 +358,9 @@ static lpae_t mfn_to_p2m_entry(unsigned long mfn, unsigned int mattr,
>  
>      p2m_set_permission(&e, t, a);
>  
> -    ASSERT(!(pa & ~PAGE_MASK));
> -    ASSERT(!(pa & ~PADDR_MASK));
> +    ASSERT(!(pfn_to_paddr(mfn) & ~PADDR_MASK));
>  
> -    e.bits |= pa;
> +    e.p2m.base = mfn;
>  
>      return e;
>  }
> -- 
> 1.9.1
> 


* Re: [PATCH 06/22] xen/arm: p2m: Use the typesafe MFN in mfn_to_p2m_entry
  2016-07-20 16:10 ` [PATCH 06/22] xen/arm: p2m: Use the typesafe MFN in mfn_to_p2m_entry Julien Grall
@ 2016-07-26 22:28   ` Stefano Stabellini
  2016-07-27  9:54     ` Julien Grall
  0 siblings, 1 reply; 80+ messages in thread
From: Stefano Stabellini @ 2016-07-26 22:28 UTC (permalink / raw)
  To: Julien Grall; +Cc: proskurin, sstabellini, steve.capper, wei.chen, xen-devel

On Wed, 20 Jul 2016, Julien Grall wrote:
> Signed-off-by: Julien Grall <julien.grall@arm.com>
> ---
>  xen/arch/arm/p2m.c | 17 +++++++++--------
>  1 file changed, 9 insertions(+), 8 deletions(-)
> 
> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> index d82349c..99be9be 100644
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -324,7 +324,7 @@ static void p2m_set_permission(lpae_t *e, p2m_type_t t, p2m_access_t a)
>      }
>  }
>  
> -static lpae_t mfn_to_p2m_entry(unsigned long mfn, unsigned int mattr,
> +static lpae_t mfn_to_p2m_entry(mfn_t mfn, unsigned int mattr,
>                                 p2m_type_t t, p2m_access_t a)
>  {
>      /*
> @@ -358,9 +358,9 @@ static lpae_t mfn_to_p2m_entry(unsigned long mfn, unsigned int mattr,
>  
>      p2m_set_permission(&e, t, a);
>  
> -    ASSERT(!(pfn_to_paddr(mfn) & ~PADDR_MASK));
> +    ASSERT(!(pfn_to_paddr(mfn_x(mfn)) & ~PADDR_MASK));
>  
> -    e.p2m.base = mfn;
> +    e.p2m.base = mfn_x(mfn);
>  
>      return e;
>  }
> @@ -411,7 +411,7 @@ static int p2m_create_table(struct domain *d, lpae_t *entry,
>      if ( splitting )
>      {
>          p2m_type_t t = entry->p2m.type;
> -        unsigned long base_pfn = entry->p2m.base;
> +        mfn_t mfn = _mfn(entry->p2m.base);
>          int i;
>  
>          /*
> @@ -420,8 +420,9 @@ static int p2m_create_table(struct domain *d, lpae_t *entry,
>           */
>           for ( i=0 ; i < LPAE_ENTRIES; i++ )
>           {
> -             pte = mfn_to_p2m_entry(base_pfn + (i<<(level_shift-LPAE_SHIFT)),
> -                                    MATTR_MEM, t, p2m->default_access);
> +             pte = mfn_to_p2m_entry(mfn, MATTR_MEM, t, p2m->default_access);
> +
> +             mfn = mfn_add(mfn, 1UL << (level_shift - LPAE_SHIFT));

Should we be incrementing mfn before calling mfn_to_p2m_entry?


>               /*
>                * First and second level super pages set p2m.table = 0, but
> @@ -443,7 +444,7 @@ static int p2m_create_table(struct domain *d, lpae_t *entry,
>  
>      unmap_domain_page(p);
>  
> -    pte = mfn_to_p2m_entry(page_to_mfn(page), MATTR_MEM, p2m_invalid,
> +    pte = mfn_to_p2m_entry(_mfn(page_to_mfn(page)), MATTR_MEM, p2m_invalid,
>                             p2m->default_access);
>  
>      p2m_write_pte(entry, pte, flush_cache);
> @@ -693,7 +694,7 @@ static int apply_one_level(struct domain *d,
>                  return rc;
>  
>              /* New mapping is superpage aligned, make it */
> -            pte = mfn_to_p2m_entry(*maddr >> PAGE_SHIFT, mattr, t, a);
> +            pte = mfn_to_p2m_entry(_mfn(*maddr >> PAGE_SHIFT), mattr, t, a);
>              if ( level < 3 )
>                  pte.p2m.table = 0; /* Superpage entry */
>  
> -- 
> 1.9.1
> 


* Re: [PATCH 07/22] xen/arm: p2m: Use p2m_is_foreign in get_page_from_gfn to avoid open coding
  2016-07-20 16:10 ` [PATCH 07/22] xen/arm: p2m: Use p2m_is_foreign in get_page_from_gfn to avoid open coding Julien Grall
@ 2016-07-26 22:33   ` Stefano Stabellini
  0 siblings, 0 replies; 80+ messages in thread
From: Stefano Stabellini @ 2016-07-26 22:33 UTC (permalink / raw)
  To: Julien Grall; +Cc: proskurin, sstabellini, steve.capper, wei.chen, xen-devel

On Wed, 20 Jul 2016, Julien Grall wrote:
> No functional change.
> 
> Signed-off-by: Julien Grall <julien.grall@arm.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


>  xen/include/asm-arm/p2m.h | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
> index 8fe78c1..dbbcefe 100644
> --- a/xen/include/asm-arm/p2m.h
> +++ b/xen/include/asm-arm/p2m.h
> @@ -227,7 +227,7 @@ static inline struct page_info *get_page_from_gfn(
>       * get_page won't work on foreign mapping because the page doesn't
>       * belong to the current domain.
>       */
> -    if ( p2mt == p2m_map_foreign )
> +    if ( p2m_is_foreign(p2mt) )
>      {
>          struct domain *fdom = page_get_owner_and_reference(page);
>          ASSERT(fdom != NULL);
> -- 
> 1.9.1
> 


* Re: [PATCH 08/22] xen/arm: p2m: Simplify p2m type check by using bitmask
  2016-07-20 16:10 ` [PATCH 08/22] xen/arm: p2m: Simplify p2m type check by using bitmask Julien Grall
@ 2016-07-26 22:36   ` Stefano Stabellini
  0 siblings, 0 replies; 80+ messages in thread
From: Stefano Stabellini @ 2016-07-26 22:36 UTC (permalink / raw)
  To: Julien Grall; +Cc: proskurin, sstabellini, steve.capper, wei.chen, xen-devel

On Wed, 20 Jul 2016, Julien Grall wrote:
> The resulting assembly code for the macros is much simpler and will
> never contain more than one branch instruction.
> 
> The idea is taken from x86 (see include/asm-x86/p2m.h). Also move the
> two helpers earlier to keep all the p2m type definitions together.
> 
> Signed-off-by: Julien Grall <julien.grall@arm.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


>  xen/include/asm-arm/p2m.h | 14 +++++++++++---
>  1 file changed, 11 insertions(+), 3 deletions(-)
> 
> diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
> index dbbcefe..3091c04 100644
> --- a/xen/include/asm-arm/p2m.h
> +++ b/xen/include/asm-arm/p2m.h
> @@ -97,6 +97,17 @@ typedef enum {
>      p2m_max_real_type,  /* Types after this won't be store in the p2m */
>  } p2m_type_t;
>  
> +/* We use bitmaps and mask to handle groups of types */
> +#define p2m_to_mask(_t) (1UL << (_t))
> +
> +/* RAM types, which map to real machine frames */
> +#define P2M_RAM_TYPES (p2m_to_mask(p2m_ram_rw) |        \
> +                       p2m_to_mask(p2m_ram_ro))
> +
> +/* Useful predicates */
> +#define p2m_is_ram(_t) (p2m_to_mask(_t) & P2M_RAM_TYPES)
> +#define p2m_is_foreign(_t) (p2m_to_mask(_t) & p2m_to_mask(p2m_map_foreign))
> +
>  static inline
>  void p2m_mem_access_emulate_check(struct vcpu *v,
>                                    const vm_event_response_t *rsp)
> @@ -110,9 +121,6 @@ void p2m_altp2m_check(struct vcpu *v, uint16_t idx)
>      /* Not supported on ARM. */
>  }
>  
> -#define p2m_is_foreign(_t)  ((_t) == p2m_map_foreign)
> -#define p2m_is_ram(_t)      ((_t) == p2m_ram_rw || (_t) == p2m_ram_ro)
> -
>  /* Initialise vmid allocator */
>  void p2m_vmid_allocator_init(void);
>  
> -- 
> 1.9.1
> 


* Re: [PATCH 09/22] xen/arm: p2m: Use a whitelist rather than blacklist in get_page_from_gfn
  2016-07-20 16:10 ` [PATCH 09/22] xen/arm: p2m: Use a whitelist rather than blacklist in get_page_from_gfn Julien Grall
@ 2016-07-26 22:44   ` Stefano Stabellini
  2016-07-27  9:59     ` Julien Grall
  0 siblings, 1 reply; 80+ messages in thread
From: Stefano Stabellini @ 2016-07-26 22:44 UTC (permalink / raw)
  To: Julien Grall; +Cc: proskurin, sstabellini, steve.capper, wei.chen, xen-devel

On Wed, 20 Jul 2016, Julien Grall wrote:
> Currently, the check in get_page_from_gfn is using a blacklist. This is
> very fragile because we may forget to update the check when a new p2m
> type is added.
> 
> To avoid any possible issue, use a whitelist. All types backed by a RAM
> page could potentially be valid. The check is borrowed from x86.
> 
> Signed-off-by: Julien Grall <julien.grall@arm.com>
> ---
>  xen/include/asm-arm/p2m.h | 9 ++++++++-
>  1 file changed, 8 insertions(+), 1 deletion(-)
> 
> diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
> index 3091c04..78d37ab 100644
> --- a/xen/include/asm-arm/p2m.h
> +++ b/xen/include/asm-arm/p2m.h
> @@ -104,9 +104,16 @@ typedef enum {
>  #define P2M_RAM_TYPES (p2m_to_mask(p2m_ram_rw) |        \
>                         p2m_to_mask(p2m_ram_ro))
>  
> +/* Grant mapping types, which map to a real frame in another VM */
> +#define P2M_GRANT_TYPES (p2m_to_mask(p2m_grant_map_rw) |  \
> +                         p2m_to_mask(p2m_grant_map_ro))
> +
>  /* Useful predicates */
>  #define p2m_is_ram(_t) (p2m_to_mask(_t) & P2M_RAM_TYPES)
>  #define p2m_is_foreign(_t) (p2m_to_mask(_t) & p2m_to_mask(p2m_map_foreign))
> +#define p2m_is_any_ram(_t) (p2m_to_mask(_t) &                   \
> +                            (P2M_RAM_TYPES | P2M_GRANT_TYPES |  \
> +                             p2m_to_mask(p2m_map_foreign)))
>  
>  static inline
>  void p2m_mem_access_emulate_check(struct vcpu *v,
> @@ -224,7 +231,7 @@ static inline struct page_info *get_page_from_gfn(
>      if (t)
>          *t = p2mt;
>  
> -    if ( p2mt == p2m_invalid || p2mt == p2m_mmio_direct )
> +    if ( !p2m_is_any_ram(p2mt) )
>          return NULL;

What about the iommu mappings?


>      if ( !mfn_valid(mfn) )
> -- 
> 1.9.1
> 


* Re: [PATCH 10/22] xen/arm: p2m: Differentiate cacheable vs non-cacheable MMIO
  2016-07-20 16:10 ` [PATCH 10/22] xen/arm: p2m: Differentiate cacheable vs non-cacheable MMIO Julien Grall
@ 2016-07-26 22:47   ` Stefano Stabellini
  0 siblings, 0 replies; 80+ messages in thread
From: Stefano Stabellini @ 2016-07-26 22:47 UTC (permalink / raw)
  To: Julien Grall; +Cc: proskurin, sstabellini, steve.capper, wei.chen, xen-devel

On Wed, 20 Jul 2016, Julien Grall wrote:
> Currently, the p2m type p2m_mmio_direct is used to map in stage-2
> cacheable MMIO (via map_regions_rw_cache) and non-cacheable one (via
> map_mmio_regions). The p2m code is relying on the caller to give the
> correct memory attribute.
> 
> In a follow-up patch, the p2m code will rely on the p2m type to find the
> correct memory attribute. In preparation for this, introduce
> p2m_mmio_direct_nc and p2m_mmio_direct_c to differentiate the
> cacheability of the MMIO.
> 
> Signed-off-by: Julien Grall <julien.grall@arm.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


>  xen/arch/arm/p2m.c        | 7 ++++---
>  xen/include/asm-arm/p2m.h | 3 ++-
>  2 files changed, 6 insertions(+), 4 deletions(-)
> 
> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> index 99be9be..999de2b 100644
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -272,7 +272,8 @@ static void p2m_set_permission(lpae_t *e, p2m_type_t t, p2m_access_t a)
>      case p2m_iommu_map_rw:
>      case p2m_map_foreign:
>      case p2m_grant_map_rw:
> -    case p2m_mmio_direct:
> +    case p2m_mmio_direct_nc:
> +    case p2m_mmio_direct_c:
>          e->p2m.xn = 1;
>          e->p2m.write = 1;
>          break;
> @@ -1195,7 +1196,7 @@ int map_regions_rw_cache(struct domain *d,
>                           mfn_t mfn)
>  {
>      return p2m_insert_mapping(d, gfn, nr, mfn,
> -                              MATTR_MEM, p2m_mmio_direct);
> +                              MATTR_MEM, p2m_mmio_direct_c);
>  }
>  
>  int unmap_regions_rw_cache(struct domain *d,
> @@ -1212,7 +1213,7 @@ int map_mmio_regions(struct domain *d,
>                       mfn_t mfn)
>  {
>      return p2m_insert_mapping(d, start_gfn, nr, mfn,
> -                              MATTR_DEV, p2m_mmio_direct);
> +                              MATTR_DEV, p2m_mmio_direct_nc);
>  }
>  
>  int unmap_mmio_regions(struct domain *d,
> diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
> index 78d37ab..20a220ea 100644
> --- a/xen/include/asm-arm/p2m.h
> +++ b/xen/include/asm-arm/p2m.h
> @@ -87,7 +87,8 @@ typedef enum {
>      p2m_invalid = 0,    /* Nothing mapped here */
>      p2m_ram_rw,         /* Normal read/write guest RAM */
>      p2m_ram_ro,         /* Read-only; writes are silently dropped */
> -    p2m_mmio_direct,    /* Read/write mapping of genuine MMIO area */
> +    p2m_mmio_direct_nc, /* Read/write mapping of genuine MMIO area non-cacheable */
> +    p2m_mmio_direct_c,  /* Read/write mapping of genuine MMIO area cacheable */
>      p2m_map_foreign,    /* Ram pages from foreign domain */
>      p2m_grant_map_rw,   /* Read/write grant mapping */
>      p2m_grant_map_ro,   /* Read-only grant mapping */
> -- 
> 1.9.1
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 80+ messages in thread

* Re: [PATCH 11/22] xen/arm: p2m: Find the memory attributes based on the p2m type
  2016-07-20 16:10 ` [PATCH 11/22] xen/arm: p2m: Find the memory attributes based on the p2m type Julien Grall
@ 2016-07-27  0:41   ` Stefano Stabellini
  2016-07-27 17:15   ` Julien Grall
  1 sibling, 0 replies; 80+ messages in thread
From: Stefano Stabellini @ 2016-07-27  0:41 UTC (permalink / raw)
  To: Julien Grall; +Cc: proskurin, sstabellini, steve.capper, wei.chen, xen-devel

On Wed, 20 Jul 2016, Julien Grall wrote:
> Currently, mfn_to_p2m_entry is relying on the caller to provide the
> correct memory attribute and will deduce the shareability based on it.
> 
> Some of the callers, such as p2m_create_table, are using the same memory
> attribute regardless of the underlying p2m type. For instance, this will
> lead to changing the memory attribute from MATTR_DEV to MATTR_MEM when
> an MMIO superpage is shattered.
> 
> Furthermore, it makes it more difficult to support different shareability
> with the same memory attribute.
> 
> All the memory attributes can be deduced from the p2m type. This will
> simplify the code by dropping one parameter.
> 
> ---
>     I am not sure whether p2m_mmio_direct_c (cacheable MMIO) should use
>     the outer-shareability or inner-shareability. Any opinions?

I think you did the right thing by setting it to outer. Good work.

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
>  xen/arch/arm/p2m.c | 55 ++++++++++++++++++++++++------------------------------
>  1 file changed, 24 insertions(+), 31 deletions(-)
> 
> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> index 999de2b..2f50b4f 100644
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -325,8 +325,7 @@ static void p2m_set_permission(lpae_t *e, p2m_type_t t, p2m_access_t a)
>      }
>  }
>  
> -static lpae_t mfn_to_p2m_entry(mfn_t mfn, unsigned int mattr,
> -                               p2m_type_t t, p2m_access_t a)
> +static lpae_t mfn_to_p2m_entry(mfn_t mfn, p2m_type_t t, p2m_access_t a)
>  {
>      /*
>       * sh, xn and write bit will be defined in the following switches
> @@ -335,7 +334,6 @@ static lpae_t mfn_to_p2m_entry(mfn_t mfn, unsigned int mattr,
>      lpae_t e = (lpae_t) {
>          .p2m.af = 1,
>          .p2m.read = 1,
> -        .p2m.mattr = mattr,
>          .p2m.table = 1,
>          .p2m.valid = 1,
>          .p2m.type = t,
> @@ -343,18 +341,21 @@ static lpae_t mfn_to_p2m_entry(mfn_t mfn, unsigned int mattr,
>  
>      BUILD_BUG_ON(p2m_max_real_type > (1 << 4));
>  
> -    switch (mattr)
> +    switch ( t )
>      {
> -    case MATTR_MEM:
> -        e.p2m.sh = LPAE_SH_INNER;
> +    case p2m_mmio_direct_nc:
> +        e.p2m.mattr = MATTR_DEV;
> +        e.p2m.sh = LPAE_SH_OUTER;
>          break;
>  
> -    case MATTR_DEV:
> +    case p2m_mmio_direct_c:
> +        e.p2m.mattr = MATTR_MEM;
>          e.p2m.sh = LPAE_SH_OUTER;
>          break;
> +
>      default:
> -        BUG();
> -        break;
> +        e.p2m.mattr = MATTR_MEM;
> +        e.p2m.sh = LPAE_SH_INNER;
>      }
>  
>      p2m_set_permission(&e, t, a);
> @@ -421,7 +422,7 @@ static int p2m_create_table(struct domain *d, lpae_t *entry,
>           */
>           for ( i=0 ; i < LPAE_ENTRIES; i++ )
>           {
> -             pte = mfn_to_p2m_entry(mfn, MATTR_MEM, t, p2m->default_access);
> +             pte = mfn_to_p2m_entry(mfn, t, p2m->default_access);
>  
>               mfn = mfn_add(mfn, 1UL << (level_shift - LPAE_SHIFT));
>  
> @@ -445,7 +446,7 @@ static int p2m_create_table(struct domain *d, lpae_t *entry,
>  
>      unmap_domain_page(p);
>  
> -    pte = mfn_to_p2m_entry(_mfn(page_to_mfn(page)), MATTR_MEM, p2m_invalid,
> +    pte = mfn_to_p2m_entry(_mfn(page_to_mfn(page)), p2m_invalid,
>                             p2m->default_access);
>  
>      p2m_write_pte(entry, pte, flush_cache);
> @@ -666,7 +667,6 @@ static int apply_one_level(struct domain *d,
>                             paddr_t *addr,
>                             paddr_t *maddr,
>                             bool_t *flush,
> -                           int mattr,
>                             p2m_type_t t,
>                             p2m_access_t a)
>  {
> @@ -695,7 +695,7 @@ static int apply_one_level(struct domain *d,
>                  return rc;
>  
>              /* New mapping is superpage aligned, make it */
> -            pte = mfn_to_p2m_entry(_mfn(*maddr >> PAGE_SHIFT), mattr, t, a);
> +            pte = mfn_to_p2m_entry(_mfn(*maddr >> PAGE_SHIFT), t, a);
>              if ( level < 3 )
>                  pte.p2m.table = 0; /* Superpage entry */
>  
> @@ -915,7 +915,6 @@ static int apply_p2m_changes(struct domain *d,
>                       gfn_t sgfn,
>                       unsigned long nr,
>                       mfn_t smfn,
> -                     int mattr,
>                       uint32_t mask,
>                       p2m_type_t t,
>                       p2m_access_t a)
> @@ -1054,7 +1053,7 @@ static int apply_p2m_changes(struct domain *d,
>                                    level, flush_pt, op,
>                                    start_gpaddr, end_gpaddr,
>                                    &addr, &maddr, &flush,
> -                                  mattr, t, a);
> +                                  t, a);
>              if ( ret < 0 ) { rc = ret ; goto out; }
>              count += ret;
>  
> @@ -1163,7 +1162,7 @@ out:
>           * mapping.
>           */
>          apply_p2m_changes(d, REMOVE, sgfn, gfn - gfn_x(sgfn), smfn,
> -                          mattr, 0, p2m_invalid, d->arch.p2m.default_access);
> +                          0, p2m_invalid, d->arch.p2m.default_access);
>      }
>  
>      return rc;
> @@ -1173,10 +1172,10 @@ static inline int p2m_insert_mapping(struct domain *d,
>                                       gfn_t start_gfn,
>                                       unsigned long nr,
>                                       mfn_t mfn,
> -                                     int mattr, p2m_type_t t)
> +                                     p2m_type_t t)
>  {
>      return apply_p2m_changes(d, INSERT, start_gfn, nr, mfn,
> -                             mattr, 0, t, d->arch.p2m.default_access);
> +                             0, t, d->arch.p2m.default_access);
>  }
>  
>  static inline int p2m_remove_mapping(struct domain *d,
> @@ -1186,8 +1185,7 @@ static inline int p2m_remove_mapping(struct domain *d,
>  {
>      return apply_p2m_changes(d, REMOVE, start_gfn, nr, mfn,
>                               /* arguments below not used when removing mapping */
> -                             MATTR_MEM, 0, p2m_invalid,
> -                             d->arch.p2m.default_access);
> +                             0, p2m_invalid, d->arch.p2m.default_access);
>  }
>  
>  int map_regions_rw_cache(struct domain *d,
> @@ -1195,8 +1193,7 @@ int map_regions_rw_cache(struct domain *d,
>                           unsigned long nr,
>                           mfn_t mfn)
>  {
> -    return p2m_insert_mapping(d, gfn, nr, mfn,
> -                              MATTR_MEM, p2m_mmio_direct_c);
> +    return p2m_insert_mapping(d, gfn, nr, mfn, p2m_mmio_direct_c);
>  }
>  
>  int unmap_regions_rw_cache(struct domain *d,
> @@ -1212,8 +1209,7 @@ int map_mmio_regions(struct domain *d,
>                       unsigned long nr,
>                       mfn_t mfn)
>  {
> -    return p2m_insert_mapping(d, start_gfn, nr, mfn,
> -                              MATTR_DEV, p2m_mmio_direct_nc);
> +    return p2m_insert_mapping(d, start_gfn, nr, mfn, p2m_mmio_direct_nc);
>  }
>  
>  int unmap_mmio_regions(struct domain *d,
> @@ -1251,8 +1247,7 @@ int guest_physmap_add_entry(struct domain *d,
>                              unsigned long page_order,
>                              p2m_type_t t)
>  {
> -    return p2m_insert_mapping(d, gfn, (1 << page_order), mfn,
> -                              MATTR_MEM, t);
> +    return p2m_insert_mapping(d, gfn, (1 << page_order), mfn, t);
>  }
>  
>  void guest_physmap_remove_page(struct domain *d,
> @@ -1412,7 +1407,7 @@ int relinquish_p2m_mapping(struct domain *d)
>      nr = gfn_x(p2m->max_mapped_gfn) - gfn_x(p2m->lowest_mapped_gfn);
>  
>      return apply_p2m_changes(d, RELINQUISH, p2m->lowest_mapped_gfn, nr,
> -                             INVALID_MFN, MATTR_MEM, 0, p2m_invalid,
> +                             INVALID_MFN, 0, p2m_invalid,
>                               d->arch.p2m.default_access);
>  }
>  
> @@ -1425,8 +1420,7 @@ int p2m_cache_flush(struct domain *d, gfn_t start, unsigned long nr)
>      end = gfn_min(end, p2m->max_mapped_gfn);
>  
>      return apply_p2m_changes(d, CACHEFLUSH, start, nr, INVALID_MFN,
> -                             MATTR_MEM, 0, p2m_invalid,
> -                             d->arch.p2m.default_access);
> +                             0, p2m_invalid, d->arch.p2m.default_access);
>  }
>  
>  mfn_t gfn_to_mfn(struct domain *d, gfn_t gfn)
> @@ -1827,8 +1821,7 @@ long p2m_set_mem_access(struct domain *d, gfn_t gfn, uint32_t nr,
>      }
>  
>      rc = apply_p2m_changes(d, MEMACCESS, gfn_add(gfn, start),
> -                           (nr - start), INVALID_MFN,
> -                           MATTR_MEM, mask, 0, a);
> +                           (nr - start), INVALID_MFN, mask, 0, a);
>      if ( rc < 0 )
>          return rc;
>      else if ( rc > 0 )
> -- 
> 1.9.1
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 80+ messages in thread

* Re: [PATCH 12/22] xen/arm: p2m: Remove unnecessary locking
  2016-07-20 16:10 ` [PATCH 12/22] xen/arm: p2m: Remove unnecessary locking Julien Grall
@ 2016-07-27  0:47   ` Stefano Stabellini
  0 siblings, 0 replies; 80+ messages in thread
From: Stefano Stabellini @ 2016-07-27  0:47 UTC (permalink / raw)
  To: Julien Grall; +Cc: proskurin, sstabellini, steve.capper, wei.chen, xen-devel

On Wed, 20 Jul 2016, Julien Grall wrote:
> The p2m is not yet in use when p2m_init and p2m_alloc_table are
> called. Furthermore, the p2m is not used anymore when p2m_teardown is
> called. So taking the p2m lock is not necessary.
> 
> Signed-off-by: Julien Grall <julien.grall@arm.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


>  xen/arch/arm/p2m.c | 14 +-------------
>  1 file changed, 1 insertion(+), 13 deletions(-)
> 
> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> index 2f50b4f..4c279dc 100644
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -1267,8 +1267,6 @@ int p2m_alloc_table(struct domain *d)
>      if ( page == NULL )
>          return -ENOMEM;
>  
> -    spin_lock(&p2m->lock);
> -
>      /* Clear both first level pages */
>      for ( i = 0; i < P2M_ROOT_PAGES; i++ )
>          clear_and_clean_page(page + i);
> @@ -1284,8 +1282,6 @@ int p2m_alloc_table(struct domain *d)
>       */
>      flush_tlb_domain(d);
>  
> -    spin_unlock(&p2m->lock);
> -
>      return 0;
>  }
>  
> @@ -1350,8 +1346,6 @@ void p2m_teardown(struct domain *d)
>      struct p2m_domain *p2m = &d->arch.p2m;
>      struct page_info *pg;
>  
> -    spin_lock(&p2m->lock);
> -
>      while ( (pg = page_list_remove_head(&p2m->pages)) )
>          free_domheap_page(pg);
>  
> @@ -1363,8 +1357,6 @@ void p2m_teardown(struct domain *d)
>      p2m_free_vmid(d);
>  
>      radix_tree_destroy(&p2m->mem_access_settings, NULL);
> -
> -    spin_unlock(&p2m->lock);
>  }
>  
>  int p2m_init(struct domain *d)
> @@ -1375,12 +1367,11 @@ int p2m_init(struct domain *d)
>      spin_lock_init(&p2m->lock);
>      INIT_PAGE_LIST_HEAD(&p2m->pages);
>  
> -    spin_lock(&p2m->lock);
>      p2m->vmid = INVALID_VMID;
>  
>      rc = p2m_alloc_vmid(d);
>      if ( rc != 0 )
> -        goto err;
> +        return rc;
>  
>      d->arch.vttbr = 0;
>  
> @@ -1393,9 +1384,6 @@ int p2m_init(struct domain *d)
>      p2m->mem_access_enabled = false;
>      radix_tree_init(&p2m->mem_access_settings);
>  
> -err:
> -    spin_unlock(&p2m->lock);
> -
>      return rc;
>  }
>  
> -- 
> 1.9.1
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 80+ messages in thread

* Re: [PATCH 13/22] xen/arm: p2m: Introduce p2m_{read, write}_{, un}lock helpers
  2016-07-20 16:10 ` [PATCH 13/22] xen/arm: p2m: Introduce p2m_{read, write}_{, un}lock helpers Julien Grall
@ 2016-07-27  0:50   ` Stefano Stabellini
  0 siblings, 0 replies; 80+ messages in thread
From: Stefano Stabellini @ 2016-07-27  0:50 UTC (permalink / raw)
  To: Julien Grall; +Cc: proskurin, sstabellini, steve.capper, wei.chen, xen-devel

On Wed, 20 Jul 2016, Julien Grall wrote:
> Some functions in the p2m code do not need to modify the p2m.
> Document this by introducing separate helpers to lock the p2m.
> 
> This patch does not change the lock. This will be done in a subsequent
> patch.
> 
> Signed-off-by: Julien Grall <julien.grall@arm.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


>  xen/arch/arm/p2m.c | 49 +++++++++++++++++++++++++++++++++++++------------
>  1 file changed, 37 insertions(+), 12 deletions(-)
> 
> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> index 4c279dc..d74c249 100644
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -47,11 +47,36 @@ static bool_t p2m_mapping(lpae_t pte)
>      return p2m_valid(pte) && !pte.p2m.table;
>  }
>  
> +static inline void p2m_write_lock(struct p2m_domain *p2m)
> +{
> +    spin_lock(&p2m->lock);
> +}
> +
> +static inline void p2m_write_unlock(struct p2m_domain *p2m)
> +{
> +    spin_unlock(&p2m->lock);
> +}
> +
> +static inline void p2m_read_lock(struct p2m_domain *p2m)
> +{
> +    spin_lock(&p2m->lock);
> +}
> +
> +static inline void p2m_read_unlock(struct p2m_domain *p2m)
> +{
> +    spin_unlock(&p2m->lock);
> +}
> +
> +static inline int p2m_is_locked(struct p2m_domain *p2m)
> +{
> +    return spin_is_locked(&p2m->lock);
> +}
> +
>  void p2m_dump_info(struct domain *d)
>  {
>      struct p2m_domain *p2m = &d->arch.p2m;
>  
> -    spin_lock(&p2m->lock);
> +    p2m_read_lock(p2m);
>      printk("p2m mappings for domain %d (vmid %d):\n",
>             d->domain_id, p2m->vmid);
>      BUG_ON(p2m->stats.mappings[0] || p2m->stats.shattered[0]);
> @@ -60,7 +85,7 @@ void p2m_dump_info(struct domain *d)
>      printk("  2M mappings: %ld (shattered %ld)\n",
>             p2m->stats.mappings[2], p2m->stats.shattered[2]);
>      printk("  4K mappings: %ld\n", p2m->stats.mappings[3]);
> -    spin_unlock(&p2m->lock);
> +    p2m_read_unlock(p2m);
>  }
>  
>  void memory_type_changed(struct domain *d)
> @@ -166,7 +191,7 @@ static mfn_t __p2m_lookup(struct domain *d, gfn_t gfn, p2m_type_t *t)
>      p2m_type_t _t;
>      unsigned int level, root_table;
>  
> -    ASSERT(spin_is_locked(&p2m->lock));
> +    ASSERT(p2m_is_locked(p2m));
>      BUILD_BUG_ON(THIRD_MASK != PAGE_MASK);
>  
>      /* Allow t to be NULL */
> @@ -233,9 +258,9 @@ mfn_t p2m_lookup(struct domain *d, gfn_t gfn, p2m_type_t *t)
>      mfn_t ret;
>      struct p2m_domain *p2m = &d->arch.p2m;
>  
> -    spin_lock(&p2m->lock);
> +    p2m_read_lock(p2m);
>      ret = __p2m_lookup(d, gfn, t);
> -    spin_unlock(&p2m->lock);
> +    p2m_read_unlock(p2m);
>  
>      return ret;
>  }
> @@ -476,7 +501,7 @@ static int __p2m_get_mem_access(struct domain *d, gfn_t gfn,
>  #undef ACCESS
>      };
>  
> -    ASSERT(spin_is_locked(&p2m->lock));
> +    ASSERT(p2m_is_locked(p2m));
>  
>      /* If no setting was ever set, just return rwx. */
>      if ( !p2m->mem_access_enabled )
> @@ -945,7 +970,7 @@ static int apply_p2m_changes(struct domain *d,
>       */
>      flush_pt = iommu_enabled && !iommu_has_feature(d, IOMMU_FEAT_COHERENT_WALK);
>  
> -    spin_lock(&p2m->lock);
> +    p2m_write_lock(p2m);
>  
>      /* Static mapping. P2M_ROOT_PAGES > 1 are handled below */
>      if ( P2M_ROOT_PAGES == 1 )
> @@ -1149,7 +1174,7 @@ out:
>              unmap_domain_page(mappings[level]);
>      }
>  
> -    spin_unlock(&p2m->lock);
> +    p2m_write_unlock(p2m);
>  
>      if ( rc < 0 && ( op == INSERT ) &&
>           addr != start_gpaddr )
> @@ -1530,7 +1555,7 @@ struct page_info *get_page_from_gva(struct vcpu *v, vaddr_t va,
>      if ( v != current )
>          return NULL;
>  
> -    spin_lock(&p2m->lock);
> +    p2m_read_lock(p2m);
>  
>      rc = gvirt_to_maddr(va, &maddr, flags);
>  
> @@ -1550,7 +1575,7 @@ err:
>      if ( !page && p2m->mem_access_enabled )
>          page = p2m_mem_access_check_and_get_page(va, flags);
>  
> -    spin_unlock(&p2m->lock);
> +    p2m_read_unlock(p2m);
>  
>      return page;
>  }
> @@ -1824,9 +1849,9 @@ int p2m_get_mem_access(struct domain *d, gfn_t gfn,
>      int ret;
>      struct p2m_domain *p2m = p2m_get_hostp2m(d);
>  
> -    spin_lock(&p2m->lock);
> +    p2m_read_lock(p2m);
>      ret = __p2m_get_mem_access(d, gfn, access);
> -    spin_unlock(&p2m->lock);
> +    p2m_read_unlock(p2m);
>  
>      return ret;
>  }
> -- 
> 1.9.1
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 80+ messages in thread

* Re: [PATCH 14/22] xen/arm: p2m: Switch the p2m lock from spinlock to rwlock
  2016-07-20 16:10 ` [PATCH 14/22] xen/arm: p2m: Switch the p2m lock from spinlock to rwlock Julien Grall
@ 2016-07-27  0:51   ` Stefano Stabellini
  0 siblings, 0 replies; 80+ messages in thread
From: Stefano Stabellini @ 2016-07-27  0:51 UTC (permalink / raw)
  To: Julien Grall; +Cc: proskurin, sstabellini, steve.capper, wei.chen, xen-devel

On Wed, 20 Jul 2016, Julien Grall wrote:
> P2M reads do not need to be serialized. Keeping them serialized will add
> contention when PV drivers are using multi-queue, because parallel grant
> map/unmap/copy operations will happen on the DomU's p2m.
> 
> Signed-off-by: Julien Grall <julien.grall@arm.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


>     I have not done a benchmark to verify the performance; however, a
>     rwlock is always an improvement compared to a spinlock when most of
>     the accesses only read data.
> 
>     It might be possible to convert the rwlock to a per-cpu rwlock, which
>     showed some improvement on x86.
> ---
>  xen/arch/arm/p2m.c        | 12 ++++++------
>  xen/include/asm-arm/p2m.h |  3 ++-
>  2 files changed, 8 insertions(+), 7 deletions(-)
> 
> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> index d74c249..6136767 100644
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -49,27 +49,27 @@ static bool_t p2m_mapping(lpae_t pte)
>  
>  static inline void p2m_write_lock(struct p2m_domain *p2m)
>  {
> -    spin_lock(&p2m->lock);
> +    write_lock(&p2m->lock);
>  }
>  
>  static inline void p2m_write_unlock(struct p2m_domain *p2m)
>  {
> -    spin_unlock(&p2m->lock);
> +    write_unlock(&p2m->lock);
>  }
>  
>  static inline void p2m_read_lock(struct p2m_domain *p2m)
>  {
> -    spin_lock(&p2m->lock);
> +    read_lock(&p2m->lock);
>  }
>  
>  static inline void p2m_read_unlock(struct p2m_domain *p2m)
>  {
> -    spin_unlock(&p2m->lock);
> +    read_unlock(&p2m->lock);
>  }
>  
>  static inline int p2m_is_locked(struct p2m_domain *p2m)
>  {
> -    return spin_is_locked(&p2m->lock);
> +    return rw_is_locked(&p2m->lock);
>  }
>  
>  void p2m_dump_info(struct domain *d)
> @@ -1389,7 +1389,7 @@ int p2m_init(struct domain *d)
>      struct p2m_domain *p2m = &d->arch.p2m;
>      int rc = 0;
>  
> -    spin_lock_init(&p2m->lock);
> +    rwlock_init(&p2m->lock);
>      INIT_PAGE_LIST_HEAD(&p2m->pages);
>  
>      p2m->vmid = INVALID_VMID;
> diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
> index 20a220ea..abda70c 100644
> --- a/xen/include/asm-arm/p2m.h
> +++ b/xen/include/asm-arm/p2m.h
> @@ -3,6 +3,7 @@
>  
>  #include <xen/mm.h>
>  #include <xen/radix-tree.h>
> +#include <xen/rwlock.h>
>  #include <public/vm_event.h> /* for vm_event_response_t */
>  #include <public/memory.h>
>  #include <xen/p2m-common.h>
> @@ -20,7 +21,7 @@ extern void memory_type_changed(struct domain *);
>  /* Per-p2m-table state */
>  struct p2m_domain {
>      /* Lock that protects updates to the p2m */
> -    spinlock_t lock;
> +    rwlock_t lock;
>  
>      /* Pages used to construct the p2m */
>      struct page_list_head pages;
> -- 
> 1.9.1
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 80+ messages in thread

* Re: [PATCH 15/22] xen/arm: Don't call p2m_alloc_table from arch_domain_create
  2016-07-20 16:10 ` [PATCH 15/22] xen/arm: Don't call p2m_alloc_table from arch_domain_create Julien Grall
  2016-07-22  8:32   ` Sergej Proskurin
@ 2016-07-27  0:54   ` Stefano Stabellini
  1 sibling, 0 replies; 80+ messages in thread
From: Stefano Stabellini @ 2016-07-27  0:54 UTC (permalink / raw)
  To: Julien Grall; +Cc: proskurin, sstabellini, steve.capper, wei.chen, xen-devel

On Wed, 20 Jul 2016, Julien Grall wrote:
> The p2m root table does not need to be allocated separately.
> 
> Also remove unnecessary fields initialization as the structure is already
> memset to 0 and the fields will be override by p2m_alloc_table.
                                      ^ overridden


> Signed-off-by: Julien Grall <julien.grall@arm.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


>  xen/arch/arm/domain.c     | 3 ---
>  xen/arch/arm/p2m.c        | 8 +++-----
>  xen/include/asm-arm/p2m.h | 7 -------
>  3 files changed, 3 insertions(+), 15 deletions(-)
> 
> diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
> index 61fc08e..688adec 100644
> --- a/xen/arch/arm/domain.c
> +++ b/xen/arch/arm/domain.c
> @@ -572,9 +572,6 @@ int arch_domain_create(struct domain *d, unsigned int domcr_flags,
>      if ( (rc = domain_io_init(d)) != 0 )
>          goto fail;
>  
> -    if ( (rc = p2m_alloc_table(d)) != 0 )
> -        goto fail;
> -
>      switch ( config->gic_version )
>      {
>      case XEN_DOMCTL_CONFIG_GIC_NATIVE:
> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> index 6136767..c407e6a 100644
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -1282,7 +1282,7 @@ void guest_physmap_remove_page(struct domain *d,
>      p2m_remove_mapping(d, gfn, (1 << page_order), mfn);
>  }
>  
> -int p2m_alloc_table(struct domain *d)
> +static int p2m_alloc_table(struct domain *d)
>  {
>      struct p2m_domain *p2m = &d->arch.p2m;
>      struct page_info *page;
> @@ -1398,10 +1398,6 @@ int p2m_init(struct domain *d)
>      if ( rc != 0 )
>          return rc;
>  
> -    d->arch.vttbr = 0;
> -
> -    p2m->root = NULL;
> -
>      p2m->max_mapped_gfn = _gfn(0);
>      p2m->lowest_mapped_gfn = _gfn(ULONG_MAX);
>  
> @@ -1409,6 +1405,8 @@ int p2m_init(struct domain *d)
>      p2m->mem_access_enabled = false;
>      radix_tree_init(&p2m->mem_access_settings);
>  
> +    rc = p2m_alloc_table(d);
> +
>      return rc;
>  }
>  
> diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
> index abda70c..ce28e8a 100644
> --- a/xen/include/asm-arm/p2m.h
> +++ b/xen/include/asm-arm/p2m.h
> @@ -149,13 +149,6 @@ void p2m_teardown(struct domain *d);
>   */
>  int relinquish_p2m_mapping(struct domain *d);
>  
> -/*
> - * Allocate a new p2m table for a domain.
> - *
> - * Returns 0 for success or -errno.
> - */
> -int p2m_alloc_table(struct domain *d);
> -
>  /* Context switch */
>  void p2m_save_state(struct vcpu *p);
>  void p2m_restore_state(struct vcpu *n);
> -- 
> 1.9.1
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 80+ messages in thread

* Re: [PATCH 16/22] xen/arm: p2m: Move the vttbr field from arch_domain to p2m_domain
  2016-07-20 16:10 ` [PATCH 16/22] xen/arm: p2m: Move the vttbr field from arch_domain to p2m_domain Julien Grall
  2016-07-22  7:46   ` Sergej Proskurin
@ 2016-07-27  0:57   ` Stefano Stabellini
  2016-07-27 10:00     ` Julien Grall
  2016-07-27 17:19   ` Julien Grall
  2 siblings, 1 reply; 80+ messages in thread
From: Stefano Stabellini @ 2016-07-27  0:57 UTC (permalink / raw)
  To: Julien Grall; +Cc: proskurin, sstabellini, steve.capper, wei.chen, xen-devel

On Wed, 20 Jul 2016, Julien Grall wrote:
> The field vttbr holds the base address of the translation table for
> guest. Its value will depends on how the p2m has been initialized and
> will only be used by the code code.
                           ^ code code?


> So move the field from arch_domain to p2m_domain. This will also ease
> the implementation of altp2m.
> ---
>  xen/arch/arm/p2m.c           | 11 +++++++----
>  xen/arch/arm/traps.c         |  2 +-
>  xen/include/asm-arm/domain.h |  1 -
>  xen/include/asm-arm/p2m.h    |  3 +++
>  4 files changed, 11 insertions(+), 6 deletions(-)
> 
> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> index c407e6a..c52081a 100644
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -107,10 +107,14 @@ void dump_p2m_lookup(struct domain *d, paddr_t addr)
>  
>  static void p2m_load_VTTBR(struct domain *d)
>  {
> +    struct p2m_domain *p2m = &d->arch.p2m;
> +
>      if ( is_idle_domain(d) )
>          return;
> -    BUG_ON(!d->arch.vttbr);
> -    WRITE_SYSREG64(d->arch.vttbr, VTTBR_EL2);
> +
> +    ASSERT(p2m->vttbr);
> +
> +    WRITE_SYSREG64(p2m->vttbr, VTTBR_EL2);
>      isb(); /* Ensure update is visible */
>  }
>  
> @@ -1298,8 +1302,7 @@ static int p2m_alloc_table(struct domain *d)
>  
>      p2m->root = page;
>  
> -    d->arch.vttbr = page_to_maddr(p2m->root)
> -        | ((uint64_t)p2m->vmid&0xff)<<48;
> +    p2m->vttbr = page_to_maddr(p2m->root) | ((uint64_t)p2m->vmid & 0xff) << 48;
>  
>      /*
>       * Make sure that all TLBs corresponding to the new VMID are flushed
> diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
> index 06a8ee5..65c6fb4 100644
> --- a/xen/arch/arm/traps.c
> +++ b/xen/arch/arm/traps.c
> @@ -880,7 +880,7 @@ void vcpu_show_registers(const struct vcpu *v)
>      ctxt.ifsr32_el2 = v->arch.ifsr;
>  #endif
>  
> -    ctxt.vttbr_el2 = v->domain->arch.vttbr;
> +    ctxt.vttbr_el2 = v->domain->arch.p2m.vttbr;
>  
>      _show_registers(&v->arch.cpu_info->guest_cpu_user_regs, &ctxt, 1, v);
>  }
> diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
> index 4e9d8bf..9452fcd 100644
> --- a/xen/include/asm-arm/domain.h
> +++ b/xen/include/asm-arm/domain.h
> @@ -48,7 +48,6 @@ struct arch_domain
>  
>      /* Virtual MMU */
>      struct p2m_domain p2m;
> -    uint64_t vttbr;
>  
>      struct hvm_domain hvm_domain;
>      gfn_t *grant_table_gfn;
> diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
> index ce28e8a..53c4d78 100644
> --- a/xen/include/asm-arm/p2m.h
> +++ b/xen/include/asm-arm/p2m.h
> @@ -32,6 +32,9 @@ struct p2m_domain {
>      /* Current VMID in use */
>      uint8_t vmid;
>  
> +    /* Current Translation Table Base Register for the p2m */
> +    uint64_t vttbr;
> +
>      /*
>       * Highest guest frame that's ever been mapped in the p2m
>       * Only takes into account ram and foreign mapping
> -- 
> 1.9.1
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 80+ messages in thread

* Re: [PATCH 17/22] xen/arm: p2m: Don't need to restore the state for an idle vCPU.
  2016-07-20 16:10 ` [PATCH 17/22] xen/arm: p2m: Don't need to restore the state for an idle vCPU Julien Grall
  2016-07-22  7:37   ` Sergej Proskurin
@ 2016-07-27  1:05   ` Stefano Stabellini
  1 sibling, 0 replies; 80+ messages in thread
From: Stefano Stabellini @ 2016-07-27  1:05 UTC (permalink / raw)
  To: Julien Grall; +Cc: proskurin, sstabellini, steve.capper, wei.chen, xen-devel

On Wed, 20 Jul 2016, Julien Grall wrote:
> The function p2m_restore_state could be called with an idle vCPU as
> argument (when called by construct_dom0). However, we will never return
> to EL0/EL1 in this case, so it is not necessary to restore the p2m
> registers.
> 
> Signed-off-by: Julien Grall <julien.grall@arm.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


>  xen/arch/arm/p2m.c | 3 +++
>  1 file changed, 3 insertions(+)
> 
> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> index c52081a..d1b6009 100644
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -127,6 +127,9 @@ void p2m_restore_state(struct vcpu *n)
>  {
>      register_t hcr;
>  
> +    if ( is_idle_vcpu(n) )
> +        return;
> +
>      hcr = READ_SYSREG(HCR_EL2);
>      WRITE_SYSREG(hcr & ~HCR_VM, HCR_EL2);
>      isb();
> -- 
> 1.9.1
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 80+ messages in thread

* Re: [PATCH 18/22] xen/arm: p2m: Rework the context switch to another VTTBR in flush_tlb_domain
  2016-07-20 16:11 ` [PATCH 18/22] xen/arm: p2m: Rework the context switch to another VTTBR in flush_tlb_domain Julien Grall
  2016-07-22  7:51   ` Sergej Proskurin
@ 2016-07-27  1:12   ` Stefano Stabellini
  2016-07-27 10:22     ` Julien Grall
  1 sibling, 1 reply; 80+ messages in thread
From: Stefano Stabellini @ 2016-07-27  1:12 UTC (permalink / raw)
  To: Julien Grall; +Cc: proskurin, sstabellini, steve.capper, wei.chen, xen-devel

On Wed, 20 Jul 2016, Julien Grall wrote:
> The current implementation of flush_tlb_domain is relying on the domain
> to have a single p2m. With the upcoming feature altp2m, a single domain
> may have several p2ms. So we would need to switch to the correct p2m in
> order to flush the TLBs.
> 
> Rather than checking whether the domain is not the current domain, check
> whether the VTTBR is different. The resulting assembly code is much
> smaller: from 38 instructions (+ 2 function calls) to 22 instructions.

That's true but SYSREG reads are more expensive than regular
instructions.


> Signed-off-by: Julien Grall <julien.grall@arm.com>
> ---
>  xen/arch/arm/p2m.c | 18 +++++++++++-------
>  1 file changed, 11 insertions(+), 7 deletions(-)
> 
> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> index d1b6009..015c1e8 100644
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -151,24 +151,28 @@ void p2m_restore_state(struct vcpu *n)
>  
>  void flush_tlb_domain(struct domain *d)
>  {
> +    struct p2m_domain *p2m = &d->arch.p2m;
>      unsigned long flags = 0;
> +    uint64_t ovttbr;
>  
>      /*
> -     * Update the VTTBR if necessary with the domain d. In this case,
> -     * it's only necessary to flush TLBs on every CPUs with the current VMID
> -     * (our domain).
> +     * ARM only provides an instruction to flush TLBs for the current
> +     * VMID. So switch to the VTTBR of a given P2M if different.
>       */
> -    if ( d != current->domain )
> +    ovttbr = READ_SYSREG64(VTTBR_EL2);
> +    if ( ovttbr != p2m->vttbr )
>      {
>          local_irq_save(flags);
> -        p2m_load_VTTBR(d);
> +        WRITE_SYSREG64(p2m->vttbr, VTTBR_EL2);
> +        isb();
>      }
>  
>      flush_tlb();
>  
> -    if ( d != current->domain )
> +    if ( ovttbr != READ_SYSREG64(VTTBR_EL2) )

You should be able to remove this second SYSREG read and optimize the
code further.


>      {
> -        p2m_load_VTTBR(current->domain);
> +        WRITE_SYSREG64(ovttbr, VTTBR_EL2);
> +        isb();
>          local_irq_restore(flags);
>      }
>  }
> -- 
> 1.9.1
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 80+ messages in thread

* Re: [PATCH 19/22] xen/arm: p2m: Inline p2m_load_VTTBR into p2m_restore_state
  2016-07-20 16:11 ` [PATCH 19/22] xen/arm: p2m: Inline p2m_load_VTTBR into p2m_restore_state Julien Grall
  2016-07-22  8:07   ` Sergej Proskurin
@ 2016-07-27  1:13   ` Stefano Stabellini
  1 sibling, 0 replies; 80+ messages in thread
From: Stefano Stabellini @ 2016-07-27  1:13 UTC (permalink / raw)
  To: Julien Grall; +Cc: proskurin, sstabellini, steve.capper, wei.chen, xen-devel

On Wed, 20 Jul 2016, Julien Grall wrote:
> p2m_restore_state is the last caller of p2m_load_VTTBR and already
> checks that the vCPU does not belong to the idle domain.
> 
> Note that it is likely possible to remove some of the isbs in
> p2m_restore_state; however, this is not the purpose of this patch, so
> the numerous isbs have been left.
> 
> Signed-off-by: Julien Grall <julien.grall@arm.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


>  xen/arch/arm/p2m.c | 16 ++--------------
>  1 file changed, 2 insertions(+), 14 deletions(-)
> 
> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> index 015c1e8..c756e0c 100644
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -105,19 +105,6 @@ void dump_p2m_lookup(struct domain *d, paddr_t addr)
>                   P2M_ROOT_LEVEL, P2M_ROOT_PAGES);
>  }
>  
> -static void p2m_load_VTTBR(struct domain *d)
> -{
> -    struct p2m_domain *p2m = &d->arch.p2m;
> -
> -    if ( is_idle_domain(d) )
> -        return;
> -
> -    ASSERT(p2m->vttbr);
> -
> -    WRITE_SYSREG64(p2m->vttbr, VTTBR_EL2);
> -    isb(); /* Ensure update is visible */
> -}
> -
>  void p2m_save_state(struct vcpu *p)
>  {
>      p->arch.sctlr = READ_SYSREG(SCTLR_EL1);
> @@ -126,6 +113,7 @@ void p2m_save_state(struct vcpu *p)
>  void p2m_restore_state(struct vcpu *n)
>  {
>      register_t hcr;
> +    struct p2m_domain *p2m = &n->domain->arch.p2m;
>  
>      if ( is_idle_vcpu(n) )
>          return;
> @@ -134,7 +122,7 @@ void p2m_restore_state(struct vcpu *n)
>      WRITE_SYSREG(hcr & ~HCR_VM, HCR_EL2);
>      isb();
>  
> -    p2m_load_VTTBR(n->domain);
> +    WRITE_SYSREG64(p2m->vttbr, VTTBR_EL2);
>      isb();
>  
>      if ( is_32bit_domain(n->domain) )
> -- 
> 1.9.1
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 80+ messages in thread

* Re: [PATCH 20/22] xen/arm: Don't export flush_tlb_domain
  2016-07-20 16:11 ` [PATCH 20/22] xen/arm: Don't export flush_tlb_domain Julien Grall
  2016-07-22  8:54   ` Sergej Proskurin
@ 2016-07-27  1:14   ` Stefano Stabellini
  1 sibling, 0 replies; 80+ messages in thread
From: Stefano Stabellini @ 2016-07-27  1:14 UTC (permalink / raw)
  To: Julien Grall; +Cc: proskurin, sstabellini, steve.capper, wei.chen, xen-devel

On Wed, 20 Jul 2016, Julien Grall wrote:
> The function flush_tlb_domain is not used outside of the file where it
> has been declared.
> 
> Signed-off-by: Julien Grall <julien.grall@arm.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


>  xen/arch/arm/p2m.c             | 2 +-
>  xen/include/asm-arm/flushtlb.h | 3 ---
>  2 files changed, 1 insertion(+), 4 deletions(-)
> 
> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> index c756e0c..8541171 100644
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -137,7 +137,7 @@ void p2m_restore_state(struct vcpu *n)
>      isb();
>  }
>  
> -void flush_tlb_domain(struct domain *d)
> +static void flush_tlb_domain(struct domain *d)
>  {
>      struct p2m_domain *p2m = &d->arch.p2m;
>      unsigned long flags = 0;
> diff --git a/xen/include/asm-arm/flushtlb.h b/xen/include/asm-arm/flushtlb.h
> index c986b3f..329fbb4 100644
> --- a/xen/include/asm-arm/flushtlb.h
> +++ b/xen/include/asm-arm/flushtlb.h
> @@ -25,9 +25,6 @@ do {                                                                    \
>  /* Flush specified CPUs' TLBs */
>  void flush_tlb_mask(const cpumask_t *mask);
>  
> -/* Flush CPU's TLBs for the specified domain */
> -void flush_tlb_domain(struct domain *d);
> -
>  #endif /* __ASM_ARM_FLUSHTLB_H__ */
>  /*
>   * Local variables:
> -- 
> 1.9.1
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 80+ messages in thread

* Re: [PATCH 21/22] xen/arm: p2m: Replace flush_tlb_domain by p2m_flush_tlb
  2016-07-20 16:11 ` [PATCH 21/22] xen/arm: p2m: Replace flush_tlb_domain by p2m_flush_tlb Julien Grall
@ 2016-07-27  1:15   ` Stefano Stabellini
  0 siblings, 0 replies; 80+ messages in thread
From: Stefano Stabellini @ 2016-07-27  1:15 UTC (permalink / raw)
  To: Julien Grall; +Cc: proskurin, sstabellini, steve.capper, wei.chen, xen-devel

On Wed, 20 Jul 2016, Julien Grall wrote:
> The function to flush the TLBs for a given p2m does not need to know about
> the domain. So pass the p2m directly as a parameter.
> 
> At the same time rename the function to p2m_flush_tlb to match the
> parameter change.
> 
> Signed-off-by: Julien Grall <julien.grall@arm.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


>  xen/arch/arm/p2m.c | 7 +++----
>  1 file changed, 3 insertions(+), 4 deletions(-)
> 
> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> index 8541171..5511d25 100644
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -137,9 +137,8 @@ void p2m_restore_state(struct vcpu *n)
>      isb();
>  }
>  
> -static void flush_tlb_domain(struct domain *d)
> +static void p2m_flush_tlb(struct p2m_domain *p2m)
>  {
> -    struct p2m_domain *p2m = &d->arch.p2m;
>      unsigned long flags = 0;
>      uint64_t ovttbr;
>  
> @@ -1158,7 +1157,7 @@ static int apply_p2m_changes(struct domain *d,
>  out:
>      if ( flush )
>      {
> -        flush_tlb_domain(d);
> +        p2m_flush_tlb(&d->arch.p2m);
>          ret = iommu_iotlb_flush(d, gfn_x(sgfn), nr);
>          if ( !rc )
>              rc = ret;
> @@ -1303,7 +1302,7 @@ static int p2m_alloc_table(struct domain *d)
>       * Make sure that all TLBs corresponding to the new VMID are flushed
>       * before using it
>       */
> -    flush_tlb_domain(d);
> +    p2m_flush_tlb(p2m);
>  
>      return 0;
>  }
> -- 
> 1.9.1
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 80+ messages in thread

* Re: [PATCH 22/22] xen/arm: p2m: Pass the p2m in parameter rather the domain when it is possible
  2016-07-20 16:11 ` [PATCH 22/22] xen/arm: p2m: Pass the p2m in parameter rather the domain when it is possible Julien Grall
@ 2016-07-27  1:15   ` Stefano Stabellini
  0 siblings, 0 replies; 80+ messages in thread
From: Stefano Stabellini @ 2016-07-27  1:15 UTC (permalink / raw)
  To: Julien Grall; +Cc: proskurin, sstabellini, steve.capper, wei.chen, xen-devel

On Wed, 20 Jul 2016, Julien Grall wrote:
> Some p2m functions do not care about the domain except to get the
> associated p2m.
> 
> Signed-off-by: Julien Grall <julien.grall@arm.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


>  xen/arch/arm/p2m.c | 16 +++++++---------
>  1 file changed, 7 insertions(+), 9 deletions(-)
> 
> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> index 5511d25..aceafc2 100644
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -415,10 +415,9 @@ static inline void p2m_remove_pte(lpae_t *p, bool_t flush_cache)
>   *
>   * level_shift is the number of bits at the level we want to create.
>   */
> -static int p2m_create_table(struct domain *d, lpae_t *entry,
> +static int p2m_create_table(struct p2m_domain *p2m, lpae_t *entry,
>                              int level_shift, bool_t flush_cache)
>  {
> -    struct p2m_domain *p2m = &d->arch.p2m;
>      struct page_info *page;
>      lpae_t *p;
>      lpae_t pte;
> @@ -653,18 +652,17 @@ static const paddr_t level_masks[] =
>  static const paddr_t level_shifts[] =
>      { ZEROETH_SHIFT, FIRST_SHIFT, SECOND_SHIFT, THIRD_SHIFT };
>  
> -static int p2m_shatter_page(struct domain *d,
> +static int p2m_shatter_page(struct p2m_domain *p2m,
>                              lpae_t *entry,
>                              unsigned int level,
>                              bool_t flush_cache)
>  {
>      const paddr_t level_shift = level_shifts[level];
> -    int rc = p2m_create_table(d, entry,
> +    int rc = p2m_create_table(p2m, entry,
>                                level_shift - PAGE_SHIFT, flush_cache);
>  
>      if ( !rc )
>      {
> -        struct p2m_domain *p2m = &d->arch.p2m;
>          p2m->stats.shattered[level]++;
>          p2m->stats.mappings[level]--;
>          p2m->stats.mappings[level+1] += LPAE_ENTRIES;
> @@ -757,7 +755,7 @@ static int apply_one_level(struct domain *d,
>              /* Not present -> create table entry and descend */
>              if ( !p2m_valid(orig_pte) )
>              {
> -                rc = p2m_create_table(d, entry, 0, flush_cache);
> +                rc = p2m_create_table(p2m, entry, 0, flush_cache);
>                  if ( rc < 0 )
>                      return rc;
>                  return P2M_ONE_DESCEND;
> @@ -767,7 +765,7 @@ static int apply_one_level(struct domain *d,
>              if ( p2m_mapping(orig_pte) )
>              {
>                  *flush = true;
> -                rc = p2m_shatter_page(d, entry, level, flush_cache);
> +                rc = p2m_shatter_page(p2m, entry, level, flush_cache);
>                  if ( rc < 0 )
>                      return rc;
>              } /* else: an existing table mapping -> descend */
> @@ -804,7 +802,7 @@ static int apply_one_level(struct domain *d,
>                   * and descend.
>                   */
>                  *flush = true;
> -                rc = p2m_shatter_page(d, entry, level, flush_cache);
> +                rc = p2m_shatter_page(p2m, entry, level, flush_cache);
>                  if ( rc < 0 )
>                      return rc;
>  
> @@ -889,7 +887,7 @@ static int apply_one_level(struct domain *d,
>              /* Shatter large pages as we descend */
>              if ( p2m_mapping(orig_pte) )
>              {
> -                rc = p2m_shatter_page(d, entry, level, flush_cache);
> +                rc = p2m_shatter_page(p2m, entry, level, flush_cache);
>                  if ( rc < 0 )
>                      return rc;
>              } /* else: an existing table mapping -> descend */
> -- 
> 1.9.1
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 80+ messages in thread

* Re: [PATCH 06/22] xen/arm: p2m: Use the typesafe MFN in mfn_to_p2m_entry
  2016-07-26 22:28   ` Stefano Stabellini
@ 2016-07-27  9:54     ` Julien Grall
  2016-07-27 18:25       ` Stefano Stabellini
  0 siblings, 1 reply; 80+ messages in thread
From: Julien Grall @ 2016-07-27  9:54 UTC (permalink / raw)
  To: Stefano Stabellini; +Cc: proskurin, steve.capper, wei.chen, xen-devel

Hi Stefano,

On 26/07/16 23:28, Stefano Stabellini wrote:
> On Wed, 20 Jul 2016, Julien Grall wrote:
>> @@ -411,7 +411,7 @@ static int p2m_create_table(struct domain *d, lpae_t *entry,
>>      if ( splitting )
>>      {
>>          p2m_type_t t = entry->p2m.type;
>> -        unsigned long base_pfn = entry->p2m.base;
>> +        mfn_t mfn = _mfn(entry->p2m.base);
>>          int i;
>>
>>          /*
>> @@ -420,8 +420,9 @@ static int p2m_create_table(struct domain *d, lpae_t *entry,
>>           */
>>           for ( i=0 ; i < LPAE_ENTRIES; i++ )
>>           {
>> -             pte = mfn_to_p2m_entry(base_pfn + (i<<(level_shift-LPAE_SHIFT)),
>> -                                    MATTR_MEM, t, p2m->default_access);
>> +             pte = mfn_to_p2m_entry(mfn, MATTR_MEM, t, p2m->default_access);
>> +
>> +             mfn = mfn_add(mfn, 1UL << (level_shift - LPAE_SHIFT));
>
> Should we be incrementing mfn before calling mfn_to_p2m_entry?

No. The base of the superpage is mfn; after splitting, the first entry
will be equal to the base, the second entry to base + level_size, and so on.
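
As a quick sanity check of that arithmetic, here is a standalone sketch
(not Xen code; the LPAE shift values below -- PAGE_SHIFT 12, LPAE_SHIFT 9,
and level shifts 30/21 for 1GB/2MB superpages -- are only assumed for
illustration):

    #include <stdio.h>

    #define PAGE_SHIFT 12
    #define LPAE_SHIFT 9

    /* p2m_shatter_page passes (actual level shift - PAGE_SHIFT) down to
     * p2m_create_table, which advances the MFN by
     * 1UL << (level_shift - LPAE_SHIFT) after writing each entry. */
    static unsigned long step(unsigned int actual_shift)
    {
        unsigned int level_shift = actual_shift - PAGE_SHIFT;
        return 1UL << (level_shift - LPAE_SHIFT);
    }

    int main(void)
    {
        /* Shattering a 1GB superpage: each 2nd-level entry covers 2MB. */
        printf("1GB shatter: %lu pages per entry\n", step(30));
        /* Shattering a 2MB superpage: each 3rd-level entry covers 4KB. */
        printf("2MB shatter: %lu page per entry\n", step(21));
        return 0;
    }

So the first entry keeps the original base MFN, and each subsequent
entry is offset by one sub-level mapping size.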

Regards,

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 80+ messages in thread

* Re: [PATCH 09/22] xen/arm: p2m: Use a whitelist rather than blacklist in get_page_from_gfn
  2016-07-26 22:44   ` Stefano Stabellini
@ 2016-07-27  9:59     ` Julien Grall
  2016-07-27 17:56       ` Stefano Stabellini
  0 siblings, 1 reply; 80+ messages in thread
From: Julien Grall @ 2016-07-27  9:59 UTC (permalink / raw)
  To: Stefano Stabellini; +Cc: proskurin, steve.capper, wei.chen, xen-devel



On 26/07/16 23:44, Stefano Stabellini wrote:
> On Wed, 20 Jul 2016, Julien Grall wrote:
>> Currently, the check in get_page_from_gfn is using a blacklist. This is
>> very fragile because we may forget to update the check when a new p2m
>> type is added.
>>
>> To avoid any possible issue, use a whitelist. All types backed by a RAM
>> page could potentially be valid. The check is borrowed from x86.
>>
>> Signed-off-by: Julien Grall <julien.grall@arm.com>
>> ---
>>  xen/include/asm-arm/p2m.h | 9 ++++++++-
>>  1 file changed, 8 insertions(+), 1 deletion(-)
>>
>> diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
>> index 3091c04..78d37ab 100644
>> --- a/xen/include/asm-arm/p2m.h
>> +++ b/xen/include/asm-arm/p2m.h
>> @@ -104,9 +104,16 @@ typedef enum {
>>  #define P2M_RAM_TYPES (p2m_to_mask(p2m_ram_rw) |        \
>>                         p2m_to_mask(p2m_ram_ro))
>>
>> +/* Grant mapping types, which map to a real frame in another VM */
>> +#define P2M_GRANT_TYPES (p2m_to_mask(p2m_grant_map_rw) |  \
>> +                         p2m_to_mask(p2m_grant_map_ro))
>> +
>>  /* Useful predicates */
>>  #define p2m_is_ram(_t) (p2m_to_mask(_t) & P2M_RAM_TYPES)
>>  #define p2m_is_foreign(_t) (p2m_to_mask(_t) & p2m_to_mask(p2m_map_foreign))
>> +#define p2m_is_any_ram(_t) (p2m_to_mask(_t) &                   \
>> +                            (P2M_RAM_TYPES | P2M_GRANT_TYPES |  \
>> +                             p2m_to_mask(p2m_map_foreign)))
>>
>>  static inline
>>  void p2m_mem_access_emulate_check(struct vcpu *v,
>> @@ -224,7 +231,7 @@ static inline struct page_info *get_page_from_gfn(
>>      if (t)
>>          *t = p2mt;
>>
>> -    if ( p2mt == p2m_invalid || p2mt == p2m_mmio_direct )
>> +    if ( !p2m_is_any_ram(p2mt) )
>>          return NULL;
>
> What about the iommu mappings?

iommu mappings (p2m_iommu_map_*) are special mappings for the direct 
mapping workaround. They should not be used in get_page_from_gfn.

Regards,

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 80+ messages in thread

* Re: [PATCH 16/22] xen/arm: p2m: Move the vttbr field from arch_domain to p2m_domain
  2016-07-27  0:57   ` Stefano Stabellini
@ 2016-07-27 10:00     ` Julien Grall
  0 siblings, 0 replies; 80+ messages in thread
From: Julien Grall @ 2016-07-27 10:00 UTC (permalink / raw)
  To: Stefano Stabellini; +Cc: proskurin, steve.capper, wei.chen, xen-devel

Hi Stefano,

On 27/07/16 01:57, Stefano Stabellini wrote:
> On Wed, 20 Jul 2016, Julien Grall wrote:
>> The field vttbr holds the base address of the translation table for
>> guest. Its value will depends on how the p2m has been initialized and
>> will only be used by the code code.
>                            ^ code code?

I think I wanted to say "P2M code".

Regards,

>
>> So move the field from arch_domain to p2m_domain. This will also ease
>> the implementation of altp2m.
>> ---
>>  xen/arch/arm/p2m.c           | 11 +++++++----
>>  xen/arch/arm/traps.c         |  2 +-
>>  xen/include/asm-arm/domain.h |  1 -
>>  xen/include/asm-arm/p2m.h    |  3 +++
>>  4 files changed, 11 insertions(+), 6 deletions(-)
>>
>> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
>> index c407e6a..c52081a 100644
>> --- a/xen/arch/arm/p2m.c
>> +++ b/xen/arch/arm/p2m.c
>> @@ -107,10 +107,14 @@ void dump_p2m_lookup(struct domain *d, paddr_t addr)
>>
>>  static void p2m_load_VTTBR(struct domain *d)
>>  {
>> +    struct p2m_domain *p2m = &d->arch.p2m;
>> +
>>      if ( is_idle_domain(d) )
>>          return;
>> -    BUG_ON(!d->arch.vttbr);
>> -    WRITE_SYSREG64(d->arch.vttbr, VTTBR_EL2);
>> +
>> +    ASSERT(p2m->vttbr);
>> +
>> +    WRITE_SYSREG64(p2m->vttbr, VTTBR_EL2);
>>      isb(); /* Ensure update is visible */
>>  }
>>
>> @@ -1298,8 +1302,7 @@ static int p2m_alloc_table(struct domain *d)
>>
>>      p2m->root = page;
>>
>> -    d->arch.vttbr = page_to_maddr(p2m->root)
>> -        | ((uint64_t)p2m->vmid&0xff)<<48;
>> +    p2m->vttbr = page_to_maddr(p2m->root) | ((uint64_t)p2m->vmid & 0xff) << 48;
>>
>>      /*
>>       * Make sure that all TLBs corresponding to the new VMID are flushed
>> diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
>> index 06a8ee5..65c6fb4 100644
>> --- a/xen/arch/arm/traps.c
>> +++ b/xen/arch/arm/traps.c
>> @@ -880,7 +880,7 @@ void vcpu_show_registers(const struct vcpu *v)
>>      ctxt.ifsr32_el2 = v->arch.ifsr;
>>  #endif
>>
>> -    ctxt.vttbr_el2 = v->domain->arch.vttbr;
>> +    ctxt.vttbr_el2 = v->domain->arch.p2m.vttbr;
>>
>>      _show_registers(&v->arch.cpu_info->guest_cpu_user_regs, &ctxt, 1, v);
>>  }
>> diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
>> index 4e9d8bf..9452fcd 100644
>> --- a/xen/include/asm-arm/domain.h
>> +++ b/xen/include/asm-arm/domain.h
>> @@ -48,7 +48,6 @@ struct arch_domain
>>
>>      /* Virtual MMU */
>>      struct p2m_domain p2m;
>> -    uint64_t vttbr;
>>
>>      struct hvm_domain hvm_domain;
>>      gfn_t *grant_table_gfn;
>> diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
>> index ce28e8a..53c4d78 100644
>> --- a/xen/include/asm-arm/p2m.h
>> +++ b/xen/include/asm-arm/p2m.h
>> @@ -32,6 +32,9 @@ struct p2m_domain {
>>      /* Current VMID in use */
>>      uint8_t vmid;
>>
>> +    /* Current Translation Table Base Register for the p2m */
>> +    uint64_t vttbr;
>> +
>>      /*
>>       * Highest guest frame that's ever been mapped in the p2m
>>       * Only takes into account ram and foreign mapping
>> --
>> 1.9.1
>>
>

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 80+ messages in thread

* Re: [PATCH 18/22] xen/arm: p2m: Rework the context switch to another VTTBR in flush_tlb_domain
  2016-07-27  1:12   ` Stefano Stabellini
@ 2016-07-27 10:22     ` Julien Grall
  0 siblings, 0 replies; 80+ messages in thread
From: Julien Grall @ 2016-07-27 10:22 UTC (permalink / raw)
  To: Stefano Stabellini; +Cc: proskurin, steve.capper, wei.chen, xen-devel

Hi Stefano,

On 27/07/16 02:12, Stefano Stabellini wrote:
> On Wed, 20 Jul 2016, Julien Grall wrote:
>> The current implementation of flush_tlb_domain is relying on the domain
>> to have a single p2m. With the upcoming feature altp2m, a single domain
>> may have several p2ms. So we would need to switch to the correct p2m in
>> order to flush the TLBs.
>>
>> Rather than checking whether the domain is not the current domain, check
>> whether the VTTBR is different. The resulting assembly code is much
>> smaller: from 38 instructions (+ 2 function calls) to 22 instructions.
>
> That's true but SYSREG reads are more expensive than regular
> instructions.

This argument is not really true. The ARM ARM (D7-1879 in ARM DDI 
0487A.j) says: "Reads of the System registers can occur out of order 
with respect to earlier instructions executed on the same PE, provided 
that any data dependencies between the instructions are respected". So 
it will depend on how the micro-architecture implements accesses to SYSREGs.

However, the current code already contains plenty of SYSREG read accesses 
(via the macro current, which uses TPIDR_EL2). So the number of SYSREG 
accesses stays exactly the same.

I also forgot to mention the number of instructions in the function 
calls (10 instructions each). So we are down from 58 instructions to 22 
instructions.

Therefore, smaller code and likely better performance.

>
>
>> Signed-off-by: Julien Grall <julien.grall@arm.com>
>> ---
>>  xen/arch/arm/p2m.c | 18 +++++++++++-------
>>  1 file changed, 11 insertions(+), 7 deletions(-)
>>
>> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
>> index d1b6009..015c1e8 100644
>> --- a/xen/arch/arm/p2m.c
>> +++ b/xen/arch/arm/p2m.c
>> @@ -151,24 +151,28 @@ void p2m_restore_state(struct vcpu *n)
>>
>>  void flush_tlb_domain(struct domain *d)
>>  {
>> +    struct p2m_domain *p2m = &d->arch.p2m;
>>      unsigned long flags = 0;
>> +    uint64_t ovttbr;
>>
>>      /*
>> -     * Update the VTTBR if necessary with the domain d. In this case,
>> -     * it's only necessary to flush TLBs on every CPUs with the current VMID
>> -     * (our domain).
>> +     * ARM only provides an instruction to flush TLBs for the current
>> +     * VMID. So switch to the VTTBR of a given P2M if different.
>>       */
>> -    if ( d != current->domain )
>> +    ovttbr = READ_SYSREG64(VTTBR_EL2);
>> +    if ( ovttbr != p2m->vttbr )
>>      {
>>          local_irq_save(flags);
>> -        p2m_load_VTTBR(d);
>> +        WRITE_SYSREG64(p2m->vttbr, VTTBR_EL2);
>> +        isb();
>>      }
>>
>>      flush_tlb();
>>
>> -    if ( d != current->domain )
>> +    if ( ovttbr != READ_SYSREG64(VTTBR_EL2) )
>
> You should be able to remove this second SYSREG read and optimize the
> code further.

I should be able to, however I think it would not bring much more 
optimization here and would obfuscate the code a bit more.
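
For reference, a minimal sketch of the variant suggested above (caching 
the comparison in a local flag instead of re-reading VTTBR_EL2) could 
look like the following -- purely illustrative, not the code that was 
committed:

    static void flush_tlb_domain(struct domain *d)
    {
        struct p2m_domain *p2m = &d->arch.p2m;
        unsigned long flags = 0;
        uint64_t ovttbr = READ_SYSREG64(VTTBR_EL2);
        bool_t need_switch = (ovttbr != p2m->vttbr);

        if ( need_switch )
        {
            local_irq_save(flags);
            WRITE_SYSREG64(p2m->vttbr, VTTBR_EL2);
            isb();
        }

        flush_tlb();

        if ( need_switch )
        {
            /* Restore the previous VTTBR without a second SYSREG read. */
            WRITE_SYSREG64(ovttbr, VTTBR_EL2);
            isb();
            local_irq_restore(flags);
        }
    }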

Regards,

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 80+ messages in thread

* Re: [PATCH 11/22] xen/arm: p2m: Find the memory attributes based on the p2m type
  2016-07-20 16:10 ` [PATCH 11/22] xen/arm: p2m: Find the memory attributes based on the p2m type Julien Grall
  2016-07-27  0:41   ` Stefano Stabellini
@ 2016-07-27 17:15   ` Julien Grall
  2016-07-27 17:55     ` Stefano Stabellini
  1 sibling, 1 reply; 80+ messages in thread
From: Julien Grall @ 2016-07-27 17:15 UTC (permalink / raw)
  To: xen-devel, sstabellini; +Cc: proskurin, wei.chen, steve.capper

Hi,

On 20/07/16 17:10, Julien Grall wrote:
> Currently, mfn_to_p2m_entry is relying on the caller to provide the
> correct memory attribute and will deduce the shareability based on it.
>
> Some of the callers, such as p2m_create_table, are using the same memory
> attribute regardless of the underlying p2m type. For instance, this will
> lead to changing the memory attribute from MATTR_DEV to MATTR_MEM when
> an MMIO superpage is shattered.
>
> Furthermore, it makes it more difficult to support different shareability
> with the same memory attribute.
>
> All the memory attributes can be deduced from the p2m type. This will
> simplify the code by dropping one parameter.

I just noticed that I forgot to add my Signed-off-by. Stefano, can you 
add my Signed-off-by while committing?

Cheers,

> ---
>     I am not sure whether p2m_mmio_direct_c (cacheable MMIO) should use
>     the outer-shareability or inner-shareability. Any opinions?
> ---
>  xen/arch/arm/p2m.c | 55 ++++++++++++++++++++++++------------------------------
>  1 file changed, 24 insertions(+), 31 deletions(-)
>
> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> index 999de2b..2f50b4f 100644
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -325,8 +325,7 @@ static void p2m_set_permission(lpae_t *e, p2m_type_t t, p2m_access_t a)
>      }
>  }
>
> -static lpae_t mfn_to_p2m_entry(mfn_t mfn, unsigned int mattr,
> -                               p2m_type_t t, p2m_access_t a)
> +static lpae_t mfn_to_p2m_entry(mfn_t mfn, p2m_type_t t, p2m_access_t a)
>  {
>      /*
>       * sh, xn and write bit will be defined in the following switches
> @@ -335,7 +334,6 @@ static lpae_t mfn_to_p2m_entry(mfn_t mfn, unsigned int mattr,
>      lpae_t e = (lpae_t) {
>          .p2m.af = 1,
>          .p2m.read = 1,
> -        .p2m.mattr = mattr,
>          .p2m.table = 1,
>          .p2m.valid = 1,
>          .p2m.type = t,
> @@ -343,18 +341,21 @@ static lpae_t mfn_to_p2m_entry(mfn_t mfn, unsigned int mattr,
>
>      BUILD_BUG_ON(p2m_max_real_type > (1 << 4));
>
> -    switch (mattr)
> +    switch ( t )
>      {
> -    case MATTR_MEM:
> -        e.p2m.sh = LPAE_SH_INNER;
> +    case p2m_mmio_direct_nc:
> +        e.p2m.mattr = MATTR_DEV;
> +        e.p2m.sh = LPAE_SH_OUTER;
>          break;
>
> -    case MATTR_DEV:
> +    case p2m_mmio_direct_c:
> +        e.p2m.mattr = MATTR_MEM;
>          e.p2m.sh = LPAE_SH_OUTER;
>          break;
> +
>      default:
> -        BUG();
> -        break;
> +        e.p2m.mattr = MATTR_MEM;
> +        e.p2m.sh = LPAE_SH_INNER;
>      }
>
>      p2m_set_permission(&e, t, a);
> @@ -421,7 +422,7 @@ static int p2m_create_table(struct domain *d, lpae_t *entry,
>           */
>           for ( i=0 ; i < LPAE_ENTRIES; i++ )
>           {
> -             pte = mfn_to_p2m_entry(mfn, MATTR_MEM, t, p2m->default_access);
> +             pte = mfn_to_p2m_entry(mfn, t, p2m->default_access);
>
>               mfn = mfn_add(mfn, 1UL << (level_shift - LPAE_SHIFT));
>
> @@ -445,7 +446,7 @@ static int p2m_create_table(struct domain *d, lpae_t *entry,
>
>      unmap_domain_page(p);
>
> -    pte = mfn_to_p2m_entry(_mfn(page_to_mfn(page)), MATTR_MEM, p2m_invalid,
> +    pte = mfn_to_p2m_entry(_mfn(page_to_mfn(page)), p2m_invalid,
>                             p2m->default_access);
>
>      p2m_write_pte(entry, pte, flush_cache);
> @@ -666,7 +667,6 @@ static int apply_one_level(struct domain *d,
>                             paddr_t *addr,
>                             paddr_t *maddr,
>                             bool_t *flush,
> -                           int mattr,
>                             p2m_type_t t,
>                             p2m_access_t a)
>  {
> @@ -695,7 +695,7 @@ static int apply_one_level(struct domain *d,
>                  return rc;
>
>              /* New mapping is superpage aligned, make it */
> -            pte = mfn_to_p2m_entry(_mfn(*maddr >> PAGE_SHIFT), mattr, t, a);
> +            pte = mfn_to_p2m_entry(_mfn(*maddr >> PAGE_SHIFT), t, a);
>              if ( level < 3 )
>                  pte.p2m.table = 0; /* Superpage entry */
>
> @@ -915,7 +915,6 @@ static int apply_p2m_changes(struct domain *d,
>                       gfn_t sgfn,
>                       unsigned long nr,
>                       mfn_t smfn,
> -                     int mattr,
>                       uint32_t mask,
>                       p2m_type_t t,
>                       p2m_access_t a)
> @@ -1054,7 +1053,7 @@ static int apply_p2m_changes(struct domain *d,
>                                    level, flush_pt, op,
>                                    start_gpaddr, end_gpaddr,
>                                    &addr, &maddr, &flush,
> -                                  mattr, t, a);
> +                                  t, a);
>              if ( ret < 0 ) { rc = ret ; goto out; }
>              count += ret;
>
> @@ -1163,7 +1162,7 @@ out:
>           * mapping.
>           */
>          apply_p2m_changes(d, REMOVE, sgfn, gfn - gfn_x(sgfn), smfn,
> -                          mattr, 0, p2m_invalid, d->arch.p2m.default_access);
> +                          0, p2m_invalid, d->arch.p2m.default_access);
>      }
>
>      return rc;
> @@ -1173,10 +1172,10 @@ static inline int p2m_insert_mapping(struct domain *d,
>                                       gfn_t start_gfn,
>                                       unsigned long nr,
>                                       mfn_t mfn,
> -                                     int mattr, p2m_type_t t)
> +                                     p2m_type_t t)
>  {
>      return apply_p2m_changes(d, INSERT, start_gfn, nr, mfn,
> -                             mattr, 0, t, d->arch.p2m.default_access);
> +                             0, t, d->arch.p2m.default_access);
>  }
>
>  static inline int p2m_remove_mapping(struct domain *d,
> @@ -1186,8 +1185,7 @@ static inline int p2m_remove_mapping(struct domain *d,
>  {
>      return apply_p2m_changes(d, REMOVE, start_gfn, nr, mfn,
>                               /* arguments below not used when removing mapping */
> -                             MATTR_MEM, 0, p2m_invalid,
> -                             d->arch.p2m.default_access);
> +                             0, p2m_invalid, d->arch.p2m.default_access);
>  }
>
>  int map_regions_rw_cache(struct domain *d,
> @@ -1195,8 +1193,7 @@ int map_regions_rw_cache(struct domain *d,
>                           unsigned long nr,
>                           mfn_t mfn)
>  {
> -    return p2m_insert_mapping(d, gfn, nr, mfn,
> -                              MATTR_MEM, p2m_mmio_direct_c);
> +    return p2m_insert_mapping(d, gfn, nr, mfn, p2m_mmio_direct_c);
>  }
>
>  int unmap_regions_rw_cache(struct domain *d,
> @@ -1212,8 +1209,7 @@ int map_mmio_regions(struct domain *d,
>                       unsigned long nr,
>                       mfn_t mfn)
>  {
> -    return p2m_insert_mapping(d, start_gfn, nr, mfn,
> -                              MATTR_DEV, p2m_mmio_direct_nc);
> +    return p2m_insert_mapping(d, start_gfn, nr, mfn, p2m_mmio_direct_nc);
>  }
>
>  int unmap_mmio_regions(struct domain *d,
> @@ -1251,8 +1247,7 @@ int guest_physmap_add_entry(struct domain *d,
>                              unsigned long page_order,
>                              p2m_type_t t)
>  {
> -    return p2m_insert_mapping(d, gfn, (1 << page_order), mfn,
> -                              MATTR_MEM, t);
> +    return p2m_insert_mapping(d, gfn, (1 << page_order), mfn, t);
>  }
>
>  void guest_physmap_remove_page(struct domain *d,
> @@ -1412,7 +1407,7 @@ int relinquish_p2m_mapping(struct domain *d)
>      nr = gfn_x(p2m->max_mapped_gfn) - gfn_x(p2m->lowest_mapped_gfn);
>
>      return apply_p2m_changes(d, RELINQUISH, p2m->lowest_mapped_gfn, nr,
> -                             INVALID_MFN, MATTR_MEM, 0, p2m_invalid,
> +                             INVALID_MFN, 0, p2m_invalid,
>                               d->arch.p2m.default_access);
>  }
>
> @@ -1425,8 +1420,7 @@ int p2m_cache_flush(struct domain *d, gfn_t start, unsigned long nr)
>      end = gfn_min(end, p2m->max_mapped_gfn);
>
>      return apply_p2m_changes(d, CACHEFLUSH, start, nr, INVALID_MFN,
> -                             MATTR_MEM, 0, p2m_invalid,
> -                             d->arch.p2m.default_access);
> +                             0, p2m_invalid, d->arch.p2m.default_access);
>  }
>
>  mfn_t gfn_to_mfn(struct domain *d, gfn_t gfn)
> @@ -1827,8 +1821,7 @@ long p2m_set_mem_access(struct domain *d, gfn_t gfn, uint32_t nr,
>      }
>
>      rc = apply_p2m_changes(d, MEMACCESS, gfn_add(gfn, start),
> -                           (nr - start), INVALID_MFN,
> -                           MATTR_MEM, mask, 0, a);
> +                           (nr - start), INVALID_MFN, mask, 0, a);
>      if ( rc < 0 )
>          return rc;
>      else if ( rc > 0 )
>

-- 
Julien Grall


* Re: [PATCH 16/22] xen/arm: p2m: Move the vttbr field from arch_domain to p2m_domain
  2016-07-20 16:10 ` [PATCH 16/22] xen/arm: p2m: Move the vttbr field from arch_domain to p2m_domain Julien Grall
  2016-07-22  7:46   ` Sergej Proskurin
  2016-07-27  0:57   ` Stefano Stabellini
@ 2016-07-27 17:19   ` Julien Grall
  2 siblings, 0 replies; 80+ messages in thread
From: Julien Grall @ 2016-07-27 17:19 UTC (permalink / raw)
  To: xen-devel; +Cc: proskurin, sstabellini, wei.chen, steve.capper

Hi,

On 20/07/16 17:10, Julien Grall wrote:
> The field vttbr holds the base address of the translation table for a
> guest. Its value will depend on how the p2m has been initialized and
> will only be used by the p2m code.
>
> So move the field from arch_domain to p2m_domain. This will also ease
> the implementation of altp2m.

I forgot to add my Signed-off-by in this patch as well :/. I will add it 
in the next version.

Regards,

> ---
>  xen/arch/arm/p2m.c           | 11 +++++++----
>  xen/arch/arm/traps.c         |  2 +-
>  xen/include/asm-arm/domain.h |  1 -
>  xen/include/asm-arm/p2m.h    |  3 +++
>  4 files changed, 11 insertions(+), 6 deletions(-)
>
> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> index c407e6a..c52081a 100644
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -107,10 +107,14 @@ void dump_p2m_lookup(struct domain *d, paddr_t addr)
>
>  static void p2m_load_VTTBR(struct domain *d)
>  {
> +    struct p2m_domain *p2m = &d->arch.p2m;
> +
>      if ( is_idle_domain(d) )
>          return;
> -    BUG_ON(!d->arch.vttbr);
> -    WRITE_SYSREG64(d->arch.vttbr, VTTBR_EL2);
> +
> +    ASSERT(p2m->vttbr);
> +
> +    WRITE_SYSREG64(p2m->vttbr, VTTBR_EL2);
>      isb(); /* Ensure update is visible */
>  }
>
> @@ -1298,8 +1302,7 @@ static int p2m_alloc_table(struct domain *d)
>
>      p2m->root = page;
>
> -    d->arch.vttbr = page_to_maddr(p2m->root)
> -        | ((uint64_t)p2m->vmid&0xff)<<48;
> +    p2m->vttbr = page_to_maddr(p2m->root) | ((uint64_t)p2m->vmid & 0xff) << 48;
>
>      /*
>       * Make sure that all TLBs corresponding to the new VMID are flushed
> diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
> index 06a8ee5..65c6fb4 100644
> --- a/xen/arch/arm/traps.c
> +++ b/xen/arch/arm/traps.c
> @@ -880,7 +880,7 @@ void vcpu_show_registers(const struct vcpu *v)
>      ctxt.ifsr32_el2 = v->arch.ifsr;
>  #endif
>
> -    ctxt.vttbr_el2 = v->domain->arch.vttbr;
> +    ctxt.vttbr_el2 = v->domain->arch.p2m.vttbr;
>
>      _show_registers(&v->arch.cpu_info->guest_cpu_user_regs, &ctxt, 1, v);
>  }
> diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
> index 4e9d8bf..9452fcd 100644
> --- a/xen/include/asm-arm/domain.h
> +++ b/xen/include/asm-arm/domain.h
> @@ -48,7 +48,6 @@ struct arch_domain
>
>      /* Virtual MMU */
>      struct p2m_domain p2m;
> -    uint64_t vttbr;
>
>      struct hvm_domain hvm_domain;
>      gfn_t *grant_table_gfn;
> diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
> index ce28e8a..53c4d78 100644
> --- a/xen/include/asm-arm/p2m.h
> +++ b/xen/include/asm-arm/p2m.h
> @@ -32,6 +32,9 @@ struct p2m_domain {
>      /* Current VMID in use */
>      uint8_t vmid;
>
> +    /* Current Translation Table Base Register for the p2m */
> +    uint64_t vttbr;
> +
>      /*
>       * Highest guest frame that's ever been mapped in the p2m
>       * Only takes into account ram and foreign mapping
>

-- 
Julien Grall


* Re: [PATCH 11/22] xen/arm: p2m: Find the memory attributes based on the p2m type
  2016-07-27 17:15   ` Julien Grall
@ 2016-07-27 17:55     ` Stefano Stabellini
  2016-07-27 20:15       ` Julien Grall
  0 siblings, 1 reply; 80+ messages in thread
From: Stefano Stabellini @ 2016-07-27 17:55 UTC (permalink / raw)
  To: Julien Grall; +Cc: proskurin, sstabellini, steve.capper, wei.chen, xen-devel

On Wed, 27 Jul 2016, Julien Grall wrote:
> Hi,
> 
> On 20/07/16 17:10, Julien Grall wrote:
> > Currently, mfn_to_p2m_entry is relying on the caller to provide the
> > correct memory attribute and will deduce the shareability based on it.
> >
> > Some of the callers, such as p2m_create_table, are using the same memory
> > attribute regardless of the underlying p2m type. For instance, this will
> > change the memory attribute from MATTR_DEV to MATTR_MEM when an MMIO
> > superpage is shattered.
> >
> > Furthermore, it makes it more difficult to support different shareability
> > with the same memory attribute.
> > 
> > All the memory attributes could be deduced via the p2m type. This will
> > simplify the code by dropping one parameter.
> 
> I just noticed that I forgot to add my Signed-off-by. Stefano, can you add my
> Signed-off-by while committing?

I could, but given that you need to resend some of the patches anyway,
it might be easier for me to wait for the next version.

 
> > ---
> >     I am not sure whether p2m_mmio_direct_c (cacheable MMIO) should use
> >     the outer-shareability or inner-shareability. Any opinions?
> > ---
> >  xen/arch/arm/p2m.c | 55
> > ++++++++++++++++++++++++------------------------------
> >  1 file changed, 24 insertions(+), 31 deletions(-)
> > 
> > diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> > index 999de2b..2f50b4f 100644
> > --- a/xen/arch/arm/p2m.c
> > +++ b/xen/arch/arm/p2m.c
> > @@ -325,8 +325,7 @@ static void p2m_set_permission(lpae_t *e, p2m_type_t t,
> > p2m_access_t a)
> >      }
> >  }
> > 
> > -static lpae_t mfn_to_p2m_entry(mfn_t mfn, unsigned int mattr,
> > -                               p2m_type_t t, p2m_access_t a)
> > +static lpae_t mfn_to_p2m_entry(mfn_t mfn, p2m_type_t t, p2m_access_t a)
> >  {
> >      /*
> >       * sh, xn and write bit will be defined in the following switches
> > @@ -335,7 +334,6 @@ static lpae_t mfn_to_p2m_entry(mfn_t mfn, unsigned int
> > mattr,
> >      lpae_t e = (lpae_t) {
> >          .p2m.af = 1,
> >          .p2m.read = 1,
> > -        .p2m.mattr = mattr,
> >          .p2m.table = 1,
> >          .p2m.valid = 1,
> >          .p2m.type = t,
> > @@ -343,18 +341,21 @@ static lpae_t mfn_to_p2m_entry(mfn_t mfn, unsigned int
> > mattr,
> > 
> >      BUILD_BUG_ON(p2m_max_real_type > (1 << 4));
> > 
> > -    switch (mattr)
> > +    switch ( t )
> >      {
> > -    case MATTR_MEM:
> > -        e.p2m.sh = LPAE_SH_INNER;
> > +    case p2m_mmio_direct_nc:
> > +        e.p2m.mattr = MATTR_DEV;
> > +        e.p2m.sh = LPAE_SH_OUTER;
> >          break;
> > 
> > -    case MATTR_DEV:
> > +    case p2m_mmio_direct_c:
> > +        e.p2m.mattr = MATTR_MEM;
> >          e.p2m.sh = LPAE_SH_OUTER;
> >          break;
> > +
> >      default:
> > -        BUG();
> > -        break;
> > +        e.p2m.mattr = MATTR_MEM;
> > +        e.p2m.sh = LPAE_SH_INNER;
> >      }
> > 
> >      p2m_set_permission(&e, t, a);
> > @@ -421,7 +422,7 @@ static int p2m_create_table(struct domain *d, lpae_t
> > *entry,
> >           */
> >           for ( i=0 ; i < LPAE_ENTRIES; i++ )
> >           {
> > -             pte = mfn_to_p2m_entry(mfn, MATTR_MEM, t,
> > p2m->default_access);
> > +             pte = mfn_to_p2m_entry(mfn, t, p2m->default_access);
> > 
> >               mfn = mfn_add(mfn, 1UL << (level_shift - LPAE_SHIFT));
> > 
> > @@ -445,7 +446,7 @@ static int p2m_create_table(struct domain *d, lpae_t
> > *entry,
> > 
> >      unmap_domain_page(p);
> > 
> > -    pte = mfn_to_p2m_entry(_mfn(page_to_mfn(page)), MATTR_MEM, p2m_invalid,
> > +    pte = mfn_to_p2m_entry(_mfn(page_to_mfn(page)), p2m_invalid,
> >                             p2m->default_access);
> > 
> >      p2m_write_pte(entry, pte, flush_cache);
> > @@ -666,7 +667,6 @@ static int apply_one_level(struct domain *d,
> >                             paddr_t *addr,
> >                             paddr_t *maddr,
> >                             bool_t *flush,
> > -                           int mattr,
> >                             p2m_type_t t,
> >                             p2m_access_t a)
> >  {
> > @@ -695,7 +695,7 @@ static int apply_one_level(struct domain *d,
> >                  return rc;
> > 
> >              /* New mapping is superpage aligned, make it */
> > -            pte = mfn_to_p2m_entry(_mfn(*maddr >> PAGE_SHIFT), mattr, t,
> > a);
> > +            pte = mfn_to_p2m_entry(_mfn(*maddr >> PAGE_SHIFT), t, a);
> >              if ( level < 3 )
> >                  pte.p2m.table = 0; /* Superpage entry */
> > 
> > @@ -915,7 +915,6 @@ static int apply_p2m_changes(struct domain *d,
> >                       gfn_t sgfn,
> >                       unsigned long nr,
> >                       mfn_t smfn,
> > -                     int mattr,
> >                       uint32_t mask,
> >                       p2m_type_t t,
> >                       p2m_access_t a)
> > @@ -1054,7 +1053,7 @@ static int apply_p2m_changes(struct domain *d,
> >                                    level, flush_pt, op,
> >                                    start_gpaddr, end_gpaddr,
> >                                    &addr, &maddr, &flush,
> > -                                  mattr, t, a);
> > +                                  t, a);
> >              if ( ret < 0 ) { rc = ret ; goto out; }
> >              count += ret;
> > 
> > @@ -1163,7 +1162,7 @@ out:
> >           * mapping.
> >           */
> >          apply_p2m_changes(d, REMOVE, sgfn, gfn - gfn_x(sgfn), smfn,
> > -                          mattr, 0, p2m_invalid,
> > d->arch.p2m.default_access);
> > +                          0, p2m_invalid, d->arch.p2m.default_access);
> >      }
> > 
> >      return rc;
> > @@ -1173,10 +1172,10 @@ static inline int p2m_insert_mapping(struct domain
> > *d,
> >                                       gfn_t start_gfn,
> >                                       unsigned long nr,
> >                                       mfn_t mfn,
> > -                                     int mattr, p2m_type_t t)
> > +                                     p2m_type_t t)
> >  {
> >      return apply_p2m_changes(d, INSERT, start_gfn, nr, mfn,
> > -                             mattr, 0, t, d->arch.p2m.default_access);
> > +                             0, t, d->arch.p2m.default_access);
> >  }
> > 
> >  static inline int p2m_remove_mapping(struct domain *d,
> > @@ -1186,8 +1185,7 @@ static inline int p2m_remove_mapping(struct domain *d,
> >  {
> >      return apply_p2m_changes(d, REMOVE, start_gfn, nr, mfn,
> >                               /* arguments below not used when removing
> > mapping */
> > -                             MATTR_MEM, 0, p2m_invalid,
> > -                             d->arch.p2m.default_access);
> > +                             0, p2m_invalid, d->arch.p2m.default_access);
> >  }
> > 
> >  int map_regions_rw_cache(struct domain *d,
> > @@ -1195,8 +1193,7 @@ int map_regions_rw_cache(struct domain *d,
> >                           unsigned long nr,
> >                           mfn_t mfn)
> >  {
> > -    return p2m_insert_mapping(d, gfn, nr, mfn,
> > -                              MATTR_MEM, p2m_mmio_direct_c);
> > +    return p2m_insert_mapping(d, gfn, nr, mfn, p2m_mmio_direct_c);
> >  }
> > 
> >  int unmap_regions_rw_cache(struct domain *d,
> > @@ -1212,8 +1209,7 @@ int map_mmio_regions(struct domain *d,
> >                       unsigned long nr,
> >                       mfn_t mfn)
> >  {
> > -    return p2m_insert_mapping(d, start_gfn, nr, mfn,
> > -                              MATTR_DEV, p2m_mmio_direct_nc);
> > +    return p2m_insert_mapping(d, start_gfn, nr, mfn, p2m_mmio_direct_nc);
> >  }
> > 
> >  int unmap_mmio_regions(struct domain *d,
> > @@ -1251,8 +1247,7 @@ int guest_physmap_add_entry(struct domain *d,
> >                              unsigned long page_order,
> >                              p2m_type_t t)
> >  {
> > -    return p2m_insert_mapping(d, gfn, (1 << page_order), mfn,
> > -                              MATTR_MEM, t);
> > +    return p2m_insert_mapping(d, gfn, (1 << page_order), mfn, t);
> >  }
> > 
> >  void guest_physmap_remove_page(struct domain *d,
> > @@ -1412,7 +1407,7 @@ int relinquish_p2m_mapping(struct domain *d)
> >      nr = gfn_x(p2m->max_mapped_gfn) - gfn_x(p2m->lowest_mapped_gfn);
> > 
> >      return apply_p2m_changes(d, RELINQUISH, p2m->lowest_mapped_gfn, nr,
> > -                             INVALID_MFN, MATTR_MEM, 0, p2m_invalid,
> > +                             INVALID_MFN, 0, p2m_invalid,
> >                               d->arch.p2m.default_access);
> >  }
> > 
> > @@ -1425,8 +1420,7 @@ int p2m_cache_flush(struct domain *d, gfn_t start,
> > unsigned long nr)
> >      end = gfn_min(end, p2m->max_mapped_gfn);
> > 
> >      return apply_p2m_changes(d, CACHEFLUSH, start, nr, INVALID_MFN,
> > -                             MATTR_MEM, 0, p2m_invalid,
> > -                             d->arch.p2m.default_access);
> > +                             0, p2m_invalid, d->arch.p2m.default_access);
> >  }
> > 
> >  mfn_t gfn_to_mfn(struct domain *d, gfn_t gfn)
> > @@ -1827,8 +1821,7 @@ long p2m_set_mem_access(struct domain *d, gfn_t gfn,
> > uint32_t nr,
> >      }
> > 
> >      rc = apply_p2m_changes(d, MEMACCESS, gfn_add(gfn, start),
> > -                           (nr - start), INVALID_MFN,
> > -                           MATTR_MEM, mask, 0, a);
> > +                           (nr - start), INVALID_MFN, mask, 0, a);
> >      if ( rc < 0 )
> >          return rc;
> >      else if ( rc > 0 )
> > 
> 
> -- 
> Julien Grall
> 


* Re: [PATCH 09/22] xen/arm: p2m: Use a whitelist rather than blacklist in get_page_from_gfn
  2016-07-27  9:59     ` Julien Grall
@ 2016-07-27 17:56       ` Stefano Stabellini
  2016-07-27 17:57         ` Julien Grall
  0 siblings, 1 reply; 80+ messages in thread
From: Stefano Stabellini @ 2016-07-27 17:56 UTC (permalink / raw)
  To: Julien Grall
  Cc: proskurin, Stefano Stabellini, steve.capper, wei.chen, xen-devel

On Wed, 27 Jul 2016, Julien Grall wrote:
> On 26/07/16 23:44, Stefano Stabellini wrote:
> > On Wed, 20 Jul 2016, Julien Grall wrote:
> > > Currently, the check in get_page_from_gfn is using a blacklist. This is
> > > very fragile because we may forget to update the check when a new p2m
> > > type is added.
> > >
> > > To avoid any possible issue, use a whitelist. All types backed by a RAM
> > > page could potentially be valid. The check is borrowed from x86.
> > > 
> > > Signed-off-by: Julien Grall <julien.grall@arm.com>
> > > ---
> > >  xen/include/asm-arm/p2m.h | 9 ++++++++-
> > >  1 file changed, 8 insertions(+), 1 deletion(-)
> > > 
> > > diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
> > > index 3091c04..78d37ab 100644
> > > --- a/xen/include/asm-arm/p2m.h
> > > +++ b/xen/include/asm-arm/p2m.h
> > > @@ -104,9 +104,16 @@ typedef enum {
> > >  #define P2M_RAM_TYPES (p2m_to_mask(p2m_ram_rw) |        \
> > >                         p2m_to_mask(p2m_ram_ro))
> > > 
> > > +/* Grant mapping types, which map to a real frame in another VM */
> > > +#define P2M_GRANT_TYPES (p2m_to_mask(p2m_grant_map_rw) |  \
> > > +                         p2m_to_mask(p2m_grant_map_ro))
> > > +
> > >  /* Useful predicates */
> > >  #define p2m_is_ram(_t) (p2m_to_mask(_t) & P2M_RAM_TYPES)
> > >  #define p2m_is_foreign(_t) (p2m_to_mask(_t) &
> > > p2m_to_mask(p2m_map_foreign))
> > > +#define p2m_is_any_ram(_t) (p2m_to_mask(_t) &                   \
> > > +                            (P2M_RAM_TYPES | P2M_GRANT_TYPES |  \
> > > +                             p2m_to_mask(p2m_map_foreign)))
> > > 
> > >  static inline
> > >  void p2m_mem_access_emulate_check(struct vcpu *v,
> > > @@ -224,7 +231,7 @@ static inline struct page_info *get_page_from_gfn(
> > >      if (t)
> > >          *t = p2mt;
> > > 
> > > -    if ( p2mt == p2m_invalid || p2mt == p2m_mmio_direct )
> > > +    if ( !p2m_is_any_ram(p2mt) )
> > >          return NULL;
> > 
> > What about the iommu mappings?
> 
> iommu mappings (p2m_iommu_map_*) are special mappings for the direct mapping
> workaround. They should not be used in get_page_from_gfn.

Makes sense. I think they deserve to be mentioned in the commit message,
as this patch changes behavior for them.
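
For illustration only (this is just the two checks from the quoted hunk
put side by side, not new code), the behavior change for the
p2m_iommu_map_* types comes from replacing the blacklist with the
whitelist in get_page_from_gfn:

    /* Old check (blacklist): iommu mappings were not filtered out. */
    if ( p2mt == p2m_invalid || p2mt == p2m_mmio_direct )
        return NULL;

    /* New check (whitelist): only RAM, grant and foreign mappings pass,
     * so a p2m_iommu_map_* entry now also returns NULL. */
    if ( !p2m_is_any_ram(p2mt) )
        return NULL;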


* Re: [PATCH 09/22] xen/arm: p2m: Use a whitelist rather than blacklist in get_page_from_gfn
  2016-07-27 17:56       ` Stefano Stabellini
@ 2016-07-27 17:57         ` Julien Grall
  0 siblings, 0 replies; 80+ messages in thread
From: Julien Grall @ 2016-07-27 17:57 UTC (permalink / raw)
  To: Stefano Stabellini; +Cc: proskurin, steve.capper, wei.chen, xen-devel

Hi Stefano,

On 27/07/16 18:56, Stefano Stabellini wrote:
> On Wed, 27 Jul 2016, Julien Grall wrote:
>> On 26/07/16 23:44, Stefano Stabellini wrote:
>>> On Wed, 20 Jul 2016, Julien Grall wrote:
>>>> Currently, the check in get_page_from_gfn is using a blacklist. This is
>>>> very fragile because we may forget to update the check when a new p2m
>>>> type is added.
>>>>
>>>> To avoid any possible issue, use a whitelist. All types backed by a RAM
>>>> page could potentially be valid. The check is borrowed from x86.
>>>>
>>>> Signed-off-by: Julien Grall <julien.grall@arm.com>
>>>> ---
>>>>  xen/include/asm-arm/p2m.h | 9 ++++++++-
>>>>  1 file changed, 8 insertions(+), 1 deletion(-)
>>>>
>>>> diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
>>>> index 3091c04..78d37ab 100644
>>>> --- a/xen/include/asm-arm/p2m.h
>>>> +++ b/xen/include/asm-arm/p2m.h
>>>> @@ -104,9 +104,16 @@ typedef enum {
>>>>  #define P2M_RAM_TYPES (p2m_to_mask(p2m_ram_rw) |        \
>>>>                         p2m_to_mask(p2m_ram_ro))
>>>>
>>>> +/* Grant mapping types, which map to a real frame in another VM */
>>>> +#define P2M_GRANT_TYPES (p2m_to_mask(p2m_grant_map_rw) |  \
>>>> +                         p2m_to_mask(p2m_grant_map_ro))
>>>> +
>>>>  /* Useful predicates */
>>>>  #define p2m_is_ram(_t) (p2m_to_mask(_t) & P2M_RAM_TYPES)
>>>>  #define p2m_is_foreign(_t) (p2m_to_mask(_t) &
>>>> p2m_to_mask(p2m_map_foreign))
>>>> +#define p2m_is_any_ram(_t) (p2m_to_mask(_t) &                   \
>>>> +                            (P2M_RAM_TYPES | P2M_GRANT_TYPES |  \
>>>> +                             p2m_to_mask(p2m_map_foreign)))
>>>>
>>>>  static inline
>>>>  void p2m_mem_access_emulate_check(struct vcpu *v,
>>>> @@ -224,7 +231,7 @@ static inline struct page_info *get_page_from_gfn(
>>>>      if (t)
>>>>          *t = p2mt;
>>>>
>>>> -    if ( p2mt == p2m_invalid || p2mt == p2m_mmio_direct )
>>>> +    if ( !p2m_is_any_ram(p2mt) )
>>>>          return NULL;
>>>
>>> What about the iommu mappings?
>>
>> iommu mappings (p2m_iommu_map_*) are special mappings for the direct mapping
>> workaround. They should not be used in get_page_from_gfn.
>
> Makes sense. I think they deserve to be mentioned in the commit message,
> as this patch changes behavior for them.

Good point. I will update the commit message.

Regards,

-- 
Julien Grall


* Re: [PATCH 06/22] xen/arm: p2m: Use the typesafe MFN in mfn_to_p2m_entry
  2016-07-27  9:54     ` Julien Grall
@ 2016-07-27 18:25       ` Stefano Stabellini
  2016-07-27 20:14         ` Julien Grall
  0 siblings, 1 reply; 80+ messages in thread
From: Stefano Stabellini @ 2016-07-27 18:25 UTC (permalink / raw)
  To: Julien Grall
  Cc: proskurin, Stefano Stabellini, steve.capper, wei.chen, xen-devel

On Wed, 27 Jul 2016, Julien Grall wrote:
> Hi Stefano,
> 
> On 26/07/16 23:28, Stefano Stabellini wrote:
> > On Wed, 20 Jul 2016, Julien Grall wrote:
> > > @@ -411,7 +411,7 @@ static int p2m_create_table(struct domain *d, lpae_t
> > > *entry,
> > >      if ( splitting )
> > >      {
> > >          p2m_type_t t = entry->p2m.type;
> > > -        unsigned long base_pfn = entry->p2m.base;
> > > +        mfn_t mfn = _mfn(entry->p2m.base);
> > >          int i;
> > > 
> > >          /*
> > > @@ -420,8 +420,9 @@ static int p2m_create_table(struct domain *d, lpae_t
> > > *entry,
> > >           */
> > >           for ( i=0 ; i < LPAE_ENTRIES; i++ )
> > >           {
> > > -             pte = mfn_to_p2m_entry(base_pfn +
> > > (i<<(level_shift-LPAE_SHIFT)),
> > > -                                    MATTR_MEM, t, p2m->default_access);
> > > +             pte = mfn_to_p2m_entry(mfn, MATTR_MEM, t,
> > > p2m->default_access);
> > > +
> > > +             mfn = mfn_add(mfn, 1UL << (level_shift - LPAE_SHIFT));
> > 
> > Should we be incrementing mfn before calling mfn_to_p2m_entry?
> 
> No. The base of the superpage is mfn; after splitting, the first entry will
> be equal to the base, the second entry to base + level_size, and so on...

I understand what the patch is doing now, I confused "1" with "i" :-)
The patch is OK. It might be more obvious as the following:


  for ( i=0 ; i < LPAE_ENTRIES; i++ )
  {
     pte = mfn_to_p2m_entry(mfn_add(mfn, (i<<(level_shift-LPAE_SHIFT))),
                            MATTR_MEM, t, p2m->default_access);


However it's just a matter of taste, so I'll let you choose the way you
prefer.

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


* Re: [PATCH 06/22] xen/arm: p2m: Use the typesafe MFN in mfn_to_p2m_entry
  2016-07-27 18:25       ` Stefano Stabellini
@ 2016-07-27 20:14         ` Julien Grall
  0 siblings, 0 replies; 80+ messages in thread
From: Julien Grall @ 2016-07-27 20:14 UTC (permalink / raw)
  To: Stefano Stabellini; +Cc: proskurin, steve.capper, wei.chen, xen-devel

Hi Stefano,

On 27/07/2016 19:25, Stefano Stabellini wrote:
> On Wed, 27 Jul 2016, Julien Grall wrote:
>> Hi Stefano,
>>
>> On 26/07/16 23:28, Stefano Stabellini wrote:
>>> On Wed, 20 Jul 2016, Julien Grall wrote:
>>>> @@ -411,7 +411,7 @@ static int p2m_create_table(struct domain *d, lpae_t
>>>> *entry,
>>>>      if ( splitting )
>>>>      {
>>>>          p2m_type_t t = entry->p2m.type;
>>>> -        unsigned long base_pfn = entry->p2m.base;
>>>> +        mfn_t mfn = _mfn(entry->p2m.base);
>>>>          int i;
>>>>
>>>>          /*
>>>> @@ -420,8 +420,9 @@ static int p2m_create_table(struct domain *d, lpae_t
>>>> *entry,
>>>>           */
>>>>           for ( i=0 ; i < LPAE_ENTRIES; i++ )
>>>>           {
>>>> -             pte = mfn_to_p2m_entry(base_pfn +
>>>> (i<<(level_shift-LPAE_SHIFT)),
>>>> -                                    MATTR_MEM, t, p2m->default_access);
>>>> +             pte = mfn_to_p2m_entry(mfn, MATTR_MEM, t,
>>>> p2m->default_access);
>>>> +
>>>> +             mfn = mfn_add(mfn, 1UL << (level_shift - LPAE_SHIFT));
>>>
>>> Should we be incrementing mfn before calling mfn_to_p2m_entry?
>>
>> No. The base of the superpage is mfn; after splitting, the first entry will
>> be equal to the base, the second entry to base + level_size, and so on...
>
> I understand what the patch is doing now, I confused "1" with "i" :-)
> The patch is OK. It might be more obvious as the following:
>
>
>   for ( i=0 ; i < LPAE_ENTRIES; i++ )
>   {
>      pte = mfn_to_p2m_entry(mfn_add(mfn, (i<<(level_shift-LPAE_SHIFT))),
>                             MATTR_MEM, t, p2m->default_access);
>
>
> However it's just a matter of taste, so I'll let you choose the way you
> prefer.

I wanted to avoid shifting "i" on each loop iteration (which should save 
an instruction). However, as it seems to be confusing, I will use your 
suggestion.
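
For reference, the two forms are sketched side by side below, using the
mfn_to_p2m_entry() arguments as quoted above (an illustration only, not
the final code):

    /* Form used in the patch: one mfn_add() per iteration. */
    for ( i = 0; i < LPAE_ENTRIES; i++ )
    {
        pte = mfn_to_p2m_entry(mfn, MATTR_MEM, t, p2m->default_access);
        mfn = mfn_add(mfn, 1UL << (level_shift - LPAE_SHIFT));
        /* ... write pte ... */
    }

    /* Suggested form: compute each entry's offset from 'i', at the cost
     * of an extra shift per iteration, but arguably easier to read. */
    for ( i = 0; i < LPAE_ENTRIES; i++ )
    {
        pte = mfn_to_p2m_entry(mfn_add(mfn, i << (level_shift - LPAE_SHIFT)),
                               MATTR_MEM, t, p2m->default_access);
        /* ... write pte ... */
    }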

>
> Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

Thank you!

Cheers,

-- 
Julien Grall


* Re: [PATCH 11/22] xen/arm: p2m: Find the memory attributes based on the p2m type
  2016-07-27 17:55     ` Stefano Stabellini
@ 2016-07-27 20:15       ` Julien Grall
  0 siblings, 0 replies; 80+ messages in thread
From: Julien Grall @ 2016-07-27 20:15 UTC (permalink / raw)
  To: Stefano Stabellini; +Cc: proskurin, steve.capper, wei.chen, xen-devel



On 27/07/2016 18:55, Stefano Stabellini wrote:
> On Wed, 27 Jul 2016, Julien Grall wrote:
>> Hi,
>>
>> On 20/07/16 17:10, Julien Grall wrote:
>>> Currently, mfn_to_p2m_entry is relying on the caller to provide the
>>> correct memory attribute and will deduce the shareability based on it.
>>>
>>> Some of the callers, such as p2m_create_table, are using the same memory
>>> attribute regardless of the underlying p2m type. For instance, this will
>>> change the memory attribute from MATTR_DEV to MATTR_MEM when an MMIO
>>> superpage is shattered.
>>>
>>> Furthermore, it makes it more difficult to support different shareability
>>> with the same memory attribute.
>>>
>>> All the memory attributes could be deduced via the p2m type. This will
>>> simplify the code by dropping one parameter.
>>
>> I just noticed that I forgot to add my Signed-off-by. Stefano, can you add my
>> Signed-off-by while committing?
>
> I could, but given that you need to resend some of the patches anyway,
> it might be easier for me to wait for the next version.

I will resend the series tomorrow morning. Thank you for the review!

Cheers,

>
>
>>> ---
>>>     I am not sure whether p2m_mmio_direct_c (cacheable MMIO) should use
>>>     the outer-shareability or inner-shareability. Any opinions?
>>> ---
>>>  xen/arch/arm/p2m.c | 55
>>> ++++++++++++++++++++++++------------------------------
>>>  1 file changed, 24 insertions(+), 31 deletions(-)
>>>
>>> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
>>> index 999de2b..2f50b4f 100644
>>> --- a/xen/arch/arm/p2m.c
>>> +++ b/xen/arch/arm/p2m.c
>>> @@ -325,8 +325,7 @@ static void p2m_set_permission(lpae_t *e, p2m_type_t t,
>>> p2m_access_t a)
>>>      }
>>>  }
>>>
>>> -static lpae_t mfn_to_p2m_entry(mfn_t mfn, unsigned int mattr,
>>> -                               p2m_type_t t, p2m_access_t a)
>>> +static lpae_t mfn_to_p2m_entry(mfn_t mfn, p2m_type_t t, p2m_access_t a)
>>>  {
>>>      /*
>>>       * sh, xn and write bit will be defined in the following switches
>>> @@ -335,7 +334,6 @@ static lpae_t mfn_to_p2m_entry(mfn_t mfn, unsigned int
>>> mattr,
>>>      lpae_t e = (lpae_t) {
>>>          .p2m.af = 1,
>>>          .p2m.read = 1,
>>> -        .p2m.mattr = mattr,
>>>          .p2m.table = 1,
>>>          .p2m.valid = 1,
>>>          .p2m.type = t,
>>> @@ -343,18 +341,21 @@ static lpae_t mfn_to_p2m_entry(mfn_t mfn, unsigned int
>>> mattr,
>>>
>>>      BUILD_BUG_ON(p2m_max_real_type > (1 << 4));
>>>
>>> -    switch (mattr)
>>> +    switch ( t )
>>>      {
>>> -    case MATTR_MEM:
>>> -        e.p2m.sh = LPAE_SH_INNER;
>>> +    case p2m_mmio_direct_nc:
>>> +        e.p2m.mattr = MATTR_DEV;
>>> +        e.p2m.sh = LPAE_SH_OUTER;
>>>          break;
>>>
>>> -    case MATTR_DEV:
>>> +    case p2m_mmio_direct_c:
>>> +        e.p2m.mattr = MATTR_MEM;
>>>          e.p2m.sh = LPAE_SH_OUTER;
>>>          break;
>>> +
>>>      default:
>>> -        BUG();
>>> -        break;
>>> +        e.p2m.mattr = MATTR_MEM;
>>> +        e.p2m.sh = LPAE_SH_INNER;
>>>      }
>>>
>>>      p2m_set_permission(&e, t, a);
>>> @@ -421,7 +422,7 @@ static int p2m_create_table(struct domain *d, lpae_t
>>> *entry,
>>>           */
>>>           for ( i=0 ; i < LPAE_ENTRIES; i++ )
>>>           {
>>> -             pte = mfn_to_p2m_entry(mfn, MATTR_MEM, t,
>>> p2m->default_access);
>>> +             pte = mfn_to_p2m_entry(mfn, t, p2m->default_access);
>>>
>>>               mfn = mfn_add(mfn, 1UL << (level_shift - LPAE_SHIFT));
>>>
>>> @@ -445,7 +446,7 @@ static int p2m_create_table(struct domain *d, lpae_t
>>> *entry,
>>>
>>>      unmap_domain_page(p);
>>>
>>> -    pte = mfn_to_p2m_entry(_mfn(page_to_mfn(page)), MATTR_MEM, p2m_invalid,
>>> +    pte = mfn_to_p2m_entry(_mfn(page_to_mfn(page)), p2m_invalid,
>>>                             p2m->default_access);
>>>
>>>      p2m_write_pte(entry, pte, flush_cache);
>>> @@ -666,7 +667,6 @@ static int apply_one_level(struct domain *d,
>>>                             paddr_t *addr,
>>>                             paddr_t *maddr,
>>>                             bool_t *flush,
>>> -                           int mattr,
>>>                             p2m_type_t t,
>>>                             p2m_access_t a)
>>>  {
>>> @@ -695,7 +695,7 @@ static int apply_one_level(struct domain *d,
>>>                  return rc;
>>>
>>>              /* New mapping is superpage aligned, make it */
>>> -            pte = mfn_to_p2m_entry(_mfn(*maddr >> PAGE_SHIFT), mattr, t,
>>> a);
>>> +            pte = mfn_to_p2m_entry(_mfn(*maddr >> PAGE_SHIFT), t, a);
>>>              if ( level < 3 )
>>>                  pte.p2m.table = 0; /* Superpage entry */
>>>
>>> @@ -915,7 +915,6 @@ static int apply_p2m_changes(struct domain *d,
>>>                       gfn_t sgfn,
>>>                       unsigned long nr,
>>>                       mfn_t smfn,
>>> -                     int mattr,
>>>                       uint32_t mask,
>>>                       p2m_type_t t,
>>>                       p2m_access_t a)
>>> @@ -1054,7 +1053,7 @@ static int apply_p2m_changes(struct domain *d,
>>>                                    level, flush_pt, op,
>>>                                    start_gpaddr, end_gpaddr,
>>>                                    &addr, &maddr, &flush,
>>> -                                  mattr, t, a);
>>> +                                  t, a);
>>>              if ( ret < 0 ) { rc = ret ; goto out; }
>>>              count += ret;
>>>
>>> @@ -1163,7 +1162,7 @@ out:
>>>           * mapping.
>>>           */
>>>          apply_p2m_changes(d, REMOVE, sgfn, gfn - gfn_x(sgfn), smfn,
>>> -                          mattr, 0, p2m_invalid,
>>> d->arch.p2m.default_access);
>>> +                          0, p2m_invalid, d->arch.p2m.default_access);
>>>      }
>>>
>>>      return rc;
>>> @@ -1173,10 +1172,10 @@ static inline int p2m_insert_mapping(struct domain
>>> *d,
>>>                                       gfn_t start_gfn,
>>>                                       unsigned long nr,
>>>                                       mfn_t mfn,
>>> -                                     int mattr, p2m_type_t t)
>>> +                                     p2m_type_t t)
>>>  {
>>>      return apply_p2m_changes(d, INSERT, start_gfn, nr, mfn,
>>> -                             mattr, 0, t, d->arch.p2m.default_access);
>>> +                             0, t, d->arch.p2m.default_access);
>>>  }
>>>
>>>  static inline int p2m_remove_mapping(struct domain *d,
>>> @@ -1186,8 +1185,7 @@ static inline int p2m_remove_mapping(struct domain *d,
>>>  {
>>>      return apply_p2m_changes(d, REMOVE, start_gfn, nr, mfn,
>>>                               /* arguments below not used when removing
>>> mapping */
>>> -                             MATTR_MEM, 0, p2m_invalid,
>>> -                             d->arch.p2m.default_access);
>>> +                             0, p2m_invalid, d->arch.p2m.default_access);
>>>  }
>>>
>>>  int map_regions_rw_cache(struct domain *d,
>>> @@ -1195,8 +1193,7 @@ int map_regions_rw_cache(struct domain *d,
>>>                           unsigned long nr,
>>>                           mfn_t mfn)
>>>  {
>>> -    return p2m_insert_mapping(d, gfn, nr, mfn,
>>> -                              MATTR_MEM, p2m_mmio_direct_c);
>>> +    return p2m_insert_mapping(d, gfn, nr, mfn, p2m_mmio_direct_c);
>>>  }
>>>
>>>  int unmap_regions_rw_cache(struct domain *d,
>>> @@ -1212,8 +1209,7 @@ int map_mmio_regions(struct domain *d,
>>>                       unsigned long nr,
>>>                       mfn_t mfn)
>>>  {
>>> -    return p2m_insert_mapping(d, start_gfn, nr, mfn,
>>> -                              MATTR_DEV, p2m_mmio_direct_nc);
>>> +    return p2m_insert_mapping(d, start_gfn, nr, mfn, p2m_mmio_direct_nc);
>>>  }
>>>
>>>  int unmap_mmio_regions(struct domain *d,
>>> @@ -1251,8 +1247,7 @@ int guest_physmap_add_entry(struct domain *d,
>>>                              unsigned long page_order,
>>>                              p2m_type_t t)
>>>  {
>>> -    return p2m_insert_mapping(d, gfn, (1 << page_order), mfn,
>>> -                              MATTR_MEM, t);
>>> +    return p2m_insert_mapping(d, gfn, (1 << page_order), mfn, t);
>>>  }
>>>
>>>  void guest_physmap_remove_page(struct domain *d,
>>> @@ -1412,7 +1407,7 @@ int relinquish_p2m_mapping(struct domain *d)
>>>      nr = gfn_x(p2m->max_mapped_gfn) - gfn_x(p2m->lowest_mapped_gfn);
>>>
>>>      return apply_p2m_changes(d, RELINQUISH, p2m->lowest_mapped_gfn, nr,
>>> -                             INVALID_MFN, MATTR_MEM, 0, p2m_invalid,
>>> +                             INVALID_MFN, 0, p2m_invalid,
>>>                               d->arch.p2m.default_access);
>>>  }
>>>
>>> @@ -1425,8 +1420,7 @@ int p2m_cache_flush(struct domain *d, gfn_t start,
>>> unsigned long nr)
>>>      end = gfn_min(end, p2m->max_mapped_gfn);
>>>
>>>      return apply_p2m_changes(d, CACHEFLUSH, start, nr, INVALID_MFN,
>>> -                             MATTR_MEM, 0, p2m_invalid,
>>> -                             d->arch.p2m.default_access);
>>> +                             0, p2m_invalid, d->arch.p2m.default_access);
>>>  }
>>>
>>>  mfn_t gfn_to_mfn(struct domain *d, gfn_t gfn)
>>> @@ -1827,8 +1821,7 @@ long p2m_set_mem_access(struct domain *d, gfn_t gfn,
>>> uint32_t nr,
>>>      }
>>>
>>>      rc = apply_p2m_changes(d, MEMACCESS, gfn_add(gfn, start),
>>> -                           (nr - start), INVALID_MFN,
>>> -                           MATTR_MEM, mask, 0, a);
>>> +                           (nr - start), INVALID_MFN, mask, 0, a);
>>>      if ( rc < 0 )
>>>          return rc;
>>>      else if ( rc > 0 )
>>>
>>
>> --
>> Julien Grall
>>
>

-- 
Julien Grall


Thread overview: 80+ messages
2016-07-20 16:10 [PATCH 00/22] xen/arm: P2M clean-up and fixes Julien Grall
2016-07-20 16:10 ` [PATCH 01/22] xen/arm: system: Use the correct parameter name in local_irq_restore Julien Grall
2016-07-22  1:19   ` Stefano Stabellini
2016-07-20 16:10 ` [PATCH 02/22] xen/arm: p2m: Pass the vCPU in parameter to get_page_from_gva Julien Grall
2016-07-22  1:22   ` Stefano Stabellini
2016-07-20 16:10 ` [PATCH 03/22] xen/arm: p2m: Restrict usage of get_page_from_gva to the current vCPU Julien Grall
2016-07-22  1:25   ` Stefano Stabellini
2016-07-20 16:10 ` [PATCH 04/22] xen/arm: p2m: Fix multi-lines coding style comments Julien Grall
2016-07-22  1:26   ` Stefano Stabellini
2016-07-20 16:10 ` [PATCH 05/22] xen/arm: p2m: Clean-up mfn_to_p2m_entry Julien Grall
2016-07-26 22:24   ` Stefano Stabellini
2016-07-20 16:10 ` [PATCH 06/22] xen/arm: p2m: Use the typesafe MFN in mfn_to_p2m_entry Julien Grall
2016-07-26 22:28   ` Stefano Stabellini
2016-07-27  9:54     ` Julien Grall
2016-07-27 18:25       ` Stefano Stabellini
2016-07-27 20:14         ` Julien Grall
2016-07-20 16:10 ` [PATCH 07/22] xen/arm: p2m: Use p2m_is_foreign in get_page_from_gfn to avoid open coding Julien Grall
2016-07-26 22:33   ` Stefano Stabellini
2016-07-20 16:10 ` [PATCH 08/22] xen/arm: p2m: Simplify p2m type check by using bitmask Julien Grall
2016-07-26 22:36   ` Stefano Stabellini
2016-07-20 16:10 ` [PATCH 09/22] xen/arm: p2m: Use a whitelist rather than blacklist in get_page_from_gfn Julien Grall
2016-07-26 22:44   ` Stefano Stabellini
2016-07-27  9:59     ` Julien Grall
2016-07-27 17:56       ` Stefano Stabellini
2016-07-27 17:57         ` Julien Grall
2016-07-20 16:10 ` [PATCH 10/22] xen/arm: p2m: Differentiate cacheable vs non-cacheable MMIO Julien Grall
2016-07-26 22:47   ` Stefano Stabellini
2016-07-20 16:10 ` [PATCH 11/22] xen/arm: p2m: Find the memory attributes based on the p2m type Julien Grall
2016-07-27  0:41   ` Stefano Stabellini
2016-07-27 17:15   ` Julien Grall
2016-07-27 17:55     ` Stefano Stabellini
2016-07-27 20:15       ` Julien Grall
2016-07-20 16:10 ` [PATCH 12/22] xen/arm: p2m: Remove unnecessary locking Julien Grall
2016-07-27  0:47   ` Stefano Stabellini
2016-07-20 16:10 ` [PATCH 13/22] xen/arm: p2m: Introduce p2m_{read, write}_{, un}lock helpers Julien Grall
2016-07-27  0:50   ` Stefano Stabellini
2016-07-20 16:10 ` [PATCH 14/22] xen/arm: p2m: Switch the p2m lock from spinlock to rwlock Julien Grall
2016-07-27  0:51   ` Stefano Stabellini
2016-07-20 16:10 ` [PATCH 15/22] xen/arm: Don't call p2m_alloc_table from arch_domain_create Julien Grall
2016-07-22  8:32   ` Sergej Proskurin
2016-07-22  9:18     ` Julien Grall
2016-07-22 10:16       ` Sergej Proskurin
2016-07-22 10:26         ` Julien Grall
2016-07-22 10:39           ` Sergej Proskurin
2016-07-22 10:38             ` Julien Grall
2016-07-22 11:05               ` Sergej Proskurin
2016-07-22 13:00                 ` Julien Grall
2016-07-23 17:59                   ` Sergej Proskurin
2016-07-27  0:54   ` Stefano Stabellini
2016-07-20 16:10 ` [PATCH 16/22] xen/arm: p2m: Move the vttbr field from arch_domain to p2m_domain Julien Grall
2016-07-22  7:46   ` Sergej Proskurin
2016-07-22  9:23     ` Julien Grall
2016-07-27  0:57   ` Stefano Stabellini
2016-07-27 10:00     ` Julien Grall
2016-07-27 17:19   ` Julien Grall
2016-07-20 16:10 ` [PATCH 17/22] xen/arm: p2m: Don't need to restore the state for an idle vCPU Julien Grall
2016-07-22  7:37   ` Sergej Proskurin
2016-07-27  1:05   ` Stefano Stabellini
2016-07-20 16:11 ` [PATCH 18/22] xen/arm: p2m: Rework the context switch to another VTTBR in flush_tlb_domain Julien Grall
2016-07-22  7:51   ` Sergej Proskurin
2016-07-27  1:12   ` Stefano Stabellini
2016-07-27 10:22     ` Julien Grall
2016-07-20 16:11 ` [PATCH 19/22] xen/arm: p2m: Inline p2m_load_VTTBR into p2m_restore_state Julien Grall
2016-07-22  8:07   ` Sergej Proskurin
2016-07-22  9:29     ` Julien Grall
2016-07-27  1:13   ` Stefano Stabellini
2016-07-20 16:11 ` [PATCH 20/22] xen/arm: Don't export flush_tlb_domain Julien Grall
2016-07-22  8:54   ` Sergej Proskurin
2016-07-22  9:30     ` Julien Grall
2016-07-22 10:25       ` Sergej Proskurin
2016-07-22 10:34         ` Julien Grall
2016-07-22 10:46           ` Sergej Proskurin
2016-07-22 10:57             ` Julien Grall
2016-07-22 11:22               ` Sergej Proskurin
2016-07-27  1:14   ` Stefano Stabellini
2016-07-20 16:11 ` [PATCH 21/22] xen/arm: p2m: Replace flush_tlb_domain by p2m_flush_tlb Julien Grall
2016-07-27  1:15   ` Stefano Stabellini
2016-07-20 16:11 ` [PATCH 22/22] xen/arm: p2m: Pass the p2m in parameter rather the domain when it is possible Julien Grall
2016-07-27  1:15   ` Stefano Stabellini
2016-07-22  1:31 ` [PATCH 00/22] xen/arm: P2M clean-up and fixes Stefano Stabellini
