* [v2 00/16] xen/arm: Stage-2 handling cleanup
@ 2017-12-12 19:01 Julien Grall
  2017-12-12 19:01 ` [v2 01/16] xen/arm: raw_copy_to_guest_helper: Rename flush_dcache to flags Julien Grall
                   ` (15 more replies)
  0 siblings, 16 replies; 24+ messages in thread
From: Julien Grall @ 2017-12-12 19:01 UTC (permalink / raw)
  To: xen-devel; +Cc: sstabellini, Julien Grall, andre.przywara

Hi all,

This patch series is a collection of cleanups around stage-2 handling. It
consolidates different pieces of the hypervisor, which will make it easier
to maintain and update stage-2 code in the future.

For the details of each change, see the individual patches.

Cheers,

Julien Grall (16):
  xen/arm: raw_copy_to_guest_helper: Rename flush_dcache to flags
  xen/arm: raw_copy_to_guest_helper: Rework the prototype and rename it
  xen/arm: Extend copy_to_guest to support copying from guest VA and use
    it
  xen/arm: Extend copy_to_guest to support zeroing guest VA and use it
  xen/arm: guest_copy: Extend the prototype to pass the vCPU
  xen/arm: Extend copy_to_guest to support copying from/to guest
    physical address
  xen/arm: Introduce copy_to_guest_phys_flush_dcache
  xen/arm: kernel: Rework kernel_zimage_load to use the generic copy
    helper
  xen/arm: domain_build: Rework initrd_load to use the generic copy
    helper
  xen/arm: domain_build: Use copy_to_guest_phys_flush_dcache in dtb_load
  xen/arm: p2m: Rename p2m_flush_tlb and p2m_flush_tlb_sync
  xen/arm: p2m: Introduce p2m_tlb_flush_sync, export it and use it
  xen/arm: p2m: Fold p2m_tlb_flush into p2m_force_tlb_flush_sync
  xen/arm: traps: Remove the field gva from mmio_info_t
  xen/arm: traps: Move the definition of mmio_info_t in try_handle_mmio
  xen/arm: traps: Merge do_trap_instr_abort_guest and
    do_trap_data_abort_guest

 xen/arch/arm/domain_build.c        |  39 +++-----
 xen/arch/arm/guestcopy.c           | 200 +++++++++++++++++--------------------
 xen/arch/arm/kernel.c              |  33 +++---
 xen/arch/arm/kernel.h              |   2 +
 xen/arch/arm/p2m.c                 |  53 +++++-----
 xen/arch/arm/traps.c               | 161 +++++++++++------------------
 xen/include/asm-arm/guest_access.h |   6 ++
 xen/include/asm-arm/mmio.h         |   1 -
 xen/include/asm-arm/p2m.h          |   2 +
 9 files changed, 209 insertions(+), 288 deletions(-)

-- 
2.11.0


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 24+ messages in thread

* [v2 01/16] xen/arm: raw_copy_to_guest_helper: Rename flush_dcache to flags
  2017-12-12 19:01 [v2 00/16] xen/arm: Stage-2 handling cleanup Julien Grall
@ 2017-12-12 19:01 ` Julien Grall
  2017-12-12 19:01 ` [v2 02/16] xen/arm: raw_copy_to_guest_helper: Rework the prototype and rename it Julien Grall
                   ` (14 subsequent siblings)
  15 siblings, 0 replies; 24+ messages in thread
From: Julien Grall @ 2017-12-12 19:01 UTC (permalink / raw)
  To: xen-devel; +Cc: sstabellini, Julien Grall, andre.przywara

In a follow-up patch, it will be necessary to pass more flags to the
function.

Rename flush_dcache to flags and introduce a define to tell whether the
cache needs to be flushed after the copy.

Signed-off-by: Julien Grall <julien.grall@linaro.org>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

---
    Changes in v2:
        - Add Stefano's reviewed-by
---
 xen/arch/arm/guestcopy.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/xen/arch/arm/guestcopy.c b/xen/arch/arm/guestcopy.c
index 4ee07fcea3..2620e659b4 100644
--- a/xen/arch/arm/guestcopy.c
+++ b/xen/arch/arm/guestcopy.c
@@ -5,8 +5,10 @@
 #include <asm/current.h>
 #include <asm/guest_access.h>
 
+#define COPY_flush_dcache   (1U << 0)
+
 static unsigned long raw_copy_to_guest_helper(void *to, const void *from,
-                                              unsigned len, int flush_dcache)
+                                              unsigned len, int flags)
 {
     /* XXX needs to handle faults */
     unsigned offset = (vaddr_t)to & ~PAGE_MASK;
@@ -24,7 +26,7 @@ static unsigned long raw_copy_to_guest_helper(void *to, const void *from,
         p = __map_domain_page(page);
         p += offset;
         memcpy(p, from, size);
-        if ( flush_dcache )
+        if ( flags & COPY_flush_dcache )
             clean_dcache_va_range(p, size);
 
         unmap_domain_page(p - offset);
@@ -50,7 +52,7 @@ unsigned long raw_copy_to_guest(void *to, const void *from, unsigned len)
 unsigned long raw_copy_to_guest_flush_dcache(void *to, const void *from,
                                              unsigned len)
 {
-    return raw_copy_to_guest_helper(to, from, len, 1);
+    return raw_copy_to_guest_helper(to, from, len, COPY_flush_dcache);
 }
 
 unsigned long raw_clear_guest(void *to, unsigned len)
-- 
2.11.0




* [v2 02/16] xen/arm: raw_copy_to_guest_helper: Rework the prototype and rename it
  2017-12-12 19:01 [v2 00/16] xen/arm: Stage-2 handling cleanup Julien Grall
  2017-12-12 19:01 ` [v2 01/16] xen/arm: raw_copy_to_guest_helper: Rename flush_dcache to flags Julien Grall
@ 2017-12-12 19:01 ` Julien Grall
  2017-12-12 19:51   ` Stefano Stabellini
  2017-12-12 19:01 ` [v2 03/16] xen/arm: Extend copy_to_guest to support copying from guest VA and use it Julien Grall
                   ` (13 subsequent siblings)
  15 siblings, 1 reply; 24+ messages in thread
From: Julien Grall @ 2017-12-12 19:01 UTC (permalink / raw)
  To: xen-devel; +Cc: sstabellini, Julien Grall, andre.przywara

All the helpers within arch/arm/guestcopy.c do the same thing:
copy data from/to the guest.

At the moment, the logic is duplicated in each helper, making it more
difficult to implement new variants.

The first step for the consolidation is to get a common prototype and a
base. For convenience (it is at the beginning of the file!),
raw_copy_to_guest_helper is chosen.

The function is now renamed copy_guest to show it will be a
generic function to copy data from/to the guest. Note that for now, only
copying to guest virtual address is supported. Follow-up patches will
extend the support.

Signed-off-by: Julien Grall <julien.grall@linaro.org>

---
    Changes in v2:
        - Use vaddr_t
        - Use uint64_t for addr in copy_guest
        - Add a BUILD_BUG_ON to make sure vaddr_t fit in uint64_t.
---
 xen/arch/arm/guestcopy.c | 20 +++++++++++---------
 1 file changed, 11 insertions(+), 9 deletions(-)

diff --git a/xen/arch/arm/guestcopy.c b/xen/arch/arm/guestcopy.c
index 2620e659b4..08d0fa0a83 100644
--- a/xen/arch/arm/guestcopy.c
+++ b/xen/arch/arm/guestcopy.c
@@ -7,11 +7,13 @@
 
 #define COPY_flush_dcache   (1U << 0)
 
-static unsigned long raw_copy_to_guest_helper(void *to, const void *from,
-                                              unsigned len, int flags)
+static unsigned long copy_guest(void *buf, uint64_t addr, unsigned int len,
+                                unsigned int flags)
 {
     /* XXX needs to handle faults */
-    unsigned offset = (vaddr_t)to & ~PAGE_MASK;
+    unsigned offset = addr & ~PAGE_MASK;
+
+    BUILD_BUG_ON((sizeof(addr)) < sizeof(vaddr_t));
 
     while ( len )
     {
@@ -19,21 +21,21 @@ static unsigned long raw_copy_to_guest_helper(void *to, const void *from,
         unsigned size = min(len, (unsigned)PAGE_SIZE - offset);
         struct page_info *page;
 
-        page = get_page_from_gva(current, (vaddr_t) to, GV2M_WRITE);
+        page = get_page_from_gva(current, addr, GV2M_WRITE);
         if ( page == NULL )
             return len;
 
         p = __map_domain_page(page);
         p += offset;
-        memcpy(p, from, size);
+        memcpy(p, buf, size);
         if ( flags & COPY_flush_dcache )
             clean_dcache_va_range(p, size);
 
         unmap_domain_page(p - offset);
         put_page(page);
         len -= size;
-        from += size;
-        to += size;
+        buf += size;
+        addr += size;
         /*
          * After the first iteration, guest virtual address is correctly
          * aligned to PAGE_SIZE.
@@ -46,13 +48,13 @@ static unsigned long raw_copy_to_guest_helper(void *to, const void *from,
 
 unsigned long raw_copy_to_guest(void *to, const void *from, unsigned len)
 {
-    return raw_copy_to_guest_helper(to, from, len, 0);
+    return copy_guest((void *)from, (vaddr_t)to, len, 0);
 }
 
 unsigned long raw_copy_to_guest_flush_dcache(void *to, const void *from,
                                              unsigned len)
 {
-    return raw_copy_to_guest_helper(to, from, len, COPY_flush_dcache);
+    return copy_guest((void *)from, (vaddr_t)to, len, COPY_flush_dcache);
 }
 
 unsigned long raw_clear_guest(void *to, unsigned len)
-- 
2.11.0




* [v2 03/16] xen/arm: Extend copy_to_guest to support copying from guest VA and use it
  2017-12-12 19:01 [v2 00/16] xen/arm: Stage-2 handling cleanup Julien Grall
  2017-12-12 19:01 ` [v2 01/16] xen/arm: raw_copy_to_guest_helper: Rename flush_dcache to flags Julien Grall
  2017-12-12 19:01 ` [v2 02/16] xen/arm: raw_copy_to_guest_helper: Rework the prototype and rename it Julien Grall
@ 2017-12-12 19:01 ` Julien Grall
  2017-12-12 19:54   ` Stefano Stabellini
  2017-12-12 19:02 ` [v2 04/16] xen/arm: Extend copy_to_guest to support zeroing " Julien Grall
                   ` (12 subsequent siblings)
  15 siblings, 1 reply; 24+ messages in thread
From: Julien Grall @ 2017-12-12 19:01 UTC (permalink / raw)
  To: xen-devel; +Cc: sstabellini, Julien Grall, andre.przywara

The only differences between copy_to_guest (formerly called
raw_copy_to_guest_helper) and raw_copy_from_guest are:
    - The direction of the memcpy
    - The permission used for translating the address

Extend copy_to_guest to support copying from guest VA by using a bit in
the flags to tell the direction of the copy.

Lastly, reimplement raw_copy_from_guest using copy_to_guest.

Signed-off-by: Julien Grall <julien.grall@linaro.org>

---
    Changes in v2:
        - Use vaddr_t
---
 xen/arch/arm/guestcopy.c | 46 +++++++++++++---------------------------------
 1 file changed, 13 insertions(+), 33 deletions(-)

diff --git a/xen/arch/arm/guestcopy.c b/xen/arch/arm/guestcopy.c
index 08d0fa0a83..12fb03dd19 100644
--- a/xen/arch/arm/guestcopy.c
+++ b/xen/arch/arm/guestcopy.c
@@ -6,6 +6,8 @@
 #include <asm/guest_access.h>
 
 #define COPY_flush_dcache   (1U << 0)
+#define COPY_from_guest     (0U << 1)
+#define COPY_to_guest       (1U << 1)
 
 static unsigned long copy_guest(void *buf, uint64_t addr, unsigned int len,
                                 unsigned int flags)
@@ -21,13 +23,18 @@ static unsigned long copy_guest(void *buf, uint64_t addr, unsigned int len,
         unsigned size = min(len, (unsigned)PAGE_SIZE - offset);
         struct page_info *page;
 
-        page = get_page_from_gva(current, addr, GV2M_WRITE);
+        page = get_page_from_gva(current, addr,
+                                 (flags & COPY_to_guest) ? GV2M_WRITE : GV2M_READ);
         if ( page == NULL )
             return len;
 
         p = __map_domain_page(page);
         p += offset;
-        memcpy(p, buf, size);
+        if ( flags & COPY_to_guest )
+            memcpy(p, buf, size);
+        else
+            memcpy(buf, p, size);
+
         if ( flags & COPY_flush_dcache )
             clean_dcache_va_range(p, size);
 
@@ -48,13 +55,14 @@ static unsigned long copy_guest(void *buf, uint64_t addr, unsigned int len,
 
 unsigned long raw_copy_to_guest(void *to, const void *from, unsigned len)
 {
-    return copy_guest((void *)from, (vaddr_t)to, len, 0);
+    return copy_guest((void *)from, (vaddr_t)to, len, COPY_to_guest);
 }
 
 unsigned long raw_copy_to_guest_flush_dcache(void *to, const void *from,
                                              unsigned len)
 {
-    return copy_guest((void *)from, (vaddr_t)to, len, COPY_flush_dcache);
+    return copy_guest((void *)from, (vaddr_t)to, len,
+                      COPY_to_guest | COPY_flush_dcache);
 }
 
 unsigned long raw_clear_guest(void *to, unsigned len)
@@ -92,35 +100,7 @@ unsigned long raw_clear_guest(void *to, unsigned len)
 
 unsigned long raw_copy_from_guest(void *to, const void __user *from, unsigned len)
 {
-    unsigned offset = (vaddr_t)from & ~PAGE_MASK;
-
-    while ( len )
-    {
-        void *p;
-        unsigned size = min(len, (unsigned)(PAGE_SIZE - offset));
-        struct page_info *page;
-
-        page = get_page_from_gva(current, (vaddr_t) from, GV2M_READ);
-        if ( page == NULL )
-            return len;
-
-        p = __map_domain_page(page);
-        p += ((vaddr_t)from & (~PAGE_MASK));
-
-        memcpy(to, p, size);
-
-        unmap_domain_page(p);
-        put_page(page);
-        len -= size;
-        from += size;
-        to += size;
-        /*
-         * After the first iteration, guest virtual address is correctly
-         * aligned to PAGE_SIZE.
-         */
-        offset = 0;
-    }
-    return 0;
+    return copy_guest(to, (vaddr_t)from, len, COPY_from_guest);
 }
 
 /*
-- 
2.11.0




* [v2 04/16] xen/arm: Extend copy_to_guest to support zeroing guest VA and use it
  2017-12-12 19:01 [v2 00/16] xen/arm: Stage-2 handling cleanup Julien Grall
                   ` (2 preceding siblings ...)
  2017-12-12 19:01 ` [v2 03/16] xen/arm: Extend copy_to_guest to support copying from guest VA and use it Julien Grall
@ 2017-12-12 19:02 ` Julien Grall
  2017-12-12 19:57   ` Stefano Stabellini
  2017-12-12 19:02 ` [v2 05/16] xen/arm: guest_copy: Extend the prototype to pass the vCPU Julien Grall
                   ` (11 subsequent siblings)
  15 siblings, 1 reply; 24+ messages in thread
From: Julien Grall @ 2017-12-12 19:02 UTC (permalink / raw)
  To: xen-devel; +Cc: sstabellini, Julien Grall, andre.przywara

The function copy_to_guest can easily be extended to support zeroing
guest VA. To avoid using a new bit, a NULL buffer (i.e. buf == NULL) is
taken to mean the guest memory should be zeroed.

Lastly, reimplement raw_clear_guest using copy_to_guest.

Signed-off-by: Julien Grall <julien.grall@linaro.org>

---
    Changes in v2:
        - Use vaddr_t
---
 xen/arch/arm/guestcopy.c | 41 +++++++++++------------------------------
 1 file changed, 11 insertions(+), 30 deletions(-)

diff --git a/xen/arch/arm/guestcopy.c b/xen/arch/arm/guestcopy.c
index 12fb03dd19..ff7d15380f 100644
--- a/xen/arch/arm/guestcopy.c
+++ b/xen/arch/arm/guestcopy.c
@@ -31,7 +31,16 @@ static unsigned long copy_guest(void *buf, uint64_t addr, unsigned int len,
         p = __map_domain_page(page);
         p += offset;
         if ( flags & COPY_to_guest )
-            memcpy(p, buf, size);
+        {
+            /*
+             * buf will be NULL when the caller request to zero the
+             * guest memory.
+             */
+            if ( buf )
+                memcpy(p, buf, size);
+            else
+                memset(p, 0, size);
+        }
         else
             memcpy(buf, p, size);
 
@@ -67,35 +76,7 @@ unsigned long raw_copy_to_guest_flush_dcache(void *to, const void *from,
 
 unsigned long raw_clear_guest(void *to, unsigned len)
 {
-    /* XXX needs to handle faults */
-    unsigned offset = (vaddr_t)to & ~PAGE_MASK;
-
-    while ( len )
-    {
-        void *p;
-        unsigned size = min(len, (unsigned)PAGE_SIZE - offset);
-        struct page_info *page;
-
-        page = get_page_from_gva(current, (vaddr_t) to, GV2M_WRITE);
-        if ( page == NULL )
-            return len;
-
-        p = __map_domain_page(page);
-        p += offset;
-        memset(p, 0x00, size);
-
-        unmap_domain_page(p - offset);
-        put_page(page);
-        len -= size;
-        to += size;
-        /*
-         * After the first iteration, guest virtual address is correctly
-         * aligned to PAGE_SIZE.
-         */
-        offset = 0;
-    }
-
-    return 0;
+    return copy_guest(NULL, (vaddr_t)to, len, COPY_to_guest);
 }
 
 unsigned long raw_copy_from_guest(void *to, const void __user *from, unsigned len)
-- 
2.11.0




* [v2 05/16] xen/arm: guest_copy: Extend the prototype to pass the vCPU
  2017-12-12 19:01 [v2 00/16] xen/arm: Stage-2 handling cleanup Julien Grall
                   ` (3 preceding siblings ...)
  2017-12-12 19:02 ` [v2 04/16] xen/arm: Extend copy_to_guest to support zeroing " Julien Grall
@ 2017-12-12 19:02 ` Julien Grall
  2017-12-12 20:00   ` Stefano Stabellini
  2017-12-12 19:02 ` [v2 06/16] xen/arm: Extend copy_to_guest to support copying from/to guest physical address Julien Grall
                   ` (10 subsequent siblings)
  15 siblings, 1 reply; 24+ messages in thread
From: Julien Grall @ 2017-12-12 19:02 UTC (permalink / raw)
  To: xen-devel; +Cc: sstabellini, Julien Grall, andre.przywara

Currently, guest_copy assumes the copy will only be done for the current
vCPU. copy_guest is meant to be vCPU agnostic, so extend the prototype
to pass the vCPU.

At the same time, encapsulate the vCPU in a union to allow extension
for copying from a guest domain (the IPA case) in the future.

Signed-off-by: Julien Grall <julien.grall@linaro.org>

---
    Changes in v2:
        - Encapsulate the vCPU in an union.
        - Rework the commit message
---
 xen/arch/arm/guestcopy.c | 25 +++++++++++++++++++------
 1 file changed, 19 insertions(+), 6 deletions(-)

diff --git a/xen/arch/arm/guestcopy.c b/xen/arch/arm/guestcopy.c
index ff7d15380f..7e92e27beb 100644
--- a/xen/arch/arm/guestcopy.c
+++ b/xen/arch/arm/guestcopy.c
@@ -9,8 +9,18 @@
 #define COPY_from_guest     (0U << 1)
 #define COPY_to_guest       (1U << 1)
 
+typedef union
+{
+    struct
+    {
+        struct vcpu *v;
+    } gva;
+} copy_info_t;
+
+#define GVA_INFO(vcpu) ((copy_info_t) { .gva = { vcpu } })
+
 static unsigned long copy_guest(void *buf, uint64_t addr, unsigned int len,
-                                unsigned int flags)
+                                copy_info_t info, unsigned int flags)
 {
     /* XXX needs to handle faults */
     unsigned offset = addr & ~PAGE_MASK;
@@ -23,7 +33,7 @@ static unsigned long copy_guest(void *buf, uint64_t addr, unsigned int len,
         unsigned size = min(len, (unsigned)PAGE_SIZE - offset);
         struct page_info *page;
 
-        page = get_page_from_gva(current, addr,
+        page = get_page_from_gva(info.gva.v, addr,
                                  (flags & COPY_to_guest) ? GV2M_WRITE : GV2M_READ);
         if ( page == NULL )
             return len;
@@ -64,24 +74,27 @@ static unsigned long copy_guest(void *buf, uint64_t addr, unsigned int len,
 
 unsigned long raw_copy_to_guest(void *to, const void *from, unsigned len)
 {
-    return copy_guest((void *)from, (vaddr_t)to, len, COPY_to_guest);
+    return copy_guest((void *)from, (vaddr_t)to, len,
+                      GVA_INFO(current), COPY_to_guest);
 }
 
 unsigned long raw_copy_to_guest_flush_dcache(void *to, const void *from,
                                              unsigned len)
 {
-    return copy_guest((void *)from, (vaddr_t)to, len,
+    return copy_guest((void *)from, (vaddr_t)to, len, GVA_INFO(current),
                       COPY_to_guest | COPY_flush_dcache);
 }
 
 unsigned long raw_clear_guest(void *to, unsigned len)
 {
-    return copy_guest(NULL, (vaddr_t)to, len, COPY_to_guest);
+    return copy_guest(NULL, (vaddr_t)to, len, GVA_INFO(current),
+                      COPY_to_guest);
 }
 
 unsigned long raw_copy_from_guest(void *to, const void __user *from, unsigned len)
 {
-    return copy_guest(to, (vaddr_t)from, len, COPY_from_guest);
+    return copy_guest(to, (vaddr_t)from, len, GVA_INFO(current),
+                      COPY_from_guest);
 }
 
 /*
-- 
2.11.0




* [v2 06/16] xen/arm: Extend copy_to_guest to support copying from/to guest physical address
  2017-12-12 19:01 [v2 00/16] xen/arm: Stage-2 handling cleanup Julien Grall
                   ` (4 preceding siblings ...)
  2017-12-12 19:02 ` [v2 05/16] xen/arm: guest_copy: Extend the prototype to pass the vCPU Julien Grall
@ 2017-12-12 19:02 ` Julien Grall
  2017-12-12 20:06   ` Stefano Stabellini
  2017-12-12 19:02 ` [v2 07/16] xen/arm: Introduce copy_to_guest_phys_flush_dcache Julien Grall
                   ` (9 subsequent siblings)
  15 siblings, 1 reply; 24+ messages in thread
From: Julien Grall @ 2017-12-12 19:02 UTC (permalink / raw)
  To: xen-devel; +Cc: sstabellini, Julien Grall, andre.przywara

The only differences between copy_to_guest and access_guest_memory_by_ipa are:
    - The latter does not support copying data crossing page boundary
    - The former is copying from/to guest VA whilst the latter from
    guest PA

copy_to_guest can easily be extended to support copying from/to guest
physical address. For that, a new bit is used to tell whether a linear
address or an IPA is being used.

Lastly, access_guest_memory_by_ipa is reimplemented using copy_to_guest.
This also has the benefit of extending its use: it is now possible to
copy data crossing a page boundary.

Signed-off-by: Julien Grall <julien.grall@linaro.org>

---
    Changes in v2:
        - Rework the patch after the interface changes in the previous
        patch.
        - Use uint64_t rather than paddr_t in translate_get_page
        - Add a BUILD_BUG_ON to check whether paddr_t fits in uint64_t
---
 xen/arch/arm/guestcopy.c | 91 +++++++++++++++++++++++-------------------------
 1 file changed, 44 insertions(+), 47 deletions(-)

diff --git a/xen/arch/arm/guestcopy.c b/xen/arch/arm/guestcopy.c
index 7e92e27beb..93e4aa2d3f 100644
--- a/xen/arch/arm/guestcopy.c
+++ b/xen/arch/arm/guestcopy.c
@@ -8,6 +8,8 @@
 #define COPY_flush_dcache   (1U << 0)
 #define COPY_from_guest     (0U << 1)
 #define COPY_to_guest       (1U << 1)
+#define COPY_ipa            (0U << 2)
+#define COPY_linear         (1U << 2)
 
 typedef union
 {
@@ -15,9 +17,39 @@ typedef union
     {
         struct vcpu *v;
     } gva;
+
+    struct
+    {
+        struct domain *d;
+    } gpa;
 } copy_info_t;
 
 #define GVA_INFO(vcpu) ((copy_info_t) { .gva = { vcpu } })
+#define GPA_INFO(domain) ((copy_info_t) { .gpa = { domain } })
+
+static struct page_info *translate_get_page(copy_info_t info, uint64_t addr,
+                                            bool linear, bool write)
+{
+    p2m_type_t p2mt;
+    struct page_info *page;
+
+    if ( linear )
+        return get_page_from_gva(info.gva.v, addr,
+                                 write ? GV2M_WRITE : GV2M_READ);
+
+    page = get_page_from_gfn(info.gpa.d, paddr_to_pfn(addr), &p2mt, P2M_ALLOC);
+
+    if ( !page )
+        return NULL;
+
+    if ( !p2m_is_ram(p2mt) )
+    {
+        put_page(page);
+        return NULL;
+    }
+
+    return page;
+}
 
 static unsigned long copy_guest(void *buf, uint64_t addr, unsigned int len,
                                 copy_info_t info, unsigned int flags)
@@ -26,6 +58,7 @@ static unsigned long copy_guest(void *buf, uint64_t addr, unsigned int len,
     unsigned offset = addr & ~PAGE_MASK;
 
     BUILD_BUG_ON((sizeof(addr)) < sizeof(vaddr_t));
+    BUILD_BUG_ON((sizeof(addr)) < sizeof(paddr_t));
 
     while ( len )
     {
@@ -33,8 +66,8 @@ static unsigned long copy_guest(void *buf, uint64_t addr, unsigned int len,
         unsigned size = min(len, (unsigned)PAGE_SIZE - offset);
         struct page_info *page;
 
-        page = get_page_from_gva(info.gva.v, addr,
-                                 (flags & COPY_to_guest) ? GV2M_WRITE : GV2M_READ);
+        page = translate_get_page(info, addr, flags & COPY_linear,
+                                  flags & COPY_to_guest);
         if ( page == NULL )
             return len;
 
@@ -75,75 +108,39 @@ static unsigned long copy_guest(void *buf, uint64_t addr, unsigned int len,
 unsigned long raw_copy_to_guest(void *to, const void *from, unsigned len)
 {
     return copy_guest((void *)from, (vaddr_t)to, len,
-                      GVA_INFO(current), COPY_to_guest);
+                      GVA_INFO(current), COPY_to_guest | COPY_linear);
 }
 
 unsigned long raw_copy_to_guest_flush_dcache(void *to, const void *from,
                                              unsigned len)
 {
     return copy_guest((void *)from, (vaddr_t)to, len, GVA_INFO(current),
-                      COPY_to_guest | COPY_flush_dcache);
+                      COPY_to_guest | COPY_flush_dcache | COPY_linear);
 }
 
 unsigned long raw_clear_guest(void *to, unsigned len)
 {
     return copy_guest(NULL, (vaddr_t)to, len, GVA_INFO(current),
-                      COPY_to_guest);
+                      COPY_to_guest | COPY_linear);
 }
 
 unsigned long raw_copy_from_guest(void *to, const void __user *from, unsigned len)
 {
     return copy_guest(to, (vaddr_t)from, len, GVA_INFO(current),
-                      COPY_from_guest);
+                      COPY_from_guest | COPY_linear);
 }
 
-/*
- * Temporarily map one physical guest page and copy data to or from it.
- * The data to be copied cannot cross a page boundary.
- */
 int access_guest_memory_by_ipa(struct domain *d, paddr_t gpa, void *buf,
                                uint32_t size, bool is_write)
 {
-    struct page_info *page;
-    uint64_t offset = gpa & ~PAGE_MASK;  /* Offset within the mapped page */
-    p2m_type_t p2mt;
-    void *p;
-
-    /* Do not cross a page boundary. */
-    if ( size > (PAGE_SIZE - offset) )
-    {
-        printk(XENLOG_G_ERR "d%d: guestcopy: memory access crosses page boundary.\n",
-               d->domain_id);
-        return -EINVAL;
-    }
-
-    page = get_page_from_gfn(d, paddr_to_pfn(gpa), &p2mt, P2M_ALLOC);
-    if ( !page )
-    {
-        printk(XENLOG_G_ERR "d%d: guestcopy: failed to get table entry.\n",
-               d->domain_id);
-        return -EINVAL;
-    }
+    unsigned long left;
+    int flags = COPY_ipa;
 
-    if ( !p2m_is_ram(p2mt) )
-    {
-        put_page(page);
-        printk(XENLOG_G_ERR "d%d: guestcopy: guest memory should be RAM.\n",
-               d->domain_id);
-        return -EINVAL;
-    }
-
-    p = __map_domain_page(page);
+    flags |= is_write ? COPY_to_guest : COPY_from_guest;
 
-    if ( is_write )
-        memcpy(p + offset, buf, size);
-    else
-        memcpy(buf, p + offset, size);
+    left = copy_guest(buf, gpa, size, GPA_INFO(d), flags);
 
-    unmap_domain_page(p);
-    put_page(page);
-
-    return 0;
+    return (!left) ? 0 : -EINVAL;
 }
 
 /*
-- 
2.11.0




* [v2 07/16] xen/arm: Introduce copy_to_guest_phys_flush_dcache
  2017-12-12 19:01 [v2 00/16] xen/arm: Stage-2 handling cleanup Julien Grall
                   ` (5 preceding siblings ...)
  2017-12-12 19:02 ` [v2 06/16] xen/arm: Extend copy_to_guest to support copying from/to guest physical address Julien Grall
@ 2017-12-12 19:02 ` Julien Grall
  2017-12-12 20:10   ` Stefano Stabellini
  2017-12-12 19:02 ` [v2 08/16] xen/arm: kernel: Rework kernel_zimage_load to use the generic copy helper Julien Grall
                   ` (8 subsequent siblings)
  15 siblings, 1 reply; 24+ messages in thread
From: Julien Grall @ 2017-12-12 19:02 UTC (permalink / raw)
  To: xen-devel; +Cc: sstabellini, Julien Grall, andre.przywara

This new function will be used in a follow-up patch to copy data to the guest
using the IPA (aka guest physical address) and then clean the cache.

Signed-off-by: Julien Grall <julien.grall@linaro.org>

---
    Changes in v2:
        - Use the new interface
---
 xen/arch/arm/guestcopy.c           | 9 +++++++++
 xen/include/asm-arm/guest_access.h | 6 ++++++
 2 files changed, 15 insertions(+)

diff --git a/xen/arch/arm/guestcopy.c b/xen/arch/arm/guestcopy.c
index 93e4aa2d3f..7a0f3e9d5f 100644
--- a/xen/arch/arm/guestcopy.c
+++ b/xen/arch/arm/guestcopy.c
@@ -130,6 +130,15 @@ unsigned long raw_copy_from_guest(void *to, const void __user *from, unsigned le
                       COPY_from_guest | COPY_linear);
 }
 
+unsigned long copy_to_guest_phys_flush_dcache(struct domain *d,
+                                              paddr_t gpa,
+                                              void *buf,
+                                              unsigned int len)
+{
+    return copy_guest(buf, gpa, len, GPA_INFO(d),
+                      COPY_to_guest | COPY_ipa | COPY_flush_dcache);
+}
+
 int access_guest_memory_by_ipa(struct domain *d, paddr_t gpa, void *buf,
                                uint32_t size, bool is_write)
 {
diff --git a/xen/include/asm-arm/guest_access.h b/xen/include/asm-arm/guest_access.h
index 6796801cfe..224d2a033b 100644
--- a/xen/include/asm-arm/guest_access.h
+++ b/xen/include/asm-arm/guest_access.h
@@ -11,6 +11,12 @@ unsigned long raw_copy_to_guest_flush_dcache(void *to, const void *from,
 unsigned long raw_copy_from_guest(void *to, const void *from, unsigned len);
 unsigned long raw_clear_guest(void *to, unsigned len);
 
+/* Copy data to guest physical address, then clean the region. */
+unsigned long copy_to_guest_phys_flush_dcache(struct domain *d,
+                                              paddr_t phys,
+                                              void *buf,
+                                              unsigned int len);
+
 int access_guest_memory_by_ipa(struct domain *d, paddr_t ipa, void *buf,
                                uint32_t size, bool is_write);
 
-- 
2.11.0




* [v2 08/16] xen/arm: kernel: Rework kernel_zimage_load to use the generic copy helper
  2017-12-12 19:01 [v2 00/16] xen/arm: Stage-2 handling cleanup Julien Grall
                   ` (6 preceding siblings ...)
  2017-12-12 19:02 ` [v2 07/16] xen/arm: Introduce copy_to_guest_phys_flush_dcache Julien Grall
@ 2017-12-12 19:02 ` Julien Grall
  2017-12-12 19:02 ` [v2 09/16] xen/arm: domain_build: Rework initrd_load " Julien Grall
                   ` (7 subsequent siblings)
  15 siblings, 0 replies; 24+ messages in thread
From: Julien Grall @ 2017-12-12 19:02 UTC (permalink / raw)
  To: xen-devel; +Cc: sstabellini, Julien Grall, andre.przywara

The function kernel_zimage_load deals with IPAs but uses gvirt_to_maddr
to do the translation. This currently works fine because the stage-1 MMU
is disabled.

Furthermore, the function implements its own copy to the guest, resulting
in code duplication and making it more difficult to update the page-table
logic (such as support for Populate On Demand).

The new copy_to_guest_phys_flush_dcache could be used here by
temporarily mapping the full kernel in the virtual space.

Signed-off-by: Julien Grall <julien.grall@linaro.org>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

---
    Changes in v2:
        - Add Stefano's reviewed-by
---
 xen/arch/arm/domain_build.c |  1 +
 xen/arch/arm/kernel.c       | 33 ++++++++++++---------------------
 xen/arch/arm/kernel.h       |  2 ++
 3 files changed, 15 insertions(+), 21 deletions(-)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index d7b114ec23..aa6ff8d456 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -2135,6 +2135,7 @@ int construct_dom0(struct domain *d)
     d->max_pages = ~0U;
 
     kinfo.unassigned_mem = dom0_mem;
+    kinfo.d = d;
 
     rc = kernel_probe(&kinfo);
     if ( rc < 0 )
diff --git a/xen/arch/arm/kernel.c b/xen/arch/arm/kernel.c
index a6c6413712..2fb0b9684d 100644
--- a/xen/arch/arm/kernel.c
+++ b/xen/arch/arm/kernel.c
@@ -15,6 +15,8 @@
 #include <xen/gunzip.h>
 #include <xen/vmap.h>
 
+#include <asm/guest_access.h>
+
 #include "kernel.h"
 
 #define UIMAGE_MAGIC          0x27051956
@@ -157,7 +159,8 @@ static void kernel_zimage_load(struct kernel_info *info)
     paddr_t load_addr = kernel_zimage_place(info);
     paddr_t paddr = info->zimage.kernel_addr;
     paddr_t len = info->zimage.len;
-    unsigned long offs;
+    void *kernel;
+    int rc;
 
     info->entry = load_addr;
 
@@ -165,29 +168,17 @@ static void kernel_zimage_load(struct kernel_info *info)
 
     printk("Loading zImage from %"PRIpaddr" to %"PRIpaddr"-%"PRIpaddr"\n",
            paddr, load_addr, load_addr + len);
-    for ( offs = 0; offs < len; )
-    {
-        uint64_t par;
-        paddr_t s, l, ma = 0;
-        void *dst;
-
-        s = offs & ~PAGE_MASK;
-        l = min(PAGE_SIZE - s, len);
-
-        par = gvirt_to_maddr(load_addr + offs, &ma, GV2M_WRITE);
-        if ( par )
-        {
-            panic("Unable to map translate guest address");
-            return;
-        }
 
-        dst = map_domain_page(maddr_to_mfn(ma));
+    kernel = ioremap_wc(paddr, len);
+    if ( !kernel )
+        panic("Unable to map the hwdom kernel");
 
-        copy_from_paddr(dst + s, paddr + offs, l);
+    rc = copy_to_guest_phys_flush_dcache(info->d, load_addr,
+                                         kernel, len);
+    if ( rc != 0 )
+        panic("Unable to copy the kernel in the hwdom memory");
 
-        unmap_domain_page(dst);
-        offs += l;
-    }
+    iounmap(kernel);
 }
 
 /*
diff --git a/xen/arch/arm/kernel.h b/xen/arch/arm/kernel.h
index c1b07d4f7b..6d695097b5 100644
--- a/xen/arch/arm/kernel.h
+++ b/xen/arch/arm/kernel.h
@@ -15,6 +15,8 @@ struct kernel_info {
     enum domain_type type;
 #endif
 
+    struct domain *d;
+
     void *fdt; /* flat device tree */
     paddr_t unassigned_mem; /* RAM not (yet) assigned to a bank */
     struct meminfo mem;
-- 
2.11.0



* [v2 09/16] xen/arm: domain_build: Rework initrd_load to use the generic copy helper
  2017-12-12 19:01 [v2 00/16] xen/arm: Stage-2 handling cleanup Julien Grall
                   ` (7 preceding siblings ...)
  2017-12-12 19:02 ` [v2 08/16] xen/arm: kernel: Rework kernel_zimage_load to use the generic copy helper Julien Grall
@ 2017-12-12 19:02 ` Julien Grall
  2017-12-12 19:02 ` [v2 10/16] xen/arm: domain_build: Use copy_to_guest_phys_flush_dcache in dtb_load Julien Grall
                   ` (6 subsequent siblings)
  15 siblings, 0 replies; 24+ messages in thread
From: Julien Grall @ 2017-12-12 19:02 UTC (permalink / raw)
  To: xen-devel; +Cc: sstabellini, Julien Grall, andre.przywara

The function initrd_load deals with IPAs but uses gvirt_to_maddr to do
the translation. This currently works fine because the stage-1 MMU is
disabled.

Furthermore, the function implements its own copy-to-guest logic,
resulting in code duplication and making it more difficult to update the
page-table logic (such as support for Populate On Demand).

The new copy_to_guest_phys_flush_dcache can be used here by temporarily
mapping the full initrd in the virtual address space.

Signed-off-by: Julien Grall <julien.grall@linaro.org>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

---
    Changes in v2:
        - Add Stefano's reviewed-by
---
 xen/arch/arm/domain_build.c | 31 ++++++++-----------------------
 1 file changed, 8 insertions(+), 23 deletions(-)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index aa6ff8d456..66fd77def6 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -1968,11 +1968,11 @@ static void initrd_load(struct kernel_info *kinfo)
     const struct bootmodule *mod = kinfo->initrd_bootmodule;
     paddr_t load_addr = kinfo->initrd_paddr;
     paddr_t paddr, len;
-    unsigned long offs;
     int node;
     int res;
     __be32 val[2];
     __be32 *cellp;
+    void __iomem *initrd;
 
     if ( !mod || !mod->size )
         return;
@@ -2002,29 +2002,14 @@ static void initrd_load(struct kernel_info *kinfo)
     if ( res )
         panic("Cannot fix up \"linux,initrd-end\" property");
 
-    for ( offs = 0; offs < len; )
-    {
-        uint64_t par;
-        paddr_t s, l, ma = 0;
-        void *dst;
-
-        s = offs & ~PAGE_MASK;
-        l = min(PAGE_SIZE - s, len);
-
-        par = gvirt_to_maddr(load_addr + offs, &ma, GV2M_WRITE);
-        if ( par )
-        {
-            panic("Unable to translate guest address");
-            return;
-        }
-
-        dst = map_domain_page(maddr_to_mfn(ma));
+    initrd = ioremap_wc(paddr, len);
+    if ( !initrd )
+        panic("Unable to map the hwdom initrd");
 
-        copy_from_paddr(dst + s, paddr + offs, l);
-
-        unmap_domain_page(dst);
-        offs += l;
-    }
+    res = copy_to_guest_phys_flush_dcache(kinfo->d, load_addr,
+                                          initrd, len);
+    if ( res != 0 )
+        panic("Unable to copy the initrd in the hwdom memory");
 }
 
 static void evtchn_fixup(struct domain *d, struct kernel_info *kinfo)
-- 
2.11.0



* [v2 10/16] xen/arm: domain_build: Use copy_to_guest_phys_flush_dcache in dtb_load
  2017-12-12 19:01 [v2 00/16] xen/arm: Stage-2 handling cleanup Julien Grall
                   ` (8 preceding siblings ...)
  2017-12-12 19:02 ` [v2 09/16] xen/arm: domain_build: Rework initrd_load " Julien Grall
@ 2017-12-12 19:02 ` Julien Grall
  2017-12-12 19:02 ` [v2 11/16] xen/arm: p2m: Rename p2m_flush_tlb and p2m_flush_tlb_sync Julien Grall
                   ` (5 subsequent siblings)
  15 siblings, 0 replies; 24+ messages in thread
From: Julien Grall @ 2017-12-12 19:02 UTC (permalink / raw)
  To: xen-devel; +Cc: sstabellini, Julien Grall, andre.przywara

The function dtb_load deals with IPAs but uses gvirt_to_maddr to do the
translation. This currently works fine because the stage-1 MMU is
disabled.

Rather than relying on this assumption, use the new
copy_to_guest_phys_flush_dcache. This also results in slightly more
comprehensible code.

Signed-off-by: Julien Grall <julien.grall@linaro.org>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

---
    Changes in v2:
        - Add Stefano's reviewed-by
---
 xen/arch/arm/domain_build.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 66fd77def6..155c952349 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -1950,14 +1950,15 @@ static int prepare_acpi(struct domain *d, struct kernel_info *kinfo)
 #endif
 static void dtb_load(struct kernel_info *kinfo)
 {
-    void * __user dtb_virt = (void * __user)(register_t)kinfo->dtb_paddr;
     unsigned long left;
 
     printk("Loading dom0 DTB to 0x%"PRIpaddr"-0x%"PRIpaddr"\n",
            kinfo->dtb_paddr, kinfo->dtb_paddr + fdt_totalsize(kinfo->fdt));
 
-    left = raw_copy_to_guest_flush_dcache(dtb_virt, kinfo->fdt,
-                                        fdt_totalsize(kinfo->fdt));
+    left = copy_to_guest_phys_flush_dcache(kinfo->d, kinfo->dtb_paddr,
+                                           kinfo->fdt,
+                                           fdt_totalsize(kinfo->fdt));
+
     if ( left != 0 )
         panic("Unable to copy the DTB to dom0 memory (left = %lu bytes)", left);
     xfree(kinfo->fdt);
-- 
2.11.0



* [v2 11/16] xen/arm: p2m: Rename p2m_flush_tlb and p2m_flush_tlb_sync
  2017-12-12 19:01 [v2 00/16] xen/arm: Stage-2 handling cleanup Julien Grall
                   ` (9 preceding siblings ...)
  2017-12-12 19:02 ` [v2 10/16] xen/arm: domain_build: Use copy_to_guest_phys_flush_dcache in dtb_load Julien Grall
@ 2017-12-12 19:02 ` Julien Grall
  2017-12-12 19:02 ` [v2 12/16] xen/arm: p2m: Introduce p2m_tlb_flush_sync, export it and use it Julien Grall
                   ` (4 subsequent siblings)
  15 siblings, 0 replies; 24+ messages in thread
From: Julien Grall @ 2017-12-12 19:02 UTC (permalink / raw)
  To: xen-devel; +Cc: sstabellini, Julien Grall, andre.przywara

Rename p2m_flush_tlb and p2m_flush_tlb_sync to p2m_tlb_flush and
p2m_force_tlb_flush_sync respectively.

At first glance, inverting 'flush' and 'tlb' might seem pointless, but
it will be helpful in the future to more easily port code from the x86
P2M, or even share it.

For p2m_flush_tlb_sync, 'force' was added because the TLBs are flushed
unconditionally. A follow-up patch will add a helper to flush the TLBs
only in certain cases.

Signed-off-by: Julien Grall <julien.grall@linaro.org>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

---
    Changes in v2:
        - Add Stefano's reviewed-by
---
 xen/arch/arm/p2m.c | 18 +++++++++---------
 1 file changed, 9 insertions(+), 9 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 7bf34aaa8c..95090874c3 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -51,7 +51,7 @@ static const paddr_t level_masks[] =
 static const uint8_t level_orders[] =
     { ZEROETH_ORDER, FIRST_ORDER, SECOND_ORDER, THIRD_ORDER };
 
-static void p2m_flush_tlb(struct p2m_domain *p2m);
+static void p2m_tlb_flush(struct p2m_domain *p2m);
 
 /* Unlock the flush and do a P2M TLB flush if necessary */
 void p2m_write_unlock(struct p2m_domain *p2m)
@@ -64,7 +64,7 @@ void p2m_write_unlock(struct p2m_domain *p2m)
          * to avoid someone else modify the P2M before the TLB
          * invalidation has completed.
          */
-        p2m_flush_tlb(p2m);
+        p2m_tlb_flush(p2m);
     }
 
     write_unlock(&p2m->lock);
@@ -137,7 +137,7 @@ void p2m_restore_state(struct vcpu *n)
     *last_vcpu_ran = n->vcpu_id;
 }
 
-static void p2m_flush_tlb(struct p2m_domain *p2m)
+static void p2m_tlb_flush(struct p2m_domain *p2m)
 {
     unsigned long flags = 0;
     uint64_t ovttbr;
@@ -169,11 +169,11 @@ static void p2m_flush_tlb(struct p2m_domain *p2m)
  *
  * Must be called with the p2m lock held.
  */
-static void p2m_flush_tlb_sync(struct p2m_domain *p2m)
+static void p2m_force_tlb_flush_sync(struct p2m_domain *p2m)
 {
     ASSERT(p2m_is_write_locked(p2m));
 
-    p2m_flush_tlb(p2m);
+    p2m_tlb_flush(p2m);
     p2m->need_flush = false;
 }
 
@@ -674,7 +674,7 @@ static void p2m_free_entry(struct p2m_domain *p2m,
      * flush?
      */
     if ( p2m->need_flush )
-        p2m_flush_tlb_sync(p2m);
+        p2m_force_tlb_flush_sync(p2m);
 
     mfn = _mfn(entry.p2m.base);
     ASSERT(mfn_valid(mfn));
@@ -863,7 +863,7 @@ static int __p2m_set_entry(struct p2m_domain *p2m,
          * For more details see (D4.7.1 in ARM DDI 0487A.j).
          */
         p2m_remove_pte(entry, p2m->clean_pte);
-        p2m_flush_tlb_sync(p2m);
+        p2m_force_tlb_flush_sync(p2m);
 
         p2m_write_pte(entry, split_pte, p2m->clean_pte);
 
@@ -939,7 +939,7 @@ static int __p2m_set_entry(struct p2m_domain *p2m,
         {
             if ( likely(!p2m->mem_access_enabled) ||
                  P2M_CLEAR_PERM(pte) != P2M_CLEAR_PERM(orig_pte) )
-                p2m_flush_tlb_sync(p2m);
+                p2m_force_tlb_flush_sync(p2m);
             else
                 p2m->need_flush = true;
         }
@@ -1143,7 +1143,7 @@ static int p2m_alloc_table(struct domain *d)
      * Make sure that all TLBs corresponding to the new VMID are flushed
      * before using it
      */
-    p2m_flush_tlb(p2m);
+    p2m_tlb_flush(p2m);
 
     return 0;
 }
-- 
2.11.0



* [v2 12/16] xen/arm: p2m: Introduce p2m_tlb_flush_sync, export it and use it
  2017-12-12 19:01 [v2 00/16] xen/arm: Stage-2 handling cleanup Julien Grall
                   ` (10 preceding siblings ...)
  2017-12-12 19:02 ` [v2 11/16] xen/arm: p2m: Rename p2m_flush_tlb and p2m_flush_tlb_sync Julien Grall
@ 2017-12-12 19:02 ` Julien Grall
  2017-12-12 19:02 ` [v2 13/16] xen/arm: p2m: Fold p2m_tlb_flush into p2m_force_tlb_flush_sync Julien Grall
                   ` (3 subsequent siblings)
  15 siblings, 0 replies; 24+ messages in thread
From: Julien Grall @ 2017-12-12 19:02 UTC (permalink / raw)
  To: xen-devel; +Cc: sstabellini, Julien Grall, andre.przywara

Multiple places in the code require flushing the TLBs only when
p2m->need_flush is set.

Rather than open-coding it, introduce a new helper, p2m_tlb_flush_sync,
to do it.

Note that p2m_tlb_flush_sync is exported as it might be used by other
parts of Xen.

Signed-off-by: Julien Grall <julien.grall@linaro.org>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

---
    Changes in v2:
        - Add Stefano's reviewed-by
        - Fix typo in the commit message
---
 xen/arch/arm/p2m.c        | 27 +++++++++++++--------------
 xen/include/asm-arm/p2m.h |  2 ++
 2 files changed, 15 insertions(+), 14 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 95090874c3..15711a4c80 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -51,21 +51,15 @@ static const paddr_t level_masks[] =
 static const uint8_t level_orders[] =
     { ZEROETH_ORDER, FIRST_ORDER, SECOND_ORDER, THIRD_ORDER };
 
-static void p2m_tlb_flush(struct p2m_domain *p2m);
-
 /* Unlock the flush and do a P2M TLB flush if necessary */
 void p2m_write_unlock(struct p2m_domain *p2m)
 {
-    if ( p2m->need_flush )
-    {
-        p2m->need_flush = false;
-        /*
-         * The final flush is done with the P2M write lock taken to
-         * to avoid someone else modify the P2M before the TLB
-         * invalidation has completed.
-         */
-        p2m_tlb_flush(p2m);
-    }
+    /*
+     * The final flush is done with the P2M write lock taken to avoid
+     * someone else modifying the P2M before the TLB invalidation has
+     * completed.
+     */
+    p2m_tlb_flush_sync(p2m);
 
     write_unlock(&p2m->lock);
 }
@@ -177,6 +171,12 @@ static void p2m_force_tlb_flush_sync(struct p2m_domain *p2m)
     p2m->need_flush = false;
 }
 
+void p2m_tlb_flush_sync(struct p2m_domain *p2m)
+{
+    if ( p2m->need_flush )
+        p2m_force_tlb_flush_sync(p2m);
+}
+
 /*
  * Find and map the root page table. The caller is responsible for
  * unmapping the table.
@@ -673,8 +673,7 @@ static void p2m_free_entry(struct p2m_domain *p2m,
      * XXX: Should we defer the free of the page table to avoid the
      * flush?
      */
-    if ( p2m->need_flush )
-        p2m_force_tlb_flush_sync(p2m);
+    p2m_tlb_flush_sync(p2m);
 
     mfn = _mfn(entry.p2m.base);
     ASSERT(mfn_valid(mfn));
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index faadcfe8fe..a0abc84ed8 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -204,6 +204,8 @@ static inline int p2m_is_write_locked(struct p2m_domain *p2m)
     return rw_is_write_locked(&p2m->lock);
 }
 
+void p2m_tlb_flush_sync(struct p2m_domain *p2m);
+
 /* Look up the MFN corresponding to a domain's GFN. */
 mfn_t p2m_lookup(struct domain *d, gfn_t gfn, p2m_type_t *t);
 
-- 
2.11.0



* [v2 13/16] xen/arm: p2m: Fold p2m_tlb_flush into p2m_force_tlb_flush_sync
  2017-12-12 19:01 [v2 00/16] xen/arm: Stage-2 handling cleanup Julien Grall
                   ` (11 preceding siblings ...)
  2017-12-12 19:02 ` [v2 12/16] xen/arm: p2m: Introduce p2m_tlb_flush_sync, export it and use it Julien Grall
@ 2017-12-12 19:02 ` Julien Grall
  2017-12-12 19:02 ` [v2 14/16] xen/arm: traps: Remove the field gva from mmio_info_t Julien Grall
                   ` (2 subsequent siblings)
  15 siblings, 0 replies; 24+ messages in thread
From: Julien Grall @ 2017-12-12 19:02 UTC (permalink / raw)
  To: xen-devel; +Cc: sstabellini, Julien Grall, andre.przywara

p2m_tlb_flush is called in 2 places: p2m_alloc_table and
p2m_force_tlb_flush_sync.

p2m_alloc_table is called when the domain is initialized and can be
replaced by a call to p2m_force_tlb_flush_sync with the P2M
write-locked.

This may seem a bit pointless, but it allows having a single API for
flushing and avoids misuse in the P2M code.

So update p2m_alloc_table to use p2m_force_tlb_flush_sync and fold
p2m_tlb_flush into p2m_force_tlb_flush_sync.

Signed-off-by: Julien Grall <julien.grall@linaro.org>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

---
    Changes in v2:
        - Add Stefano's reviewed-by
---
 xen/arch/arm/p2m.c | 24 +++++++++++-------------
 1 file changed, 11 insertions(+), 13 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 15711a4c80..22165ae376 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -131,11 +131,18 @@ void p2m_restore_state(struct vcpu *n)
     *last_vcpu_ran = n->vcpu_id;
 }
 
-static void p2m_tlb_flush(struct p2m_domain *p2m)
+/*
+ * Force a synchronous P2M TLB flush.
+ *
+ * Must be called with the p2m lock held.
+ */
+static void p2m_force_tlb_flush_sync(struct p2m_domain *p2m)
 {
     unsigned long flags = 0;
     uint64_t ovttbr;
 
+    ASSERT(p2m_is_write_locked(p2m));
+
     /*
      * ARM only provides an instruction to flush TLBs for the current
      * VMID. So switch to the VTTBR of a given P2M if different.
@@ -156,18 +163,7 @@ static void p2m_tlb_flush(struct p2m_domain *p2m)
         isb();
         local_irq_restore(flags);
     }
-}
-
-/*
- * Force a synchronous P2M TLB flush.
- *
- * Must be called with the p2m lock held.
- */
-static void p2m_force_tlb_flush_sync(struct p2m_domain *p2m)
-{
-    ASSERT(p2m_is_write_locked(p2m));
 
-    p2m_tlb_flush(p2m);
     p2m->need_flush = false;
 }
 
@@ -1142,7 +1138,9 @@ static int p2m_alloc_table(struct domain *d)
      * Make sure that all TLBs corresponding to the new VMID are flushed
      * before using it
      */
-    p2m_tlb_flush(p2m);
+    p2m_write_lock(p2m);
+    p2m_force_tlb_flush_sync(p2m);
+    p2m_write_unlock(p2m);
 
     return 0;
 }
-- 
2.11.0



* [v2 14/16] xen/arm: traps: Remove the field gva from mmio_info_t
  2017-12-12 19:01 [v2 00/16] xen/arm: Stage-2 handling cleanup Julien Grall
                   ` (12 preceding siblings ...)
  2017-12-12 19:02 ` [v2 13/16] xen/arm: p2m: Fold p2m_tlb_flush into p2m_force_tlb_flush_sync Julien Grall
@ 2017-12-12 19:02 ` Julien Grall
  2017-12-12 19:02 ` [v2 15/16] xen/arm: traps: Move the definition of mmio_info_t in try_handle_mmio Julien Grall
  2017-12-12 19:02 ` [v2 16/16] xen/arm: traps: Merge do_trap_instr_abort_guest and do_trap_data_abort_guest Julien Grall
  15 siblings, 0 replies; 24+ messages in thread
From: Julien Grall @ 2017-12-12 19:02 UTC (permalink / raw)
  To: xen-devel; +Cc: sstabellini, Julien Grall, andre.przywara

mmio_info_t is used to gather information in order to emulate a region.
The guest virtual address is unlikely to be useful information and is
not currently used. So remove the field gva from mmio_info_t and replace
it with a local variable.

Signed-off-by: Julien Grall <julien.grall@linaro.org>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

---
    Changes in v2:
        - Add Stefano's reviewed-by
        - Fix typo in the commit message
---
 xen/arch/arm/traps.c       | 13 +++++++------
 xen/include/asm-arm/mmio.h |  1 -
 2 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index ff3d6ff2aa..130e85a6ba 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -2000,6 +2000,7 @@ static void do_trap_data_abort_guest(struct cpu_user_regs *regs,
 {
     const struct hsr_dabt dabt = hsr.dabt;
     int rc;
+    vaddr_t gva;
     mmio_info_t info;
     uint8_t fsc = hsr.dabt.dfsc & ~FSC_LL_MASK;
     mfn_t mfn;
@@ -2013,13 +2014,13 @@ static void do_trap_data_abort_guest(struct cpu_user_regs *regs,
 
     info.dabt = dabt;
 
-    info.gva = get_hfar(true /* is_data */);
+    gva = get_hfar(true /* is_data */);
 
     if ( hpfar_is_valid(dabt.s1ptw, fsc) )
-        info.gpa = get_faulting_ipa(info.gva);
+        info.gpa = get_faulting_ipa(gva);
     else
     {
-        rc = gva_to_ipa(info.gva, &info.gpa, GV2M_READ);
+        rc = gva_to_ipa(gva, &info.gpa, GV2M_READ);
         /*
          * We may not be able to translate because someone is
          * playing with the Stage-2 page table of the domain.
@@ -2040,7 +2041,7 @@ static void do_trap_data_abort_guest(struct cpu_user_regs *regs,
             .kind = dabt.s1ptw ? npfec_kind_in_gpt : npfec_kind_with_gla
         };
 
-        p2m_mem_access_check(info.gpa, info.gva, npfec);
+        p2m_mem_access_check(info.gpa, gva, npfec);
         /*
          * The only way to get here right now is because of mem_access,
          * thus reinjecting the exception to the guest is never required.
@@ -2077,8 +2078,8 @@ static void do_trap_data_abort_guest(struct cpu_user_regs *regs,
     }
 
     gdprintk(XENLOG_DEBUG, "HSR=0x%x pc=%#"PRIregister" gva=%#"PRIvaddr
-             " gpa=%#"PRIpaddr"\n", hsr.bits, regs->pc, info.gva, info.gpa);
-    inject_dabt_exception(regs, info.gva, hsr.len);
+             " gpa=%#"PRIpaddr"\n", hsr.bits, regs->pc, gva, info.gpa);
+    inject_dabt_exception(regs, gva, hsr.len);
 }
 
 static void enter_hypervisor_head(struct cpu_user_regs *regs)
diff --git a/xen/include/asm-arm/mmio.h b/xen/include/asm-arm/mmio.h
index c620eed4cd..37e2b7a707 100644
--- a/xen/include/asm-arm/mmio.h
+++ b/xen/include/asm-arm/mmio.h
@@ -29,7 +29,6 @@
 typedef struct
 {
     struct hsr_dabt dabt;
-    vaddr_t gva;
     paddr_t gpa;
 } mmio_info_t;
 
-- 
2.11.0



* [v2 15/16] xen/arm: traps: Move the definition of mmio_info_t in try_handle_mmio
  2017-12-12 19:01 [v2 00/16] xen/arm: Stage-2 handling cleanup Julien Grall
                   ` (13 preceding siblings ...)
  2017-12-12 19:02 ` [v2 14/16] xen/arm: traps: Remove the field gva from mmio_info_t Julien Grall
@ 2017-12-12 19:02 ` Julien Grall
  2017-12-12 19:02 ` [v2 16/16] xen/arm: traps: Merge do_trap_instr_abort_guest and do_trap_data_abort_guest Julien Grall
  15 siblings, 0 replies; 24+ messages in thread
From: Julien Grall @ 2017-12-12 19:02 UTC (permalink / raw)
  To: xen-devel; +Cc: sstabellini, Julien Grall, andre.przywara

mmio_info_t is currently filled by do_trap_data_abort_guest but is only
relevant when emulating an MMIO region.

A follow-up patch will merge stage-2 prefetch abort and stage-2 data
abort handling in a single helper. To prepare for that, mmio_info_t is
now filled by try_handle_mmio.

Signed-off-by: Julien Grall <julien.grall@linaro.org>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

---
    Changes in v2:
        - Add Stefano's reviewed-by
---
 xen/arch/arm/traps.c | 31 +++++++++++++++++--------------
 1 file changed, 17 insertions(+), 14 deletions(-)

diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index 130e85a6ba..082396c26d 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -1935,9 +1935,14 @@ static void do_trap_instr_abort_guest(struct cpu_user_regs *regs,
 }
 
 static bool try_handle_mmio(struct cpu_user_regs *regs,
-                            mmio_info_t *info)
+                            const union hsr hsr,
+                            paddr_t gpa)
 {
-    const struct hsr_dabt dabt = info->dabt;
+    const struct hsr_dabt dabt = hsr.dabt;
+    mmio_info_t info = {
+        .gpa = gpa,
+        .dabt = dabt
+    };
     int rc;
 
     /* stage-1 page table should never live in an emulated MMIO region */
@@ -1955,7 +1960,7 @@ static bool try_handle_mmio(struct cpu_user_regs *regs,
     if ( check_workaround_766422() && (regs->cpsr & PSR_THUMB) &&
          dabt.write )
     {
-        rc = decode_instruction(regs, &info->dabt);
+        rc = decode_instruction(regs, &info.dabt);
         if ( rc )
         {
             gprintk(XENLOG_DEBUG, "Unable to decode instruction\n");
@@ -1963,7 +1968,7 @@ static bool try_handle_mmio(struct cpu_user_regs *regs,
         }
     }
 
-    return !!handle_mmio(info);
+    return !!handle_mmio(&info);
 }
 
 /*
@@ -2001,7 +2006,7 @@ static void do_trap_data_abort_guest(struct cpu_user_regs *regs,
     const struct hsr_dabt dabt = hsr.dabt;
     int rc;
     vaddr_t gva;
-    mmio_info_t info;
+    paddr_t gpa;
     uint8_t fsc = hsr.dabt.dfsc & ~FSC_LL_MASK;
     mfn_t mfn;
 
@@ -2012,15 +2017,13 @@ static void do_trap_data_abort_guest(struct cpu_user_regs *regs,
     if ( dabt.eat )
         return __do_trap_serror(regs, true);
 
-    info.dabt = dabt;
-
     gva = get_hfar(true /* is_data */);
 
     if ( hpfar_is_valid(dabt.s1ptw, fsc) )
-        info.gpa = get_faulting_ipa(gva);
+        gpa = get_faulting_ipa(gva);
     else
     {
-        rc = gva_to_ipa(gva, &info.gpa, GV2M_READ);
+        rc = gva_to_ipa(gva, &gpa, GV2M_READ);
         /*
          * We may not be able to translate because someone is
          * playing with the Stage-2 page table of the domain.
@@ -2041,7 +2044,7 @@ static void do_trap_data_abort_guest(struct cpu_user_regs *regs,
             .kind = dabt.s1ptw ? npfec_kind_in_gpt : npfec_kind_with_gla
         };
 
-        p2m_mem_access_check(info.gpa, gva, npfec);
+        p2m_mem_access_check(gpa, gva, npfec);
         /*
          * The only way to get here right now is because of mem_access,
          * thus reinjecting the exception to the guest is never required.
@@ -2053,7 +2056,7 @@ static void do_trap_data_abort_guest(struct cpu_user_regs *regs,
          * Attempt first to emulate the MMIO as the data abort will
          * likely happen in an emulated region.
          */
-        if ( try_handle_mmio(regs, &info) )
+        if ( try_handle_mmio(regs, hsr, gpa) )
         {
             advance_pc(regs, hsr);
             return;
@@ -2064,11 +2067,11 @@ static void do_trap_data_abort_guest(struct cpu_user_regs *regs,
          * with the Stage-2 page table. Walk the Stage-2 PT to check
          * if the entry exists. If it's the case, return to the guest
          */
-        mfn = gfn_to_mfn(current->domain, gaddr_to_gfn(info.gpa));
+        mfn = gfn_to_mfn(current->domain, gaddr_to_gfn(gpa));
         if ( !mfn_eq(mfn, INVALID_MFN) )
             return;
 
-        if ( try_map_mmio(gaddr_to_gfn(info.gpa)) )
+        if ( try_map_mmio(gaddr_to_gfn(gpa)) )
             return;
 
         break;
@@ -2078,7 +2081,7 @@ static void do_trap_data_abort_guest(struct cpu_user_regs *regs,
     }
 
     gdprintk(XENLOG_DEBUG, "HSR=0x%x pc=%#"PRIregister" gva=%#"PRIvaddr
-             " gpa=%#"PRIpaddr"\n", hsr.bits, regs->pc, gva, info.gpa);
+             " gpa=%#"PRIpaddr"\n", hsr.bits, regs->pc, gva, gpa);
     inject_dabt_exception(regs, gva, hsr.len);
 }
 
-- 
2.11.0



* [v2 16/16] xen/arm: traps: Merge do_trap_instr_abort_guest and do_trap_data_abort_guest
  2017-12-12 19:01 [v2 00/16] xen/arm: Stage-2 handling cleanup Julien Grall
                   ` (14 preceding siblings ...)
  2017-12-12 19:02 ` [v2 15/16] xen/arm: traps: Move the definition of mmio_info_t in try_handle_mmio Julien Grall
@ 2017-12-12 19:02 ` Julien Grall
  2017-12-12 20:11   ` Stefano Stabellini
  15 siblings, 1 reply; 24+ messages in thread
From: Julien Grall @ 2017-12-12 19:02 UTC (permalink / raw)
  To: xen-devel; +Cc: sstabellini, Julien Grall, andre.przywara

The two helpers do_trap_instr_abort_guest and do_trap_data_abort_guest
are used to trap stage-2 aborts. While the former only handles prefetch
aborts and the latter data aborts, they are very similar and do not
warrant separate helpers.

For instance, merging the two will make it easier to maintain stage-2
abort handling. So consolidate the two helpers into a new helper,
do_trap_stage2_abort_guest.

Signed-off-by: Julien Grall <julien.grall@linaro.org>

---
    Changes in v2
        - Fix the way to compute npfec.write_access
---
 xen/arch/arm/traps.c | 133 ++++++++++++++++-----------------------------------
 1 file changed, 41 insertions(+), 92 deletions(-)

diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index 082396c26d..013c1600ec 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -1861,79 +1861,6 @@ static inline bool hpfar_is_valid(bool s1ptw, uint8_t fsc)
     return s1ptw || (fsc == FSC_FLT_TRANS && !check_workaround_834220());
 }
 
-static void do_trap_instr_abort_guest(struct cpu_user_regs *regs,
-                                      const union hsr hsr)
-{
-    int rc;
-    register_t gva;
-    uint8_t fsc = hsr.iabt.ifsc & ~FSC_LL_MASK;
-    paddr_t gpa;
-    mfn_t mfn;
-
-    gva = get_hfar(false /* is_data */);
-
-    /*
-     * If this bit has been set, it means that this instruction abort is caused
-     * by a guest external abort. We can handle this instruction abort as guest
-     * SError.
-     */
-    if ( hsr.iabt.eat )
-        return __do_trap_serror(regs, true);
-
-
-    if ( hpfar_is_valid(hsr.iabt.s1ptw, fsc) )
-        gpa = get_faulting_ipa(gva);
-    else
-    {
-        /*
-         * Flush the TLB to make sure the DTLB is clear before
-         * doing GVA->IPA translation. If we got here because of
-         * an entry only present in the ITLB, this translation may
-         * still be inaccurate.
-         */
-        flush_tlb_local();
-
-        /*
-         * We may not be able to translate because someone is
-         * playing with the Stage-2 page table of the domain.
-         * Return to the guest.
-         */
-        rc = gva_to_ipa(gva, &gpa, GV2M_READ);
-        if ( rc == -EFAULT )
-            return; /* Try again */
-    }
-
-    switch ( fsc )
-    {
-    case FSC_FLT_PERM:
-    {
-        const struct npfec npfec = {
-            .insn_fetch = 1,
-            .gla_valid = 1,
-            .kind = hsr.iabt.s1ptw ? npfec_kind_in_gpt : npfec_kind_with_gla
-        };
-
-        p2m_mem_access_check(gpa, gva, npfec);
-        /*
-         * The only way to get here right now is because of mem_access,
-         * thus reinjecting the exception to the guest is never required.
-         */
-        return;
-    }
-    case FSC_FLT_TRANS:
-        /*
-         * The PT walk may have failed because someone was playing
-         * with the Stage-2 page table. Walk the Stage-2 PT to check
-         * if the entry exists. If it's the case, return to the guest
-         */
-        mfn = gfn_to_mfn(current->domain, _gfn(paddr_to_pfn(gpa)));
-        if ( !mfn_eq(mfn, INVALID_MFN) )
-            return;
-    }
-
-    inject_iabt_exception(regs, gva, hsr.len);
-}
-
 static bool try_handle_mmio(struct cpu_user_regs *regs,
                             const union hsr hsr,
                             paddr_t gpa)
@@ -1945,6 +1872,8 @@ static bool try_handle_mmio(struct cpu_user_regs *regs,
     };
     int rc;
 
+    ASSERT(hsr.ec == HSR_EC_DATA_ABORT_LOWER_EL);
+
     /* stage-1 page table should never live in an emulated MMIO region */
     if ( dabt.s1ptw )
         return false;
@@ -2000,29 +1929,43 @@ static bool try_map_mmio(gfn_t gfn)
     return !map_regions_p2mt(d, gfn, 1, mfn, p2m_mmio_direct_c);
 }
 
-static void do_trap_data_abort_guest(struct cpu_user_regs *regs,
-                                     const union hsr hsr)
+static void do_trap_stage2_abort_guest(struct cpu_user_regs *regs,
+                                       const union hsr hsr)
 {
-    const struct hsr_dabt dabt = hsr.dabt;
+    /*
+     * The encoding of hsr_iabt is a subset of hsr_dabt. So use
+     * hsr_dabt to represent an abort fault.
+     */
+    const struct hsr_xabt xabt = hsr.xabt;
     int rc;
     vaddr_t gva;
     paddr_t gpa;
-    uint8_t fsc = hsr.dabt.dfsc & ~FSC_LL_MASK;
+    uint8_t fsc = xabt.fsc & ~FSC_LL_MASK;
     mfn_t mfn;
+    bool is_data = (hsr.ec == HSR_EC_DATA_ABORT_LOWER_EL);
 
     /*
-     * If this bit has been set, it means that this data abort is caused
-     * by a guest external abort. We treat this data abort as guest SError.
+     * If this bit has been set, it means that this stage-2 abort is caused
+     * by a guest external abort. We treat this stage-2 abort as guest SError.
      */
-    if ( dabt.eat )
+    if ( xabt.eat )
         return __do_trap_serror(regs, true);
 
-    gva = get_hfar(true /* is_data */);
+    gva = get_hfar(is_data);
 
-    if ( hpfar_is_valid(dabt.s1ptw, fsc) )
+    if ( hpfar_is_valid(xabt.s1ptw, fsc) )
         gpa = get_faulting_ipa(gva);
     else
     {
+        /*
+         * Flush the TLB to make sure the DTLB is clear before
+         * doing GVA->IPA translation. If we got here because of
+         * an entry only present in the ITLB, this translation may
+         * still be inaccurate.
+         */
+        if ( !is_data )
+            flush_tlb_local();
+
         rc = gva_to_ipa(gva, &gpa, GV2M_READ);
         /*
          * We may not be able to translate because someone is
@@ -2038,10 +1981,11 @@ static void do_trap_data_abort_guest(struct cpu_user_regs *regs,
     case FSC_FLT_PERM:
     {
         const struct npfec npfec = {
-            .read_access = !dabt.write,
-            .write_access = dabt.write,
+            .insn_fetch = !is_data,
+            .read_access = is_data && !hsr.dabt.write,
+            .write_access = is_data && hsr.dabt.write,
             .gla_valid = 1,
-            .kind = dabt.s1ptw ? npfec_kind_in_gpt : npfec_kind_with_gla
+            .kind = xabt.s1ptw ? npfec_kind_in_gpt : npfec_kind_with_gla
         };
 
         p2m_mem_access_check(gpa, gva, npfec);
@@ -2055,8 +1999,10 @@ static void do_trap_data_abort_guest(struct cpu_user_regs *regs,
         /*
          * Attempt first to emulate the MMIO as the data abort will
          * likely happen in an emulated region.
+         *
+         * Note that emulated region cannot be executed
          */
-        if ( try_handle_mmio(regs, hsr, gpa) )
+        if ( is_data && try_handle_mmio(regs, hsr, gpa) )
         {
             advance_pc(regs, hsr);
             return;
@@ -2071,18 +2017,21 @@ static void do_trap_data_abort_guest(struct cpu_user_regs *regs,
         if ( !mfn_eq(mfn, INVALID_MFN) )
             return;
 
-        if ( try_map_mmio(gaddr_to_gfn(gpa)) )
+        if ( is_data && try_map_mmio(gaddr_to_gfn(gpa)) )
             return;
 
         break;
     default:
-        gprintk(XENLOG_WARNING, "Unsupported DFSC: HSR=%#x DFSC=%#x\n",
-                hsr.bits, dabt.dfsc);
+        gprintk(XENLOG_WARNING, "Unsupported FSC: HSR=%#x DFSC=%#x\n",
+                hsr.bits, xabt.fsc);
     }
 
     gdprintk(XENLOG_DEBUG, "HSR=0x%x pc=%#"PRIregister" gva=%#"PRIvaddr
              " gpa=%#"PRIpaddr"\n", hsr.bits, regs->pc, gva, gpa);
-    inject_dabt_exception(regs, gva, hsr.len);
+    if ( is_data )
+        inject_dabt_exception(regs, gva, hsr.len);
+    else
+        inject_iabt_exception(regs, gva, hsr.len);
 }
 
 static void enter_hypervisor_head(struct cpu_user_regs *regs)
@@ -2215,11 +2164,11 @@ void do_trap_guest_sync(struct cpu_user_regs *regs)
 
     case HSR_EC_INSTR_ABORT_LOWER_EL:
         perfc_incr(trap_iabt);
-        do_trap_instr_abort_guest(regs, hsr);
+        do_trap_stage2_abort_guest(regs, hsr);
         break;
     case HSR_EC_DATA_ABORT_LOWER_EL:
         perfc_incr(trap_dabt);
-        do_trap_data_abort_guest(regs, hsr);
+        do_trap_stage2_abort_guest(regs, hsr);
         break;
 
     default:
-- 
2.11.0
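The merge in the patch above keeps one shared decode/translate path and only branches on the abort kind at the injection point. A minimal, standalone sketch of that pattern (all names here are hypothetical stand-ins, not Xen code):

```c
#include <stdbool.h>

/* Hypothetical stand-in for the merged handler: the translation,
 * permission-check and MMIO logic is shared, and only the final
 * exception injected back into the guest depends on is_data. */
typedef enum { FAULT_NONE, FAULT_DABT, FAULT_IABT } fault_t;

static fault_t handle_stage2_abort(bool is_data, bool resolved)
{
    /* Shared path: GVA->IPA translation, FSC switch, MMIO handling... */
    if ( resolved )
        return FAULT_NONE;          /* fault handled, return to guest */

    /* Only the injected exception differs between the two abort kinds. */
    return is_data ? FAULT_DABT : FAULT_IABT;
}
```

This is the same shape as do_trap_stage2_abort_guest in the patch, where `is_data` is derived from `hsr.ec` and gates the data-only paths (MMIO emulation, write-access decoding) as well.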


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply related	[flat|nested] 24+ messages in thread

* Re: [v2 02/16] xen/arm: raw_copy_to_guest_helper: Rework the prototype and rename it
  2017-12-12 19:01 ` [v2 02/16] xen/arm: raw_copy_to_guest_helper: Rework the prototype and rename it Julien Grall
@ 2017-12-12 19:51   ` Stefano Stabellini
  0 siblings, 0 replies; 24+ messages in thread
From: Stefano Stabellini @ 2017-12-12 19:51 UTC (permalink / raw)
  To: Julien Grall; +Cc: sstabellini, andre.przywara, xen-devel

On Tue, 12 Dec 2017, Julien Grall wrote:
> All the helpers within arch/arm/guestcopy.c are doing the same things:
> copy data from/to the guest.
> 
> At the moment, the logic is duplicated in each helper, making it more
> difficult to implement new variants.
> 
> The first step for the consolidation is to get a common prototype and a
> base. For convenience (it is at the beginning of the file!),
> raw_copy_to_guest_helper is chosen.
> 
> The function is now renamed copy_guest to show it will be a
> generic function to copy data from/to the guest. Note that for now, only
> copying to guest virtual address is supported. Follow-up patches will
> extend the support.
> 
> Signed-off-by: Julien Grall <julien.grall@linaro.org>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
>     Changes in v2:
>         - Use vaddr_t
>         - Use uint64_t for addr in copy_guest
>         - Add a BUILD_BUG_ON to make sure vaddr_t fit in uint64_t.
> ---
>  xen/arch/arm/guestcopy.c | 20 +++++++++++---------
>  1 file changed, 11 insertions(+), 9 deletions(-)
> 
> diff --git a/xen/arch/arm/guestcopy.c b/xen/arch/arm/guestcopy.c
> index 2620e659b4..08d0fa0a83 100644
> --- a/xen/arch/arm/guestcopy.c
> +++ b/xen/arch/arm/guestcopy.c
> @@ -7,11 +7,13 @@
>  
>  #define COPY_flush_dcache   (1U << 0)
>  
> -static unsigned long raw_copy_to_guest_helper(void *to, const void *from,
> -                                              unsigned len, int flags)
> +static unsigned long copy_guest(void *buf, uint64_t addr, unsigned int len,
> +                                unsigned int flags)
>  {
>      /* XXX needs to handle faults */
> -    unsigned offset = (vaddr_t)to & ~PAGE_MASK;
> +    unsigned offset = addr & ~PAGE_MASK;
> +
> +    BUILD_BUG_ON((sizeof(addr)) < sizeof(vaddr_t));
>  
>      while ( len )
>      {
> @@ -19,21 +21,21 @@ static unsigned long raw_copy_to_guest_helper(void *to, const void *from,
>          unsigned size = min(len, (unsigned)PAGE_SIZE - offset);
>          struct page_info *page;
>  
> -        page = get_page_from_gva(current, (vaddr_t) to, GV2M_WRITE);
> +        page = get_page_from_gva(current, addr, GV2M_WRITE);
>          if ( page == NULL )
>              return len;
>  
>          p = __map_domain_page(page);
>          p += offset;
> -        memcpy(p, from, size);
> +        memcpy(p, buf, size);
>          if ( flags & COPY_flush_dcache )
>              clean_dcache_va_range(p, size);
>  
>          unmap_domain_page(p - offset);
>          put_page(page);
>          len -= size;
> -        from += size;
> -        to += size;
> +        buf += size;
> +        addr += size;
>          /*
>           * After the first iteration, guest virtual address is correctly
>           * aligned to PAGE_SIZE.
> @@ -46,13 +48,13 @@ static unsigned long raw_copy_to_guest_helper(void *to, const void *from,
>  
>  unsigned long raw_copy_to_guest(void *to, const void *from, unsigned len)
>  {
> -    return raw_copy_to_guest_helper(to, from, len, 0);
> +    return copy_guest((void *)from, (vaddr_t)to, len, 0);
>  }
>  
>  unsigned long raw_copy_to_guest_flush_dcache(void *to, const void *from,
>                                               unsigned len)
>  {
> -    return raw_copy_to_guest_helper(to, from, len, COPY_flush_dcache);
> +    return copy_guest((void *)from, (vaddr_t)to, len, COPY_flush_dcache);
>  }
>  
>  unsigned long raw_clear_guest(void *to, unsigned len)
> -- 
> 2.11.0
> 
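The loop in copy_guest above walks guest memory one page at a time: the first chunk is limited by the offset into the page, and every later chunk starts page-aligned. The chunk-size computation can be sketched in isolation like this (PAGE_SIZE/PAGE_MASK values are assumptions for illustration):

```c
#include <stdint.h>

#define PAGE_SIZE 4096UL
#define PAGE_MASK (~(PAGE_SIZE - 1))

/* Sketch of the per-page chunking used by copy_guest: the first chunk
 * is bounded by the remaining space in the page containing addr; once
 * the loop advances, addr is page-aligned and offset becomes 0. */
static unsigned int first_chunk(uint64_t addr, unsigned int len)
{
    unsigned int offset = addr & ~PAGE_MASK;   /* offset within the page */
    unsigned int space  = PAGE_SIZE - offset;  /* bytes left in this page */

    return len < space ? len : space;
}
```

For example, a copy of 100 bytes starting 16 bytes before a page boundary does a 16-byte chunk first, then continues page-aligned.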


* Re: [v2 03/16] xen/arm: Extend copy_to_guest to support copying from guest VA and use it
  2017-12-12 19:01 ` [v2 03/16] xen/arm: Extend copy_to_guest to support copying from guest VA and use it Julien Grall
@ 2017-12-12 19:54   ` Stefano Stabellini
  0 siblings, 0 replies; 24+ messages in thread
From: Stefano Stabellini @ 2017-12-12 19:54 UTC (permalink / raw)
  To: Julien Grall; +Cc: sstabellini, andre.przywara, xen-devel

On Tue, 12 Dec 2017, Julien Grall wrote:
> The only differences between copy_to_guest (formerly called
> raw_copy_to_guest_helper) and raw_copy_from_guest are:
>     - The direction of the memcpy
>     - The permission used for translating the address
> 
> Extend copy_to_guest to support copying from guest VA by using a bit in
> the flags to tell the direction of the copy.
> 
> Lastly, reimplement raw_copy_from_guest using copy_to_guest.
> 
> Signed-off-by: Julien Grall <julien.grall@linaro.org>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
>     Changes in v2:
>         - Use vaddr_t
> ---
>  xen/arch/arm/guestcopy.c | 46 +++++++++++++---------------------------------
>  1 file changed, 13 insertions(+), 33 deletions(-)
> 
> diff --git a/xen/arch/arm/guestcopy.c b/xen/arch/arm/guestcopy.c
> index 08d0fa0a83..12fb03dd19 100644
> --- a/xen/arch/arm/guestcopy.c
> +++ b/xen/arch/arm/guestcopy.c
> @@ -6,6 +6,8 @@
>  #include <asm/guest_access.h>
>  
>  #define COPY_flush_dcache   (1U << 0)
> +#define COPY_from_guest     (0U << 1)
> +#define COPY_to_guest       (1U << 1)
>  
>  static unsigned long copy_guest(void *buf, uint64_t addr, unsigned int len,
>                                  unsigned int flags)
> @@ -21,13 +23,18 @@ static unsigned long copy_guest(void *buf, uint64_t addr, unsigned int len,
>          unsigned size = min(len, (unsigned)PAGE_SIZE - offset);
>          struct page_info *page;
>  
> -        page = get_page_from_gva(current, addr, GV2M_WRITE);
> +        page = get_page_from_gva(current, addr,
> +                                 (flags & COPY_to_guest) ? GV2M_WRITE : GV2M_READ);
>          if ( page == NULL )
>              return len;
>  
>          p = __map_domain_page(page);
>          p += offset;
> -        memcpy(p, buf, size);
> +        if ( flags & COPY_to_guest )
> +            memcpy(p, buf, size);
> +        else
> +            memcpy(buf, p, size);
> +
>          if ( flags & COPY_flush_dcache )
>              clean_dcache_va_range(p, size);
>  
> @@ -48,13 +55,14 @@ static unsigned long copy_guest(void *buf, uint64_t addr, unsigned int len,
>  
>  unsigned long raw_copy_to_guest(void *to, const void *from, unsigned len)
>  {
> -    return copy_guest((void *)from, (vaddr_t)to, len, 0);
> +    return copy_guest((void *)from, (vaddr_t)to, len, COPY_to_guest);
>  }
>  
>  unsigned long raw_copy_to_guest_flush_dcache(void *to, const void *from,
>                                               unsigned len)
>  {
> -    return copy_guest((void *)from, (vaddr_t)to, len, COPY_flush_dcache);
> +    return copy_guest((void *)from, (vaddr_t)to, len,
> +                      COPY_to_guest | COPY_flush_dcache);
>  }
>  
>  unsigned long raw_clear_guest(void *to, unsigned len)
> @@ -92,35 +100,7 @@ unsigned long raw_clear_guest(void *to, unsigned len)
>  
>  unsigned long raw_copy_from_guest(void *to, const void __user *from, unsigned len)
>  {
> -    unsigned offset = (vaddr_t)from & ~PAGE_MASK;
> -
> -    while ( len )
> -    {
> -        void *p;
> -        unsigned size = min(len, (unsigned)(PAGE_SIZE - offset));
> -        struct page_info *page;
> -
> -        page = get_page_from_gva(current, (vaddr_t) from, GV2M_READ);
> -        if ( page == NULL )
> -            return len;
> -
> -        p = __map_domain_page(page);
> -        p += ((vaddr_t)from & (~PAGE_MASK));
> -
> -        memcpy(to, p, size);
> -
> -        unmap_domain_page(p);
> -        put_page(page);
> -        len -= size;
> -        from += size;
> -        to += size;
> -        /*
> -         * After the first iteration, guest virtual address is correctly
> -         * aligned to PAGE_SIZE.
> -         */
> -        offset = 0;
> -    }
> -    return 0;
> +    return copy_guest(to, (vaddr_t)from, len, COPY_from_guest);
>  }
>  
>  /*
> -- 
> 2.11.0
> 


* Re: [v2 04/16] xen/arm: Extend copy_to_guest to support zeroing guest VA and use it
  2017-12-12 19:02 ` [v2 04/16] xen/arm: Extend copy_to_guest to support zeroing " Julien Grall
@ 2017-12-12 19:57   ` Stefano Stabellini
  0 siblings, 0 replies; 24+ messages in thread
From: Stefano Stabellini @ 2017-12-12 19:57 UTC (permalink / raw)
  To: Julien Grall; +Cc: sstabellini, andre.przywara, xen-devel

On Tue, 12 Dec 2017, Julien Grall wrote:
> The function copy_to_guest can easily be extended to support zeroing
> guest VA. To avoid using a new bit, it is considered that a NULL buffer
> (i.e. buf == NULL) means the guest memory will be zeroed.
> 
> Lastly, reimplement raw_clear_guest using copy_to_guest.
> 
> Signed-off-by: Julien Grall <julien.grall@linaro.org>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
>     Changes in v2:
>         - Use vaddr_t
> ---
>  xen/arch/arm/guestcopy.c | 41 +++++++++++------------------------------
>  1 file changed, 11 insertions(+), 30 deletions(-)
> 
> diff --git a/xen/arch/arm/guestcopy.c b/xen/arch/arm/guestcopy.c
> index 12fb03dd19..ff7d15380f 100644
> --- a/xen/arch/arm/guestcopy.c
> +++ b/xen/arch/arm/guestcopy.c
> @@ -31,7 +31,16 @@ static unsigned long copy_guest(void *buf, uint64_t addr, unsigned int len,
>          p = __map_domain_page(page);
>          p += offset;
>          if ( flags & COPY_to_guest )
> -            memcpy(p, buf, size);
> +        {
> +            /*
> +             * buf will be NULL when the caller request to zero the
> +             * guest memory.
> +             */
> +            if ( buf )
> +                memcpy(p, buf, size);
> +            else
> +                memset(p, 0, size);
> +        }
>          else
>              memcpy(buf, p, size);
>  
> @@ -67,35 +76,7 @@ unsigned long raw_copy_to_guest_flush_dcache(void *to, const void *from,
>  
>  unsigned long raw_clear_guest(void *to, unsigned len)
>  {
> -    /* XXX needs to handle faults */
> -    unsigned offset = (vaddr_t)to & ~PAGE_MASK;
> -
> -    while ( len )
> -    {
> -        void *p;
> -        unsigned size = min(len, (unsigned)PAGE_SIZE - offset);
> -        struct page_info *page;
> -
> -        page = get_page_from_gva(current, (vaddr_t) to, GV2M_WRITE);
> -        if ( page == NULL )
> -            return len;
> -
> -        p = __map_domain_page(page);
> -        p += offset;
> -        memset(p, 0x00, size);
> -
> -        unmap_domain_page(p - offset);
> -        put_page(page);
> -        len -= size;
> -        to += size;
> -        /*
> -         * After the first iteration, guest virtual address is correctly
> -         * aligned to PAGE_SIZE.
> -         */
> -        offset = 0;
> -    }
> -
> -    return 0;
> +    return copy_guest(NULL, (vaddr_t)to, len, COPY_to_guest);
>  }
>  
>  unsigned long raw_copy_from_guest(void *to, const void __user *from, unsigned len)
> -- 
> 2.11.0
> 
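The two extensions reviewed above (a direction bit, and NULL meaning "zero the destination") compose into one small dispatch. A self-contained sketch, with "guest" memory modelled as a plain local array rather than mapped pages (names and the array are illustrative assumptions, not Xen code):

```c
#include <stddef.h>
#include <string.h>

#define COPY_from_guest (0U << 1)
#define COPY_to_guest   (1U << 1)

/* Illustrative stand-in for guest memory; the real copy_guest maps and
 * unmaps one guest page per loop iteration instead. */
static unsigned char guest_mem[16];

static void copy_guest_sketch(void *buf, size_t off, size_t len,
                              unsigned int flags)
{
    if ( flags & COPY_to_guest )
    {
        /* A NULL buf requests zeroing rather than copying. */
        if ( buf )
            memcpy(guest_mem + off, buf, len);
        else
            memset(guest_mem + off, 0, len);
    }
    else
        memcpy(buf, guest_mem + off, len);
}
```

Using the zero value of the bit for one direction means existing callers passing `0` keep the from-guest behaviour, and the NULL convention avoids burning another flag bit.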


* Re: [v2 05/16] xen/arm: guest_copy: Extend the prototype to pass the vCPU
  2017-12-12 19:02 ` [v2 05/16] xen/arm: guest_copy: Extend the prototype to pass the vCPU Julien Grall
@ 2017-12-12 20:00   ` Stefano Stabellini
  0 siblings, 0 replies; 24+ messages in thread
From: Stefano Stabellini @ 2017-12-12 20:00 UTC (permalink / raw)
  To: Julien Grall; +Cc: sstabellini, andre.przywara, xen-devel

On Tue, 12 Dec 2017, Julien Grall wrote:
> Currently, guest_copy assumes the copy will only be done for the current
> vCPU. copy_guest is meant to be vCPU agnostic, so extend the prototype
> to pass the vCPU.
> 
> At the same time, encapsulate the vCPU in a union to allow extension
> for copying from a guest domain (ipa case) in the future.
> 
> Signed-off-by: Julien Grall <julien.grall@linaro.org>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
>     Changes in v2:
>         - Encapsulate the vCPU in a union.
>         - Rework the commit message
> ---
>  xen/arch/arm/guestcopy.c | 25 +++++++++++++++++++------
>  1 file changed, 19 insertions(+), 6 deletions(-)
> 
> diff --git a/xen/arch/arm/guestcopy.c b/xen/arch/arm/guestcopy.c
> index ff7d15380f..7e92e27beb 100644
> --- a/xen/arch/arm/guestcopy.c
> +++ b/xen/arch/arm/guestcopy.c
> @@ -9,8 +9,18 @@
>  #define COPY_from_guest     (0U << 1)
>  #define COPY_to_guest       (1U << 1)
>  
> +typedef union
> +{
> +    struct
> +    {
> +        struct vcpu *v;
> +    } gva;
> +} copy_info_t;
> +
> +#define GVA_INFO(vcpu) ((copy_info_t) { .gva = { vcpu } })
> +
>  static unsigned long copy_guest(void *buf, uint64_t addr, unsigned int len,
> -                                unsigned int flags)
> +                                copy_info_t info, unsigned int flags)
>  {
>      /* XXX needs to handle faults */
>      unsigned offset = addr & ~PAGE_MASK;
> @@ -23,7 +33,7 @@ static unsigned long copy_guest(void *buf, uint64_t addr, unsigned int len,
>          unsigned size = min(len, (unsigned)PAGE_SIZE - offset);
>          struct page_info *page;
>  
> -        page = get_page_from_gva(current, addr,
> +        page = get_page_from_gva(info.gva.v, addr,
>                                   (flags & COPY_to_guest) ? GV2M_WRITE : GV2M_READ);
>          if ( page == NULL )
>              return len;
> @@ -64,24 +74,27 @@ static unsigned long copy_guest(void *buf, uint64_t addr, unsigned int len,
>  
>  unsigned long raw_copy_to_guest(void *to, const void *from, unsigned len)
>  {
> -    return copy_guest((void *)from, (vaddr_t)to, len, COPY_to_guest);
> +    return copy_guest((void *)from, (vaddr_t)to, len,
> +                      GVA_INFO(current), COPY_to_guest);
>  }
>  
>  unsigned long raw_copy_to_guest_flush_dcache(void *to, const void *from,
>                                               unsigned len)
>  {
> -    return copy_guest((void *)from, (vaddr_t)to, len,
> +    return copy_guest((void *)from, (vaddr_t)to, len, GVA_INFO(current),
>                        COPY_to_guest | COPY_flush_dcache);
>  }
>  
>  unsigned long raw_clear_guest(void *to, unsigned len)
>  {
> -    return copy_guest(NULL, (vaddr_t)to, len, COPY_to_guest);
> +    return copy_guest(NULL, (vaddr_t)to, len, GVA_INFO(current),
> +                      COPY_to_guest);
>  }
>  
>  unsigned long raw_copy_from_guest(void *to, const void __user *from, unsigned len)
>  {
> -    return copy_guest(to, (vaddr_t)from, len, COPY_from_guest);
> +    return copy_guest(to, (vaddr_t)from, len, GVA_INFO(current),
> +                      COPY_from_guest);
>  }
>  
>  /*
> -- 
> 2.11.0
> 


* Re: [v2 06/16] xen/arm: Extend copy_to_guest to support copying from/to guest physical address
  2017-12-12 19:02 ` [v2 06/16] xen/arm: Extend copy_to_guest to support copying from/to guest physical address Julien Grall
@ 2017-12-12 20:06   ` Stefano Stabellini
  0 siblings, 0 replies; 24+ messages in thread
From: Stefano Stabellini @ 2017-12-12 20:06 UTC (permalink / raw)
  To: Julien Grall; +Cc: sstabellini, andre.przywara, xen-devel

On Tue, 12 Dec 2017, Julien Grall wrote:
> The only differences between copy_to_guest and access_guest_memory_by_ipa are:
>     - The latter does not support copying data crossing page boundary
>     - The former is copying from/to guest VA whilst the latter from
>     guest PA
> 
> copy_to_guest can easily be extended to support copying from/to guest
> physical address. For that a new bit is used to tell whether linear
> address or ipa is been used.
> 
> Lastly access_guest_memory_by_ipa is reimplemented using copy_to_guest.
> This also has the benefit of extending its use: it is now possible
> to copy data crossing a page boundary.
> 
> Signed-off-by: Julien Grall <julien.grall@linaro.org>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
>     Changes in v2:
>         - Rework the patch after the interface changes in the previous
>         patch.
>         - Use uint64_t rather than paddr_t in translate_get_page
>         - Add a BUILD_BUG_ON to check whether paddr_t fits in uint64_t
> ---
>  xen/arch/arm/guestcopy.c | 91 +++++++++++++++++++++++-------------------------
>  1 file changed, 44 insertions(+), 47 deletions(-)
> 
> diff --git a/xen/arch/arm/guestcopy.c b/xen/arch/arm/guestcopy.c
> index 7e92e27beb..93e4aa2d3f 100644
> --- a/xen/arch/arm/guestcopy.c
> +++ b/xen/arch/arm/guestcopy.c
> @@ -8,6 +8,8 @@
>  #define COPY_flush_dcache   (1U << 0)
>  #define COPY_from_guest     (0U << 1)
>  #define COPY_to_guest       (1U << 1)
> +#define COPY_ipa            (0U << 2)
> +#define COPY_linear         (1U << 2)
>  
>  typedef union
>  {
> @@ -15,9 +17,39 @@ typedef union
>      {
>          struct vcpu *v;
>      } gva;
> +
> +    struct
> +    {
> +        struct domain *d;
> +    } gpa;
>  } copy_info_t;
>  
>  #define GVA_INFO(vcpu) ((copy_info_t) { .gva = { vcpu } })
> +#define GPA_INFO(domain) ((copy_info_t) { .gpa = { domain } })
> +
> +static struct page_info *translate_get_page(copy_info_t info, uint64_t addr,
> +                                            bool linear, bool write)
> +{
> +    p2m_type_t p2mt;
> +    struct page_info *page;
> +
> +    if ( linear )
> +        return get_page_from_gva(info.gva.v, addr,
> +                                 write ? GV2M_WRITE : GV2M_READ);
> +
> +    page = get_page_from_gfn(info.gpa.d, paddr_to_pfn(addr), &p2mt, P2M_ALLOC);
> +
> +    if ( !page )
> +        return NULL;
> +
> +    if ( !p2m_is_ram(p2mt) )
> +    {
> +        put_page(page);
> +        return NULL;
> +    }
> +
> +    return page;
> +}
>  
>  static unsigned long copy_guest(void *buf, uint64_t addr, unsigned int len,
>                                  copy_info_t info, unsigned int flags)
> @@ -26,6 +58,7 @@ static unsigned long copy_guest(void *buf, uint64_t addr, unsigned int len,
>      unsigned offset = addr & ~PAGE_MASK;
>  
>      BUILD_BUG_ON((sizeof(addr)) < sizeof(vaddr_t));
> +    BUILD_BUG_ON((sizeof(addr)) < sizeof(paddr_t));
>  
>      while ( len )
>      {
> @@ -33,8 +66,8 @@ static unsigned long copy_guest(void *buf, uint64_t addr, unsigned int len,
>          unsigned size = min(len, (unsigned)PAGE_SIZE - offset);
>          struct page_info *page;
>  
> -        page = get_page_from_gva(info.gva.v, addr,
> -                                 (flags & COPY_to_guest) ? GV2M_WRITE : GV2M_READ);
> +        page = translate_get_page(info, addr, flags & COPY_linear,
> +                                  flags & COPY_to_guest);
>          if ( page == NULL )
>              return len;
>  
> @@ -75,75 +108,39 @@ static unsigned long copy_guest(void *buf, uint64_t addr, unsigned int len,
>  unsigned long raw_copy_to_guest(void *to, const void *from, unsigned len)
>  {
>      return copy_guest((void *)from, (vaddr_t)to, len,
> -                      GVA_INFO(current), COPY_to_guest);
> +                      GVA_INFO(current), COPY_to_guest | COPY_linear);
>  }
>  
>  unsigned long raw_copy_to_guest_flush_dcache(void *to, const void *from,
>                                               unsigned len)
>  {
>      return copy_guest((void *)from, (vaddr_t)to, len, GVA_INFO(current),
> -                      COPY_to_guest | COPY_flush_dcache);
> +                      COPY_to_guest | COPY_flush_dcache | COPY_linear);
>  }
>  
>  unsigned long raw_clear_guest(void *to, unsigned len)
>  {
>      return copy_guest(NULL, (vaddr_t)to, len, GVA_INFO(current),
> -                      COPY_to_guest);
> +                      COPY_to_guest | COPY_linear);
>  }
>  
>  unsigned long raw_copy_from_guest(void *to, const void __user *from, unsigned len)
>  {
>      return copy_guest(to, (vaddr_t)from, len, GVA_INFO(current),
> -                      COPY_from_guest);
> +                      COPY_from_guest | COPY_linear);
>  }
>  
> -/*
> - * Temporarily map one physical guest page and copy data to or from it.
> - * The data to be copied cannot cross a page boundary.
> - */
>  int access_guest_memory_by_ipa(struct domain *d, paddr_t gpa, void *buf,
>                                 uint32_t size, bool is_write)
>  {
> -    struct page_info *page;
> -    uint64_t offset = gpa & ~PAGE_MASK;  /* Offset within the mapped page */
> -    p2m_type_t p2mt;
> -    void *p;
> -
> -    /* Do not cross a page boundary. */
> -    if ( size > (PAGE_SIZE - offset) )
> -    {
> -        printk(XENLOG_G_ERR "d%d: guestcopy: memory access crosses page boundary.\n",
> -               d->domain_id);
> -        return -EINVAL;
> -    }
> -
> -    page = get_page_from_gfn(d, paddr_to_pfn(gpa), &p2mt, P2M_ALLOC);
> -    if ( !page )
> -    {
> -        printk(XENLOG_G_ERR "d%d: guestcopy: failed to get table entry.\n",
> -               d->domain_id);
> -        return -EINVAL;
> -    }
> +    unsigned long left;
> +    int flags = COPY_ipa;
>  
> -    if ( !p2m_is_ram(p2mt) )
> -    {
> -        put_page(page);
> -        printk(XENLOG_G_ERR "d%d: guestcopy: guest memory should be RAM.\n",
> -               d->domain_id);
> -        return -EINVAL;
> -    }
> -
> -    p = __map_domain_page(page);
> +    flags |= is_write ? COPY_to_guest : COPY_from_guest;
>  
> -    if ( is_write )
> -        memcpy(p + offset, buf, size);
> -    else
> -        memcpy(buf, p + offset, size);
> +    left = copy_guest(buf, gpa, size, GPA_INFO(d), flags);
>  
> -    unmap_domain_page(p);
> -    put_page(page);
> -
> -    return 0;
> +    return (!left) ? 0 : -EINVAL;
>  }
>  
>  /*
> -- 
> 2.11.0
> 
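The copy_info_t union reviewed above carries exactly one of two contexts (a vCPU for linear-address copies, a domain for ipa copies), with the COPY_linear flag telling translate_get_page which member is live. A toy model of that dispatch, with integer ids standing in for the real vcpu/domain pointers (an assumption for illustration):

```c
#define COPY_ipa    (0U << 2)
#define COPY_linear (1U << 2)

/* Toy model of copy_info_t: one union member per addressing mode; the
 * flags decide which member translate_get_page would consult, so only
 * the member matching the flags may be read. */
typedef union
{
    struct { int vcpu_id; } gva;   /* linear (guest virtual) copy */
    struct { int dom_id; }  gpa;   /* ipa (guest physical) copy */
} copy_info_t;

static int translate_owner(copy_info_t info, unsigned int flags)
{
    return (flags & COPY_linear) ? info.gva.vcpu_id : info.gpa.dom_id;
}
```

Because COPY_ipa is the zero value of the bit, callers must always set one of the two explicitly; the union keeps the copy_guest prototype stable while the set of supported addressing modes grows.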


* Re: [v2 07/16] xen/arm: Introduce copy_to_guest_phys_flush_dcache
  2017-12-12 19:02 ` [v2 07/16] xen/arm: Introduce copy_to_guest_phys_flush_dcache Julien Grall
@ 2017-12-12 20:10   ` Stefano Stabellini
  0 siblings, 0 replies; 24+ messages in thread
From: Stefano Stabellini @ 2017-12-12 20:10 UTC (permalink / raw)
  To: Julien Grall; +Cc: sstabellini, andre.przywara, xen-devel

On Tue, 12 Dec 2017, Julien Grall wrote:
> This new function will be used in a follow-up patch to copy data to the guest
> using the IPA (aka guest physical address) and then clean the cache.
> 
> Signed-off-by: Julien Grall <julien.grall@linaro.org>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
>     Changes in v2:
>         - Use the new interface
> ---
>  xen/arch/arm/guestcopy.c           | 9 +++++++++
>  xen/include/asm-arm/guest_access.h | 6 ++++++
>  2 files changed, 15 insertions(+)
> 
> diff --git a/xen/arch/arm/guestcopy.c b/xen/arch/arm/guestcopy.c
> index 93e4aa2d3f..7a0f3e9d5f 100644
> --- a/xen/arch/arm/guestcopy.c
> +++ b/xen/arch/arm/guestcopy.c
> @@ -130,6 +130,15 @@ unsigned long raw_copy_from_guest(void *to, const void __user *from, unsigned le
>                        COPY_from_guest | COPY_linear);
>  }
>  
> +unsigned long copy_to_guest_phys_flush_dcache(struct domain *d,
> +                                              paddr_t gpa,
> +                                              void *buf,
> +                                              unsigned int len)
> +{
> +    return copy_guest(buf, gpa, len, GPA_INFO(d),
> +                      COPY_to_guest | COPY_ipa | COPY_flush_dcache);
> +}
> +
>  int access_guest_memory_by_ipa(struct domain *d, paddr_t gpa, void *buf,
>                                 uint32_t size, bool is_write)
>  {
> diff --git a/xen/include/asm-arm/guest_access.h b/xen/include/asm-arm/guest_access.h
> index 6796801cfe..224d2a033b 100644
> --- a/xen/include/asm-arm/guest_access.h
> +++ b/xen/include/asm-arm/guest_access.h
> @@ -11,6 +11,12 @@ unsigned long raw_copy_to_guest_flush_dcache(void *to, const void *from,
>  unsigned long raw_copy_from_guest(void *to, const void *from, unsigned len);
>  unsigned long raw_clear_guest(void *to, unsigned len);
>  
> +/* Copy data to guest physical address, then clean the region. */
> +unsigned long copy_to_guest_phys_flush_dcache(struct domain *d,
> +                                              paddr_t phys,
> +                                              void *buf,
> +                                              unsigned int len);
> +
>  int access_guest_memory_by_ipa(struct domain *d, paddr_t ipa, void *buf,
>                                 uint32_t size, bool is_write);
>  
> -- 
> 2.11.0
> 


* Re: [v2 16/16] xen/arm: traps: Merge do_trap_instr_abort_guest and do_trap_data_abort_guest
  2017-12-12 19:02 ` [v2 16/16] xen/arm: traps: Merge do_trap_instr_abort_guest and do_trap_data_abort_guest Julien Grall
@ 2017-12-12 20:11   ` Stefano Stabellini
  0 siblings, 0 replies; 24+ messages in thread
From: Stefano Stabellini @ 2017-12-12 20:11 UTC (permalink / raw)
  To: Julien Grall; +Cc: sstabellini, andre.przywara, xen-devel

On Tue, 12 Dec 2017, Julien Grall wrote:
> The two helpers do_trap_instr_abort_guest and do_trap_data_abort_guest
> are used to trap stage-2 aborts. While the former only handles prefetch
> aborts and the latter data aborts, they are very similar and do not
> warrant separate helpers.
> 
> For instance, merging the both will make easier to maintain stage-2 abort
> handling. So consolidate the two helpers in a new helper
> do_trap_stage2_abort.
> 
> Signed-off-by: Julien Grall <julien.grall@linaro.org>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
>     Changes in v2
>         - Fix the way to compute npfec.write_access
> ---
>  xen/arch/arm/traps.c | 133 ++++++++++++++++-----------------------------------
>  1 file changed, 41 insertions(+), 92 deletions(-)
> 
> diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
> index 082396c26d..013c1600ec 100644
> --- a/xen/arch/arm/traps.c
> +++ b/xen/arch/arm/traps.c
> @@ -1861,79 +1861,6 @@ static inline bool hpfar_is_valid(bool s1ptw, uint8_t fsc)
>      return s1ptw || (fsc == FSC_FLT_TRANS && !check_workaround_834220());
>  }
>  
> -static void do_trap_instr_abort_guest(struct cpu_user_regs *regs,
> -                                      const union hsr hsr)
> -{
> -    int rc;
> -    register_t gva;
> -    uint8_t fsc = hsr.iabt.ifsc & ~FSC_LL_MASK;
> -    paddr_t gpa;
> -    mfn_t mfn;
> -
> -    gva = get_hfar(false /* is_data */);
> -
> -    /*
> -     * If this bit has been set, it means that this instruction abort is caused
> -     * by a guest external abort. We can handle this instruction abort as guest
> -     * SError.
> -     */
> -    if ( hsr.iabt.eat )
> -        return __do_trap_serror(regs, true);
> -
> -
> -    if ( hpfar_is_valid(hsr.iabt.s1ptw, fsc) )
> -        gpa = get_faulting_ipa(gva);
> -    else
> -    {
> -        /*
> -         * Flush the TLB to make sure the DTLB is clear before
> -         * doing GVA->IPA translation. If we got here because of
> -         * an entry only present in the ITLB, this translation may
> -         * still be inaccurate.
> -         */
> -        flush_tlb_local();
> -
> -        /*
> -         * We may not be able to translate because someone is
> -         * playing with the Stage-2 page table of the domain.
> -         * Return to the guest.
> -         */
> -        rc = gva_to_ipa(gva, &gpa, GV2M_READ);
> -        if ( rc == -EFAULT )
> -            return; /* Try again */
> -    }
> -
> -    switch ( fsc )
> -    {
> -    case FSC_FLT_PERM:
> -    {
> -        const struct npfec npfec = {
> -            .insn_fetch = 1,
> -            .gla_valid = 1,
> -            .kind = hsr.iabt.s1ptw ? npfec_kind_in_gpt : npfec_kind_with_gla
> -        };
> -
> -        p2m_mem_access_check(gpa, gva, npfec);
> -        /*
> -         * The only way to get here right now is because of mem_access,
> -         * thus reinjecting the exception to the guest is never required.
> -         */
> -        return;
> -    }
> -    case FSC_FLT_TRANS:
> -        /*
> -         * The PT walk may have failed because someone was playing
> -         * with the Stage-2 page table. Walk the Stage-2 PT to check
> -         * if the entry exists. If it's the case, return to the guest
> -         */
> -        mfn = gfn_to_mfn(current->domain, _gfn(paddr_to_pfn(gpa)));
> -        if ( !mfn_eq(mfn, INVALID_MFN) )
> -            return;
> -    }
> -
> -    inject_iabt_exception(regs, gva, hsr.len);
> -}
> -
>  static bool try_handle_mmio(struct cpu_user_regs *regs,
>                              const union hsr hsr,
>                              paddr_t gpa)
> @@ -1945,6 +1872,8 @@ static bool try_handle_mmio(struct cpu_user_regs *regs,
>      };
>      int rc;
>  
> +    ASSERT(hsr.ec == HSR_EC_DATA_ABORT_LOWER_EL);
> +
>      /* stage-1 page table should never live in an emulated MMIO region */
>      if ( dabt.s1ptw )
>          return false;
> @@ -2000,29 +1929,43 @@ static bool try_map_mmio(gfn_t gfn)
>      return !map_regions_p2mt(d, gfn, 1, mfn, p2m_mmio_direct_c);
>  }
>  
> -static void do_trap_data_abort_guest(struct cpu_user_regs *regs,
> -                                     const union hsr hsr)
> +static void do_trap_stage2_abort_guest(struct cpu_user_regs *regs,
> +                                       const union hsr hsr)
>  {
> -    const struct hsr_dabt dabt = hsr.dabt;
> +    /*
> +     * The encoding of hsr_iabt is a subset of hsr_dabt, so the
> +     * common hsr_xabt view can represent either kind of abort.
> +     */
> +    const struct hsr_xabt xabt = hsr.xabt;
>      int rc;
>      vaddr_t gva;
>      paddr_t gpa;
> -    uint8_t fsc = hsr.dabt.dfsc & ~FSC_LL_MASK;
> +    uint8_t fsc = xabt.fsc & ~FSC_LL_MASK;
>      mfn_t mfn;
> +    bool is_data = (hsr.ec == HSR_EC_DATA_ABORT_LOWER_EL);
>  
>      /*
> -     * If this bit has been set, it means that this data abort is caused
> -     * by a guest external abort. We treat this data abort as guest SError.
> +     * If this bit has been set, it means that this stage-2 abort is caused
> +     * by a guest external abort. We treat this stage-2 abort as guest SError.
>       */
> -    if ( dabt.eat )
> +    if ( xabt.eat )
>          return __do_trap_serror(regs, true);
>  
> -    gva = get_hfar(true /* is_data */);
> +    gva = get_hfar(is_data);
>  
> -    if ( hpfar_is_valid(dabt.s1ptw, fsc) )
> +    if ( hpfar_is_valid(xabt.s1ptw, fsc) )
>          gpa = get_faulting_ipa(gva);
>      else
>      {
> +        /*
> +         * Flush the TLB to make sure the DTLB is clear before
> +         * doing GVA->IPA translation. If we got here because of
> +         * an entry only present in the ITLB, this translation may
> +         * still be inaccurate.
> +         */
> +        if ( !is_data )
> +            flush_tlb_local();
> +
>          rc = gva_to_ipa(gva, &gpa, GV2M_READ);
>          /*
>           * We may not be able to translate because someone is
> @@ -2038,10 +1981,11 @@ static void do_trap_data_abort_guest(struct cpu_user_regs *regs,
>      case FSC_FLT_PERM:
>      {
>          const struct npfec npfec = {
> -            .read_access = !dabt.write,
> -            .write_access = dabt.write,
> +            .insn_fetch = !is_data,
> +            .read_access = is_data && !hsr.dabt.write,
> +            .write_access = is_data && hsr.dabt.write,
>              .gla_valid = 1,
> -            .kind = dabt.s1ptw ? npfec_kind_in_gpt : npfec_kind_with_gla
> +            .kind = xabt.s1ptw ? npfec_kind_in_gpt : npfec_kind_with_gla
>          };
>  
>          p2m_mem_access_check(gpa, gva, npfec);
> @@ -2055,8 +1999,10 @@ static void do_trap_data_abort_guest(struct cpu_user_regs *regs,
>          /*
>           * Attempt first to emulate the MMIO as the data abort will
>           * likely happen in an emulated region.
> +         *
> +         * Note that an emulated region cannot be executed.
>           */
> -        if ( try_handle_mmio(regs, hsr, gpa) )
> +        if ( is_data && try_handle_mmio(regs, hsr, gpa) )
>          {
>              advance_pc(regs, hsr);
>              return;
> @@ -2071,18 +2017,21 @@ static void do_trap_data_abort_guest(struct cpu_user_regs *regs,
>          if ( !mfn_eq(mfn, INVALID_MFN) )
>              return;
>  
> -        if ( try_map_mmio(gaddr_to_gfn(gpa)) )
> +        if ( is_data && try_map_mmio(gaddr_to_gfn(gpa)) )
>              return;
>  
>          break;
>      default:
> -        gprintk(XENLOG_WARNING, "Unsupported DFSC: HSR=%#x DFSC=%#x\n",
> -                hsr.bits, dabt.dfsc);
> +        gprintk(XENLOG_WARNING, "Unsupported FSC: HSR=%#x FSC=%#x\n",
> +                hsr.bits, xabt.fsc);
>      }
>  
>      gdprintk(XENLOG_DEBUG, "HSR=0x%x pc=%#"PRIregister" gva=%#"PRIvaddr
>               " gpa=%#"PRIpaddr"\n", hsr.bits, regs->pc, gva, gpa);
> -    inject_dabt_exception(regs, gva, hsr.len);
> +    if ( is_data )
> +        inject_dabt_exception(regs, gva, hsr.len);
> +    else
> +        inject_iabt_exception(regs, gva, hsr.len);
>  }
>  
>  static void enter_hypervisor_head(struct cpu_user_regs *regs)
> @@ -2215,11 +2164,11 @@ void do_trap_guest_sync(struct cpu_user_regs *regs)
>  
>      case HSR_EC_INSTR_ABORT_LOWER_EL:
>          perfc_incr(trap_iabt);
> -        do_trap_instr_abort_guest(regs, hsr);
> +        do_trap_stage2_abort_guest(regs, hsr);
>          break;
>      case HSR_EC_DATA_ABORT_LOWER_EL:
>          perfc_incr(trap_dabt);
> -        do_trap_data_abort_guest(regs, hsr);
> +        do_trap_stage2_abort_guest(regs, hsr);
>          break;
>  
>      default:
> -- 
> 2.11.0
> 
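
[Editorial note] The v2 change called out above ("Fix the way to compute npfec.write_access") hinges on never reporting a read or write access for an instruction abort: only insn_fetch may be set when is_data is false. A stand-alone sketch of that computation follows; struct and function names are illustrative, and Xen's real struct npfec carries additional fields (gla_valid, kind) omitted here:

```c
#include <stdbool.h>

/* Illustrative subset of Xen's struct npfec. */
struct npfec_sketch {
    bool insn_fetch;
    bool read_access;
    bool write_access;
};

/*
 * is_data is true for a data abort and false for a prefetch
 * (instruction) abort; 'write' comes from hsr.dabt.write and is
 * only meaningful when is_data is true.
 */
static struct npfec_sketch make_npfec(bool is_data, bool write)
{
    struct npfec_sketch n = {
        .insn_fetch   = !is_data,
        .read_access  = is_data && !write,
        .write_access = is_data && write,
    };

    return n;
}
```

With this shape, the merged handler can fill in one npfec for both abort kinds, which is what lets the two former helpers collapse into a single do_trap_stage2_abort_guest.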



Thread overview: 24+ messages
2017-12-12 19:01 [v2 00/16] xen/arm: Stage-2 handling cleanup Julien Grall
2017-12-12 19:01 ` [v2 01/16] xen/arm: raw_copy_to_guest_helper: Rename flush_dcache to flags Julien Grall
2017-12-12 19:01 ` [v2 02/16] xen/arm: raw_copy_to_guest_helper: Rework the prototype and rename it Julien Grall
2017-12-12 19:51   ` Stefano Stabellini
2017-12-12 19:01 ` [v2 03/16] xen/arm: Extend copy_to_guest to support copying from guest VA and use it Julien Grall
2017-12-12 19:54   ` Stefano Stabellini
2017-12-12 19:02 ` [v2 04/16] xen/arm: Extend copy_to_guest to support zeroing " Julien Grall
2017-12-12 19:57   ` Stefano Stabellini
2017-12-12 19:02 ` [v2 05/16] xen/arm: guest_copy: Extend the prototype to pass the vCPU Julien Grall
2017-12-12 20:00   ` Stefano Stabellini
2017-12-12 19:02 ` [v2 06/16] xen/arm: Extend copy_to_guest to support copying from/to guest physical address Julien Grall
2017-12-12 20:06   ` Stefano Stabellini
2017-12-12 19:02 ` [v2 07/16] xen/arm: Introduce copy_to_guest_phys_flush_dcache Julien Grall
2017-12-12 20:10   ` Stefano Stabellini
2017-12-12 19:02 ` [v2 08/16] xen/arm: kernel: Rework kernel_zimage_load to use the generic copy helper Julien Grall
2017-12-12 19:02 ` [v2 09/16] xen/arm: domain_build: Rework initrd_load " Julien Grall
2017-12-12 19:02 ` [v2 10/16] xen/arm: domain_build: Use copy_to_guest_phys_flush_dcache in dtb_load Julien Grall
2017-12-12 19:02 ` [v2 11/16] xen/arm: p2m: Rename p2m_flush_tlb and p2m_flush_tlb_sync Julien Grall
2017-12-12 19:02 ` [v2 12/16] xen/arm: p2m: Introduce p2m_tlb_flush_sync, export it and use it Julien Grall
2017-12-12 19:02 ` [v2 13/16] xen/arm: p2m: Fold p2m_tlb_flush into p2m_force_tlb_flush_sync Julien Grall
2017-12-12 19:02 ` [v2 14/16] xen/arm: traps: Remove the field gva from mmio_info_t Julien Grall
2017-12-12 19:02 ` [v2 15/16] xen/arm: traps: Move the definition of mmio_info_t in try_handle_mmio Julien Grall
2017-12-12 19:02 ` [v2 16/16] xen/arm: traps: Merge do_trap_instr_abort_guest and do_trap_data_abort_guest Julien Grall
2017-12-12 20:11   ` Stefano Stabellini
