* [PATCH for-next 00/16] xen/arm: Stage-2 handling cleanup
@ 2017-11-23 18:31 Julien Grall
  2017-11-23 18:31 ` [PATCH for-next 01/16] xen/arm: raw_copy_to_guest_helper: Rename flush_dcache to flags Julien Grall
                   ` (15 more replies)
  0 siblings, 16 replies; 49+ messages in thread
From: Julien Grall @ 2017-11-23 18:31 UTC (permalink / raw)
  To: xen-devel; +Cc: sstabellini, Julien Grall, andre.przywara

Hi all,

This patch series is a collection of cleanups around stage-2 handling. It
consolidates different pieces of the hypervisor, which will make it easier
to maintain and update stage-2 handling in the future.

Cheers,

Julien Grall (16):
  xen/arm: raw_copy_to_guest_helper: Rename flush_dcache to flags
  xen/arm: raw_copy_to_guest_helper: Rework the prototype and rename it
  xen/arm: Extend copy_to_guest to support copying from guest VA and use
    it
  xen/arm: Extend copy_to_guest to support zeroing guest VA and use it
  xen/arm: guest_copy: Extend the prototype to pass the vCPU
  xen/arm: Extend copy_to_guest to support copying from/to guest
    physical address
  xen/arm: Introduce copy_to_guest_phys_flush_dcache
  xen/arm: kernel: Rework kernel_zimage_load to use the generic copy
    helper
  xen/arm: domain_build: Rework initrd_load to use the generic copy
    helper
  xen/arm: domain_build: Use copy_to_guest_phys_flush_dcache in dtb_load
  xen/arm: p2m: Rename p2m_flush_tlb and p2m_flush_tlb_sync
  xen/arm: p2m: Introduce p2m_tlb_flush_sync, export it and use it
  xen/arm: p2m: Fold p2m_tlb_flush into p2m_force_tlb_flush_sync
  xen/arm: traps: Remove the field gva from mmio_info_t
  xen/arm: traps: Move the definition of mmio_info_t in try_handle_mmio
  xen/arm: traps: Merge do_trap_instr_abort_guest and
    do_trap_data_abort_guest

 xen/arch/arm/domain_build.c        |  39 +++-----
 xen/arch/arm/guestcopy.c           | 182 +++++++++++++++----------------------
 xen/arch/arm/kernel.c              |  33 +++----
 xen/arch/arm/kernel.h              |   2 +
 xen/arch/arm/p2m.c                 |  53 +++++------
 xen/arch/arm/traps.c               | 161 ++++++++++++--------------------
 xen/include/asm-arm/guest_access.h |   6 ++
 xen/include/asm-arm/mmio.h         |   1 -
 xen/include/asm-arm/p2m.h          |   2 +
 9 files changed, 191 insertions(+), 288 deletions(-)

-- 
2.11.0


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel


* [PATCH for-next 01/16] xen/arm: raw_copy_to_guest_helper: Rename flush_dcache to flags
  2017-11-23 18:31 [PATCH for-next 00/16] xen/arm: Stage-2 handling cleanup Julien Grall
@ 2017-11-23 18:31 ` Julien Grall
  2017-12-06  0:51   ` Stefano Stabellini
  2017-11-23 18:31 ` [PATCH for-next 02/16] xen/arm: raw_copy_to_guest_helper: Rework the prototype and rename it Julien Grall
                   ` (14 subsequent siblings)
  15 siblings, 1 reply; 49+ messages in thread
From: Julien Grall @ 2017-11-23 18:31 UTC (permalink / raw)
  To: xen-devel; +Cc: sstabellini, Julien Grall, andre.przywara

In a follow-up patch, it will be necessary to pass more flags to the
function.

Rename flush_dcache to flags and introduce a define to tell whether the
cache needs to be flushed after the copy.
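The flag scheme introduced here can be illustrated with a standalone sketch
(simplified stand-ins, not the actual Xen helpers; only the COPY_flush_dcache
define is taken from the patch):

```c
#include <string.h>

/* Bit 0: flush the data cache after the copy (as in the patch). */
#define COPY_flush_dcache   (1U << 0)

/*
 * Simplified stand-in for the hypervisor helper: performs the copy and
 * returns 1 if a cache flush was requested via the flags, 0 otherwise.
 */
static int copy_with_flags(char *dst, const char *src, unsigned int len,
                           unsigned int flags)
{
    memcpy(dst, src, len);
    return (flags & COPY_flush_dcache) != 0;
}
```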

Signed-off-by: Julien Grall <julien.grall@linaro.org>
---
 xen/arch/arm/guestcopy.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/xen/arch/arm/guestcopy.c b/xen/arch/arm/guestcopy.c
index 4ee07fcea3..2620e659b4 100644
--- a/xen/arch/arm/guestcopy.c
+++ b/xen/arch/arm/guestcopy.c
@@ -5,8 +5,10 @@
 #include <asm/current.h>
 #include <asm/guest_access.h>
 
+#define COPY_flush_dcache   (1U << 0)
+
 static unsigned long raw_copy_to_guest_helper(void *to, const void *from,
-                                              unsigned len, int flush_dcache)
+                                              unsigned len, int flags)
 {
     /* XXX needs to handle faults */
     unsigned offset = (vaddr_t)to & ~PAGE_MASK;
@@ -24,7 +26,7 @@ static unsigned long raw_copy_to_guest_helper(void *to, const void *from,
         p = __map_domain_page(page);
         p += offset;
         memcpy(p, from, size);
-        if ( flush_dcache )
+        if ( flags & COPY_flush_dcache )
             clean_dcache_va_range(p, size);
 
         unmap_domain_page(p - offset);
@@ -50,7 +52,7 @@ unsigned long raw_copy_to_guest(void *to, const void *from, unsigned len)
 unsigned long raw_copy_to_guest_flush_dcache(void *to, const void *from,
                                              unsigned len)
 {
-    return raw_copy_to_guest_helper(to, from, len, 1);
+    return raw_copy_to_guest_helper(to, from, len, COPY_flush_dcache);
 }
 
 unsigned long raw_clear_guest(void *to, unsigned len)
-- 
2.11.0




* [PATCH for-next 02/16] xen/arm: raw_copy_to_guest_helper: Rework the prototype and rename it
  2017-11-23 18:31 [PATCH for-next 00/16] xen/arm: Stage-2 handling cleanup Julien Grall
  2017-11-23 18:31 ` [PATCH for-next 01/16] xen/arm: raw_copy_to_guest_helper: Rename flush_dcache to flags Julien Grall
@ 2017-11-23 18:31 ` Julien Grall
  2017-12-06  1:04   ` Stefano Stabellini
  2017-11-23 18:31 ` [PATCH for-next 03/16] xen/arm: Extend copy_to_guest to support copying from guest VA and use it Julien Grall
                   ` (13 subsequent siblings)
  15 siblings, 1 reply; 49+ messages in thread
From: Julien Grall @ 2017-11-23 18:31 UTC (permalink / raw)
  To: xen-devel; +Cc: sstabellini, Julien Grall, andre.przywara

All the helpers within arch/arm/guestcopy.c are doing the same things:
copy data from/to the guest.

At the moment, the logic is duplicated in each helper, making it more
difficult to implement new variants.

The first step for the consolidation is to get a common prototype and a
base. For convenience (it is at the beginning of the file!),
raw_copy_to_guest_helper is chosen.

The function is now renamed to copy_guest to show it will be a generic
function to copy data from/to the guest. Note that, for now, only
copying to a guest virtual address is supported. Follow-up patches will
extend the support.

Signed-off-by: Julien Grall <julien.grall@linaro.org>
---
 xen/arch/arm/guestcopy.c | 18 +++++++++---------
 1 file changed, 9 insertions(+), 9 deletions(-)

diff --git a/xen/arch/arm/guestcopy.c b/xen/arch/arm/guestcopy.c
index 2620e659b4..d1cfbe922c 100644
--- a/xen/arch/arm/guestcopy.c
+++ b/xen/arch/arm/guestcopy.c
@@ -7,11 +7,11 @@
 
 #define COPY_flush_dcache   (1U << 0)
 
-static unsigned long raw_copy_to_guest_helper(void *to, const void *from,
-                                              unsigned len, int flags)
+static unsigned long copy_guest(void *buf, paddr_t addr, unsigned int len,
+                                unsigned int flags)
 {
     /* XXX needs to handle faults */
-    unsigned offset = (vaddr_t)to & ~PAGE_MASK;
+    unsigned offset = addr & ~PAGE_MASK;
 
     while ( len )
     {
@@ -19,21 +19,21 @@ static unsigned long raw_copy_to_guest_helper(void *to, const void *from,
         unsigned size = min(len, (unsigned)PAGE_SIZE - offset);
         struct page_info *page;
 
-        page = get_page_from_gva(current, (vaddr_t) to, GV2M_WRITE);
+        page = get_page_from_gva(current, addr, GV2M_WRITE);
         if ( page == NULL )
             return len;
 
         p = __map_domain_page(page);
         p += offset;
-        memcpy(p, from, size);
+        memcpy(p, buf, size);
         if ( flags & COPY_flush_dcache )
             clean_dcache_va_range(p, size);
 
         unmap_domain_page(p - offset);
         put_page(page);
         len -= size;
-        from += size;
-        to += size;
+        buf += size;
+        addr += size;
         /*
          * After the first iteration, guest virtual address is correctly
          * aligned to PAGE_SIZE.
@@ -46,13 +46,13 @@ static unsigned long raw_copy_to_guest_helper(void *to, const void *from,
 
 unsigned long raw_copy_to_guest(void *to, const void *from, unsigned len)
 {
-    return raw_copy_to_guest_helper(to, from, len, 0);
+    return copy_guest((void *)from, (unsigned long)to, len, 0);
 }
 
 unsigned long raw_copy_to_guest_flush_dcache(void *to, const void *from,
                                              unsigned len)
 {
-    return raw_copy_to_guest_helper(to, from, len, COPY_flush_dcache);
+    return copy_guest((void *)from, (unsigned long)to, len, COPY_flush_dcache);
 }
 
 unsigned long raw_clear_guest(void *to, unsigned len)
-- 
2.11.0




* [PATCH for-next 03/16] xen/arm: Extend copy_to_guest to support copying from guest VA and use it
  2017-11-23 18:31 [PATCH for-next 00/16] xen/arm: Stage-2 handling cleanup Julien Grall
  2017-11-23 18:31 ` [PATCH for-next 01/16] xen/arm: raw_copy_to_guest_helper: Rename flush_dcache to flags Julien Grall
  2017-11-23 18:31 ` [PATCH for-next 02/16] xen/arm: raw_copy_to_guest_helper: Rework the prototype and rename it Julien Grall
@ 2017-11-23 18:31 ` Julien Grall
  2017-12-06  1:05   ` Stefano Stabellini
  2017-11-23 18:31 ` [PATCH for-next 04/16] xen/arm: Extend copy_to_guest to support zeroing " Julien Grall
                   ` (12 subsequent siblings)
  15 siblings, 1 reply; 49+ messages in thread
From: Julien Grall @ 2017-11-23 18:31 UTC (permalink / raw)
  To: xen-devel; +Cc: sstabellini, Julien Grall, andre.przywara

The only differences between copy_to_guest (formerly called
raw_copy_to_guest_helper) and raw_copy_from_guest are:
    - The direction of the memcpy
    - The permission used for translating the address

Extend copy_to_guest to support copying from guest VA by using a bit in
the flags to tell the direction of the copy.

Lastly, reimplement raw_copy_from_guest using copy_to_guest.
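The direction bit described above can be sketched standalone (the two
defines are taken from the patch; copy_dir is a simplified stand-in for
the memcpy switch inside copy_guest, with guest_page standing in for the
mapped guest page):

```c
#include <string.h>

#define COPY_from_guest     (0U << 1)   /* read from guest memory  */
#define COPY_to_guest       (1U << 1)   /* write to guest memory   */

/* Pick the memcpy direction from the flag, as copy_guest does. */
static void copy_dir(char *guest_page, char *buf, unsigned int len,
                     unsigned int flags)
{
    if ( flags & COPY_to_guest )
        memcpy(guest_page, buf, len);
    else
        memcpy(buf, guest_page, len);
}
```

Note that COPY_from_guest is deliberately 0: testing `flags & COPY_to_guest`
is enough to distinguish the two directions.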

Signed-off-by: Julien Grall <julien.grall@linaro.org>
---
 xen/arch/arm/guestcopy.c | 46 +++++++++++++---------------------------------
 1 file changed, 13 insertions(+), 33 deletions(-)

diff --git a/xen/arch/arm/guestcopy.c b/xen/arch/arm/guestcopy.c
index d1cfbe922c..1ffa717ca6 100644
--- a/xen/arch/arm/guestcopy.c
+++ b/xen/arch/arm/guestcopy.c
@@ -6,6 +6,8 @@
 #include <asm/guest_access.h>
 
 #define COPY_flush_dcache   (1U << 0)
+#define COPY_from_guest     (0U << 1)
+#define COPY_to_guest       (1U << 1)
 
 static unsigned long copy_guest(void *buf, paddr_t addr, unsigned int len,
                                 unsigned int flags)
@@ -19,13 +21,18 @@ static unsigned long copy_guest(void *buf, paddr_t addr, unsigned int len,
         unsigned size = min(len, (unsigned)PAGE_SIZE - offset);
         struct page_info *page;
 
-        page = get_page_from_gva(current, addr, GV2M_WRITE);
+        page = get_page_from_gva(current, addr,
+                                 (flags & COPY_to_guest) ? GV2M_WRITE : GV2M_READ);
         if ( page == NULL )
             return len;
 
         p = __map_domain_page(page);
         p += offset;
-        memcpy(p, buf, size);
+        if ( flags & COPY_to_guest )
+            memcpy(p, buf, size);
+        else
+            memcpy(buf, p, size);
+
         if ( flags & COPY_flush_dcache )
             clean_dcache_va_range(p, size);
 
@@ -46,13 +53,14 @@ static unsigned long copy_guest(void *buf, paddr_t addr, unsigned int len,
 
 unsigned long raw_copy_to_guest(void *to, const void *from, unsigned len)
 {
-    return copy_guest((void *)from, (unsigned long)to, len, 0);
+    return copy_guest((void *)from, (unsigned long)to, len, COPY_to_guest);
 }
 
 unsigned long raw_copy_to_guest_flush_dcache(void *to, const void *from,
                                              unsigned len)
 {
-    return copy_guest((void *)from, (unsigned long)to, len, COPY_flush_dcache);
+    return copy_guest((void *)from, (unsigned long)to, len,
+                      COPY_to_guest | COPY_flush_dcache);
 }
 
 unsigned long raw_clear_guest(void *to, unsigned len)
@@ -90,35 +98,7 @@ unsigned long raw_clear_guest(void *to, unsigned len)
 
 unsigned long raw_copy_from_guest(void *to, const void __user *from, unsigned len)
 {
-    unsigned offset = (vaddr_t)from & ~PAGE_MASK;
-
-    while ( len )
-    {
-        void *p;
-        unsigned size = min(len, (unsigned)(PAGE_SIZE - offset));
-        struct page_info *page;
-
-        page = get_page_from_gva(current, (vaddr_t) from, GV2M_READ);
-        if ( page == NULL )
-            return len;
-
-        p = __map_domain_page(page);
-        p += ((vaddr_t)from & (~PAGE_MASK));
-
-        memcpy(to, p, size);
-
-        unmap_domain_page(p);
-        put_page(page);
-        len -= size;
-        from += size;
-        to += size;
-        /*
-         * After the first iteration, guest virtual address is correctly
-         * aligned to PAGE_SIZE.
-         */
-        offset = 0;
-    }
-    return 0;
+    return copy_guest(to, (unsigned long)from, len, COPY_from_guest);
 }
 
 /*
-- 
2.11.0




* [PATCH for-next 04/16] xen/arm: Extend copy_to_guest to support zeroing guest VA and use it
  2017-11-23 18:31 [PATCH for-next 00/16] xen/arm: Stage-2 handling cleanup Julien Grall
                   ` (2 preceding siblings ...)
  2017-11-23 18:31 ` [PATCH for-next 03/16] xen/arm: Extend copy_to_guest to support copying from guest VA and use it Julien Grall
@ 2017-11-23 18:31 ` Julien Grall
  2017-12-06  1:08   ` Stefano Stabellini
  2017-11-23 18:31 ` [PATCH for-next 05/16] xen/arm: guest_copy: Extend the prototype to pass the vCPU Julien Grall
                   ` (11 subsequent siblings)
  15 siblings, 1 reply; 49+ messages in thread
From: Julien Grall @ 2017-11-23 18:31 UTC (permalink / raw)
  To: xen-devel; +Cc: sstabellini, Julien Grall, andre.przywara

The function copy_to_guest can easily be extended to support zeroing
guest VA. To avoid using a new bit, a NULL buffer (i.e. buf == NULL)
means the guest memory will be zeroed.

Lastly, reimplement raw_clear_guest using copy_to_guest.
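The NULL-buffer convention can be sketched standalone (copy_or_zero is a
simplified stand-in for the COPY_to_guest branch of copy_guest, with
guest_page standing in for the mapped guest page):

```c
#include <stddef.h>
#include <string.h>

/* A NULL source buffer means "zero the destination", as in the patch. */
static void copy_or_zero(char *guest_page, const char *buf,
                         unsigned int len)
{
    if ( buf )
        memcpy(guest_page, buf, len);
    else
        memset(guest_page, 0, len);
}
```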

Signed-off-by: Julien Grall <julien.grall@linaro.org>
---
 xen/arch/arm/guestcopy.c | 41 +++++++++++------------------------------
 1 file changed, 11 insertions(+), 30 deletions(-)

diff --git a/xen/arch/arm/guestcopy.c b/xen/arch/arm/guestcopy.c
index 1ffa717ca6..3aaa80859e 100644
--- a/xen/arch/arm/guestcopy.c
+++ b/xen/arch/arm/guestcopy.c
@@ -29,7 +29,16 @@ static unsigned long copy_guest(void *buf, paddr_t addr, unsigned int len,
         p = __map_domain_page(page);
         p += offset;
         if ( flags & COPY_to_guest )
-            memcpy(p, buf, size);
+        {
+            /*
+             * buf will be NULL when the caller request to zero the
+             * guest memory.
+             */
+            if ( buf )
+                memcpy(p, buf, size);
+            else
+                memset(p, 0, size);
+        }
         else
             memcpy(buf, p, size);
 
@@ -65,35 +74,7 @@ unsigned long raw_copy_to_guest_flush_dcache(void *to, const void *from,
 
 unsigned long raw_clear_guest(void *to, unsigned len)
 {
-    /* XXX needs to handle faults */
-    unsigned offset = (vaddr_t)to & ~PAGE_MASK;
-
-    while ( len )
-    {
-        void *p;
-        unsigned size = min(len, (unsigned)PAGE_SIZE - offset);
-        struct page_info *page;
-
-        page = get_page_from_gva(current, (vaddr_t) to, GV2M_WRITE);
-        if ( page == NULL )
-            return len;
-
-        p = __map_domain_page(page);
-        p += offset;
-        memset(p, 0x00, size);
-
-        unmap_domain_page(p - offset);
-        put_page(page);
-        len -= size;
-        to += size;
-        /*
-         * After the first iteration, guest virtual address is correctly
-         * aligned to PAGE_SIZE.
-         */
-        offset = 0;
-    }
-
-    return 0;
+    return copy_guest(NULL, (unsigned long)to, len, COPY_to_guest);
 }
 
 unsigned long raw_copy_from_guest(void *to, const void __user *from, unsigned len)
-- 
2.11.0




* [PATCH for-next 05/16] xen/arm: guest_copy: Extend the prototype to pass the vCPU
  2017-11-23 18:31 [PATCH for-next 00/16] xen/arm: Stage-2 handling cleanup Julien Grall
                   ` (3 preceding siblings ...)
  2017-11-23 18:31 ` [PATCH for-next 04/16] xen/arm: Extend copy_to_guest to support zeroing " Julien Grall
@ 2017-11-23 18:31 ` Julien Grall
  2017-12-06  1:11   ` Stefano Stabellini
  2017-11-23 18:32 ` [PATCH for-next 06/16] xen/arm: Extend copy_to_guest to support copying from/to guest physical address Julien Grall
                   ` (10 subsequent siblings)
  15 siblings, 1 reply; 49+ messages in thread
From: Julien Grall @ 2017-11-23 18:31 UTC (permalink / raw)
  To: xen-devel; +Cc: sstabellini, Julien Grall, andre.przywara

Currently, guest_copy assumes the copy will only be done for the current
vCPU. A follow-up patch will require using a different vCPU.

So extend the prototype to pass the vCPU.

Signed-off-by: Julien Grall <julien.grall@linaro.org>
---
 xen/arch/arm/guestcopy.c | 13 +++++++------
 1 file changed, 7 insertions(+), 6 deletions(-)

diff --git a/xen/arch/arm/guestcopy.c b/xen/arch/arm/guestcopy.c
index 3aaa80859e..487f5ab82d 100644
--- a/xen/arch/arm/guestcopy.c
+++ b/xen/arch/arm/guestcopy.c
@@ -10,7 +10,7 @@
 #define COPY_to_guest       (1U << 1)
 
 static unsigned long copy_guest(void *buf, paddr_t addr, unsigned int len,
-                                unsigned int flags)
+                                struct vcpu *v, unsigned int flags)
 {
     /* XXX needs to handle faults */
     unsigned offset = addr & ~PAGE_MASK;
@@ -21,7 +21,7 @@ static unsigned long copy_guest(void *buf, paddr_t addr, unsigned int len,
         unsigned size = min(len, (unsigned)PAGE_SIZE - offset);
         struct page_info *page;
 
-        page = get_page_from_gva(current, addr,
+        page = get_page_from_gva(v, addr,
                                  (flags & COPY_to_guest) ? GV2M_WRITE : GV2M_READ);
         if ( page == NULL )
             return len;
@@ -62,24 +62,25 @@ static unsigned long copy_guest(void *buf, paddr_t addr, unsigned int len,
 
 unsigned long raw_copy_to_guest(void *to, const void *from, unsigned len)
 {
-    return copy_guest((void *)from, (unsigned long)to, len, COPY_to_guest);
+    return copy_guest((void *)from, (unsigned long)to, len,
+                      current, COPY_to_guest);
 }
 
 unsigned long raw_copy_to_guest_flush_dcache(void *to, const void *from,
                                              unsigned len)
 {
     return copy_guest((void *)from, (unsigned long)to, len,
-                      COPY_to_guest | COPY_flush_dcache);
+                      current, COPY_to_guest | COPY_flush_dcache);
 }
 
 unsigned long raw_clear_guest(void *to, unsigned len)
 {
-    return copy_guest(NULL, (unsigned long)to, len, COPY_to_guest);
+    return copy_guest(NULL, (unsigned long)to, len, current, COPY_to_guest);
 }
 
 unsigned long raw_copy_from_guest(void *to, const void __user *from, unsigned len)
 {
-    return copy_guest(to, (unsigned long)from, len, COPY_from_guest);
+    return copy_guest(to, (unsigned long)from, len, current, COPY_from_guest);
 }
 
 /*
-- 
2.11.0




* [PATCH for-next 06/16] xen/arm: Extend copy_to_guest to support copying from/to guest physical address
  2017-11-23 18:31 [PATCH for-next 00/16] xen/arm: Stage-2 handling cleanup Julien Grall
                   ` (4 preceding siblings ...)
  2017-11-23 18:31 ` [PATCH for-next 05/16] xen/arm: guest_copy: Extend the prototype to pass the vCPU Julien Grall
@ 2017-11-23 18:32 ` Julien Grall
  2017-12-06  1:22   ` Stefano Stabellini
  2017-11-23 18:32 ` [PATCH for-next 07/16] xen/arm: Introduce copy_to_guest_phys_flush_dcache Julien Grall
                   ` (9 subsequent siblings)
  15 siblings, 1 reply; 49+ messages in thread
From: Julien Grall @ 2017-11-23 18:32 UTC (permalink / raw)
  To: xen-devel; +Cc: sstabellini, Julien Grall, andre.przywara

The only differences between copy_to_guest and access_guest_memory_by_ipa are:
    - The latter does not support copying data crossing a page boundary
    - The former copies from/to a guest VA whilst the latter uses a
    guest PA

copy_to_guest can easily be extended to support copying from/to guest
physical addresses. For that, a new bit is used to tell whether a linear
address or an IPA is being used.

Lastly, access_guest_memory_by_ipa is reimplemented using copy_to_guest.
This also has the benefit of extending its use: it is now possible to
copy data crossing a page boundary.
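The address-space bit works the same way as the direction bit; a
standalone sketch (the two defines are taken from the patch; lookup is a
toy stand-in for translate_get_page, returning a tag instead of a page):

```c
#define COPY_ipa            (0U << 2)   /* address is a guest PA (IPA) */
#define COPY_linear         (1U << 2)   /* address is a guest VA       */

/*
 * Choose the translation routine from the flag, mirroring
 * translate_get_page() in the patch: 1 stands for the
 * get_page_from_gva() path, 2 for the get_page_from_gfn() path.
 */
static int lookup(unsigned long addr, unsigned int flags)
{
    (void)addr; /* a real implementation would translate addr */

    if ( flags & COPY_linear )
        return 1;
    return 2;
}
```

As with COPY_from_guest, COPY_ipa is 0, so `flags & COPY_linear` alone
selects the path.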

Signed-off-by: Julien Grall <julien.grall@linaro.org>
---
 xen/arch/arm/guestcopy.c | 86 ++++++++++++++++++++++--------------------------
 1 file changed, 39 insertions(+), 47 deletions(-)

diff --git a/xen/arch/arm/guestcopy.c b/xen/arch/arm/guestcopy.c
index 487f5ab82d..be53bee559 100644
--- a/xen/arch/arm/guestcopy.c
+++ b/xen/arch/arm/guestcopy.c
@@ -8,6 +8,31 @@
 #define COPY_flush_dcache   (1U << 0)
 #define COPY_from_guest     (0U << 1)
 #define COPY_to_guest       (1U << 1)
+#define COPY_ipa            (0U << 2)
+#define COPY_linear         (1U << 2)
+
+static struct page_info *translate_get_page(struct vcpu *v, paddr_t addr,
+                                            bool linear, bool write)
+{
+    p2m_type_t p2mt;
+    struct page_info *page;
+
+    if ( linear )
+        return get_page_from_gva(v, addr, write ? GV2M_WRITE : GV2M_READ);
+
+    page = get_page_from_gfn(v->domain, paddr_to_pfn(addr), &p2mt, P2M_ALLOC);
+
+    if ( !page )
+        return NULL;
+
+    if ( !p2m_is_ram(p2mt) )
+    {
+        put_page(page);
+        return NULL;
+    }
+
+    return page;
+}
 
 static unsigned long copy_guest(void *buf, paddr_t addr, unsigned int len,
                                 struct vcpu *v, unsigned int flags)
@@ -21,8 +46,8 @@ static unsigned long copy_guest(void *buf, paddr_t addr, unsigned int len,
         unsigned size = min(len, (unsigned)PAGE_SIZE - offset);
         struct page_info *page;
 
-        page = get_page_from_gva(v, addr,
-                                 (flags & COPY_to_guest) ? GV2M_WRITE : GV2M_READ);
+        page = translate_get_page(v, addr, flags & COPY_linear,
+                                  flags & COPY_to_guest);
         if ( page == NULL )
             return len;
 
@@ -63,73 +88,40 @@ static unsigned long copy_guest(void *buf, paddr_t addr, unsigned int len,
 unsigned long raw_copy_to_guest(void *to, const void *from, unsigned len)
 {
     return copy_guest((void *)from, (unsigned long)to, len,
-                      current, COPY_to_guest);
+                      current, COPY_to_guest | COPY_linear);
 }
 
 unsigned long raw_copy_to_guest_flush_dcache(void *to, const void *from,
                                              unsigned len)
 {
     return copy_guest((void *)from, (unsigned long)to, len,
-                      current, COPY_to_guest | COPY_flush_dcache);
+                      current, COPY_to_guest | COPY_flush_dcache | COPY_linear);
 }
 
 unsigned long raw_clear_guest(void *to, unsigned len)
 {
-    return copy_guest(NULL, (unsigned long)to, len, current, COPY_to_guest);
+    return copy_guest(NULL, (unsigned long)to, len, current,
+                      COPY_to_guest | COPY_linear);
 }
 
 unsigned long raw_copy_from_guest(void *to, const void __user *from, unsigned len)
 {
-    return copy_guest(to, (unsigned long)from, len, current, COPY_from_guest);
+    return copy_guest(to, (unsigned long)from, len, current,
+                      COPY_from_guest | COPY_linear);
 }
 
-/*
- * Temporarily map one physical guest page and copy data to or from it.
- * The data to be copied cannot cross a page boundary.
- */
 int access_guest_memory_by_ipa(struct domain *d, paddr_t gpa, void *buf,
                                uint32_t size, bool is_write)
 {
-    struct page_info *page;
-    uint64_t offset = gpa & ~PAGE_MASK;  /* Offset within the mapped page */
-    p2m_type_t p2mt;
-    void *p;
-
-    /* Do not cross a page boundary. */
-    if ( size > (PAGE_SIZE - offset) )
-    {
-        printk(XENLOG_G_ERR "d%d: guestcopy: memory access crosses page boundary.\n",
-               d->domain_id);
-        return -EINVAL;
-    }
-
-    page = get_page_from_gfn(d, paddr_to_pfn(gpa), &p2mt, P2M_ALLOC);
-    if ( !page )
-    {
-        printk(XENLOG_G_ERR "d%d: guestcopy: failed to get table entry.\n",
-               d->domain_id);
-        return -EINVAL;
-    }
-
-    if ( !p2m_is_ram(p2mt) )
-    {
-        put_page(page);
-        printk(XENLOG_G_ERR "d%d: guestcopy: guest memory should be RAM.\n",
-               d->domain_id);
-        return -EINVAL;
-    }
+    unsigned long left;
+    int flags = COPY_ipa;
 
-    p = __map_domain_page(page);
+    flags |= is_write ? COPY_to_guest : COPY_from_guest;
 
-    if ( is_write )
-        memcpy(p + offset, buf, size);
-    else
-        memcpy(buf, p + offset, size);
+    /* P2M is shared between all vCPUs, so the vcpu used does not matter. */
+    left = copy_guest(buf, gpa, size, d->vcpu[0], flags);
 
-    unmap_domain_page(p);
-    put_page(page);
-
-    return 0;
+    return (!left) ? 0 : -EINVAL;
 }
 
 /*
-- 
2.11.0




* [PATCH for-next 07/16] xen/arm: Introduce copy_to_guest_phys_flush_dcache
  2017-11-23 18:31 [PATCH for-next 00/16] xen/arm: Stage-2 handling cleanup Julien Grall
                   ` (5 preceding siblings ...)
  2017-11-23 18:32 ` [PATCH for-next 06/16] xen/arm: Extend copy_to_guest to support copying from/to guest physical address Julien Grall
@ 2017-11-23 18:32 ` Julien Grall
  2017-11-23 18:49   ` Andrew Cooper
  2017-11-23 18:32 ` [PATCH for-next 08/16] xen/arm: kernel: Rework kernel_zimage_load to use the generic copy helper Julien Grall
                   ` (8 subsequent siblings)
  15 siblings, 1 reply; 49+ messages in thread
From: Julien Grall @ 2017-11-23 18:32 UTC (permalink / raw)
  To: xen-devel; +Cc: sstabellini, Julien Grall, andre.przywara

This new function will be used in a follow-up patch to copy data to the guest
using the IPA (aka guest physical address) and then clean the cache.

Signed-off-by: Julien Grall <julien.grall@linaro.org>
---
 xen/arch/arm/guestcopy.c           | 10 ++++++++++
 xen/include/asm-arm/guest_access.h |  6 ++++++
 2 files changed, 16 insertions(+)

diff --git a/xen/arch/arm/guestcopy.c b/xen/arch/arm/guestcopy.c
index be53bee559..7958663970 100644
--- a/xen/arch/arm/guestcopy.c
+++ b/xen/arch/arm/guestcopy.c
@@ -110,6 +110,16 @@ unsigned long raw_copy_from_guest(void *to, const void __user *from, unsigned le
                       COPY_from_guest | COPY_linear);
 }
 
+unsigned long copy_to_guest_phys_flush_dcache(struct domain *d,
+                                              paddr_t gpa,
+                                              void *buf,
+                                              unsigned int len)
+{
+    /* P2M is shared between all vCPUs, so the vCPU used does not matter. */
+    return copy_guest(buf, gpa, len, d->vcpu[0],
+                      COPY_to_guest | COPY_ipa | COPY_flush_dcache);
+}
+
 int access_guest_memory_by_ipa(struct domain *d, paddr_t gpa, void *buf,
                                uint32_t size, bool is_write)
 {
diff --git a/xen/include/asm-arm/guest_access.h b/xen/include/asm-arm/guest_access.h
index 6796801cfe..224d2a033b 100644
--- a/xen/include/asm-arm/guest_access.h
+++ b/xen/include/asm-arm/guest_access.h
@@ -11,6 +11,12 @@ unsigned long raw_copy_to_guest_flush_dcache(void *to, const void *from,
 unsigned long raw_copy_from_guest(void *to, const void *from, unsigned len);
 unsigned long raw_clear_guest(void *to, unsigned len);
 
+/* Copy data to guest physical address, then clean the region. */
+unsigned long copy_to_guest_phys_flush_dcache(struct domain *d,
+                                              paddr_t phys,
+                                              void *buf,
+                                              unsigned int len);
+
 int access_guest_memory_by_ipa(struct domain *d, paddr_t ipa, void *buf,
                                uint32_t size, bool is_write);
 
-- 
2.11.0




* [PATCH for-next 08/16] xen/arm: kernel: Rework kernel_zimage_load to use the generic copy helper
  2017-11-23 18:31 [PATCH for-next 00/16] xen/arm: Stage-2 handling cleanup Julien Grall
                   ` (6 preceding siblings ...)
  2017-11-23 18:32 ` [PATCH for-next 07/16] xen/arm: Introduce copy_to_guest_phys_flush_dcache Julien Grall
@ 2017-11-23 18:32 ` Julien Grall
  2017-12-06  1:38   ` Stefano Stabellini
  2017-11-23 18:32 ` [PATCH for-next 09/16] xen/arm: domain_build: Rework initrd_load " Julien Grall
                   ` (7 subsequent siblings)
  15 siblings, 1 reply; 49+ messages in thread
From: Julien Grall @ 2017-11-23 18:32 UTC (permalink / raw)
  To: xen-devel; +Cc: sstabellini, Julien Grall, andre.przywara

The function kernel_zimage_load is dealing with IPAs but uses
gvirt_to_maddr to do the translation. This currently works because the
stage-1 MMU is disabled.

Furthermore, the function implements its own copy to the guest,
resulting in code duplication and making it more difficult to update the
page-table logic (such as support for Populate On Demand).

The new copy_to_guest_phys_flush_dcache could be used here by
temporarily mapping the full kernel in the virtual space.

Signed-off-by: Julien Grall <julien.grall@linaro.org>
---
 xen/arch/arm/domain_build.c |  1 +
 xen/arch/arm/kernel.c       | 33 ++++++++++++---------------------
 xen/arch/arm/kernel.h       |  2 ++
 3 files changed, 15 insertions(+), 21 deletions(-)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index c74f4dd69d..3f87bf2051 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -2133,6 +2133,7 @@ int construct_dom0(struct domain *d)
     d->max_pages = ~0U;
 
     kinfo.unassigned_mem = dom0_mem;
+    kinfo.d = d;
 
     rc = kernel_probe(&kinfo);
     if ( rc < 0 )
diff --git a/xen/arch/arm/kernel.c b/xen/arch/arm/kernel.c
index a6c6413712..2fb0b9684d 100644
--- a/xen/arch/arm/kernel.c
+++ b/xen/arch/arm/kernel.c
@@ -15,6 +15,8 @@
 #include <xen/gunzip.h>
 #include <xen/vmap.h>
 
+#include <asm/guest_access.h>
+
 #include "kernel.h"
 
 #define UIMAGE_MAGIC          0x27051956
@@ -157,7 +159,8 @@ static void kernel_zimage_load(struct kernel_info *info)
     paddr_t load_addr = kernel_zimage_place(info);
     paddr_t paddr = info->zimage.kernel_addr;
     paddr_t len = info->zimage.len;
-    unsigned long offs;
+    void *kernel;
+    int rc;
 
     info->entry = load_addr;
 
@@ -165,29 +168,17 @@ static void kernel_zimage_load(struct kernel_info *info)
 
     printk("Loading zImage from %"PRIpaddr" to %"PRIpaddr"-%"PRIpaddr"\n",
            paddr, load_addr, load_addr + len);
-    for ( offs = 0; offs < len; )
-    {
-        uint64_t par;
-        paddr_t s, l, ma = 0;
-        void *dst;
-
-        s = offs & ~PAGE_MASK;
-        l = min(PAGE_SIZE - s, len);
-
-        par = gvirt_to_maddr(load_addr + offs, &ma, GV2M_WRITE);
-        if ( par )
-        {
-            panic("Unable to map translate guest address");
-            return;
-        }
 
-        dst = map_domain_page(maddr_to_mfn(ma));
+    kernel = ioremap_wc(paddr, len);
+    if ( !kernel )
+        panic("Unable to map the hwdom kernel");
 
-        copy_from_paddr(dst + s, paddr + offs, l);
+    rc = copy_to_guest_phys_flush_dcache(info->d, load_addr,
+                                         kernel, len);
+    if ( rc != 0 )
+        panic("Unable to copy the kernel in the hwdom memory");
 
-        unmap_domain_page(dst);
-        offs += l;
-    }
+    iounmap(kernel);
 }
 
 /*
diff --git a/xen/arch/arm/kernel.h b/xen/arch/arm/kernel.h
index c1b07d4f7b..6d695097b5 100644
--- a/xen/arch/arm/kernel.h
+++ b/xen/arch/arm/kernel.h
@@ -15,6 +15,8 @@ struct kernel_info {
     enum domain_type type;
 #endif
 
+    struct domain *d;
+
     void *fdt; /* flat device tree */
     paddr_t unassigned_mem; /* RAM not (yet) assigned to a bank */
     struct meminfo mem;
-- 
2.11.0


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply related	[flat|nested] 49+ messages in thread

* [PATCH for-next 09/16] xen/arm: domain_build: Rework initrd_load to use the generic copy helper
  2017-11-23 18:31 [PATCH for-next 00/16] xen/arm: Stage-2 handling cleanup Julien Grall
                   ` (7 preceding siblings ...)
  2017-11-23 18:32 ` [PATCH for-next 08/16] xen/arm: kernel: Rework kernel_zimage_load to use the generic copy helper Julien Grall
@ 2017-11-23 18:32 ` Julien Grall
  2017-12-06  1:56   ` Stefano Stabellini
  2017-11-23 18:32 ` [PATCH for-next 10/16] xen/arm: domain_build: Use copy_to_guest_phys_flush_dcache in dtb_load Julien Grall
                   ` (6 subsequent siblings)
  15 siblings, 1 reply; 49+ messages in thread
From: Julien Grall @ 2017-11-23 18:32 UTC (permalink / raw)
  To: xen-devel; +Cc: sstabellini, Julien Grall, andre.przywara

The function initrd_load deals with IPAs but uses gvirt_to_maddr to do
the translation. This currently works because the stage-1 MMU is
disabled.

Furthermore, the function implements its own copy-to-guest, resulting in
code duplication and making it more difficult to update the page-table
logic (such as support for Populate On Demand).

The new copy_to_guest_phys_flush_dcache can be used here by temporarily
mapping the full initrd in the virtual address space.

Signed-off-by: Julien Grall <julien.grall@linaro.org>
---
 xen/arch/arm/domain_build.c | 31 ++++++++-----------------------
 1 file changed, 8 insertions(+), 23 deletions(-)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 3f87bf2051..42c2e16ef6 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -1966,11 +1966,11 @@ static void initrd_load(struct kernel_info *kinfo)
     const struct bootmodule *mod = kinfo->initrd_bootmodule;
     paddr_t load_addr = kinfo->initrd_paddr;
     paddr_t paddr, len;
-    unsigned long offs;
     int node;
     int res;
     __be32 val[2];
     __be32 *cellp;
+    void __iomem *initrd;
 
     if ( !mod || !mod->size )
         return;
@@ -2000,29 +2000,14 @@ static void initrd_load(struct kernel_info *kinfo)
     if ( res )
         panic("Cannot fix up \"linux,initrd-end\" property");
 
-    for ( offs = 0; offs < len; )
-    {
-        uint64_t par;
-        paddr_t s, l, ma = 0;
-        void *dst;
-
-        s = offs & ~PAGE_MASK;
-        l = min(PAGE_SIZE - s, len);
-
-        par = gvirt_to_maddr(load_addr + offs, &ma, GV2M_WRITE);
-        if ( par )
-        {
-            panic("Unable to translate guest address");
-            return;
-        }
-
-        dst = map_domain_page(maddr_to_mfn(ma));
+    initrd = ioremap_wc(paddr, len);
+    if ( !initrd )
+        panic("Unable to map the hwdom initrd");
 
-        copy_from_paddr(dst + s, paddr + offs, l);
-
-        unmap_domain_page(dst);
-        offs += l;
-    }
+    res = copy_to_guest_phys_flush_dcache(kinfo->d, load_addr,
+                                          initrd, len);
+    if ( res != 0 )
+        panic("Unable to copy the initrd in the hwdom memory");
 }
 
 static void evtchn_fixup(struct domain *d, struct kernel_info *kinfo)
-- 
2.11.0



* [PATCH for-next 10/16] xen/arm: domain_build: Use copy_to_guest_phys_flush_dcache in dtb_load
  2017-11-23 18:31 [PATCH for-next 00/16] xen/arm: Stage-2 handling cleanup Julien Grall
                   ` (8 preceding siblings ...)
  2017-11-23 18:32 ` [PATCH for-next 09/16] xen/arm: domain_build: Rework initrd_load " Julien Grall
@ 2017-11-23 18:32 ` Julien Grall
  2017-12-06  1:59   ` Stefano Stabellini
  2017-11-23 18:32 ` [PATCH for-next 11/16] xen/arm: p2m: Rename p2m_flush_tlb and p2m_flush_tlb_sync Julien Grall
                   ` (5 subsequent siblings)
  15 siblings, 1 reply; 49+ messages in thread
From: Julien Grall @ 2017-11-23 18:32 UTC (permalink / raw)
  To: xen-devel; +Cc: sstabellini, Julien Grall, andre.przywara

The function dtb_load deals with IPAs but uses gvirt_to_maddr to do the
translation. This currently works because the stage-1 MMU is disabled.

Rather than relying on such an assumption, use the new
copy_to_guest_phys_flush_dcache. This also results in slightly more
comprehensible code.

Signed-off-by: Julien Grall <julien.grall@linaro.org>
---
 xen/arch/arm/domain_build.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 42c2e16ef6..9245753a6b 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -1948,14 +1948,15 @@ static int prepare_acpi(struct domain *d, struct kernel_info *kinfo)
 #endif
 static void dtb_load(struct kernel_info *kinfo)
 {
-    void * __user dtb_virt = (void * __user)(register_t)kinfo->dtb_paddr;
     unsigned long left;
 
     printk("Loading dom0 DTB to 0x%"PRIpaddr"-0x%"PRIpaddr"\n",
            kinfo->dtb_paddr, kinfo->dtb_paddr + fdt_totalsize(kinfo->fdt));
 
-    left = raw_copy_to_guest_flush_dcache(dtb_virt, kinfo->fdt,
-                                        fdt_totalsize(kinfo->fdt));
+    left = copy_to_guest_phys_flush_dcache(kinfo->d, kinfo->dtb_paddr,
+                                           kinfo->fdt,
+                                           fdt_totalsize(kinfo->fdt));
+
     if ( left != 0 )
         panic("Unable to copy the DTB to dom0 memory (left = %lu bytes)", left);
     xfree(kinfo->fdt);
-- 
2.11.0



* [PATCH for-next 11/16] xen/arm: p2m: Rename p2m_flush_tlb and p2m_flush_tlb_sync
  2017-11-23 18:31 [PATCH for-next 00/16] xen/arm: Stage-2 handling cleanup Julien Grall
                   ` (9 preceding siblings ...)
  2017-11-23 18:32 ` [PATCH for-next 10/16] xen/arm: domain_build: Use copy_to_guest_phys_flush_dcache in dtb_load Julien Grall
@ 2017-11-23 18:32 ` Julien Grall
  2017-12-07 22:10   ` Stefano Stabellini
  2017-11-23 18:32 ` [PATCH for-next 12/16] xen/arm: p2m: Introduce p2m_tlb_flush_sync, export it and use it Julien Grall
                   ` (4 subsequent siblings)
  15 siblings, 1 reply; 49+ messages in thread
From: Julien Grall @ 2017-11-23 18:32 UTC (permalink / raw)
  To: xen-devel; +Cc: sstabellini, Julien Grall, andre.przywara

Rename p2m_flush_tlb and p2m_flush_tlb_sync to respectively
p2m_tlb_flush and p2m_force_tlb_flush_sync.

At first glance, inverting 'flush' and 'tlb' might seem pointless, but
it will make it easier in the future to port code from the x86 P2M, or
even to share code with it.

For p2m_flush_tlb_sync, 'force' was added because the TLBs are flushed
unconditionally. A follow-up patch will add a helper to flush the TLBs
only in certain cases.

Signed-off-by: Julien Grall <julien.grall@linaro.org>
---
 xen/arch/arm/p2m.c | 18 +++++++++---------
 1 file changed, 9 insertions(+), 9 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 417609ede2..d466a5bc43 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -52,7 +52,7 @@ static const paddr_t level_masks[] =
 static const uint8_t level_orders[] =
     { ZEROETH_ORDER, FIRST_ORDER, SECOND_ORDER, THIRD_ORDER };
 
-static void p2m_flush_tlb(struct p2m_domain *p2m);
+static void p2m_tlb_flush(struct p2m_domain *p2m);
 
 /* Unlock the flush and do a P2M TLB flush if necessary */
 void p2m_write_unlock(struct p2m_domain *p2m)
@@ -65,7 +65,7 @@ void p2m_write_unlock(struct p2m_domain *p2m)
          * to avoid someone else modify the P2M before the TLB
          * invalidation has completed.
          */
-        p2m_flush_tlb(p2m);
+        p2m_tlb_flush(p2m);
     }
 
     write_unlock(&p2m->lock);
@@ -138,7 +138,7 @@ void p2m_restore_state(struct vcpu *n)
     *last_vcpu_ran = n->vcpu_id;
 }
 
-static void p2m_flush_tlb(struct p2m_domain *p2m)
+static void p2m_tlb_flush(struct p2m_domain *p2m)
 {
     unsigned long flags = 0;
     uint64_t ovttbr;
@@ -170,11 +170,11 @@ static void p2m_flush_tlb(struct p2m_domain *p2m)
  *
  * Must be called with the p2m lock held.
  */
-static void p2m_flush_tlb_sync(struct p2m_domain *p2m)
+static void p2m_force_tlb_flush_sync(struct p2m_domain *p2m)
 {
     ASSERT(p2m_is_write_locked(p2m));
 
-    p2m_flush_tlb(p2m);
+    p2m_tlb_flush(p2m);
     p2m->need_flush = false;
 }
 
@@ -675,7 +675,7 @@ static void p2m_free_entry(struct p2m_domain *p2m,
      * flush?
      */
     if ( p2m->need_flush )
-        p2m_flush_tlb_sync(p2m);
+        p2m_force_tlb_flush_sync(p2m);
 
     mfn = _mfn(entry.p2m.base);
     ASSERT(mfn_valid(mfn));
@@ -864,7 +864,7 @@ static int __p2m_set_entry(struct p2m_domain *p2m,
          * For more details see (D4.7.1 in ARM DDI 0487A.j).
          */
         p2m_remove_pte(entry, p2m->clean_pte);
-        p2m_flush_tlb_sync(p2m);
+        p2m_force_tlb_flush_sync(p2m);
 
         p2m_write_pte(entry, split_pte, p2m->clean_pte);
 
@@ -940,7 +940,7 @@ static int __p2m_set_entry(struct p2m_domain *p2m,
         {
             if ( likely(!p2m->mem_access_enabled) ||
                  P2M_CLEAR_PERM(pte) != P2M_CLEAR_PERM(orig_pte) )
-                p2m_flush_tlb_sync(p2m);
+                p2m_force_tlb_flush_sync(p2m);
             else
                 p2m->need_flush = true;
         }
@@ -1144,7 +1144,7 @@ static int p2m_alloc_table(struct domain *d)
      * Make sure that all TLBs corresponding to the new VMID are flushed
      * before using it
      */
-    p2m_flush_tlb(p2m);
+    p2m_tlb_flush(p2m);
 
     return 0;
 }
-- 
2.11.0



* [PATCH for-next 12/16] xen/arm: p2m: Introduce p2m_tlb_flush_sync, export it and use it
  2017-11-23 18:31 [PATCH for-next 00/16] xen/arm: Stage-2 handling cleanup Julien Grall
                   ` (10 preceding siblings ...)
  2017-11-23 18:32 ` [PATCH for-next 11/16] xen/arm: p2m: Rename p2m_flush_tlb and p2m_flush_tlb_sync Julien Grall
@ 2017-11-23 18:32 ` Julien Grall
  2017-12-07 22:14   ` Stefano Stabellini
  2017-11-23 18:32 ` [PATCH for-next 13/16] xen/arm: p2m: Fold p2m_tlb_flush into p2m_force_tlb_flush_sync Julien Grall
                   ` (3 subsequent siblings)
  15 siblings, 1 reply; 49+ messages in thread
From: Julien Grall @ 2017-11-23 18:32 UTC (permalink / raw)
  To: xen-devel; +Cc: sstabellini, Julien Grall, andre.przywara

Multiple places in the code require flushing the TLBs only when
p2m->need_flush is set.

Rather than open-coding it, introduce a new helper p2m_tlb_flush_sync to
do it.

Note that p2m_tlb_flush_sync is exported as it might be used by other
parts of Xen.

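The intended behaviour can be modelled in a few lines: the exported
helper only issues a flush when need_flush is set, while the internal
'force' variant flushes unconditionally and clears the flag. A toy model
with hypothetical names (deliberately simplified, not the actual Xen
implementation):

```c
#include <assert.h>
#include <stdbool.h>

struct p2m_model
{
    bool need_flush;          /* a P2M change requested a flush */
    unsigned int flushes;     /* actual TLB flushes issued */
};

/* Unconditional flush; clears the pending-flush flag. */
static void force_tlb_flush_sync(struct p2m_model *p2m)
{
    p2m->flushes++;
    p2m->need_flush = false;
}

/* Exported helper: flush only when a flush is actually pending. */
static void tlb_flush_sync(struct p2m_model *p2m)
{
    if ( p2m->need_flush )
        force_tlb_flush_sync(p2m);
}

/* Scenario helper: how many flushes does one sync call issue? */
static unsigned int flushes_for(bool need_flush)
{
    struct p2m_model p = { .need_flush = need_flush, .flushes = 0 };

    tlb_flush_sync(&p);
    return p.flushes;
}

/* Scenario helper: a pending flush is consumed by the sync call. */
static bool clears_flag(void)
{
    struct p2m_model p = { .need_flush = true, .flushes = 0 };

    tlb_flush_sync(&p);
    return !p.need_flush;
}
```

This is why callers such as p2m_write_unlock can call the helper
unconditionally: the check moves into the helper.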
Signed-off-by: Julien Grall <julien.grall@linaro.org>
---
 xen/arch/arm/p2m.c        | 27 +++++++++++++--------------
 xen/include/asm-arm/p2m.h |  2 ++
 2 files changed, 15 insertions(+), 14 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index d466a5bc43..37498d8ff1 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -52,21 +52,15 @@ static const paddr_t level_masks[] =
 static const uint8_t level_orders[] =
     { ZEROETH_ORDER, FIRST_ORDER, SECOND_ORDER, THIRD_ORDER };
 
-static void p2m_tlb_flush(struct p2m_domain *p2m);
-
 /* Unlock the flush and do a P2M TLB flush if necessary */
 void p2m_write_unlock(struct p2m_domain *p2m)
 {
-    if ( p2m->need_flush )
-    {
-        p2m->need_flush = false;
-        /*
-         * The final flush is done with the P2M write lock taken to
-         * to avoid someone else modify the P2M before the TLB
-         * invalidation has completed.
-         */
-        p2m_tlb_flush(p2m);
-    }
+    /*
+     * The final flush is done with the P2M write lock taken to avoid
+     * someone else modifying the P2M before the TLB invalidation has
+     * completed.
+     */
+    p2m_tlb_flush_sync(p2m);
 
     write_unlock(&p2m->lock);
 }
@@ -178,6 +172,12 @@ static void p2m_force_tlb_flush_sync(struct p2m_domain *p2m)
     p2m->need_flush = false;
 }
 
+void p2m_tlb_flush_sync(struct p2m_domain *p2m)
+{
+    if ( p2m->need_flush )
+        p2m_force_tlb_flush_sync(p2m);
+}
+
 /*
  * Find and map the root page table. The caller is responsible for
  * unmapping the table.
@@ -674,8 +674,7 @@ static void p2m_free_entry(struct p2m_domain *p2m,
      * XXX: Should we defer the free of the page table to avoid the
      * flush?
      */
-    if ( p2m->need_flush )
-        p2m_force_tlb_flush_sync(p2m);
+    p2m_tlb_flush_sync(p2m);
 
     mfn = _mfn(entry.p2m.base);
     ASSERT(mfn_valid(mfn));
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index faadcfe8fe..a0abc84ed8 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -204,6 +204,8 @@ static inline int p2m_is_write_locked(struct p2m_domain *p2m)
     return rw_is_write_locked(&p2m->lock);
 }
 
+void p2m_tlb_flush_sync(struct p2m_domain *p2m);
+
 /* Look up the MFN corresponding to a domain's GFN. */
 mfn_t p2m_lookup(struct domain *d, gfn_t gfn, p2m_type_t *t);
 
-- 
2.11.0



* [PATCH for-next 13/16] xen/arm: p2m: Fold p2m_tlb_flush into p2m_force_tlb_flush_sync
  2017-11-23 18:31 [PATCH for-next 00/16] xen/arm: Stage-2 handling cleanup Julien Grall
                   ` (11 preceding siblings ...)
  2017-11-23 18:32 ` [PATCH for-next 12/16] xen/arm: p2m: Introduce p2m_tlb_flush_sync, export it and use it Julien Grall
@ 2017-11-23 18:32 ` Julien Grall
  2017-12-07 22:26   ` Stefano Stabellini
  2017-11-23 18:32 ` [PATCH for-next 14/16] xen/arm: traps: Remove the field gva from mmio_info_t Julien Grall
                   ` (2 subsequent siblings)
  15 siblings, 1 reply; 49+ messages in thread
From: Julien Grall @ 2017-11-23 18:32 UTC (permalink / raw)
  To: xen-devel; +Cc: sstabellini, Julien Grall, andre.przywara

p2m_tlb_flush is called in 2 places: p2m_alloc_table and
p2m_force_tlb_flush_sync.

p2m_alloc_table is called when the domain is initialized, and its call
could be replaced by a call to p2m_force_tlb_flush_sync with the P2M
write lock taken.

This may seem a bit pointless, but it allows a single API for flushing
and avoids misuse in the P2M code.

So update p2m_alloc_table to use p2m_force_tlb_flush_sync and fold
p2m_tlb_flush into p2m_force_tlb_flush_sync.

Signed-off-by: Julien Grall <julien.grall@linaro.org>
---
 xen/arch/arm/p2m.c | 24 +++++++++++-------------
 1 file changed, 11 insertions(+), 13 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 37498d8ff1..5294113afe 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -132,11 +132,18 @@ void p2m_restore_state(struct vcpu *n)
     *last_vcpu_ran = n->vcpu_id;
 }
 
-static void p2m_tlb_flush(struct p2m_domain *p2m)
+/*
+ * Force a synchronous P2M TLB flush.
+ *
+ * Must be called with the p2m lock held.
+ */
+static void p2m_force_tlb_flush_sync(struct p2m_domain *p2m)
 {
     unsigned long flags = 0;
     uint64_t ovttbr;
 
+    ASSERT(p2m_is_write_locked(p2m));
+
     /*
      * ARM only provides an instruction to flush TLBs for the current
      * VMID. So switch to the VTTBR of a given P2M if different.
@@ -157,18 +164,7 @@ static void p2m_tlb_flush(struct p2m_domain *p2m)
         isb();
         local_irq_restore(flags);
     }
-}
-
-/*
- * Force a synchronous P2M TLB flush.
- *
- * Must be called with the p2m lock held.
- */
-static void p2m_force_tlb_flush_sync(struct p2m_domain *p2m)
-{
-    ASSERT(p2m_is_write_locked(p2m));
 
-    p2m_tlb_flush(p2m);
     p2m->need_flush = false;
 }
 
@@ -1143,7 +1139,9 @@ static int p2m_alloc_table(struct domain *d)
      * Make sure that all TLBs corresponding to the new VMID are flushed
      * before using it
      */
-    p2m_tlb_flush(p2m);
+    p2m_write_lock(p2m);
+    p2m_force_tlb_flush_sync(p2m);
+    p2m_write_unlock(p2m);
 
     return 0;
 }
-- 
2.11.0



* [PATCH for-next 14/16] xen/arm: traps: Remove the field gva from mmio_info_t
  2017-11-23 18:31 [PATCH for-next 00/16] xen/arm: Stage-2 handling cleanup Julien Grall
                   ` (12 preceding siblings ...)
  2017-11-23 18:32 ` [PATCH for-next 13/16] xen/arm: p2m: Fold p2m_tlb_flush into p2m_force_tlb_flush_sync Julien Grall
@ 2017-11-23 18:32 ` Julien Grall
  2017-12-07 22:29   ` Stefano Stabellini
  2017-11-23 18:32 ` [PATCH for-next 15/16] xen/arm: traps: Move the definition of mmio_info_t in try_handle_mmio Julien Grall
  2017-11-23 18:32 ` [PATCH for-next 16/16] xen/arm: traps: Merge do_trap_instr_abort_guest and do_trap_data_abort_guest Julien Grall
  15 siblings, 1 reply; 49+ messages in thread
From: Julien Grall @ 2017-11-23 18:32 UTC (permalink / raw)
  To: xen-devel; +Cc: sstabellini, Julien Grall, andre.przywara

mmio_info_t is used to gather the information needed to emulate a
region. The guest virtual address is unlikely to be useful and is not
currently used. So remove the field gva from mmio_info_t and replace it
with a local variable.

Signed-off-by: Julien Grall <julien.grall@linaro.org>
---
 xen/arch/arm/traps.c       | 13 +++++++------
 xen/include/asm-arm/mmio.h |  1 -
 2 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index f6f6de3691..e30dd9b7e2 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -2001,6 +2001,7 @@ static void do_trap_data_abort_guest(struct cpu_user_regs *regs,
 {
     const struct hsr_dabt dabt = hsr.dabt;
     int rc;
+    vaddr_t gva;
     mmio_info_t info;
     uint8_t fsc = hsr.dabt.dfsc & ~FSC_LL_MASK;
     mfn_t mfn;
@@ -2014,13 +2015,13 @@ static void do_trap_data_abort_guest(struct cpu_user_regs *regs,
 
     info.dabt = dabt;
 
-    info.gva = get_hfar(true /* is_data */);
+    gva = get_hfar(true /* is_data */);
 
     if ( hpfar_is_valid(dabt.s1ptw, fsc) )
-        info.gpa = get_faulting_ipa(info.gva);
+        info.gpa = get_faulting_ipa(gva);
     else
     {
-        rc = gva_to_ipa(info.gva, &info.gpa, GV2M_READ);
+        rc = gva_to_ipa(gva, &info.gpa, GV2M_READ);
         /*
          * We may not be able to translate because someone is
          * playing with the Stage-2 page table of the domain.
@@ -2041,7 +2042,7 @@ static void do_trap_data_abort_guest(struct cpu_user_regs *regs,
             .kind = dabt.s1ptw ? npfec_kind_in_gpt : npfec_kind_with_gla
         };
 
-        p2m_mem_access_check(info.gpa, info.gva, npfec);
+        p2m_mem_access_check(info.gpa, gva, npfec);
         /*
          * The only way to get here right now is because of mem_access,
          * thus reinjecting the exception to the guest is never required.
@@ -2078,8 +2079,8 @@ static void do_trap_data_abort_guest(struct cpu_user_regs *regs,
     }
 
     gdprintk(XENLOG_DEBUG, "HSR=0x%x pc=%#"PRIregister" gva=%#"PRIvaddr
-             " gpa=%#"PRIpaddr"\n", hsr.bits, regs->pc, info.gva, info.gpa);
-    inject_dabt_exception(regs, info.gva, hsr.len);
+             " gpa=%#"PRIpaddr"\n", hsr.bits, regs->pc, gva, info.gpa);
+    inject_dabt_exception(regs, gva, hsr.len);
 }
 
 static void enter_hypervisor_head(struct cpu_user_regs *regs)
diff --git a/xen/include/asm-arm/mmio.h b/xen/include/asm-arm/mmio.h
index c620eed4cd..37e2b7a707 100644
--- a/xen/include/asm-arm/mmio.h
+++ b/xen/include/asm-arm/mmio.h
@@ -29,7 +29,6 @@
 typedef struct
 {
     struct hsr_dabt dabt;
-    vaddr_t gva;
     paddr_t gpa;
 } mmio_info_t;
 
-- 
2.11.0



* [PATCH for-next 15/16] xen/arm: traps: Move the definition of mmio_info_t in try_handle_mmio
  2017-11-23 18:31 [PATCH for-next 00/16] xen/arm: Stage-2 handling cleanup Julien Grall
                   ` (13 preceding siblings ...)
  2017-11-23 18:32 ` [PATCH for-next 14/16] xen/arm: traps: Remove the field gva from mmio_info_t Julien Grall
@ 2017-11-23 18:32 ` Julien Grall
  2017-12-07 22:43   ` Stefano Stabellini
  2017-11-23 18:32 ` [PATCH for-next 16/16] xen/arm: traps: Merge do_trap_instr_abort_guest and do_trap_data_abort_guest Julien Grall
  15 siblings, 1 reply; 49+ messages in thread
From: Julien Grall @ 2017-11-23 18:32 UTC (permalink / raw)
  To: xen-devel; +Cc: sstabellini, Julien Grall, andre.przywara

mmio_info_t is currently filled by do_trap_data_abort_guest but is only
needed when emulating an MMIO region.

A follow-up patch will merge stage-2 prefetch abort and stage-2 data
abort handling in a single helper. To prepare for that, mmio_info_t is
now filled by try_handle_mmio.

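The change boils down to try_handle_mmio constructing the mmio_info_t
itself from the data-abort syndrome and the faulting IPA, instead of
receiving it pre-filled by the trap handler. A minimal sketch of that
construction with heavily simplified stand-in types (hypothetical, not
the real Xen definitions):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Stand-ins for the Xen types (only the fields used here). */
struct hsr_dabt_model { uint8_t write; };
typedef struct
{
    struct hsr_dabt_model dabt;
    uint64_t gpa;
} mmio_info_model_t;

/*
 * After the patch, the MMIO path builds the info structure locally
 * from what it is handed, rather than depending on the caller.
 */
static mmio_info_model_t make_info(bool write, uint64_t gpa)
{
    mmio_info_model_t info = {
        .gpa = gpa,
        .dabt = { .write = write },
    };

    return info;
}
```

Keeping the structure local to the only consumer is what allows the
follow-up patch to share one trap handler between abort kinds.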
Signed-off-by: Julien Grall <julien.grall@linaro.org>
---
 xen/arch/arm/traps.c | 31 +++++++++++++++++--------------
 1 file changed, 17 insertions(+), 14 deletions(-)

diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index e30dd9b7e2..a68e01b457 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -1936,9 +1936,14 @@ static void do_trap_instr_abort_guest(struct cpu_user_regs *regs,
 }
 
 static bool try_handle_mmio(struct cpu_user_regs *regs,
-                            mmio_info_t *info)
+                            const union hsr hsr,
+                            paddr_t gpa)
 {
-    const struct hsr_dabt dabt = info->dabt;
+    const struct hsr_dabt dabt = hsr.dabt;
+    mmio_info_t info = {
+        .gpa = gpa,
+        .dabt = dabt
+    };
     int rc;
 
     /* stage-1 page table should never live in an emulated MMIO region */
@@ -1956,7 +1961,7 @@ static bool try_handle_mmio(struct cpu_user_regs *regs,
     if ( check_workaround_766422() && (regs->cpsr & PSR_THUMB) &&
          dabt.write )
     {
-        rc = decode_instruction(regs, &info->dabt);
+        rc = decode_instruction(regs, &info.dabt);
         if ( rc )
         {
             gprintk(XENLOG_DEBUG, "Unable to decode instruction\n");
@@ -1964,7 +1969,7 @@ static bool try_handle_mmio(struct cpu_user_regs *regs,
         }
     }
 
-    return !!handle_mmio(info);
+    return !!handle_mmio(&info);
 }
 
 /*
@@ -2002,7 +2007,7 @@ static void do_trap_data_abort_guest(struct cpu_user_regs *regs,
     const struct hsr_dabt dabt = hsr.dabt;
     int rc;
     vaddr_t gva;
-    mmio_info_t info;
+    paddr_t gpa;
     uint8_t fsc = hsr.dabt.dfsc & ~FSC_LL_MASK;
     mfn_t mfn;
 
@@ -2013,15 +2018,13 @@ static void do_trap_data_abort_guest(struct cpu_user_regs *regs,
     if ( dabt.eat )
         return __do_trap_serror(regs, true);
 
-    info.dabt = dabt;
-
     gva = get_hfar(true /* is_data */);
 
     if ( hpfar_is_valid(dabt.s1ptw, fsc) )
-        info.gpa = get_faulting_ipa(gva);
+        gpa = get_faulting_ipa(gva);
     else
     {
-        rc = gva_to_ipa(gva, &info.gpa, GV2M_READ);
+        rc = gva_to_ipa(gva, &gpa, GV2M_READ);
         /*
          * We may not be able to translate because someone is
          * playing with the Stage-2 page table of the domain.
@@ -2042,7 +2045,7 @@ static void do_trap_data_abort_guest(struct cpu_user_regs *regs,
             .kind = dabt.s1ptw ? npfec_kind_in_gpt : npfec_kind_with_gla
         };
 
-        p2m_mem_access_check(info.gpa, gva, npfec);
+        p2m_mem_access_check(gpa, gva, npfec);
         /*
          * The only way to get here right now is because of mem_access,
          * thus reinjecting the exception to the guest is never required.
@@ -2054,7 +2057,7 @@ static void do_trap_data_abort_guest(struct cpu_user_regs *regs,
          * Attempt first to emulate the MMIO as the data abort will
          * likely happen in an emulated region.
          */
-        if ( try_handle_mmio(regs, &info) )
+        if ( try_handle_mmio(regs, hsr, gpa) )
         {
             advance_pc(regs, hsr);
             return;
@@ -2065,11 +2068,11 @@ static void do_trap_data_abort_guest(struct cpu_user_regs *regs,
          * with the Stage-2 page table. Walk the Stage-2 PT to check
          * if the entry exists. If it's the case, return to the guest
          */
-        mfn = gfn_to_mfn(current->domain, gaddr_to_gfn(info.gpa));
+        mfn = gfn_to_mfn(current->domain, gaddr_to_gfn(gpa));
         if ( !mfn_eq(mfn, INVALID_MFN) )
             return;
 
-        if ( try_map_mmio(gaddr_to_gfn(info.gpa)) )
+        if ( try_map_mmio(gaddr_to_gfn(gpa)) )
             return;
 
         break;
@@ -2079,7 +2082,7 @@ static void do_trap_data_abort_guest(struct cpu_user_regs *regs,
     }
 
     gdprintk(XENLOG_DEBUG, "HSR=0x%x pc=%#"PRIregister" gva=%#"PRIvaddr
-             " gpa=%#"PRIpaddr"\n", hsr.bits, regs->pc, gva, info.gpa);
+             " gpa=%#"PRIpaddr"\n", hsr.bits, regs->pc, gva, gpa);
     inject_dabt_exception(regs, gva, hsr.len);
 }
 
-- 
2.11.0



* [PATCH for-next 16/16] xen/arm: traps: Merge do_trap_instr_abort_guest and do_trap_data_abort_guest
  2017-11-23 18:31 [PATCH for-next 00/16] xen/arm: Stage-2 handling cleanup Julien Grall
                   ` (14 preceding siblings ...)
  2017-11-23 18:32 ` [PATCH for-next 15/16] xen/arm: traps: Move the definition of mmio_info_t in try_handle_mmio Julien Grall
@ 2017-11-23 18:32 ` Julien Grall
  2017-12-07 22:43   ` Stefano Stabellini
  15 siblings, 1 reply; 49+ messages in thread
From: Julien Grall @ 2017-11-23 18:32 UTC (permalink / raw)
  To: xen-devel; +Cc: sstabellini, Julien Grall, andre.przywara

The two helpers do_trap_instr_abort_guest and do_trap_data_abort_guest
are used to handle stage-2 aborts. While the former only handles
prefetch aborts and the latter data aborts, they are very similar and do
not warrant separate helpers.

For instance, merging the two will make it easier to maintain stage-2
abort handling. So consolidate the two helpers into a new helper
do_trap_stage2_abort_guest.

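In the merged handler, the npfec fields are derived from whether the
abort is a data abort or a prefetch abort. A small model of that
derivation (hypothetical names; the point is that read_access and
write_access are mutually exclusive and only ever set for data aborts,
while insn_fetch is only set for prefetch aborts):

```c
#include <assert.h>
#include <stdbool.h>

/* Simplified stand-in for the npfec bits set on a permission fault. */
struct npfec_model
{
    bool insn_fetch;
    bool read_access;
    bool write_access;
};

static struct npfec_model classify(bool is_data, bool write)
{
    struct npfec_model n = {
        .insn_fetch   = !is_data,           /* prefetch abort */
        .read_access  = is_data && !write,  /* data abort, read */
        .write_access = is_data && write,   /* data abort, write */
    };

    return n;
}
```

This is the classification a single do_trap_stage2_abort_guest-style
handler would perform where the two original helpers each hard-coded
their own case.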
Signed-off-by: Julien Grall <julien.grall@linaro.org>
---
 xen/arch/arm/traps.c | 133 ++++++++++++++++-----------------------------------
 1 file changed, 41 insertions(+), 92 deletions(-)

diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index a68e01b457..b83a2d9244 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -1862,79 +1862,6 @@ static inline bool hpfar_is_valid(bool s1ptw, uint8_t fsc)
     return s1ptw || (fsc == FSC_FLT_TRANS && !check_workaround_834220());
 }
 
-static void do_trap_instr_abort_guest(struct cpu_user_regs *regs,
-                                      const union hsr hsr)
-{
-    int rc;
-    register_t gva;
-    uint8_t fsc = hsr.iabt.ifsc & ~FSC_LL_MASK;
-    paddr_t gpa;
-    mfn_t mfn;
-
-    gva = get_hfar(false /* is_data */);
-
-    /*
-     * If this bit has been set, it means that this instruction abort is caused
-     * by a guest external abort. We can handle this instruction abort as guest
-     * SError.
-     */
-    if ( hsr.iabt.eat )
-        return __do_trap_serror(regs, true);
-
-
-    if ( hpfar_is_valid(hsr.iabt.s1ptw, fsc) )
-        gpa = get_faulting_ipa(gva);
-    else
-    {
-        /*
-         * Flush the TLB to make sure the DTLB is clear before
-         * doing GVA->IPA translation. If we got here because of
-         * an entry only present in the ITLB, this translation may
-         * still be inaccurate.
-         */
-        flush_tlb_local();
-
-        /*
-         * We may not be able to translate because someone is
-         * playing with the Stage-2 page table of the domain.
-         * Return to the guest.
-         */
-        rc = gva_to_ipa(gva, &gpa, GV2M_READ);
-        if ( rc == -EFAULT )
-            return; /* Try again */
-    }
-
-    switch ( fsc )
-    {
-    case FSC_FLT_PERM:
-    {
-        const struct npfec npfec = {
-            .insn_fetch = 1,
-            .gla_valid = 1,
-            .kind = hsr.iabt.s1ptw ? npfec_kind_in_gpt : npfec_kind_with_gla
-        };
-
-        p2m_mem_access_check(gpa, gva, npfec);
-        /*
-         * The only way to get here right now is because of mem_access,
-         * thus reinjecting the exception to the guest is never required.
-         */
-        return;
-    }
-    case FSC_FLT_TRANS:
-        /*
-         * The PT walk may have failed because someone was playing
-         * with the Stage-2 page table. Walk the Stage-2 PT to check
-         * if the entry exists. If it's the case, return to the guest
-         */
-        mfn = gfn_to_mfn(current->domain, _gfn(paddr_to_pfn(gpa)));
-        if ( !mfn_eq(mfn, INVALID_MFN) )
-            return;
-    }
-
-    inject_iabt_exception(regs, gva, hsr.len);
-}
-
 static bool try_handle_mmio(struct cpu_user_regs *regs,
                             const union hsr hsr,
                             paddr_t gpa)
@@ -1946,6 +1873,8 @@ static bool try_handle_mmio(struct cpu_user_regs *regs,
     };
     int rc;
 
+    ASSERT(hsr.ec == HSR_EC_DATA_ABORT_LOWER_EL);
+
     /* stage-1 page table should never live in an emulated MMIO region */
     if ( dabt.s1ptw )
         return false;
@@ -2001,29 +1930,43 @@ static bool try_map_mmio(gfn_t gfn)
     return !map_regions_p2mt(d, gfn, 1, mfn, p2m_mmio_direct_c);
 }
 
-static void do_trap_data_abort_guest(struct cpu_user_regs *regs,
-                                     const union hsr hsr)
+static void do_trap_stage2_abort_guest(struct cpu_user_regs *regs,
+                                       const union hsr hsr)
 {
-    const struct hsr_dabt dabt = hsr.dabt;
+    /*
+     * The encoding of hsr_iabt is a subset of hsr_dabt. So use
+     * hsr_dabt to represent an abort fault.
+     */
+    const struct hsr_xabt xabt = hsr.xabt;
     int rc;
     vaddr_t gva;
     paddr_t gpa;
-    uint8_t fsc = hsr.dabt.dfsc & ~FSC_LL_MASK;
+    uint8_t fsc = xabt.fsc & ~FSC_LL_MASK;
     mfn_t mfn;
+    bool is_data = (hsr.ec == HSR_EC_DATA_ABORT_LOWER_EL);
 
     /*
-     * If this bit has been set, it means that this data abort is caused
-     * by a guest external abort. We treat this data abort as guest SError.
+     * If this bit has been set, it means that this stage-2 abort is caused
+     * by a guest external abort. We treat this stage-2 abort as guest SError.
      */
-    if ( dabt.eat )
+    if ( xabt.eat )
         return __do_trap_serror(regs, true);
 
-    gva = get_hfar(true /* is_data */);
+    gva = get_hfar(is_data);
 
-    if ( hpfar_is_valid(dabt.s1ptw, fsc) )
+    if ( hpfar_is_valid(xabt.s1ptw, fsc) )
         gpa = get_faulting_ipa(gva);
     else
     {
+        /*
+         * Flush the TLB to make sure the DTLB is clear before
+         * doing GVA->IPA translation. If we got here because of
+         * an entry only present in the ITLB, this translation may
+         * still be inaccurate.
+         */
+        if ( !is_data )
+            flush_tlb_local();
+
         rc = gva_to_ipa(gva, &gpa, GV2M_READ);
         /*
          * We may not be able to translate because someone is
@@ -2039,10 +1982,11 @@ static void do_trap_data_abort_guest(struct cpu_user_regs *regs,
     case FSC_FLT_PERM:
     {
         const struct npfec npfec = {
-            .read_access = !dabt.write,
-            .write_access = dabt.write,
+            .insn_fetch = !is_data,
+            .read_access = is_data && !hsr.dabt.write,
+            .write_access = is_data && hsr.dabt.write,
             .gla_valid = 1,
-            .kind = dabt.s1ptw ? npfec_kind_in_gpt : npfec_kind_with_gla
+            .kind = xabt.s1ptw ? npfec_kind_in_gpt : npfec_kind_with_gla
         };
 
         p2m_mem_access_check(gpa, gva, npfec);
@@ -2056,8 +2000,10 @@ static void do_trap_data_abort_guest(struct cpu_user_regs *regs,
         /*
          * Attempt first to emulate the MMIO as the data abort will
          * likely happen in an emulated region.
+         *
+         * Note that an emulated region cannot be executed.
          */
-        if ( try_handle_mmio(regs, hsr, gpa) )
+        if ( is_data && try_handle_mmio(regs, hsr, gpa) )
         {
             advance_pc(regs, hsr);
             return;
@@ -2072,18 +2018,21 @@ static void do_trap_data_abort_guest(struct cpu_user_regs *regs,
         if ( !mfn_eq(mfn, INVALID_MFN) )
             return;
 
-        if ( try_map_mmio(gaddr_to_gfn(gpa)) )
+        if ( is_data && try_map_mmio(gaddr_to_gfn(gpa)) )
             return;
 
         break;
     default:
-        gprintk(XENLOG_WARNING, "Unsupported DFSC: HSR=%#x DFSC=%#x\n",
-                hsr.bits, dabt.dfsc);
+        gprintk(XENLOG_WARNING, "Unsupported FSC: HSR=%#x DFSC=%#x\n",
+                hsr.bits, xabt.fsc);
     }
 
     gdprintk(XENLOG_DEBUG, "HSR=0x%x pc=%#"PRIregister" gva=%#"PRIvaddr
              " gpa=%#"PRIpaddr"\n", hsr.bits, regs->pc, gva, gpa);
-    inject_dabt_exception(regs, gva, hsr.len);
+    if ( is_data )
+        inject_dabt_exception(regs, gva, hsr.len);
+    else
+        inject_iabt_exception(regs, gva, hsr.len);
 }
 
 static void enter_hypervisor_head(struct cpu_user_regs *regs)
@@ -2216,11 +2165,11 @@ void do_trap_guest_sync(struct cpu_user_regs *regs)
 
     case HSR_EC_INSTR_ABORT_LOWER_EL:
         perfc_incr(trap_iabt);
-        do_trap_instr_abort_guest(regs, hsr);
+        do_trap_stage2_abort_guest(regs, hsr);
         break;
     case HSR_EC_DATA_ABORT_LOWER_EL:
         perfc_incr(trap_dabt);
-        do_trap_data_abort_guest(regs, hsr);
+        do_trap_stage2_abort_guest(regs, hsr);
         break;
 
     default:
-- 
2.11.0


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel


* Re: [PATCH for-next 07/16] xen/arm: Introduce copy_to_guest_phys_flush_dcache
  2017-11-23 18:32 ` [PATCH for-next 07/16] xen/arm: Introduce copy_to_guest_phys_flush_dcache Julien Grall
@ 2017-11-23 18:49   ` Andrew Cooper
  2017-11-23 19:02     ` Julien Grall
  0 siblings, 1 reply; 49+ messages in thread
From: Andrew Cooper @ 2017-11-23 18:49 UTC (permalink / raw)
  To: Julien Grall, xen-devel; +Cc: sstabellini, andre.przywara

On 23/11/17 18:32, Julien Grall wrote:
> This new function will be used in a follow-up patch to copy data to the guest
> using the IPA (aka guest physical address) and then clean the cache.
>
> Signed-off-by: Julien Grall <julien.grall@linaro.org>
> ---
>  xen/arch/arm/guestcopy.c           | 10 ++++++++++
>  xen/include/asm-arm/guest_access.h |  6 ++++++
>  2 files changed, 16 insertions(+)
>
> diff --git a/xen/arch/arm/guestcopy.c b/xen/arch/arm/guestcopy.c
> index be53bee559..7958663970 100644
> --- a/xen/arch/arm/guestcopy.c
> +++ b/xen/arch/arm/guestcopy.c
> @@ -110,6 +110,16 @@ unsigned long raw_copy_from_guest(void *to, const void __user *from, unsigned le
>                        COPY_from_guest | COPY_linear);
>  }
>  
> +unsigned long copy_to_guest_phys_flush_dcache(struct domain *d,
> +                                              paddr_t gpa,
> +                                              void *buf,
> +                                              unsigned int len)
> +{
> +    /* P2M is shared between all vCPUs, so the vCPU used does not matter. */

Be very careful with this line of thinking.  It only works after
DOMCTL_max_vcpus has succeeded, and before that point, it is a latent
NULL pointer dereference.

Also, what about vcpus configured with alternative views?

~Andrew

> +    return copy_guest(buf, gpa, len, d->vcpu[0],
> +                      COPY_to_guest | COPY_ipa | COPY_flush_dcache);
> +}
> +
>  int access_guest_memory_by_ipa(struct domain *d, paddr_t gpa, void *buf,
>                                 uint32_t size, bool is_write)
>  {
> diff --git a/xen/include/asm-arm/guest_access.h b/xen/include/asm-arm/guest_access.h
> index 6796801cfe..224d2a033b 100644
> --- a/xen/include/asm-arm/guest_access.h
> +++ b/xen/include/asm-arm/guest_access.h
> @@ -11,6 +11,12 @@ unsigned long raw_copy_to_guest_flush_dcache(void *to, const void *from,
>  unsigned long raw_copy_from_guest(void *to, const void *from, unsigned len);
>  unsigned long raw_clear_guest(void *to, unsigned len);
>  
> +/* Copy data to guest physical address, then clean the region. */
> +unsigned long copy_to_guest_phys_flush_dcache(struct domain *d,
> +                                              paddr_t phys,
> +                                              void *buf,
> +                                              unsigned int len);
> +
>  int access_guest_memory_by_ipa(struct domain *d, paddr_t ipa, void *buf,
>                                 uint32_t size, bool is_write);
>  



* Re: [PATCH for-next 07/16] xen/arm: Introduce copy_to_guest_phys_flush_dcache
  2017-11-23 18:49   ` Andrew Cooper
@ 2017-11-23 19:02     ` Julien Grall
  2017-12-06  1:26       ` Stefano Stabellini
  0 siblings, 1 reply; 49+ messages in thread
From: Julien Grall @ 2017-11-23 19:02 UTC (permalink / raw)
  To: Andrew Cooper, xen-devel; +Cc: sstabellini, andre.przywara

Hi Andrew,

On 23/11/17 18:49, Andrew Cooper wrote:
> On 23/11/17 18:32, Julien Grall wrote:
>> This new function will be used in a follow-up patch to copy data to the guest
>> using the IPA (aka guest physical address) and then clean the cache.
>>
>> Signed-off-by: Julien Grall <julien.grall@linaro.org>
>> ---
>>   xen/arch/arm/guestcopy.c           | 10 ++++++++++
>>   xen/include/asm-arm/guest_access.h |  6 ++++++
>>   2 files changed, 16 insertions(+)
>>
>> diff --git a/xen/arch/arm/guestcopy.c b/xen/arch/arm/guestcopy.c
>> index be53bee559..7958663970 100644
>> --- a/xen/arch/arm/guestcopy.c
>> +++ b/xen/arch/arm/guestcopy.c
>> @@ -110,6 +110,16 @@ unsigned long raw_copy_from_guest(void *to, const void __user *from, unsigned le
>>                         COPY_from_guest | COPY_linear);
>>   }
>>   
>> +unsigned long copy_to_guest_phys_flush_dcache(struct domain *d,
>> +                                              paddr_t gpa,
>> +                                              void *buf,
>> +                                              unsigned int len)
>> +{
>> +    /* P2M is shared between all vCPUs, so the vCPU used does not matter. */
> 
> Be very careful with this line of thinking.  It only works after
> DOMCTL_max_vcpus has succeeded, and before that point, it is a latent
> NULL pointer dereference.

I really don't expect that function to be used before DOMCTL_max_vcpus is 
set. It is only used for hardware emulation or for Xen loading an image into 
the hardware domain memory. I could add a check on d->vcpu to be safe.

> 
> Also, what about vcpus configured with alternative views?

It is not important because the underlying call is get_page_from_gfn, 
which does not care about the alternative view (that function takes a 
domain as parameter). I can update the comment.

Cheers,

-- 
Julien Grall


* Re: [PATCH for-next 01/16] xen/arm: raw_copy_to_guest_helper: Rename flush_dcache to flags
  2017-11-23 18:31 ` [PATCH for-next 01/16] xen/arm: raw_copy_to_guest_helper: Rename flush_dcache to flags Julien Grall
@ 2017-12-06  0:51   ` Stefano Stabellini
  0 siblings, 0 replies; 49+ messages in thread
From: Stefano Stabellini @ 2017-12-06  0:51 UTC (permalink / raw)
  To: Julien Grall; +Cc: sstabellini, andre.przywara, xen-devel

On Thu, 23 Nov 2017, Julien Grall wrote:
> In a follow-up patch, it will be necessary to pass more flags to the
> function.
> 
> Rename flush_dcache to flags and introduce a define to tell whether the
> cache needs to be flushed after the copy.
> 
> Signed-off-by: Julien Grall <julien.grall@linaro.org>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

> ---
>  xen/arch/arm/guestcopy.c | 8 +++++---
>  1 file changed, 5 insertions(+), 3 deletions(-)
> 
> diff --git a/xen/arch/arm/guestcopy.c b/xen/arch/arm/guestcopy.c
> index 4ee07fcea3..2620e659b4 100644
> --- a/xen/arch/arm/guestcopy.c
> +++ b/xen/arch/arm/guestcopy.c
> @@ -5,8 +5,10 @@
>  #include <asm/current.h>
>  #include <asm/guest_access.h>
>  
> +#define COPY_flush_dcache   (1U << 0)
> +
>  static unsigned long raw_copy_to_guest_helper(void *to, const void *from,
> -                                              unsigned len, int flush_dcache)
> +                                              unsigned len, int flags)
>  {
>      /* XXX needs to handle faults */
>      unsigned offset = (vaddr_t)to & ~PAGE_MASK;
> @@ -24,7 +26,7 @@ static unsigned long raw_copy_to_guest_helper(void *to, const void *from,
>          p = __map_domain_page(page);
>          p += offset;
>          memcpy(p, from, size);
> -        if ( flush_dcache )
> +        if ( flags & COPY_flush_dcache )
>              clean_dcache_va_range(p, size);
>  
>          unmap_domain_page(p - offset);
> @@ -50,7 +52,7 @@ unsigned long raw_copy_to_guest(void *to, const void *from, unsigned len)
>  unsigned long raw_copy_to_guest_flush_dcache(void *to, const void *from,
>                                               unsigned len)
>  {
> -    return raw_copy_to_guest_helper(to, from, len, 1);
> +    return raw_copy_to_guest_helper(to, from, len, COPY_flush_dcache);
>  }
>  
>  unsigned long raw_clear_guest(void *to, unsigned len)
> -- 
> 2.11.0
> 


* Re: [PATCH for-next 02/16] xen/arm: raw_copy_to_guest_helper: Rework the prototype and rename it
  2017-11-23 18:31 ` [PATCH for-next 02/16] xen/arm: raw_copy_to_guest_helper: Rework the prototype and rename it Julien Grall
@ 2017-12-06  1:04   ` Stefano Stabellini
  0 siblings, 0 replies; 49+ messages in thread
From: Stefano Stabellini @ 2017-12-06  1:04 UTC (permalink / raw)
  To: Julien Grall; +Cc: sstabellini, andre.przywara, xen-devel

On Thu, 23 Nov 2017, Julien Grall wrote:
> All the helpers within arch/arm/guestcopy.c are doing the same things:
> copy data from/to the guest.
> 
> At the moment, the logic is duplicated in each helper, making it more
> difficult to implement new variants.
> 
> The first step for the consolidation is to get a common prototype and a
> base. For convenience (it is at the beginning of the file!),
> raw_copy_to_guest_helper is chosen.
> 
> The function is now renamed copy_guest to show it will be a
> generic function to copy data from/to the guest. Note that for now, only
> copying to guest virtual address is supported. Follow-up patches will
> extend the support.
> 
> Signed-off-by: Julien Grall <julien.grall@linaro.org>
> ---
>  xen/arch/arm/guestcopy.c | 18 +++++++++---------
>  1 file changed, 9 insertions(+), 9 deletions(-)
> 
> diff --git a/xen/arch/arm/guestcopy.c b/xen/arch/arm/guestcopy.c
> index 2620e659b4..d1cfbe922c 100644
> --- a/xen/arch/arm/guestcopy.c
> +++ b/xen/arch/arm/guestcopy.c
> @@ -7,11 +7,11 @@
>  
>  #define COPY_flush_dcache   (1U << 0)
>  
> -static unsigned long raw_copy_to_guest_helper(void *to, const void *from,
> -                                              unsigned len, int flags)
> +static unsigned long copy_guest(void *buf, paddr_t addr, unsigned int len,
> +                                unsigned int flags)

Why not use vaddr_t ?


>  {
>      /* XXX needs to handle faults */
> -    unsigned offset = (vaddr_t)to & ~PAGE_MASK;
> +    unsigned offset = addr & ~PAGE_MASK;
>  
>      while ( len )
>      {
> @@ -19,21 +19,21 @@ static unsigned long raw_copy_to_guest_helper(void *to, const void *from,
>          unsigned size = min(len, (unsigned)PAGE_SIZE - offset);
>          struct page_info *page;
>  
> -        page = get_page_from_gva(current, (vaddr_t) to, GV2M_WRITE);
> +        page = get_page_from_gva(current, addr, GV2M_WRITE);
>          if ( page == NULL )
>              return len;
>  
>          p = __map_domain_page(page);
>          p += offset;
> -        memcpy(p, from, size);
> +        memcpy(p, buf, size);
>          if ( flags & COPY_flush_dcache )
>              clean_dcache_va_range(p, size);
>  
>          unmap_domain_page(p - offset);
>          put_page(page);
>          len -= size;
> -        from += size;
> -        to += size;
> +        buf += size;
> +        addr += size;
>          /*
>           * After the first iteration, guest virtual address is correctly
>           * aligned to PAGE_SIZE.
> @@ -46,13 +46,13 @@ static unsigned long raw_copy_to_guest_helper(void *to, const void *from,
>  
>  unsigned long raw_copy_to_guest(void *to, const void *from, unsigned len)
>  {
> -    return raw_copy_to_guest_helper(to, from, len, 0);
> +    return copy_guest((void *)from, (unsigned long)to, len, 0);
>  }
>  
>  unsigned long raw_copy_to_guest_flush_dcache(void *to, const void *from,
>                                               unsigned len)
>  {
> -    return raw_copy_to_guest_helper(to, from, len, COPY_flush_dcache);
> +    return copy_guest((void *)from, (unsigned long)to, len, COPY_flush_dcache);
>  }
>  
>  unsigned long raw_clear_guest(void *to, unsigned len)
> -- 
> 2.11.0
> 


* Re: [PATCH for-next 03/16] xen/arm: Extend copy_to_guest to support copying from guest VA and use it
  2017-11-23 18:31 ` [PATCH for-next 03/16] xen/arm: Extend copy_to_guest to support copying from guest VA and use it Julien Grall
@ 2017-12-06  1:05   ` Stefano Stabellini
  0 siblings, 0 replies; 49+ messages in thread
From: Stefano Stabellini @ 2017-12-06  1:05 UTC (permalink / raw)
  To: Julien Grall; +Cc: sstabellini, andre.przywara, xen-devel

On Thu, 23 Nov 2017, Julien Grall wrote:
> The only differences between copy_to_guest (formerly called
> raw_copy_to_guest_helper) and raw_copy_from_guest are:
>     - The direction of the memcpy
>     - The permission use for translating the address
> 
> Extend copy_to_guest to support copying from guest VA by using a bit in
> the flags to tell the direction of the copy.
> 
> Lastly, reimplement raw_copy_from_guest using copy_to_guest.
> 
> Signed-off-by: Julien Grall <julien.grall@linaro.org>
> ---
>  xen/arch/arm/guestcopy.c | 46 +++++++++++++---------------------------------
>  1 file changed, 13 insertions(+), 33 deletions(-)
> 
> diff --git a/xen/arch/arm/guestcopy.c b/xen/arch/arm/guestcopy.c
> index d1cfbe922c..1ffa717ca6 100644
> --- a/xen/arch/arm/guestcopy.c
> +++ b/xen/arch/arm/guestcopy.c
> @@ -6,6 +6,8 @@
>  #include <asm/guest_access.h>
>  
>  #define COPY_flush_dcache   (1U << 0)
> +#define COPY_from_guest     (0U << 1)
> +#define COPY_to_guest       (1U << 1)
>  
>  static unsigned long copy_guest(void *buf, paddr_t addr, unsigned int len,
>                                  unsigned int flags)
> @@ -19,13 +21,18 @@ static unsigned long copy_guest(void *buf, paddr_t addr, unsigned int len,
>          unsigned size = min(len, (unsigned)PAGE_SIZE - offset);
>          struct page_info *page;
>  
> -        page = get_page_from_gva(current, addr, GV2M_WRITE);
> +        page = get_page_from_gva(current, addr,
> +                                 (flags & COPY_to_guest) ? GV2M_WRITE : GV2M_READ);
>          if ( page == NULL )
>              return len;
>  
>          p = __map_domain_page(page);
>          p += offset;
> -        memcpy(p, buf, size);
> +        if ( flags & COPY_to_guest )
> +            memcpy(p, buf, size);
> +        else
> +            memcpy(buf, p, size);
> +
>          if ( flags & COPY_flush_dcache )
>              clean_dcache_va_range(p, size);
>  
> @@ -46,13 +53,14 @@ static unsigned long copy_guest(void *buf, paddr_t addr, unsigned int len,
>  
>  unsigned long raw_copy_to_guest(void *to, const void *from, unsigned len)
>  {
> -    return copy_guest((void *)from, (unsigned long)to, len, 0);
> +    return copy_guest((void *)from, (unsigned long)to, len, COPY_to_guest);
>  }
>  
>  unsigned long raw_copy_to_guest_flush_dcache(void *to, const void *from,
>                                               unsigned len)
>  {
> -    return copy_guest((void *)from, (unsigned long)to, len, COPY_flush_dcache);
> +    return copy_guest((void *)from, (unsigned long)to, len,
> +                      COPY_to_guest | COPY_flush_dcache);
>  }
>  
>  unsigned long raw_clear_guest(void *to, unsigned len)
> @@ -90,35 +98,7 @@ unsigned long raw_clear_guest(void *to, unsigned len)
>  
>  unsigned long raw_copy_from_guest(void *to, const void __user *from, unsigned len)
>  {
> -    unsigned offset = (vaddr_t)from & ~PAGE_MASK;
> -
> -    while ( len )
> -    {
> -        void *p;
> -        unsigned size = min(len, (unsigned)(PAGE_SIZE - offset));
> -        struct page_info *page;
> -
> -        page = get_page_from_gva(current, (vaddr_t) from, GV2M_READ);
> -        if ( page == NULL )
> -            return len;
> -
> -        p = __map_domain_page(page);
> -        p += ((vaddr_t)from & (~PAGE_MASK));
> -
> -        memcpy(to, p, size);
> -
> -        unmap_domain_page(p);
> -        put_page(page);
> -        len -= size;
> -        from += size;
> -        to += size;
> -        /*
> -         * After the first iteration, guest virtual address is correctly
> -         * aligned to PAGE_SIZE.
> -         */
> -        offset = 0;
> -    }
> -    return 0;
> +    return copy_guest(to, (unsigned long)from, len, COPY_from_guest);
>  }

Same question about vaddr_t for the cast.


* Re: [PATCH for-next 04/16] xen/arm: Extend copy_to_guest to support zeroing guest VA and use it
  2017-11-23 18:31 ` [PATCH for-next 04/16] xen/arm: Extend copy_to_guest to support zeroing " Julien Grall
@ 2017-12-06  1:08   ` Stefano Stabellini
  0 siblings, 0 replies; 49+ messages in thread
From: Stefano Stabellini @ 2017-12-06  1:08 UTC (permalink / raw)
  To: Julien Grall; +Cc: sstabellini, andre.przywara, xen-devel

On Thu, 23 Nov 2017, Julien Grall wrote:
> The function copy_to_guest can easily be extended to support zeroing
> guest VA. To avoid using a new bit, it is considered that a NULL buffer
> (i.e. buf == NULL) means the guest memory will be zeroed.
> 
> Lastly, reimplement raw_clear_guest using copy_to_guest.
> 
> Signed-off-by: Julien Grall <julien.grall@linaro.org>

Aside from the usual question about vaddr_t:

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

> ---
>  xen/arch/arm/guestcopy.c | 41 +++++++++++------------------------------
>  1 file changed, 11 insertions(+), 30 deletions(-)
> 
> diff --git a/xen/arch/arm/guestcopy.c b/xen/arch/arm/guestcopy.c
> index 1ffa717ca6..3aaa80859e 100644
> --- a/xen/arch/arm/guestcopy.c
> +++ b/xen/arch/arm/guestcopy.c
> @@ -29,7 +29,16 @@ static unsigned long copy_guest(void *buf, paddr_t addr, unsigned int len,
>          p = __map_domain_page(page);
>          p += offset;
>          if ( flags & COPY_to_guest )
> -            memcpy(p, buf, size);
> +        {
> +            /*
> +             * buf will be NULL when the caller request to zero the
> +             * guest memory.
> +             */
> +            if ( buf )
> +                memcpy(p, buf, size);
> +            else
> +                memset(p, 0, size);
> +        }
>          else
>              memcpy(buf, p, size);
>  
> @@ -65,35 +74,7 @@ unsigned long raw_copy_to_guest_flush_dcache(void *to, const void *from,
>  
>  unsigned long raw_clear_guest(void *to, unsigned len)
>  {
> -    /* XXX needs to handle faults */
> -    unsigned offset = (vaddr_t)to & ~PAGE_MASK;
> -
> -    while ( len )
> -    {
> -        void *p;
> -        unsigned size = min(len, (unsigned)PAGE_SIZE - offset);
> -        struct page_info *page;
> -
> -        page = get_page_from_gva(current, (vaddr_t) to, GV2M_WRITE);
> -        if ( page == NULL )
> -            return len;
> -
> -        p = __map_domain_page(page);
> -        p += offset;
> -        memset(p, 0x00, size);
> -
> -        unmap_domain_page(p - offset);
> -        put_page(page);
> -        len -= size;
> -        to += size;
> -        /*
> -         * After the first iteration, guest virtual address is correctly
> -         * aligned to PAGE_SIZE.
> -         */
> -        offset = 0;
> -    }
> -
> -    return 0;
> +    return copy_guest(NULL, (unsigned long)to, len, COPY_to_guest);
>  }
>  
>  unsigned long raw_copy_from_guest(void *to, const void __user *from, unsigned len)
> -- 
> 2.11.0
> 


* Re: [PATCH for-next 05/16] xen/arm: guest_copy: Extend the prototype to pass the vCPU
  2017-11-23 18:31 ` [PATCH for-next 05/16] xen/arm: guest_copy: Extend the prototype to pass the vCPU Julien Grall
@ 2017-12-06  1:11   ` Stefano Stabellini
  0 siblings, 0 replies; 49+ messages in thread
From: Stefano Stabellini @ 2017-12-06  1:11 UTC (permalink / raw)
  To: Julien Grall; +Cc: sstabellini, andre.przywara, xen-devel

On Thu, 23 Nov 2017, Julien Grall wrote:
> Currently, guest_copy assumes the copy will only be done for the current
> vCPU. A follow-up patch will require using a different vCPU.
> 
> So extend the prototype to pass the vCPU.
> 
> Signed-off-by: Julien Grall <julien.grall@linaro.org>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
>  xen/arch/arm/guestcopy.c | 13 +++++++------
>  1 file changed, 7 insertions(+), 6 deletions(-)
> 
> diff --git a/xen/arch/arm/guestcopy.c b/xen/arch/arm/guestcopy.c
> index 3aaa80859e..487f5ab82d 100644
> --- a/xen/arch/arm/guestcopy.c
> +++ b/xen/arch/arm/guestcopy.c
> @@ -10,7 +10,7 @@
>  #define COPY_to_guest       (1U << 1)
>  
>  static unsigned long copy_guest(void *buf, paddr_t addr, unsigned int len,
> -                                unsigned int flags)
> +                                struct vcpu *v, unsigned int flags)
>  {
>      /* XXX needs to handle faults */
>      unsigned offset = addr & ~PAGE_MASK;
> @@ -21,7 +21,7 @@ static unsigned long copy_guest(void *buf, paddr_t addr, unsigned int len,
>          unsigned size = min(len, (unsigned)PAGE_SIZE - offset);
>          struct page_info *page;
>  
> -        page = get_page_from_gva(current, addr,
> +        page = get_page_from_gva(v, addr,
>                                   (flags & COPY_to_guest) ? GV2M_WRITE : GV2M_READ);
>          if ( page == NULL )
>              return len;
> @@ -62,24 +62,25 @@ static unsigned long copy_guest(void *buf, paddr_t addr, unsigned int len,
>  
>  unsigned long raw_copy_to_guest(void *to, const void *from, unsigned len)
>  {
> -    return copy_guest((void *)from, (unsigned long)to, len, COPY_to_guest);
> +    return copy_guest((void *)from, (unsigned long)to, len,
> +                      current, COPY_to_guest);
>  }
>  
>  unsigned long raw_copy_to_guest_flush_dcache(void *to, const void *from,
>                                               unsigned len)
>  {
>      return copy_guest((void *)from, (unsigned long)to, len,
> -                      COPY_to_guest | COPY_flush_dcache);
> +                      current, COPY_to_guest | COPY_flush_dcache);
>  }
>  
>  unsigned long raw_clear_guest(void *to, unsigned len)
>  {
> -    return copy_guest(NULL, (unsigned long)to, len, COPY_to_guest);
> +    return copy_guest(NULL, (unsigned long)to, len, current, COPY_to_guest);
>  }
>  
>  unsigned long raw_copy_from_guest(void *to, const void __user *from, unsigned len)
>  {
> -    return copy_guest(to, (unsigned long)from, len, COPY_from_guest);
> +    return copy_guest(to, (unsigned long)from, len, current, COPY_from_guest);
>  }
>  
>  /*
> -- 
> 2.11.0
> 


* Re: [PATCH for-next 06/16] xen/arm: Extend copy_to_guest to support copying from/to guest physical address
  2017-11-23 18:32 ` [PATCH for-next 06/16] xen/arm: Extend copy_to_guest to support copying from/to guest physical address Julien Grall
@ 2017-12-06  1:22   ` Stefano Stabellini
  2017-12-06  1:24     ` Stefano Stabellini
  2017-12-06 12:22     ` Julien Grall
  0 siblings, 2 replies; 49+ messages in thread
From: Stefano Stabellini @ 2017-12-06  1:22 UTC (permalink / raw)
  To: Julien Grall; +Cc: sstabellini, andre.przywara, xen-devel

On Thu, 23 Nov 2017, Julien Grall wrote:
> The only differences between copy_to_guest and access_guest_memory_by_ipa are:
>     - The latter does not support copying data crossing page boundary
>     - The former is copying from/to guest VA whilst the latter from
>     guest PA
> 
> copy_to_guest can easily be extended to support copying from/to guest
> physical address. For that a new bit is used to tell whether linear
> address or ipa is being used.
> 
> Lastly access_guest_memory_by_ipa is reimplemented using copy_to_guest.
> This also has the benefit of extending its use: it is now possible
> to copy data crossing a page boundary.
> 
> Signed-off-by: Julien Grall <julien.grall@linaro.org>

Ah! This is the reason why previous patches were not using vaddr_t. It
makes sense now. May I suggest we use something different from paddr_t
in copy_guest for addr type? I don't think is correct to specify addr as
paddr_t when it could be vaddr_t; in the future we could have type
checks on them.

I suggest we specify it as u64, but if you have a better idea go for it.


> ---
>  xen/arch/arm/guestcopy.c | 86 ++++++++++++++++++++++--------------------------
>  1 file changed, 39 insertions(+), 47 deletions(-)
> 
> diff --git a/xen/arch/arm/guestcopy.c b/xen/arch/arm/guestcopy.c
> index 487f5ab82d..be53bee559 100644
> --- a/xen/arch/arm/guestcopy.c
> +++ b/xen/arch/arm/guestcopy.c
> @@ -8,6 +8,31 @@
>  #define COPY_flush_dcache   (1U << 0)
>  #define COPY_from_guest     (0U << 1)
>  #define COPY_to_guest       (1U << 1)
> +#define COPY_ipa            (0U << 2)
> +#define COPY_linear         (1U << 2)
> +
> +static struct page_info *translate_get_page(struct vcpu *v, paddr_t addr,
> +                                            bool linear, bool write)
> +{
> +    p2m_type_t p2mt;
> +    struct page_info *page;
> +
> +    if ( linear )
> +        return get_page_from_gva(v, addr, write ? GV2M_WRITE : GV2M_READ);
> +
> +    page = get_page_from_gfn(v->domain, paddr_to_pfn(addr), &p2mt, P2M_ALLOC);
> +
> +    if ( !page )
> +        return NULL;
> +
> +    if ( !p2m_is_ram(p2mt) )
> +    {
> +        put_page(page);
> +        return NULL;
> +    }
> +
> +    return page;
> +}
>  
>  static unsigned long copy_guest(void *buf, paddr_t addr, unsigned int len,
>                                  struct vcpu *v, unsigned int flags)
> @@ -21,8 +46,8 @@ static unsigned long copy_guest(void *buf, paddr_t addr, unsigned int len,
>          unsigned size = min(len, (unsigned)PAGE_SIZE - offset);
>          struct page_info *page;
>  
> -        page = get_page_from_gva(v, addr,
> -                                 (flags & COPY_to_guest) ? GV2M_WRITE : GV2M_READ);
> +        page = translate_get_page(v, addr, flags & COPY_linear,
> +                                  flags & COPY_to_guest);
>          if ( page == NULL )
>              return len;
>  
> @@ -63,73 +88,40 @@ static unsigned long copy_guest(void *buf, paddr_t addr, unsigned int len,
>  unsigned long raw_copy_to_guest(void *to, const void *from, unsigned len)
>  {
>      return copy_guest((void *)from, (unsigned long)to, len,
> -                      current, COPY_to_guest);
> +                      current, COPY_to_guest | COPY_linear);
>  }
>  
>  unsigned long raw_copy_to_guest_flush_dcache(void *to, const void *from,
>                                               unsigned len)
>  {
>      return copy_guest((void *)from, (unsigned long)to, len,
> -                      current, COPY_to_guest | COPY_flush_dcache);
> +                      current, COPY_to_guest | COPY_flush_dcache | COPY_linear);
>  }
>  
>  unsigned long raw_clear_guest(void *to, unsigned len)
>  {
> -    return copy_guest(NULL, (unsigned long)to, len, current, COPY_to_guest);
> +    return copy_guest(NULL, (unsigned long)to, len, current,
> +                      COPY_to_guest | COPY_linear);
>  }
>  
>  unsigned long raw_copy_from_guest(void *to, const void __user *from, unsigned len)
>  {
> -    return copy_guest(to, (unsigned long)from, len, current, COPY_from_guest);
> +    return copy_guest(to, (unsigned long)from, len, current,
> +                      COPY_from_guest | COPY_linear);
>  }
>  
> -/*
> - * Temporarily map one physical guest page and copy data to or from it.
> - * The data to be copied cannot cross a page boundary.
> - */
>  int access_guest_memory_by_ipa(struct domain *d, paddr_t gpa, void *buf,
>                                 uint32_t size, bool is_write)
>  {
> -    struct page_info *page;
> -    uint64_t offset = gpa & ~PAGE_MASK;  /* Offset within the mapped page */
> -    p2m_type_t p2mt;
> -    void *p;
> -
> -    /* Do not cross a page boundary. */
> -    if ( size > (PAGE_SIZE - offset) )
> -    {
> -        printk(XENLOG_G_ERR "d%d: guestcopy: memory access crosses page boundary.\n",
> -               d->domain_id);
> -        return -EINVAL;
> -    }

I don't know if we necessarily care about this, but with this change
this error path goes away. Do we want to keep it?


> -    page = get_page_from_gfn(d, paddr_to_pfn(gpa), &p2mt, P2M_ALLOC);
> -    if ( !page )
> -    {
> -        printk(XENLOG_G_ERR "d%d: guestcopy: failed to get table entry.\n",
> -               d->domain_id);
> -        return -EINVAL;
> -    }
> -
> -    if ( !p2m_is_ram(p2mt) )
> -    {
> -        put_page(page);
> -        printk(XENLOG_G_ERR "d%d: guestcopy: guest memory should be RAM.\n",
> -               d->domain_id);
> -        return -EINVAL;
> -    }
> +    unsigned long left;
> +    int flags = COPY_ipa;
>  
> -    p = __map_domain_page(page);
> +    flags |= is_write ? COPY_to_guest : COPY_from_guest;
>  
> -    if ( is_write )
> -        memcpy(p + offset, buf, size);
> -    else
> -        memcpy(buf, p + offset, size);
> +    /* P2M is shared between all vCPUs, so the vcpu used does not matter. */
> +    left = copy_guest(buf, gpa, size, d->vcpu[0], flags);

fair enough, then why not use current?


> -    unmap_domain_page(p);
> -    put_page(page);
> -
> -    return 0;
> +    return (!left) ? 0 : -EINVAL;
>  }
>  
>  /*
> -- 
> 2.11.0
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 49+ messages in thread

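For reference, the reason the page-boundary error path can go away is visible in copy_guest()'s loop: each iteration copies at most up to the next page boundary, so a boundary-crossing request simply takes extra iterations. A standalone sketch of that chunking (toy 16-byte pages, a flat buffer standing in for the per-page map/unmap; none of this is Xen code):

```c
#include <assert.h>
#include <string.h>

#define PAGE_SIZE 16u                  /* toy page size for illustration */

static unsigned char guest_mem[64];    /* stands in for mapped guest pages */

/*
 * Mimics copy_guest()'s loop: each iteration copies at most up to the
 * next page boundary.  Returns the number of bytes NOT copied (0 on
 * success), like Xen's raw_copy_*_guest helpers.
 */
static unsigned long copy_to_guest_sim(unsigned long gpa, const void *buf,
                                       unsigned int len)
{
    const unsigned char *src = buf;

    while ( len )
    {
        unsigned int offset = gpa & (PAGE_SIZE - 1);
        unsigned int size = len < (PAGE_SIZE - offset) ? len
                                                       : PAGE_SIZE - offset;

        /* Stand-in for a failed translate_get_page(). */
        if ( gpa + size > sizeof(guest_mem) )
            return len;

        memcpy(guest_mem + gpa, src, size);
        gpa += size;
        src += size;
        len -= size;
    }

    return 0;
}
```

Only a failed translation leaves bytes uncopied, and the non-zero return value reports how many were left.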
* Re: [PATCH for-next 06/16] xen/arm: Extend copy_to_guest to support copying from/to guest physical address
  2017-12-06  1:22   ` Stefano Stabellini
@ 2017-12-06  1:24     ` Stefano Stabellini
  2017-12-06 12:22     ` Julien Grall
  1 sibling, 0 replies; 49+ messages in thread
From: Stefano Stabellini @ 2017-12-06  1:24 UTC (permalink / raw)
  To: Stefano Stabellini; +Cc: Julien Grall, andre.przywara, xen-devel

On Tue, 5 Dec 2017, Stefano Stabellini wrote:
> On Thu, 23 Nov 2017, Julien Grall wrote:
> > The only differences between copy_to_guest and access_guest_memory_by_ipa are:
> >     - The latter does not support copying data crossing page boundary
> >     - The former is copying from/to guest VA whilst the latter from
> >     guest PA
> > 
> > copy_to_guest can easily be extended to support copying from/to guest
> > physical address. For that a new bit is used to tell whether linear
> > address or ipa is being used.
> > 
> > Lastly access_guest_memory_by_ipa is reimplemented using copy_to_guest.
> > This also has the benefit of extending its use: it is now possible
> > to copy data crossing a page boundary.
> > 
> > Signed-off-by: Julien Grall <julien.grall@linaro.org>
> 
> Ah! This is the reason why previous patches were not using vaddr_t. It
> makes sense now. May I suggest we use something different from paddr_t
> in copy_guest for addr type? I don't think is correct to specify addr as
> paddr_t when it could be vaddr_t; in the future we could have type
> checks on them.
> 
> I suggest we specify it as u64, but if you have a better idea go for it.
> 
> 
> > ---
> >  xen/arch/arm/guestcopy.c | 86 ++++++++++++++++++++++--------------------------
> >  1 file changed, 39 insertions(+), 47 deletions(-)
> > 
> > diff --git a/xen/arch/arm/guestcopy.c b/xen/arch/arm/guestcopy.c
> > index 487f5ab82d..be53bee559 100644
> > --- a/xen/arch/arm/guestcopy.c
> > +++ b/xen/arch/arm/guestcopy.c
> > @@ -8,6 +8,31 @@
> >  #define COPY_flush_dcache   (1U << 0)
> >  #define COPY_from_guest     (0U << 1)
> >  #define COPY_to_guest       (1U << 1)
> > +#define COPY_ipa            (0U << 2)
> > +#define COPY_linear         (1U << 2)
> > +
> > +static struct page_info *translate_get_page(struct vcpu *v, paddr_t addr,
> > +                                            bool linear, bool write)
> > +{
> > +    p2m_type_t p2mt;
> > +    struct page_info *page;
> > +
> > +    if ( linear )
> > +        return get_page_from_gva(v, addr, write ? GV2M_WRITE : GV2M_READ);
> > +
> > +    page = get_page_from_gfn(v->domain, paddr_to_pfn(addr), &p2mt, P2M_ALLOC);
> > +
> > +    if ( !page )
> > +        return NULL;
> > +
> > +    if ( !p2m_is_ram(p2mt) )
> > +    {
> > +        put_page(page);
> > +        return NULL;
> > +    }
> > +
> > +    return page;
> > +}
> >  
> >  static unsigned long copy_guest(void *buf, paddr_t addr, unsigned int len,
> >                                  struct vcpu *v, unsigned int flags)
> > @@ -21,8 +46,8 @@ static unsigned long copy_guest(void *buf, paddr_t addr, unsigned int len,
> >          unsigned size = min(len, (unsigned)PAGE_SIZE - offset);
> >          struct page_info *page;
> >  
> > -        page = get_page_from_gva(v, addr,
> > -                                 (flags & COPY_to_guest) ? GV2M_WRITE : GV2M_READ);
> > +        page = translate_get_page(v, addr, flags & COPY_linear,
> > +                                  flags & COPY_to_guest);
> >          if ( page == NULL )
> >              return len;
> >  
> > @@ -63,73 +88,40 @@ static unsigned long copy_guest(void *buf, paddr_t addr, unsigned int len,
> >  unsigned long raw_copy_to_guest(void *to, const void *from, unsigned len)
> >  {
> >      return copy_guest((void *)from, (unsigned long)to, len,
> > -                      current, COPY_to_guest);
> > +                      current, COPY_to_guest | COPY_linear);
> >  }
> >  
> >  unsigned long raw_copy_to_guest_flush_dcache(void *to, const void *from,
> >                                               unsigned len)
> >  {
> >      return copy_guest((void *)from, (unsigned long)to, len,
> > -                      current, COPY_to_guest | COPY_flush_dcache);
> > +                      current, COPY_to_guest | COPY_flush_dcache | COPY_linear);
> >  }
> >  
> >  unsigned long raw_clear_guest(void *to, unsigned len)
> >  {
> > -    return copy_guest(NULL, (unsigned long)to, len, current, COPY_to_guest);
> > +    return copy_guest(NULL, (unsigned long)to, len, current,
> > +                      COPY_to_guest | COPY_linear);
> >  }
> >  
> >  unsigned long raw_copy_from_guest(void *to, const void __user *from, unsigned len)
> >  {
> > -    return copy_guest(to, (unsigned long)from, len, current, COPY_from_guest);
> > +    return copy_guest(to, (unsigned long)from, len, current,
> > +                      COPY_from_guest | COPY_linear);
> >  }
> >  
> > -/*
> > - * Temporarily map one physical guest page and copy data to or from it.
> > - * The data to be copied cannot cross a page boundary.
> > - */
> >  int access_guest_memory_by_ipa(struct domain *d, paddr_t gpa, void *buf,
> >                                 uint32_t size, bool is_write)
> >  {
> > -    struct page_info *page;
> > -    uint64_t offset = gpa & ~PAGE_MASK;  /* Offset within the mapped page */
> > -    p2m_type_t p2mt;
> > -    void *p;
> > -
> > -    /* Do not cross a page boundary. */
> > -    if ( size > (PAGE_SIZE - offset) )
> > -    {
> > -        printk(XENLOG_G_ERR "d%d: guestcopy: memory access crosses page boundary.\n",
> > -               d->domain_id);
> > -        return -EINVAL;
> > -    }
> 
> I don't know if we necessarily care about this, but with this change
> this error path goes away. Do we want to keep it?
> 
> 
> > -    page = get_page_from_gfn(d, paddr_to_pfn(gpa), &p2mt, P2M_ALLOC);
> > -    if ( !page )
> > -    {
> > -        printk(XENLOG_G_ERR "d%d: guestcopy: failed to get table entry.\n",
> > -               d->domain_id);
> > -        return -EINVAL;
> > -    }
> > -
> > -    if ( !p2m_is_ram(p2mt) )
> > -    {
> > -        put_page(page);
> > -        printk(XENLOG_G_ERR "d%d: guestcopy: guest memory should be RAM.\n",
> > -               d->domain_id);
> > -        return -EINVAL;
> > -    }
> > +    unsigned long left;
> > +    int flags = COPY_ipa;
> >  
> > -    p = __map_domain_page(page);
> > +    flags |= is_write ? COPY_to_guest : COPY_from_guest;
> >  
> > -    if ( is_write )
> > -        memcpy(p + offset, buf, size);
> > -    else
> > -        memcpy(buf, p + offset, size);
> > +    /* P2M is shared between all vCPUs, so the vcpu used does not matter. */
> > +    left = copy_guest(buf, gpa, size, d->vcpu[0], flags);
> 
> fair enough, then why not use current?

Because it could be called from another domain, of course. Makes sense.

 
> > -    unmap_domain_page(p);
> > -    put_page(page);
> > -
> > -    return 0;
> > +    return (!left) ? 0 : -EINVAL;
> >  }
> >  
> >  /*
> > -- 
> > 2.11.0
> > 
> 



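The flag scheme at the top of the patch packs three independent choices into one word; the zero-valued names (COPY_from_guest, COPY_ipa) exist purely to document the default at call sites. A minimal illustration (the constants are copied from the patch; the helper names are invented here to mirror the tests copy_guest() performs):

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Flag layout as in the patch: bit 0 selects a dcache flush, bit 1 the
 * direction, bit 2 the address space.  The zero-valued names only
 * document the defaults.
 */
#define COPY_flush_dcache   (1U << 0)
#define COPY_from_guest     (0U << 1)
#define COPY_to_guest       (1U << 1)
#define COPY_ipa            (0U << 2)
#define COPY_linear         (1U << 2)

/* Invented helpers mirroring how copy_guest() tests the flags. */
static bool is_write(unsigned int flags)  { return flags & COPY_to_guest; }
static bool is_linear(unsigned int flags) { return flags & COPY_linear; }
```

Because COPY_from_guest and COPY_ipa expand to 0, omitting them changes nothing; spelling them out at call sites is self-documentation only.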
* Re: [PATCH for-next 07/16] xen/arm: Introduce copy_to_guest_phys_flush_dcache
  2017-11-23 19:02     ` Julien Grall
@ 2017-12-06  1:26       ` Stefano Stabellini
  2017-12-06 12:27         ` Julien Grall
  0 siblings, 1 reply; 49+ messages in thread
From: Stefano Stabellini @ 2017-12-06  1:26 UTC (permalink / raw)
  To: Julien Grall; +Cc: Andrew Cooper, sstabellini, andre.przywara, xen-devel

On Thu, 23 Nov 2017, Julien Grall wrote:
> Hi Andrew,
> 
> On 23/11/17 18:49, Andrew Cooper wrote:
> > On 23/11/17 18:32, Julien Grall wrote:
> > > This new function will be used in a follow-up patch to copy data to the
> > > guest
> > > using the IPA (aka guest physical address) and then clean the cache.
> > > 
> > > Signed-off-by: Julien Grall <julien.grall@linaro.org>
> > > ---
> > >   xen/arch/arm/guestcopy.c           | 10 ++++++++++
> > >   xen/include/asm-arm/guest_access.h |  6 ++++++
> > >   2 files changed, 16 insertions(+)
> > > 
> > > diff --git a/xen/arch/arm/guestcopy.c b/xen/arch/arm/guestcopy.c
> > > index be53bee559..7958663970 100644
> > > --- a/xen/arch/arm/guestcopy.c
> > > +++ b/xen/arch/arm/guestcopy.c
> > > @@ -110,6 +110,16 @@ unsigned long raw_copy_from_guest(void *to, const
> > > void __user *from, unsigned le
> > >                         COPY_from_guest | COPY_linear);
> > >   }
> > >   +unsigned long copy_to_guest_phys_flush_dcache(struct domain *d,
> > > +                                              paddr_t gpa,
> > > +                                              void *buf,
> > > +                                              unsigned int len)
> > > +{
> > > +    /* P2M is shared between all vCPUs, so the vCPU used does not matter. */
> > 
> > Be very careful with this line of thinking.  It is only works after
> > DOMCTL_max_vcpus has succeeded, and before that point, it is a latent
> > NULL pointer dereference.
> 
> I really don't expect that function to be used before DOMCTL_max_vcpus is set.
> It is only used for hardware emulation or Xen loading image into the hardware
> domain memory. I could add a check on d->vcpu to be safe.
> 
> > 
> > Also, what about vcpus configured with alternative views?
> 
> It is not important because the underlying call is get_page_from_gfn that does
> not care about the alternative view (that function takes a domain as
> parameter). I can update the comment.
 
Since this is a new function, would it make sense to take a struct
vcpu* as parameter, instead of a struct domain*?



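Andrew's caveat above is that d->vcpu[0] is a latent NULL dereference until DOMCTL_max_vcpus has run. A defensive wrapper along the lines Julien suggests ("I could add a check ... to be safe") might look like the sketch below; the struct definitions are minimal mocks rather than Xen's real types, and the function name and shape are invented for illustration:

```c
#include <assert.h>
#include <stddef.h>

/* Minimal mocks -- the real struct domain/vcpu are Xen-internal. */
struct vcpu { int vcpu_id; };
struct domain { struct vcpu **vcpu; };

/*
 * Hypothetical guard: report every byte as left uncopied rather than
 * dereference a NULL vcpu on a domain where DOMCTL_max_vcpus has not
 * run yet.
 */
static unsigned long copy_to_guest_phys_guarded(struct domain *d,
                                                unsigned int len)
{
    if ( !d->vcpu || !d->vcpu[0] )
        return len;        /* no vcpu to hand to copy_guest() yet */

    /* Here the real helper would call copy_guest(..., d->vcpu[0], ...). */
    return 0;
}
```

Failing with "all bytes left" matches the error convention of the other raw_copy_*_guest helpers, so callers need no new error path.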
* Re: [PATCH for-next 08/16] xen/arm: kernel: Rework kernel_zimage_load to use the generic copy helper
  2017-11-23 18:32 ` [PATCH for-next 08/16] xen/arm: kernel: Rework kernel_zimage_load to use the generic copy helper Julien Grall
@ 2017-12-06  1:38   ` Stefano Stabellini
  2017-12-06  1:42     ` Stefano Stabellini
  0 siblings, 1 reply; 49+ messages in thread
From: Stefano Stabellini @ 2017-12-06  1:38 UTC (permalink / raw)
  To: Julien Grall; +Cc: sstabellini, andre.przywara, xen-devel

On Thu, 23 Nov 2017, Julien Grall wrote:
> The function kernel_zimage is dealing with IPA but uses gvirt_to_maddr to
> do the translation. This is currently working fine because the stage-1 MMU
> is disabled.
> 
> Furthermore, the function is implementing its own copy to guest resulting
> in code duplication and making it more difficult to update the logic in
> page-tables (such as support for Populate On Demand).
> 
> The new copy_to_guest_phys_flush_dcache could be used here by
> temporarily mapping the full kernel in the virtual space.
> 
> Signed-off-by: Julien Grall <julien.grall@linaro.org>
> ---
>  xen/arch/arm/domain_build.c |  1 +
>  xen/arch/arm/kernel.c       | 33 ++++++++++++---------------------
>  xen/arch/arm/kernel.h       |  2 ++
>  3 files changed, 15 insertions(+), 21 deletions(-)
> 
> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> index c74f4dd69d..3f87bf2051 100644
> --- a/xen/arch/arm/domain_build.c
> +++ b/xen/arch/arm/domain_build.c
> @@ -2133,6 +2133,7 @@ int construct_dom0(struct domain *d)
>      d->max_pages = ~0U;
>  
>      kinfo.unassigned_mem = dom0_mem;
> +    kinfo.d = d;
>  
>      rc = kernel_probe(&kinfo);
>      if ( rc < 0 )
> diff --git a/xen/arch/arm/kernel.c b/xen/arch/arm/kernel.c
> index a6c6413712..2fb0b9684d 100644
> --- a/xen/arch/arm/kernel.c
> +++ b/xen/arch/arm/kernel.c
> @@ -15,6 +15,8 @@
>  #include <xen/gunzip.h>
>  #include <xen/vmap.h>
>  
> +#include <asm/guest_access.h>
> +
>  #include "kernel.h"
>  
>  #define UIMAGE_MAGIC          0x27051956
> @@ -157,7 +159,8 @@ static void kernel_zimage_load(struct kernel_info *info)
>      paddr_t load_addr = kernel_zimage_place(info);
>      paddr_t paddr = info->zimage.kernel_addr;
>      paddr_t len = info->zimage.len;
> -    unsigned long offs;
> +    void *kernel;
> +    int rc;
>  
>      info->entry = load_addr;
>  
> @@ -165,29 +168,17 @@ static void kernel_zimage_load(struct kernel_info *info)
>  
>      printk("Loading zImage from %"PRIpaddr" to %"PRIpaddr"-%"PRIpaddr"\n",
>             paddr, load_addr, load_addr + len);
> -    for ( offs = 0; offs < len; )
> -    {
> -        uint64_t par;
> -        paddr_t s, l, ma = 0;
> -        void *dst;
> -
> -        s = offs & ~PAGE_MASK;
> -        l = min(PAGE_SIZE - s, len);
> -
> -        par = gvirt_to_maddr(load_addr + offs, &ma, GV2M_WRITE);
> -        if ( par )
> -        {
> -            panic("Unable to map translate guest address");
> -            return;
> -        }
>  
> -        dst = map_domain_page(maddr_to_mfn(ma));
> +    kernel = ioremap_wc(paddr, len);

Why ioremap_wc?


> +    if ( !kernel )
> +        panic("Unable to map the hwdom kernel");
>  
> -        copy_from_paddr(dst + s, paddr + offs, l);
> +    rc = copy_to_guest_phys_flush_dcache(info->d, load_addr,
> +                                         kernel, len);
> +    if ( rc != 0 )
> +        panic("Unable to copy the kernel in the hwdom memory");
>  
> -        unmap_domain_page(dst);
> -        offs += l;
> -    }
> +    iounmap(kernel);
>  }
>  
>  /*
> diff --git a/xen/arch/arm/kernel.h b/xen/arch/arm/kernel.h
> index c1b07d4f7b..6d695097b5 100644
> --- a/xen/arch/arm/kernel.h
> +++ b/xen/arch/arm/kernel.h
> @@ -15,6 +15,8 @@ struct kernel_info {
>      enum domain_type type;
>  #endif
>  
> +    struct domain *d;
> +
>      void *fdt; /* flat device tree */
>      paddr_t unassigned_mem; /* RAM not (yet) assigned to a bank */
>      struct meminfo mem;
> -- 
> 2.11.0
> 



* Re: [PATCH for-next 08/16] xen/arm: kernel: Rework kernel_zimage_load to use the generic copy helper
  2017-12-06  1:38   ` Stefano Stabellini
@ 2017-12-06  1:42     ` Stefano Stabellini
  0 siblings, 0 replies; 49+ messages in thread
From: Stefano Stabellini @ 2017-12-06  1:42 UTC (permalink / raw)
  To: Stefano Stabellini; +Cc: Julien Grall, andre.przywara, xen-devel

On Tue, 5 Dec 2017, Stefano Stabellini wrote:
> On Thu, 23 Nov 2017, Julien Grall wrote:
> > The function kernel_zimage is dealing with IPA but uses gvirt_to_maddr to
> > do the translation. This is currently working fine because the stage-1 MMU
> > is disabled.
> > 
> > Furthermore, the function is implementing its own copy to guest resulting
> > in code duplication and making it more difficult to update the logic in
> > page-tables (such as support for Populate On Demand).
> > 
> > The new copy_to_guest_phys_flush_dcache could be used here by
> > temporarily mapping the full kernel in the virtual space.
> > 
> > Signed-off-by: Julien Grall <julien.grall@linaro.org>
> > ---
> >  xen/arch/arm/domain_build.c |  1 +
> >  xen/arch/arm/kernel.c       | 33 ++++++++++++---------------------
> >  xen/arch/arm/kernel.h       |  2 ++
> >  3 files changed, 15 insertions(+), 21 deletions(-)
> > 
> > diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> > index c74f4dd69d..3f87bf2051 100644
> > --- a/xen/arch/arm/domain_build.c
> > +++ b/xen/arch/arm/domain_build.c
> > @@ -2133,6 +2133,7 @@ int construct_dom0(struct domain *d)
> >      d->max_pages = ~0U;
> >  
> >      kinfo.unassigned_mem = dom0_mem;
> > +    kinfo.d = d;
> >  
> >      rc = kernel_probe(&kinfo);
> >      if ( rc < 0 )
> > diff --git a/xen/arch/arm/kernel.c b/xen/arch/arm/kernel.c
> > index a6c6413712..2fb0b9684d 100644
> > --- a/xen/arch/arm/kernel.c
> > +++ b/xen/arch/arm/kernel.c
> > @@ -15,6 +15,8 @@
> >  #include <xen/gunzip.h>
> >  #include <xen/vmap.h>
> >  
> > +#include <asm/guest_access.h>
> > +
> >  #include "kernel.h"
> >  
> >  #define UIMAGE_MAGIC          0x27051956
> > @@ -157,7 +159,8 @@ static void kernel_zimage_load(struct kernel_info *info)
> >      paddr_t load_addr = kernel_zimage_place(info);
> >      paddr_t paddr = info->zimage.kernel_addr;
> >      paddr_t len = info->zimage.len;
> > -    unsigned long offs;
> > +    void *kernel;
> > +    int rc;
> >  
> >      info->entry = load_addr;
> >  
> > @@ -165,29 +168,17 @@ static void kernel_zimage_load(struct kernel_info *info)
> >  
> >      printk("Loading zImage from %"PRIpaddr" to %"PRIpaddr"-%"PRIpaddr"\n",
> >             paddr, load_addr, load_addr + len);
> > -    for ( offs = 0; offs < len; )
> > -    {
> > -        uint64_t par;
> > -        paddr_t s, l, ma = 0;
> > -        void *dst;
> > -
> > -        s = offs & ~PAGE_MASK;
> > -        l = min(PAGE_SIZE - s, len);
> > -
> > -        par = gvirt_to_maddr(load_addr + offs, &ma, GV2M_WRITE);
> > -        if ( par )
> > -        {
> > -            panic("Unable to map translate guest address");
> > -            return;
> > -        }
> >  
> > -        dst = map_domain_page(maddr_to_mfn(ma));
> > +    kernel = ioremap_wc(paddr, len);
> 
> Why ioremap_wc?

That is because we kept the same attributes used today by
copy_from_paddr. Makes sense.

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> > +    if ( !kernel )
> > +        panic("Unable to map the hwdom kernel");
> >  
> > -        copy_from_paddr(dst + s, paddr + offs, l);
> > +    rc = copy_to_guest_phys_flush_dcache(info->d, load_addr,
> > +                                         kernel, len);
> > +    if ( rc != 0 )
> > +        panic("Unable to copy the kernel in the hwdom memory");
> >  
> > -        unmap_domain_page(dst);
> > -        offs += l;
> > -    }
> > +    iounmap(kernel);
> >  }
> >  
> >  /*
> > diff --git a/xen/arch/arm/kernel.h b/xen/arch/arm/kernel.h
> > index c1b07d4f7b..6d695097b5 100644
> > --- a/xen/arch/arm/kernel.h
> > +++ b/xen/arch/arm/kernel.h
> > @@ -15,6 +15,8 @@ struct kernel_info {
> >      enum domain_type type;
> >  #endif
> >  
> > +    struct domain *d;
> > +
> >      void *fdt; /* flat device tree */
> >      paddr_t unassigned_mem; /* RAM not (yet) assigned to a bank */
> >      struct meminfo mem;
> > -- 
> > 2.11.0
> > 
> 



* Re: [PATCH for-next 09/16] xen/arm: domain_build: Rework initrd_load to use the generic copy helper
  2017-11-23 18:32 ` [PATCH for-next 09/16] xen/arm: domain_build: Rework initrd_load " Julien Grall
@ 2017-12-06  1:56   ` Stefano Stabellini
  0 siblings, 0 replies; 49+ messages in thread
From: Stefano Stabellini @ 2017-12-06  1:56 UTC (permalink / raw)
  To: Julien Grall; +Cc: sstabellini, andre.przywara, xen-devel

On Thu, 23 Nov 2017, Julien Grall wrote:
> The function initrd_load is dealing with IPA but uses gvirt_to_maddr to
> do the translation. This is currently working fine because the stage-1 MMU
> is disabled.
> 
> Furthermore, the function is implementing its own copy to guest resulting
> in code duplication and making it more difficult to update the logic in
> page-tables (such as support for Populate On Demand).
> 
> The new copy_to_guest_phys_flush_dcache could be used here by temporarily
> mapping the full initrd in the virtual space.
> 
> Signed-off-by: Julien Grall <julien.grall@linaro.org>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
>  xen/arch/arm/domain_build.c | 31 ++++++++-----------------------
>  1 file changed, 8 insertions(+), 23 deletions(-)
> 
> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> index 3f87bf2051..42c2e16ef6 100644
> --- a/xen/arch/arm/domain_build.c
> +++ b/xen/arch/arm/domain_build.c
> @@ -1966,11 +1966,11 @@ static void initrd_load(struct kernel_info *kinfo)
>      const struct bootmodule *mod = kinfo->initrd_bootmodule;
>      paddr_t load_addr = kinfo->initrd_paddr;
>      paddr_t paddr, len;
> -    unsigned long offs;
>      int node;
>      int res;
>      __be32 val[2];
>      __be32 *cellp;
> +    void __iomem *initrd;
>  
>      if ( !mod || !mod->size )
>          return;
> @@ -2000,29 +2000,14 @@ static void initrd_load(struct kernel_info *kinfo)
>      if ( res )
>          panic("Cannot fix up \"linux,initrd-end\" property");
>  
> -    for ( offs = 0; offs < len; )
> -    {
> -        uint64_t par;
> -        paddr_t s, l, ma = 0;
> -        void *dst;
> -
> -        s = offs & ~PAGE_MASK;
> -        l = min(PAGE_SIZE - s, len);
> -
> -        par = gvirt_to_maddr(load_addr + offs, &ma, GV2M_WRITE);
> -        if ( par )
> -        {
> -            panic("Unable to translate guest address");
> -            return;
> -        }
> -
> -        dst = map_domain_page(maddr_to_mfn(ma));
> +    initrd = ioremap_wc(paddr, len);
> +    if ( !initrd )
> +        panic("Unable to map the hwdom initrd");
>  
> -        copy_from_paddr(dst + s, paddr + offs, l);
> -
> -        unmap_domain_page(dst);
> -        offs += l;
> -    }
> +    res = copy_to_guest_phys_flush_dcache(kinfo->d, load_addr,
> +                                          initrd, len);
> +    if ( res != 0 )
> +        panic("Unable to copy the initrd in the hwdom memory");
>  }
>  
>  static void evtchn_fixup(struct domain *d, struct kernel_info *kinfo)



* Re: [PATCH for-next 10/16] xen/arm: domain_build: Use copy_to_guest_phys_flush_dcache in dtb_load
  2017-11-23 18:32 ` [PATCH for-next 10/16] xen/arm: domain_build: Use copy_to_guest_phys_flush_dcache in dtb_load Julien Grall
@ 2017-12-06  1:59   ` Stefano Stabellini
  0 siblings, 0 replies; 49+ messages in thread
From: Stefano Stabellini @ 2017-12-06  1:59 UTC (permalink / raw)
  To: Julien Grall; +Cc: sstabellini, andre.przywara, xen-devel

On Thu, 23 Nov 2017, Julien Grall wrote:
> The function dtb_load is dealing with IPA but uses gvirt_to_maddr to do
> the translation. This is currently working fine because the stage-1 MMU
> is disabled.
> 
> Rather than relying on such assumption, use the new
> copy_to_guest_phys_flush_dcache. This also results in slightly more
> comprehensible code.
> 
> Signed-off-by: Julien Grall <julien.grall@linaro.org>

Yes, definitely an improvement.

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
>  xen/arch/arm/domain_build.c | 7 ++++---
>  1 file changed, 4 insertions(+), 3 deletions(-)
> 
> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> index 42c2e16ef6..9245753a6b 100644
> --- a/xen/arch/arm/domain_build.c
> +++ b/xen/arch/arm/domain_build.c
> @@ -1948,14 +1948,15 @@ static int prepare_acpi(struct domain *d, struct kernel_info *kinfo)
>  #endif
>  static void dtb_load(struct kernel_info *kinfo)
>  {
> -    void * __user dtb_virt = (void * __user)(register_t)kinfo->dtb_paddr;
>      unsigned long left;
>  
>      printk("Loading dom0 DTB to 0x%"PRIpaddr"-0x%"PRIpaddr"\n",
>             kinfo->dtb_paddr, kinfo->dtb_paddr + fdt_totalsize(kinfo->fdt));
>  
> -    left = raw_copy_to_guest_flush_dcache(dtb_virt, kinfo->fdt,
> -                                        fdt_totalsize(kinfo->fdt));
> +    left = copy_to_guest_phys_flush_dcache(kinfo->d, kinfo->dtb_paddr,
> +                                           kinfo->fdt,
> +                                           fdt_totalsize(kinfo->fdt));
> +
>      if ( left != 0 )
>          panic("Unable to copy the DTB to dom0 memory (left = %lu bytes)", left);
>      xfree(kinfo->fdt);
> -- 
> 2.11.0
> 



* Re: [PATCH for-next 06/16] xen/arm: Extend copy_to_guest to support copying from/to guest physical address
  2017-12-06  1:22   ` Stefano Stabellini
  2017-12-06  1:24     ` Stefano Stabellini
@ 2017-12-06 12:22     ` Julien Grall
  2017-12-07 23:01       ` Stefano Stabellini
  1 sibling, 1 reply; 49+ messages in thread
From: Julien Grall @ 2017-12-06 12:22 UTC (permalink / raw)
  To: Stefano Stabellini; +Cc: andre.przywara, xen-devel

Hi Stefano,

On 12/06/2017 01:22 AM, Stefano Stabellini wrote:
> On Thu, 23 Nov 2017, Julien Grall wrote:
>> The only differences between copy_to_guest and access_guest_memory_by_ipa are:
>>      - The latter does not support copying data crossing page boundary
>>      - The former is copying from/to guest VA whilst the latter from
>>      guest PA
>>
>> copy_to_guest can easily be extended to support copying from/to guest
>> physical address. For that a new bit is used to tell whether linear
>> address or ipa is being used.
>>
>> Lastly access_guest_memory_by_ipa is reimplemented using copy_to_guest.
>> This also has the benefit of extending its use: it is now possible
>> to copy data crossing a page boundary.
>>
>> Signed-off-by: Julien Grall <julien.grall@linaro.org>
> 
> Ah! This is the reason why previous patches were not using vaddr_t. It
> makes sense now. May I suggest we use something different from paddr_t
> in copy_guest for addr type? I don't think it is correct to specify addr as
> paddr_t when it could be vaddr_t; in the future we could have type
> checks on them.
> 
> I suggest we specify it as u64, but if you have a better idea go for it.

We should not introduce more u64 in the code. uint64_t could be a solution,
but even then I don't see the reason. How can you be sure the physical
address will always fit in 64 bits?

On the other hand, vaddr_t will very likely fit in paddr_t. So paddr_t
is the right way to go for me.

> 
> 
>> ---
>>   xen/arch/arm/guestcopy.c | 86 ++++++++++++++++++++++--------------------------
>>   1 file changed, 39 insertions(+), 47 deletions(-)
>>
>> diff --git a/xen/arch/arm/guestcopy.c b/xen/arch/arm/guestcopy.c
>> index 487f5ab82d..be53bee559 100644
>> --- a/xen/arch/arm/guestcopy.c
>> +++ b/xen/arch/arm/guestcopy.c
>> @@ -8,6 +8,31 @@
>>   #define COPY_flush_dcache   (1U << 0)
>>   #define COPY_from_guest     (0U << 1)
>>   #define COPY_to_guest       (1U << 1)
>> +#define COPY_ipa            (0U << 2)
>> +#define COPY_linear         (1U << 2)
>> +
>> +static struct page_info *translate_get_page(struct vcpu *v, paddr_t addr,
>> +                                            bool linear, bool write)
>> +{
>> +    p2m_type_t p2mt;
>> +    struct page_info *page;
>> +
>> +    if ( linear )
>> +        return get_page_from_gva(v, addr, write ? GV2M_WRITE : GV2M_READ);
>> +
>> +    page = get_page_from_gfn(v->domain, paddr_to_pfn(addr), &p2mt, P2M_ALLOC);
>> +
>> +    if ( !page )
>> +        return NULL;
>> +
>> +    if ( !p2m_is_ram(p2mt) )
>> +    {
>> +        put_page(page);
>> +        return NULL;
>> +    }
>> +
>> +    return page;
>> +}
>>   
>>   static unsigned long copy_guest(void *buf, paddr_t addr, unsigned int len,
>>                                   struct vcpu *v, unsigned int flags)
>> @@ -21,8 +46,8 @@ static unsigned long copy_guest(void *buf, paddr_t addr, unsigned int len,
>>           unsigned size = min(len, (unsigned)PAGE_SIZE - offset);
>>           struct page_info *page;
>>   
>> -        page = get_page_from_gva(v, addr,
>> -                                 (flags & COPY_to_guest) ? GV2M_WRITE : GV2M_READ);
>> +        page = translate_get_page(v, addr, flags & COPY_linear,
>> +                                  flags & COPY_to_guest);
>>           if ( page == NULL )
>>               return len;
>>   
>> @@ -63,73 +88,40 @@ static unsigned long copy_guest(void *buf, paddr_t addr, unsigned int len,
>>   unsigned long raw_copy_to_guest(void *to, const void *from, unsigned len)
>>   {
>>       return copy_guest((void *)from, (unsigned long)to, len,
>> -                      current, COPY_to_guest);
>> +                      current, COPY_to_guest | COPY_linear);
>>   }
>>   
>>   unsigned long raw_copy_to_guest_flush_dcache(void *to, const void *from,
>>                                                unsigned len)
>>   {
>>       return copy_guest((void *)from, (unsigned long)to, len,
>> -                      current, COPY_to_guest | COPY_flush_dcache);
>> +                      current, COPY_to_guest | COPY_flush_dcache | COPY_linear);
>>   }
>>   
>>   unsigned long raw_clear_guest(void *to, unsigned len)
>>   {
>> -    return copy_guest(NULL, (unsigned long)to, len, current, COPY_to_guest);
>> +    return copy_guest(NULL, (unsigned long)to, len, current,
>> +                      COPY_to_guest | COPY_linear);
>>   }
>>   
>>   unsigned long raw_copy_from_guest(void *to, const void __user *from, unsigned len)
>>   {
>> -    return copy_guest(to, (unsigned long)from, len, current, COPY_from_guest);
>> +    return copy_guest(to, (unsigned long)from, len, current,
>> +                      COPY_from_guest | COPY_linear);
>>   }
>>   
>> -/*
>> - * Temporarily map one physical guest page and copy data to or from it.
>> - * The data to be copied cannot cross a page boundary.
>> - */
>>   int access_guest_memory_by_ipa(struct domain *d, paddr_t gpa, void *buf,
>>                                  uint32_t size, bool is_write)
>>   {
>> -    struct page_info *page;
>> -    uint64_t offset = gpa & ~PAGE_MASK;  /* Offset within the mapped page */
>> -    p2m_type_t p2mt;
>> -    void *p;
>> -
>> -    /* Do not cross a page boundary. */
>> -    if ( size > (PAGE_SIZE - offset) )
>> -    {
>> -        printk(XENLOG_G_ERR "d%d: guestcopy: memory access crosses page boundary.\n",
>> -               d->domain_id);
>> -        return -EINVAL;
>> -    }
> 
> I don't know if we necessarily care about this, but with this change
> this error path goes away. Do we want to keep it?

There is strictly no reason to prevent crossing a page boundary for an IPA 
access when we do support it for VA accesses.
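For illustration, the reason cross-page accesses are safe is that the copy helper splits the work at page granularity. A minimal user-space sketch of that splitting loop (hypothetical names, not the actual Xen code; the per-page mapping is stubbed out as an identity map):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

#define PAGE_SIZE 4096UL
#define PAGE_MASK (~(PAGE_SIZE - 1))

/* Hypothetical stand-in for mapping one guest page: identity map here. */
static void *map_one_page(unsigned long addr)
{
    return (void *)addr;
}

/*
 * Copy 'len' bytes to guest address 'to', one page at a time, so the
 * copy may freely cross page boundaries (mirrors copy_guest's loop
 * structure, which works the same whether 'to' is a VA or an IPA).
 */
static unsigned long sketch_copy_guest(void *from, unsigned long to, size_t len)
{
    while ( len )
    {
        unsigned long offset = to & ~PAGE_MASK;
        size_t size = len < PAGE_SIZE - offset ? len : PAGE_SIZE - offset;
        void *p = map_one_page(to & PAGE_MASK);

        memcpy((char *)p + offset, from, size);
        from = (char *)from + size;
        to += size;
        len -= size;
    }

    return 0; /* number of bytes left uncopied */
}
```

Each iteration copies at most up to the next page boundary, so no single mapping ever spans two pages.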

Cheers,

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [PATCH for-next 07/16] xen/arm: Introduce copy_to_guest_phys_flush_dcache
  2017-12-06  1:26       ` Stefano Stabellini
@ 2017-12-06 12:27         ` Julien Grall
  2017-12-08 15:34           ` Julien Grall
  0 siblings, 1 reply; 49+ messages in thread
From: Julien Grall @ 2017-12-06 12:27 UTC (permalink / raw)
  To: Stefano Stabellini; +Cc: Andrew Cooper, andre.przywara, xen-devel

Hi Stefano,

On 12/06/2017 01:26 AM, Stefano Stabellini wrote:
> On Thu, 23 Nov 2017, Julien Grall wrote:
>> Hi Andrew,
>>
>> On 23/11/17 18:49, Andrew Cooper wrote:
>>> On 23/11/17 18:32, Julien Grall wrote:
>>>> This new function will be used in a follow-up patch to copy data to the
>>>> guest
>>>> using the IPA (aka guest physical address) and then clean the cache.
>>>>
>>>> Signed-off-by: Julien Grall <julien.grall@linaro.org>
>>>> ---
>>>>    xen/arch/arm/guestcopy.c           | 10 ++++++++++
>>>>    xen/include/asm-arm/guest_access.h |  6 ++++++
>>>>    2 files changed, 16 insertions(+)
>>>>
>>>> diff --git a/xen/arch/arm/guestcopy.c b/xen/arch/arm/guestcopy.c
>>>> index be53bee559..7958663970 100644
>>>> --- a/xen/arch/arm/guestcopy.c
>>>> +++ b/xen/arch/arm/guestcopy.c
>>>> @@ -110,6 +110,16 @@ unsigned long raw_copy_from_guest(void *to, const
>>>> void __user *from, unsigned le
>>>>                          COPY_from_guest | COPY_linear);
>>>>    }
>>>>    +unsigned long copy_to_guest_phys_flush_dcache(struct domain *d,
>>>> +                                              paddr_t gpa,
>>>> +                                              void *buf,
>>>> +                                              unsigned int len)
>>>> +{
>>>> +    /* P2M is shared between all vCPUs, so the vCPU used does not matter.
>>>> */
>>>
>>> Be very careful with this line of thinking.  It only works after
>>> DOMCTL_max_vcpus has succeeded, and before that point, it is a latent
>>> NULL pointer dereference.
>>
>> I really don't expect that function to be used before DOMCTL_max_vcpus is set.
>> It is only used for hardware emulation or Xen loading an image into the hardware
>> domain memory. I could add a check on d->vcpus to be safe.
>>
>>>
>>> Also, what about vcpus configured with alternative views?
>>
>> It is not important because the underlying call is get_page_from_gfn, which
>> does not care about the alternative view (that function takes a domain as
>> parameter). I can update the comment.
>   
> Since this is a new function, would it make sense to take a struct
> vcpu* as parameter, instead of a struct domain* ?

Well, I suggested the patch this way because likely everyone will use it 
with d->vcpus[0]. And then you would have to wonder why d->vcpus[0] and 
not d->vcpus[1]...
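In other words, a domain-based wrapper hides the "which vCPU?" question inside the helper: since the P2M is shared between all vCPUs, vcpu[0] is as good as any other. A schematic with cut-down, hypothetical stand-ins for the Xen structures (not the real prototypes):

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical, simplified stand-ins for the Xen structures. */
struct vcpu { int vcpu_id; };
struct domain { struct vcpu *vcpu[2]; };

/* The vCPU-based copy primitive (stubbed: just reports which vCPU ran it). */
static int copy_guest_stub(struct vcpu *v)
{
    return v->vcpu_id;
}

/*
 * Domain-based wrapper: picks vcpu[0] internally so callers never have
 * to justify the choice. The guard addresses Andrew's point about calls
 * made before DOMCTL_max_vcpus has populated the vCPU array.
 */
static int copy_to_guest_phys_stub(struct domain *d)
{
    assert(d->vcpu[0] != NULL); /* use before DOMCTL_max_vcpus is a bug */
    return copy_guest_stub(d->vcpu[0]);
}
```

The design trade-off is exactly the one discussed above: the wrapper is less flexible than taking a struct vcpu*, but it avoids every caller open-coding d->vcpus[0].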

Cheers,

-- 
Julien Grall


* Re: [PATCH for-next 11/16] xen/arm: p2m: Rename p2m_flush_tlb and p2m_flush_tlb_sync
  2017-11-23 18:32 ` [PATCH for-next 11/16] xen/arm: p2m: Rename p2m_flush_tlb and p2m_flush_tlb_sync Julien Grall
@ 2017-12-07 22:10   ` Stefano Stabellini
  0 siblings, 0 replies; 49+ messages in thread
From: Stefano Stabellini @ 2017-12-07 22:10 UTC (permalink / raw)
  To: Julien Grall; +Cc: sstabellini, andre.przywara, xen-devel

On Thu, 23 Nov 2017, Julien Grall wrote:
> Rename p2m_flush_tlb and p2m_flush_tlb_sync to respectively
> p2m_tlb_flush and p2m_force_tlb_flush_sync.
> 
> At first glance, inverting 'flush' and 'tlb' might seem pointless, but it
> will be helpful in the future to more easily port code from the x86 P2M,
> or even to share code with it.
> 
> For p2m_flush_tlb_sync, the 'force' was added because the TLBs are
> flushed unconditionally. A follow-up patch will add a helper to flush
> TLBs only in certain cases.
> 
> Signed-off-by: Julien Grall <julien.grall@linaro.org>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
>  xen/arch/arm/p2m.c | 18 +++++++++---------
>  1 file changed, 9 insertions(+), 9 deletions(-)
> 
> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> index 417609ede2..d466a5bc43 100644
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -52,7 +52,7 @@ static const paddr_t level_masks[] =
>  static const uint8_t level_orders[] =
>      { ZEROETH_ORDER, FIRST_ORDER, SECOND_ORDER, THIRD_ORDER };
>  
> -static void p2m_flush_tlb(struct p2m_domain *p2m);
> +static void p2m_tlb_flush(struct p2m_domain *p2m);
>  
>  /* Unlock the flush and do a P2M TLB flush if necessary */
>  void p2m_write_unlock(struct p2m_domain *p2m)
> @@ -65,7 +65,7 @@ void p2m_write_unlock(struct p2m_domain *p2m)
>           * to avoid someone else modify the P2M before the TLB
>           * invalidation has completed.
>           */
> -        p2m_flush_tlb(p2m);
> +        p2m_tlb_flush(p2m);
>      }
>  
>      write_unlock(&p2m->lock);
> @@ -138,7 +138,7 @@ void p2m_restore_state(struct vcpu *n)
>      *last_vcpu_ran = n->vcpu_id;
>  }
>  
> -static void p2m_flush_tlb(struct p2m_domain *p2m)
> +static void p2m_tlb_flush(struct p2m_domain *p2m)
>  {
>      unsigned long flags = 0;
>      uint64_t ovttbr;
> @@ -170,11 +170,11 @@ static void p2m_flush_tlb(struct p2m_domain *p2m)
>   *
>   * Must be called with the p2m lock held.
>   */
> -static void p2m_flush_tlb_sync(struct p2m_domain *p2m)
> +static void p2m_force_tlb_flush_sync(struct p2m_domain *p2m)
>  {
>      ASSERT(p2m_is_write_locked(p2m));
>  
> -    p2m_flush_tlb(p2m);
> +    p2m_tlb_flush(p2m);
>      p2m->need_flush = false;
>  }
>  
> @@ -675,7 +675,7 @@ static void p2m_free_entry(struct p2m_domain *p2m,
>       * flush?
>       */
>      if ( p2m->need_flush )
> -        p2m_flush_tlb_sync(p2m);
> +        p2m_force_tlb_flush_sync(p2m);
>  
>      mfn = _mfn(entry.p2m.base);
>      ASSERT(mfn_valid(mfn));
> @@ -864,7 +864,7 @@ static int __p2m_set_entry(struct p2m_domain *p2m,
>           * For more details see (D4.7.1 in ARM DDI 0487A.j).
>           */
>          p2m_remove_pte(entry, p2m->clean_pte);
> -        p2m_flush_tlb_sync(p2m);
> +        p2m_force_tlb_flush_sync(p2m);
>  
>          p2m_write_pte(entry, split_pte, p2m->clean_pte);
>  
> @@ -940,7 +940,7 @@ static int __p2m_set_entry(struct p2m_domain *p2m,
>          {
>              if ( likely(!p2m->mem_access_enabled) ||
>                   P2M_CLEAR_PERM(pte) != P2M_CLEAR_PERM(orig_pte) )
> -                p2m_flush_tlb_sync(p2m);
> +                p2m_force_tlb_flush_sync(p2m);
>              else
>                  p2m->need_flush = true;
>          }
> @@ -1144,7 +1144,7 @@ static int p2m_alloc_table(struct domain *d)
>       * Make sure that all TLBs corresponding to the new VMID are flushed
>       * before using it
>       */
> -    p2m_flush_tlb(p2m);
> +    p2m_tlb_flush(p2m);
>  
>      return 0;
>  }
> -- 
> 2.11.0
> 


* Re: [PATCH for-next 12/16] xen/arm: p2m: Introduce p2m_tlb_flush_sync, export it and use it
  2017-11-23 18:32 ` [PATCH for-next 12/16] xen/arm: p2m: Introduce p2m_tlb_flush_sync, export it and use it Julien Grall
@ 2017-12-07 22:14   ` Stefano Stabellini
  0 siblings, 0 replies; 49+ messages in thread
From: Stefano Stabellini @ 2017-12-07 22:14 UTC (permalink / raw)
  To: Julien Grall; +Cc: sstabellini, andre.przywara, xen-devel

On Thu, 23 Nov 2017, Julien Grall wrote:
> Multiple places in the code requires to flush the TLBs wonly when
                                                         ^ only

Aside from this

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> p2m->need_flush is set.
> 
> Rather than open-coding it, introduce a new helper p2m_tlb_flush_sync to
> do it.
> 
> Note that p2m_tlb_flush_sync is exported as it might be used by other
> part of Xen.
> 
> Signed-off-by: Julien Grall <julien.grall@linaro.org>
> ---
>  xen/arch/arm/p2m.c        | 27 +++++++++++++--------------
>  xen/include/asm-arm/p2m.h |  2 ++
>  2 files changed, 15 insertions(+), 14 deletions(-)
> 
> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> index d466a5bc43..37498d8ff1 100644
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -52,21 +52,15 @@ static const paddr_t level_masks[] =
>  static const uint8_t level_orders[] =
>      { ZEROETH_ORDER, FIRST_ORDER, SECOND_ORDER, THIRD_ORDER };
>  
> -static void p2m_tlb_flush(struct p2m_domain *p2m);
> -
>  /* Unlock the flush and do a P2M TLB flush if necessary */
>  void p2m_write_unlock(struct p2m_domain *p2m)
>  {
> -    if ( p2m->need_flush )
> -    {
> -        p2m->need_flush = false;
> -        /*
> -         * The final flush is done with the P2M write lock taken to
> -         * to avoid someone else modify the P2M before the TLB
> -         * invalidation has completed.
> -         */
> -        p2m_tlb_flush(p2m);
> -    }
> +    /*
> +     * The final flush is done with the P2M write lock taken to avoid
> +     * someone else modifying the P2M wbefore the TLB invalidation has
> +     * completed.
> +     */
> +    p2m_tlb_flush_sync(p2m);
>  
>      write_unlock(&p2m->lock);
>  }
> @@ -178,6 +172,12 @@ static void p2m_force_tlb_flush_sync(struct p2m_domain *p2m)
>      p2m->need_flush = false;
>  }
>  
> +void p2m_tlb_flush_sync(struct p2m_domain *p2m)
> +{
> +    if ( p2m->need_flush )
> +        p2m_force_tlb_flush_sync(p2m);
> +}
> +
>  /*
>   * Find and map the root page table. The caller is responsible for
>   * unmapping the table.
> @@ -674,8 +674,7 @@ static void p2m_free_entry(struct p2m_domain *p2m,
>       * XXX: Should we defer the free of the page table to avoid the
>       * flush?
>       */
> -    if ( p2m->need_flush )
> -        p2m_force_tlb_flush_sync(p2m);
> +    p2m_tlb_flush_sync(p2m);
>  
>      mfn = _mfn(entry.p2m.base);
>      ASSERT(mfn_valid(mfn));
> diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
> index faadcfe8fe..a0abc84ed8 100644
> --- a/xen/include/asm-arm/p2m.h
> +++ b/xen/include/asm-arm/p2m.h
> @@ -204,6 +204,8 @@ static inline int p2m_is_write_locked(struct p2m_domain *p2m)
>      return rw_is_write_locked(&p2m->lock);
>  }
>  
> +void p2m_tlb_flush_sync(struct p2m_domain *p2m);
> +
>  /* Look up the MFN corresponding to a domain's GFN. */
>  mfn_t p2m_lookup(struct domain *d, gfn_t gfn, p2m_type_t *t);


* Re: [PATCH for-next 13/16] xen/arm: p2m: Fold p2m_tlb_flush into p2m_force_tlb_flush_sync
  2017-11-23 18:32 ` [PATCH for-next 13/16] xen/arm: p2m: Fold p2m_tlb_flush into p2m_force_tlb_flush_sync Julien Grall
@ 2017-12-07 22:26   ` Stefano Stabellini
  0 siblings, 0 replies; 49+ messages in thread
From: Stefano Stabellini @ 2017-12-07 22:26 UTC (permalink / raw)
  To: Julien Grall; +Cc: sstabellini, andre.przywara, xen-devel

On Thu, 23 Nov 2017, Julien Grall wrote:
> p2m_tlb_flush is called in 2 places: p2m_alloc_table and
> p2m_force_tlb_flush_sync.
> 
> p2m_alloc_table is called when the domain is initialized and could be
> replaced by a call to p2m_force_tlb_flush_sync with the P2M write locked.
> 
> This seems a bit pointless, but it would allow having a single API for
> flushing and avoid misuse in the P2M code.
> 
> So update p2m_alloc_table to use p2m_force_tlb_flush_sync and fold
> p2m_tlb_flush in p2m_force_tlb_flush_sync.
> 
> Signed-off-by: Julien Grall <julien.grall@linaro.org>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
>  xen/arch/arm/p2m.c | 24 +++++++++++-------------
>  1 file changed, 11 insertions(+), 13 deletions(-)
> 
> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> index 37498d8ff1..5294113afe 100644
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -132,11 +132,18 @@ void p2m_restore_state(struct vcpu *n)
>      *last_vcpu_ran = n->vcpu_id;
>  }
>  
> -static void p2m_tlb_flush(struct p2m_domain *p2m)
> +/*
> + * Force a synchronous P2M TLB flush.
> + *
> + * Must be called with the p2m lock held.
> + */
> +static void p2m_force_tlb_flush_sync(struct p2m_domain *p2m)
>  {
>      unsigned long flags = 0;
>      uint64_t ovttbr;
>  
> +    ASSERT(p2m_is_write_locked(p2m));
> +
>      /*
>       * ARM only provides an instruction to flush TLBs for the current
>       * VMID. So switch to the VTTBR of a given P2M if different.
> @@ -157,18 +164,7 @@ static void p2m_tlb_flush(struct p2m_domain *p2m)
>          isb();
>          local_irq_restore(flags);
>      }
> -}
> -
> -/*
> - * Force a synchronous P2M TLB flush.
> - *
> - * Must be called with the p2m lock held.
> - */
> -static void p2m_force_tlb_flush_sync(struct p2m_domain *p2m)
> -{
> -    ASSERT(p2m_is_write_locked(p2m));
>  
> -    p2m_tlb_flush(p2m);
>      p2m->need_flush = false;
>  }
>  
> @@ -1143,7 +1139,9 @@ static int p2m_alloc_table(struct domain *d)
>       * Make sure that all TLBs corresponding to the new VMID are flushed
>       * before using it
>       */
> -    p2m_tlb_flush(p2m);
> +    p2m_write_lock(p2m);
> +    p2m_force_tlb_flush_sync(p2m);
> +    p2m_write_unlock(p2m);
>  
>      return 0;
>  }
> -- 
> 2.11.0
> 


* Re: [PATCH for-next 14/16] xen/arm: traps: Remove the field gva from mmio_info_t
  2017-11-23 18:32 ` [PATCH for-next 14/16] xen/arm: traps: Remove the field gva from mmio_info_t Julien Grall
@ 2017-12-07 22:29   ` Stefano Stabellini
  0 siblings, 0 replies; 49+ messages in thread
From: Stefano Stabellini @ 2017-12-07 22:29 UTC (permalink / raw)
  To: Julien Grall; +Cc: sstabellini, andre.przywara, xen-devel

On Thu, 23 Nov 2017, Julien Grall wrote:
> mmio_info_t is used to gather information in order do emulation a
                                                                 ^ of a

Aside from this

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> region. The guest virtual address is unlikely to be useful information and is
> not currently used. So remove the field gva from mmio_info_t and replace it
> with a local variable.
> 
> Signed-off-by: Julien Grall <julien.grall@linaro.org>
> ---
>  xen/arch/arm/traps.c       | 13 +++++++------
>  xen/include/asm-arm/mmio.h |  1 -
>  2 files changed, 7 insertions(+), 7 deletions(-)
> 
> diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
> index f6f6de3691..e30dd9b7e2 100644
> --- a/xen/arch/arm/traps.c
> +++ b/xen/arch/arm/traps.c
> @@ -2001,6 +2001,7 @@ static void do_trap_data_abort_guest(struct cpu_user_regs *regs,
>  {
>      const struct hsr_dabt dabt = hsr.dabt;
>      int rc;
> +    vaddr_t gva;
>      mmio_info_t info;
>      uint8_t fsc = hsr.dabt.dfsc & ~FSC_LL_MASK;
>      mfn_t mfn;
> @@ -2014,13 +2015,13 @@ static void do_trap_data_abort_guest(struct cpu_user_regs *regs,
>  
>      info.dabt = dabt;
>  
> -    info.gva = get_hfar(true /* is_data */);
> +    gva = get_hfar(true /* is_data */);
>  
>      if ( hpfar_is_valid(dabt.s1ptw, fsc) )
> -        info.gpa = get_faulting_ipa(info.gva);
> +        info.gpa = get_faulting_ipa(gva);
>      else
>      {
> -        rc = gva_to_ipa(info.gva, &info.gpa, GV2M_READ);
> +        rc = gva_to_ipa(gva, &info.gpa, GV2M_READ);
>          /*
>           * We may not be able to translate because someone is
>           * playing with the Stage-2 page table of the domain.
> @@ -2041,7 +2042,7 @@ static void do_trap_data_abort_guest(struct cpu_user_regs *regs,
>              .kind = dabt.s1ptw ? npfec_kind_in_gpt : npfec_kind_with_gla
>          };
>  
> -        p2m_mem_access_check(info.gpa, info.gva, npfec);
> +        p2m_mem_access_check(info.gpa, gva, npfec);
>          /*
>           * The only way to get here right now is because of mem_access,
>           * thus reinjecting the exception to the guest is never required.
> @@ -2078,8 +2079,8 @@ static void do_trap_data_abort_guest(struct cpu_user_regs *regs,
>      }
>  
>      gdprintk(XENLOG_DEBUG, "HSR=0x%x pc=%#"PRIregister" gva=%#"PRIvaddr
> -             " gpa=%#"PRIpaddr"\n", hsr.bits, regs->pc, info.gva, info.gpa);
> -    inject_dabt_exception(regs, info.gva, hsr.len);
> +             " gpa=%#"PRIpaddr"\n", hsr.bits, regs->pc, gva, info.gpa);
> +    inject_dabt_exception(regs, gva, hsr.len);
>  }
>  
>  static void enter_hypervisor_head(struct cpu_user_regs *regs)
> diff --git a/xen/include/asm-arm/mmio.h b/xen/include/asm-arm/mmio.h
> index c620eed4cd..37e2b7a707 100644
> --- a/xen/include/asm-arm/mmio.h
> +++ b/xen/include/asm-arm/mmio.h
> @@ -29,7 +29,6 @@
>  typedef struct
>  {
>      struct hsr_dabt dabt;
> -    vaddr_t gva;
>      paddr_t gpa;
>  } mmio_info_t;
>  
> -- 
> 2.11.0
> 


* Re: [PATCH for-next 16/16] xen/arm: traps: Merge do_trap_instr_abort_guest and do_trap_data_abort_guest
  2017-11-23 18:32 ` [PATCH for-next 16/16] xen/arm: traps: Merge do_trap_instr_abort_guest and do_trap_data_abort_guest Julien Grall
@ 2017-12-07 22:43   ` Stefano Stabellini
  2017-12-07 22:57     ` Julien Grall
  0 siblings, 1 reply; 49+ messages in thread
From: Stefano Stabellini @ 2017-12-07 22:43 UTC (permalink / raw)
  To: Julien Grall; +Cc: sstabellini, andre.przywara, xen-devel

On Thu, 23 Nov 2017, Julien Grall wrote:
> The two helpers do_trap_instr_abort_guest and do_trap_data_abort_guest
> are used to trap stage-2 aborts. While the former only handles prefetch
> aborts and the latter data aborts, they are very similar and do not
> warrant separate helpers.
> 
> For instance, merging the two will make it easier to maintain stage-2 abort
> handling. So consolidate the two helpers into a new helper,
> do_trap_stage2_abort.
> 
> Signed-off-by: Julien Grall <julien.grall@linaro.org>
> ---
>  xen/arch/arm/traps.c | 133 ++++++++++++++++-----------------------------------
>  1 file changed, 41 insertions(+), 92 deletions(-)
> 
> diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
> index a68e01b457..b83a2d9244 100644
> --- a/xen/arch/arm/traps.c
> +++ b/xen/arch/arm/traps.c
> @@ -1862,79 +1862,6 @@ static inline bool hpfar_is_valid(bool s1ptw, uint8_t fsc)
>      return s1ptw || (fsc == FSC_FLT_TRANS && !check_workaround_834220());
>  }
>  
> -static void do_trap_instr_abort_guest(struct cpu_user_regs *regs,
> -                                      const union hsr hsr)
> -{
> -    int rc;
> -    register_t gva;
> -    uint8_t fsc = hsr.iabt.ifsc & ~FSC_LL_MASK;
> -    paddr_t gpa;
> -    mfn_t mfn;
> -
> -    gva = get_hfar(false /* is_data */);
> -
> -    /*
> -     * If this bit has been set, it means that this instruction abort is caused
> -     * by a guest external abort. We can handle this instruction abort as guest
> -     * SError.
> -     */
> -    if ( hsr.iabt.eat )
> -        return __do_trap_serror(regs, true);
> -
> -
> -    if ( hpfar_is_valid(hsr.iabt.s1ptw, fsc) )
> -        gpa = get_faulting_ipa(gva);
> -    else
> -    {
> -        /*
> -         * Flush the TLB to make sure the DTLB is clear before
> -         * doing GVA->IPA translation. If we got here because of
> -         * an entry only present in the ITLB, this translation may
> -         * still be inaccurate.
> -         */
> -        flush_tlb_local();
> -
> -        /*
> -         * We may not be able to translate because someone is
> -         * playing with the Stage-2 page table of the domain.
> -         * Return to the guest.
> -         */
> -        rc = gva_to_ipa(gva, &gpa, GV2M_READ);
> -        if ( rc == -EFAULT )
> -            return; /* Try again */
> -    }
> -
> -    switch ( fsc )
> -    {
> -    case FSC_FLT_PERM:
> -    {
> -        const struct npfec npfec = {
> -            .insn_fetch = 1,
> -            .gla_valid = 1,
> -            .kind = hsr.iabt.s1ptw ? npfec_kind_in_gpt : npfec_kind_with_gla
> -        };
> -
> -        p2m_mem_access_check(gpa, gva, npfec);
> -        /*
> -         * The only way to get here right now is because of mem_access,
> -         * thus reinjecting the exception to the guest is never required.
> -         */
> -        return;
> -    }
> -    case FSC_FLT_TRANS:
> -        /*
> -         * The PT walk may have failed because someone was playing
> -         * with the Stage-2 page table. Walk the Stage-2 PT to check
> -         * if the entry exists. If it's the case, return to the guest
> -         */
> -        mfn = gfn_to_mfn(current->domain, _gfn(paddr_to_pfn(gpa)));
> -        if ( !mfn_eq(mfn, INVALID_MFN) )
> -            return;
> -    }
> -
> -    inject_iabt_exception(regs, gva, hsr.len);
> -}
> -
>  static bool try_handle_mmio(struct cpu_user_regs *regs,
>                              const union hsr hsr,
>                              paddr_t gpa)
> @@ -1946,6 +1873,8 @@ static bool try_handle_mmio(struct cpu_user_regs *regs,
>      };
>      int rc;
>  
> +    ASSERT(hsr.ec == HSR_EC_DATA_ABORT_LOWER_EL);
> +
>      /* stage-1 page table should never live in an emulated MMIO region */
>      if ( dabt.s1ptw )
>          return false;
> @@ -2001,29 +1930,43 @@ static bool try_map_mmio(gfn_t gfn)
>      return !map_regions_p2mt(d, gfn, 1, mfn, p2m_mmio_direct_c);
>  }
>  
> -static void do_trap_data_abort_guest(struct cpu_user_regs *regs,
> -                                     const union hsr hsr)
> +static void do_trap_stage2_abort_guest(struct cpu_user_regs *regs,
> +                                       const union hsr hsr)
>  {
> -    const struct hsr_dabt dabt = hsr.dabt;
> +    /*
> +     * The encoding of hsr_iabt is a subset of hsr_dabt. So use
> +     * hsr_dabt to represent an abort fault.
> +     */
> +    const struct hsr_xabt xabt = hsr.xabt;
>      int rc;
>      vaddr_t gva;
>      paddr_t gpa;
> -    uint8_t fsc = hsr.dabt.dfsc & ~FSC_LL_MASK;
> +    uint8_t fsc = xabt.fsc & ~FSC_LL_MASK;
>      mfn_t mfn;
> +    bool is_data = (hsr.ec == HSR_EC_DATA_ABORT_LOWER_EL);
>  
>      /*
> -     * If this bit has been set, it means that this data abort is caused
> -     * by a guest external abort. We treat this data abort as guest SError.
> +     * If this bit has been set, it means that this stage-2 abort is caused
> +     * by a guest external abort. We treat this stage-2 abort as guest SError.
>       */
> -    if ( dabt.eat )
> +    if ( xabt.eat )
>          return __do_trap_serror(regs, true);
>  
> -    gva = get_hfar(true /* is_data */);
> +    gva = get_hfar(is_data);
>  
> -    if ( hpfar_is_valid(dabt.s1ptw, fsc) )
> +    if ( hpfar_is_valid(xabt.s1ptw, fsc) )
>          gpa = get_faulting_ipa(gva);
>      else
>      {
> +        /*
> +         * Flush the TLB to make sure the DTLB is clear before
> +         * doing GVA->IPA translation. If we got here because of
> +         * an entry only present in the ITLB, this translation may
> +         * still be inaccurate.
> +         */
> +        if ( !is_data )
> +            flush_tlb_local();
> +
>          rc = gva_to_ipa(gva, &gpa, GV2M_READ);
>          /*
>           * We may not be able to translate because someone is
> @@ -2039,10 +1982,11 @@ static void do_trap_data_abort_guest(struct cpu_user_regs *regs,
>      case FSC_FLT_PERM:
>      {
>          const struct npfec npfec = {
> -            .read_access = !dabt.write,
> -            .write_access = dabt.write,
> +            .insn_fetch = !is_data,
> +            .read_access = is_data && !hsr.dabt.write,
> +            .write_access = is_data && !hsr.dabt.write,

Shouldn't this be:

    .write_access = is_data && hsr.dabt.write,


>              .gla_valid = 1,
> -            .kind = dabt.s1ptw ? npfec_kind_in_gpt : npfec_kind_with_gla
> +            .kind = xabt.s1ptw ? npfec_kind_in_gpt : npfec_kind_with_gla
>          };
>  
>          p2m_mem_access_check(gpa, gva, npfec);
> @@ -2056,8 +2000,10 @@ static void do_trap_data_abort_guest(struct cpu_user_regs *regs,
>          /*
>           * Attempt first to emulate the MMIO as the data abort will
>           * likely happen in an emulated region.
> +         *
> +         * Note that emulated region cannot be executed
>           */
> -        if ( try_handle_mmio(regs, hsr, gpa) )
> +        if ( is_data && try_handle_mmio(regs, hsr, gpa) )
>          {
>              advance_pc(regs, hsr);
>              return;
> @@ -2072,18 +2018,21 @@ static void do_trap_data_abort_guest(struct cpu_user_regs *regs,
>          if ( !mfn_eq(mfn, INVALID_MFN) )
>              return;
>  
> -        if ( try_map_mmio(gaddr_to_gfn(gpa)) )
> +        if ( is_data && try_map_mmio(gaddr_to_gfn(gpa)) )
>              return;
>  
>          break;
>      default:
> -        gprintk(XENLOG_WARNING, "Unsupported DFSC: HSR=%#x DFSC=%#x\n",
> -                hsr.bits, dabt.dfsc);
> +        gprintk(XENLOG_WARNING, "Unsupported FSC: HSR=%#x DFSC=%#x\n",
> +                hsr.bits, xabt.fsc);
>      }
>  
>      gdprintk(XENLOG_DEBUG, "HSR=0x%x pc=%#"PRIregister" gva=%#"PRIvaddr
>               " gpa=%#"PRIpaddr"\n", hsr.bits, regs->pc, gva, gpa);
> -    inject_dabt_exception(regs, gva, hsr.len);
> +    if ( is_data )
> +        inject_dabt_exception(regs, gva, hsr.len);
> +    else
> +        inject_iabt_exception(regs, gva, hsr.len);
>  }
>  
>  static void enter_hypervisor_head(struct cpu_user_regs *regs)
> @@ -2216,11 +2165,11 @@ void do_trap_guest_sync(struct cpu_user_regs *regs)
>  
>      case HSR_EC_INSTR_ABORT_LOWER_EL:
>          perfc_incr(trap_iabt);
> -        do_trap_instr_abort_guest(regs, hsr);
> +        do_trap_stage2_abort_guest(regs, hsr);
>          break;
>      case HSR_EC_DATA_ABORT_LOWER_EL:
>          perfc_incr(trap_dabt);
> -        do_trap_data_abort_guest(regs, hsr);
> +        do_trap_stage2_abort_guest(regs, hsr);
>          break;
>  
>      default:
> -- 
> 2.11.0
> 


* Re: [PATCH for-next 15/16] xen/arm: traps: Move the definition of mmio_info_t in try_handle_mmio
  2017-11-23 18:32 ` [PATCH for-next 15/16] xen/arm: traps: Move the definition of mmio_info_t in try_handle_mmio Julien Grall
@ 2017-12-07 22:43   ` Stefano Stabellini
  0 siblings, 0 replies; 49+ messages in thread
From: Stefano Stabellini @ 2017-12-07 22:43 UTC (permalink / raw)
  To: Julien Grall; +Cc: sstabellini, andre.przywara, xen-devel

On Thu, 23 Nov 2017, Julien Grall wrote:
> mmio_info_t is currently filled by do_trap_data_abort_guest but is only
> important when emulating an MMIO region.
> 
> A follow-up patch will merge stage-2 prefetch abort and stage-2 data abort
> handling into a single helper. To prepare for that, mmio_info_t is now
> filled by try_handle_mmio.
> 
> Signed-off-by: Julien Grall <julien.grall@linaro.org>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
>  xen/arch/arm/traps.c | 31 +++++++++++++++++--------------
>  1 file changed, 17 insertions(+), 14 deletions(-)
> 
> diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
> index e30dd9b7e2..a68e01b457 100644
> --- a/xen/arch/arm/traps.c
> +++ b/xen/arch/arm/traps.c
> @@ -1936,9 +1936,14 @@ static void do_trap_instr_abort_guest(struct cpu_user_regs *regs,
>  }
>  
>  static bool try_handle_mmio(struct cpu_user_regs *regs,
> -                            mmio_info_t *info)
> +                            const union hsr hsr,
> +                            paddr_t gpa)
>  {
> -    const struct hsr_dabt dabt = info->dabt;
> +    const struct hsr_dabt dabt = hsr.dabt;
> +    mmio_info_t info = {
> +        .gpa = gpa,
> +        .dabt = dabt
> +    };
>      int rc;
>  
>      /* stage-1 page table should never live in an emulated MMIO region */
> @@ -1956,7 +1961,7 @@ static bool try_handle_mmio(struct cpu_user_regs *regs,
>      if ( check_workaround_766422() && (regs->cpsr & PSR_THUMB) &&
>           dabt.write )
>      {
> -        rc = decode_instruction(regs, &info->dabt);
> +        rc = decode_instruction(regs, &info.dabt);
>          if ( rc )
>          {
>              gprintk(XENLOG_DEBUG, "Unable to decode instruction\n");
> @@ -1964,7 +1969,7 @@ static bool try_handle_mmio(struct cpu_user_regs *regs,
>          }
>      }
>  
> -    return !!handle_mmio(info);
> +    return !!handle_mmio(&info);
>  }
>  
>  /*
> @@ -2002,7 +2007,7 @@ static void do_trap_data_abort_guest(struct cpu_user_regs *regs,
>      const struct hsr_dabt dabt = hsr.dabt;
>      int rc;
>      vaddr_t gva;
> -    mmio_info_t info;
> +    paddr_t gpa;
>      uint8_t fsc = hsr.dabt.dfsc & ~FSC_LL_MASK;
>      mfn_t mfn;
>  
> @@ -2013,15 +2018,13 @@ static void do_trap_data_abort_guest(struct cpu_user_regs *regs,
>      if ( dabt.eat )
>          return __do_trap_serror(regs, true);
>  
> -    info.dabt = dabt;
> -
>      gva = get_hfar(true /* is_data */);
>  
>      if ( hpfar_is_valid(dabt.s1ptw, fsc) )
> -        info.gpa = get_faulting_ipa(gva);
> +        gpa = get_faulting_ipa(gva);
>      else
>      {
> -        rc = gva_to_ipa(gva, &info.gpa, GV2M_READ);
> +        rc = gva_to_ipa(gva, &gpa, GV2M_READ);
>          /*
>           * We may not be able to translate because someone is
>           * playing with the Stage-2 page table of the domain.
> @@ -2042,7 +2045,7 @@ static void do_trap_data_abort_guest(struct cpu_user_regs *regs,
>              .kind = dabt.s1ptw ? npfec_kind_in_gpt : npfec_kind_with_gla
>          };
>  
> -        p2m_mem_access_check(info.gpa, gva, npfec);
> +        p2m_mem_access_check(gpa, gva, npfec);
>          /*
>           * The only way to get here right now is because of mem_access,
>           * thus reinjecting the exception to the guest is never required.
> @@ -2054,7 +2057,7 @@ static void do_trap_data_abort_guest(struct cpu_user_regs *regs,
>           * Attempt first to emulate the MMIO as the data abort will
>           * likely happen in an emulated region.
>           */
> -        if ( try_handle_mmio(regs, &info) )
> +        if ( try_handle_mmio(regs, hsr, gpa) )
>          {
>              advance_pc(regs, hsr);
>              return;
> @@ -2065,11 +2068,11 @@ static void do_trap_data_abort_guest(struct cpu_user_regs *regs,
>           * with the Stage-2 page table. Walk the Stage-2 PT to check
>           * if the entry exists. If it's the case, return to the guest
>           */
> -        mfn = gfn_to_mfn(current->domain, gaddr_to_gfn(info.gpa));
> +        mfn = gfn_to_mfn(current->domain, gaddr_to_gfn(gpa));
>          if ( !mfn_eq(mfn, INVALID_MFN) )
>              return;
>  
> -        if ( try_map_mmio(gaddr_to_gfn(info.gpa)) )
> +        if ( try_map_mmio(gaddr_to_gfn(gpa)) )
>              return;
>  
>          break;
> @@ -2079,7 +2082,7 @@ static void do_trap_data_abort_guest(struct cpu_user_regs *regs,
>      }
>  
>      gdprintk(XENLOG_DEBUG, "HSR=0x%x pc=%#"PRIregister" gva=%#"PRIvaddr
> -             " gpa=%#"PRIpaddr"\n", hsr.bits, regs->pc, gva, info.gpa);
> +             " gpa=%#"PRIpaddr"\n", hsr.bits, regs->pc, gva, gpa);
>      inject_dabt_exception(regs, gva, hsr.len);
>  }
>  
> -- 
> 2.11.0
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [PATCH for-next 16/16] xen/arm: traps: Merge do_trap_instr_abort_guest and do_trap_data_abort_guest
  2017-12-07 22:43   ` Stefano Stabellini
@ 2017-12-07 22:57     ` Julien Grall
  0 siblings, 0 replies; 49+ messages in thread
From: Julien Grall @ 2017-12-07 22:57 UTC (permalink / raw)
  To: Stefano Stabellini; +Cc: Andre Przywara, Xen-devel

Hi Stefano,

On 7 December 2017 at 22:43, Stefano Stabellini <sstabellini@kernel.org> wrote:
> On Thu, 23 Nov 2017, Julien Grall wrote:
>> @@ -2039,10 +1982,11 @@ static void do_trap_data_abort_guest(struct cpu_user_regs *regs,
>>      case FSC_FLT_PERM:
>>      {
>>          const struct npfec npfec = {
>> -            .read_access = !dabt.write,
>> -            .write_access = dabt.write,
>> +            .insn_fetch = !is_data,
>> +            .read_access = is_data && !hsr.dabt.write,
>> +            .write_access = is_data && !hsr.dabt.write,
>
> Shouldn't this be:
>
>     .write_access = is_data && hsr.dabt.write,
>

Hmmm, yes, it should. I will resend the series minus the ones you would merge.

Cheers,


* Re: [PATCH for-next 06/16] xen/arm: Extend copy_to_guest to support copying from/to guest physical address
  2017-12-06 12:22     ` Julien Grall
@ 2017-12-07 23:01       ` Stefano Stabellini
  2017-12-08 15:24         ` Julien Grall
  0 siblings, 1 reply; 49+ messages in thread
From: Stefano Stabellini @ 2017-12-07 23:01 UTC (permalink / raw)
  To: Julien Grall; +Cc: Stefano Stabellini, andre.przywara, xen-devel

On Wed, 6 Dec 2017, Julien Grall wrote:
> Hi Stefano,
> 
> On 12/06/2017 01:22 AM, Stefano Stabellini wrote:
> > On Thu, 23 Nov 2017, Julien Grall wrote:
> > > The only differences between copy_to_guest and access_guest_memory_by_ipa
> > > are:
> > >      - The latter does not support copying data crossing page boundary
> > >      - The former is copying from/to guest VA whilst the latter from
> > >      guest PA
> > > 
> > > copy_to_guest can easily be extended to support copying from/to guest
> > > physical address. For that a new bit is used to tell whether linear
> > > address or ipa is been used.
> > > 
> > > Lastly access_guest_memory_by_ipa is reimplemented using copy_to_guest.
> > > This also has the benefits to extend the use of it, it is now possible
> > > to copy data crossing page boundary.
> > > 
> > > Signed-off-by: Julien Grall <julien.grall@linaro.org>
> > 
> > Ah! This is the reason why previous patches were not using vaddr_t. It
> > makes sense now. May I suggest we use something different from paddr_t
> > in copy_guest for addr type? I don't think is correct to specify addr as
> > paddr_t when it could be vaddr_t; in the future we could have type
> > checks on them.
> > 
> > I suggest we specify it as u64, but if you have a better idea go for it.
> 
> We should not use more u64 in the code. uint64_t could be a solution but even
> that, I don't see the reason. How are you sure the physical address will
> always fit in 64-bit?
> 
> On the other side, very likely vaddr_t will fit in paddr_t. So paddr_t is the
> right way to go for me.

What about introducing xaddr_t?
Or at least:

  static struct page_info *translate_get_page(struct vcpu *v, paddr_t /*or vaddr_t */ addr

 
> > 
> > > ---
> > >   xen/arch/arm/guestcopy.c | 86
> > > ++++++++++++++++++++++--------------------------
> > >   1 file changed, 39 insertions(+), 47 deletions(-)
> > > 
> > > diff --git a/xen/arch/arm/guestcopy.c b/xen/arch/arm/guestcopy.c
> > > index 487f5ab82d..be53bee559 100644
> > > --- a/xen/arch/arm/guestcopy.c
> > > +++ b/xen/arch/arm/guestcopy.c
> > > @@ -8,6 +8,31 @@
> > >   #define COPY_flush_dcache   (1U << 0)
> > >   #define COPY_from_guest     (0U << 1)
> > >   #define COPY_to_guest       (1U << 1)
> > > +#define COPY_ipa            (0U << 2)
> > > +#define COPY_linear         (1U << 2)
> > > +
> > > +static struct page_info *translate_get_page(struct vcpu *v, paddr_t addr,
> > > +                                            bool linear, bool write)
> > > +{
> > > +    p2m_type_t p2mt;
> > > +    struct page_info *page;
> > > +
> > > +    if ( linear )
> > > +        return get_page_from_gva(v, addr, write ? GV2M_WRITE :
> > > GV2M_READ);
> > > +
> > > +    page = get_page_from_gfn(v->domain, paddr_to_pfn(addr), &p2mt,
> > > P2M_ALLOC);
> > > +
> > > +    if ( !page )
> > > +        return NULL;
> > > +
> > > +    if ( !p2m_is_ram(p2mt) )
> > > +    {
> > > +        put_page(page);
> > > +        return NULL;
> > > +    }
> > > +
> > > +    return page;
> > > +}
> > >     static unsigned long copy_guest(void *buf, paddr_t addr, unsigned int
> > > len,
> > >                                   struct vcpu *v, unsigned int flags)
> > > @@ -21,8 +46,8 @@ static unsigned long copy_guest(void *buf, paddr_t addr,
> > > unsigned int len,
> > >           unsigned size = min(len, (unsigned)PAGE_SIZE - offset);
> > >           struct page_info *page;
> > >   -        page = get_page_from_gva(v, addr,
> > > -                                 (flags & COPY_to_guest) ? GV2M_WRITE :
> > > GV2M_READ);
> > > +        page = translate_get_page(v, addr, flags & COPY_linear,
> > > +                                  flags & COPY_to_guest);
> > >           if ( page == NULL )
> > >               return len;
> > >   @@ -63,73 +88,40 @@ static unsigned long copy_guest(void *buf, paddr_t
> > > addr, unsigned int len,
> > >   unsigned long raw_copy_to_guest(void *to, const void *from, unsigned
> > > len)
> > >   {
> > >       return copy_guest((void *)from, (unsigned long)to, len,
> > > -                      current, COPY_to_guest);
> > > +                      current, COPY_to_guest | COPY_linear);
> > >   }
> > >     unsigned long raw_copy_to_guest_flush_dcache(void *to, const void
> > > *from,
> > >                                                unsigned len)
> > >   {
> > >       return copy_guest((void *)from, (unsigned long)to, len,
> > > -                      current, COPY_to_guest | COPY_flush_dcache);
> > > +                      current, COPY_to_guest | COPY_flush_dcache |
> > > COPY_linear);
> > >   }
> > >     unsigned long raw_clear_guest(void *to, unsigned len)
> > >   {
> > > -    return copy_guest(NULL, (unsigned long)to, len, current,
> > > COPY_to_guest);
> > > +    return copy_guest(NULL, (unsigned long)to, len, current,
> > > +                      COPY_to_guest | COPY_linear);
> > >   }
> > >     unsigned long raw_copy_from_guest(void *to, const void __user *from,
> > > unsigned len)
> > >   {
> > > -    return copy_guest(to, (unsigned long)from, len, current,
> > > COPY_from_guest);
> > > +    return copy_guest(to, (unsigned long)from, len, current,
> > > +                      COPY_from_guest | COPY_linear);
> > >   }
> > >   -/*
> > > - * Temporarily map one physical guest page and copy data to or from it.
> > > - * The data to be copied cannot cross a page boundary.
> > > - */
> > >   int access_guest_memory_by_ipa(struct domain *d, paddr_t gpa, void *buf,
> > >                                  uint32_t size, bool is_write)
> > >   {
> > > -    struct page_info *page;
> > > -    uint64_t offset = gpa & ~PAGE_MASK;  /* Offset within the mapped page
> > > */
> > > -    p2m_type_t p2mt;
> > > -    void *p;
> > > -
> > > -    /* Do not cross a page boundary. */
> > > -    if ( size > (PAGE_SIZE - offset) )
> > > -    {
> > > -        printk(XENLOG_G_ERR "d%d: guestcopy: memory access crosses page
> > > boundary.\n",
> > > -               d->domain_id);
> > > -        return -EINVAL;
> > > -    }
> > 
> > I don't know if we necessarely care about this, but with this change
> > this error path goes away. Do we want to keep it?
> 
> There are strictly no reason to prevent cross-boundary for IPA when we do
> support them for VA.
> 
> Cheers,
> 
> -- 
> Julien Grall
> 


* Re: [PATCH for-next 06/16] xen/arm: Extend copy_to_guest to support copying from/to guest physical address
  2017-12-07 23:01       ` Stefano Stabellini
@ 2017-12-08 15:24         ` Julien Grall
  2017-12-08 21:13           ` Stefano Stabellini
  0 siblings, 1 reply; 49+ messages in thread
From: Julien Grall @ 2017-12-08 15:24 UTC (permalink / raw)
  To: Stefano Stabellini; +Cc: andre.przywara, xen-devel

Hi Stefano,

On 07/12/17 23:01, Stefano Stabellini wrote:
> On Wed, 6 Dec 2017, Julien Grall wrote:
>> Hi Stefano,
>>
>> On 12/06/2017 01:22 AM, Stefano Stabellini wrote:
>>> On Thu, 23 Nov 2017, Julien Grall wrote:
>>>> The only differences between copy_to_guest and access_guest_memory_by_ipa
>>>> are:
>>>>       - The latter does not support copying data crossing page boundary
>>>>       - The former is copying from/to guest VA whilst the latter from
>>>>       guest PA
>>>>
>>>> copy_to_guest can easily be extended to support copying from/to guest
>>>> physical address. For that a new bit is used to tell whether linear
>>>> address or ipa is been used.
>>>>
>>>> Lastly access_guest_memory_by_ipa is reimplemented using copy_to_guest.
>>>> This also has the benefits to extend the use of it, it is now possible
>>>> to copy data crossing page boundary.
>>>>
>>>> Signed-off-by: Julien Grall <julien.grall@linaro.org>
>>>
>>> Ah! This is the reason why previous patches were not using vaddr_t. It
>>> makes sense now. May I suggest we use something different from paddr_t
>>> in copy_guest for addr type? I don't think is correct to specify addr as
>>> paddr_t when it could be vaddr_t; in the future we could have type
>>> checks on them.
>>>
>>> I suggest we specify it as u64, but if you have a better idea go for it.
>>
>> We should not use more u64 in the code. uint64_t could be a solution but even
>> that, I don't see the reason. How are you sure the physical address will
>> always fit in 64-bit?
>>
>> On the other side, very likely vaddr_t will fit in paddr_t. So paddr_t is the
>> right way to go for me.
> 
> What about introducing xaddr_t?

I would prefer uint64_t in that case. xaddr_t is quite confusing to read 
and could be misused.

> Or at least:
> 
>    static struct page_info *translate_get_page(struct vcpu *v, paddr_t /*or vaddr_t */ addr

I can do that as well. What's your preference?

Cheers,

-- 
Julien Grall


* Re: [PATCH for-next 07/16] xen/arm: Introduce copy_to_guest_phys_flush_dcache
  2017-12-06 12:27         ` Julien Grall
@ 2017-12-08 15:34           ` Julien Grall
  2017-12-08 22:26             ` Stefano Stabellini
  0 siblings, 1 reply; 49+ messages in thread
From: Julien Grall @ 2017-12-08 15:34 UTC (permalink / raw)
  To: Stefano Stabellini; +Cc: Andrew Cooper, andre.przywara, xen-devel

Hi,

On 06/12/17 12:27, Julien Grall wrote:
> On 12/06/2017 01:26 AM, Stefano Stabellini wrote:
>> On Thu, 23 Nov 2017, Julien Grall wrote:
>>> Hi Andrew,
>>>
>>> On 23/11/17 18:49, Andrew Cooper wrote:
>>>> On 23/11/17 18:32, Julien Grall wrote:
>>>>> This new function will be used in a follow-up patch to copy data to 
>>>>> the
>>>>> guest
>>>>> using the IPA (aka guest physical address) and then clean the cache.
>>>>>
>>>>> Signed-off-by: Julien Grall <julien.grall@linaro.org>
>>>>> ---
>>>>>    xen/arch/arm/guestcopy.c           | 10 ++++++++++
>>>>>    xen/include/asm-arm/guest_access.h |  6 ++++++
>>>>>    2 files changed, 16 insertions(+)
>>>>>
>>>>> diff --git a/xen/arch/arm/guestcopy.c b/xen/arch/arm/guestcopy.c
>>>>> index be53bee559..7958663970 100644
>>>>> --- a/xen/arch/arm/guestcopy.c
>>>>> +++ b/xen/arch/arm/guestcopy.c
>>>>> @@ -110,6 +110,16 @@ unsigned long raw_copy_from_guest(void *to, const
>>>>> void __user *from, unsigned le
>>>>>                          COPY_from_guest | COPY_linear);
>>>>>    }
>>>>>    +unsigned long copy_to_guest_phys_flush_dcache(struct domain *d,
>>>>> +                                              paddr_t gpa,
>>>>> +                                              void *buf,
>>>>> +                                              unsigned int len)
>>>>> +{
>>>>> +    /* P2M is shared between all vCPUs, so the vCPU used does not 
>>>>> matter.
>>>>> */
>>>>
>>>> Be very careful with this line of thinking.  It is only works after
>>>> DOMCTL_max_vcpus has succeeded, and before that point, it is a latent
>>>> NULL pointer dereference.
>>>
>>> I really don't expect that function been used before DOMCT_max_vcpus 
>>> is set.
>>> It is only used for hardware emulation or Xen loading image into the 
>>> hardware
>>> domain memory. I could add a check d->vcpus to be safe.
>>>
>>>>
>>>> Also, what about vcpus configured with alternative views?
>>>
>>> It is not important because the underlying call is get_page_from_gfn 
>>> that does
>>> not care about the alternative view (that function take a domain in
>>> parameter). I can update the comment.
>> Since this is a new function, would it make sense to take a struct
>> vcpu* as parameter, instead of a struct domain* ?
> 
> Well, I suggested this patch this way because likely everyone will use 
> with d->vcpus[0]. And then you would have to wonder why d->vcpus[0] and 
> not d->vcpus[1]...

Thinking a bit more about this, it might be better/safer to pass either a 
domain or a vCPU to copy_guest. I can see two solutions:
	1# Introduce a union that use the same parameter:
		union
		{
			struct
			{
				struct domain *d;
			} ipa;
			struct
			{
				struct vcpu *v;
			} gva;
		}
	  The point of the structure is to make clear that only a domain 
(resp. vcpu) should be used with an ipa (resp. gva).

	2# Have 2 parameters, vcpu and domain.

Any opinions?

Cheers,

-- 
Julien Grall


* Re: [PATCH for-next 06/16] xen/arm: Extend copy_to_guest to support copying from/to guest physical address
  2017-12-08 15:24         ` Julien Grall
@ 2017-12-08 21:13           ` Stefano Stabellini
  0 siblings, 0 replies; 49+ messages in thread
From: Stefano Stabellini @ 2017-12-08 21:13 UTC (permalink / raw)
  To: Julien Grall; +Cc: Stefano Stabellini, andre.przywara, xen-devel

On Fri, 8 Dec 2017, Julien Grall wrote:
> Hi Stefano,
> 
> On 07/12/17 23:01, Stefano Stabellini wrote:
> > On Wed, 6 Dec 2017, Julien Grall wrote:
> > > Hi Stefano,
> > > 
> > > On 12/06/2017 01:22 AM, Stefano Stabellini wrote:
> > > > On Thu, 23 Nov 2017, Julien Grall wrote:
> > > > > The only differences between copy_to_guest and
> > > > > access_guest_memory_by_ipa
> > > > > are:
> > > > >       - The latter does not support copying data crossing page
> > > > > boundary
> > > > >       - The former is copying from/to guest VA whilst the latter from
> > > > >       guest PA
> > > > > 
> > > > > copy_to_guest can easily be extended to support copying from/to guest
> > > > > physical address. For that a new bit is used to tell whether linear
> > > > > address or ipa is been used.
> > > > > 
> > > > > Lastly access_guest_memory_by_ipa is reimplemented using
> > > > > copy_to_guest.
> > > > > This also has the benefits to extend the use of it, it is now possible
> > > > > to copy data crossing page boundary.
> > > > > 
> > > > > Signed-off-by: Julien Grall <julien.grall@linaro.org>
> > > > 
> > > > Ah! This is the reason why previous patches were not using vaddr_t. It
> > > > makes sense now. May I suggest we use something different from paddr_t
> > > > in copy_guest for addr type? I don't think is correct to specify addr as
> > > > paddr_t when it could be vaddr_t; in the future we could have type
> > > > checks on them.
> > > > 
> > > > I suggest we specify it as u64, but if you have a better idea go for it.
> > > 
> > > We should not use more u64 in the code. uint64_t could be a solution but
> > > even
> > > that, I don't see the reason. How are you sure the physical address will
> > > always fit in 64-bit?
> > > 
> > > On the other side, very likely vaddr_t will fit in paddr_t. So paddr_t is
> > > the
> > > right way to go for me.
> > 
> > What about introducing xaddr_t?
> 
> I would prefer uint64_t in that case. xaddr_t is quite confusing to read and
> could be misused.
> 
> > Or at least:
> > 
> >    static struct page_info *translate_get_page(struct vcpu *v, paddr_t /*or
> > vaddr_t */ addr
> 
> I can do that as well. What's your preference?

I prefer uint64_t


* Re: [PATCH for-next 07/16] xen/arm: Introduce copy_to_guest_phys_flush_dcache
  2017-12-08 15:34           ` Julien Grall
@ 2017-12-08 22:26             ` Stefano Stabellini
  2017-12-08 22:30               ` Julien Grall
  0 siblings, 1 reply; 49+ messages in thread
From: Stefano Stabellini @ 2017-12-08 22:26 UTC (permalink / raw)
  To: Julien Grall; +Cc: Andrew Cooper, Stefano Stabellini, andre.przywara, xen-devel

On Fri, 8 Dec 2017, Julien Grall wrote:
> On 06/12/17 12:27, Julien Grall wrote:
> > On 12/06/2017 01:26 AM, Stefano Stabellini wrote:
> > > On Thu, 23 Nov 2017, Julien Grall wrote:
> > > > Hi Andrew,
> > > > 
> > > > On 23/11/17 18:49, Andrew Cooper wrote:
> > > > > On 23/11/17 18:32, Julien Grall wrote:
> > > > > > This new function will be used in a follow-up patch to copy data to
> > > > > > the
> > > > > > guest
> > > > > > using the IPA (aka guest physical address) and then clean the cache.
> > > > > > 
> > > > > > Signed-off-by: Julien Grall <julien.grall@linaro.org>
> > > > > > ---
> > > > > >    xen/arch/arm/guestcopy.c           | 10 ++++++++++
> > > > > >    xen/include/asm-arm/guest_access.h |  6 ++++++
> > > > > >    2 files changed, 16 insertions(+)
> > > > > > 
> > > > > > diff --git a/xen/arch/arm/guestcopy.c b/xen/arch/arm/guestcopy.c
> > > > > > index be53bee559..7958663970 100644
> > > > > > --- a/xen/arch/arm/guestcopy.c
> > > > > > +++ b/xen/arch/arm/guestcopy.c
> > > > > > @@ -110,6 +110,16 @@ unsigned long raw_copy_from_guest(void *to,
> > > > > > const
> > > > > > void __user *from, unsigned le
> > > > > >                          COPY_from_guest | COPY_linear);
> > > > > >    }
> > > > > >    +unsigned long copy_to_guest_phys_flush_dcache(struct domain *d,
> > > > > > +                                              paddr_t gpa,
> > > > > > +                                              void *buf,
> > > > > > +                                              unsigned int len)
> > > > > > +{
> > > > > > +    /* P2M is shared between all vCPUs, so the vCPU used does not
> > > > > > matter.
> > > > > > */
> > > > > 
> > > > > Be very careful with this line of thinking.  It is only works after
> > > > > DOMCTL_max_vcpus has succeeded, and before that point, it is a latent
> > > > > NULL pointer dereference.
> > > > 
> > > > I really don't expect that function been used before DOMCT_max_vcpus is
> > > > set.
> > > > It is only used for hardware emulation or Xen loading image into the
> > > > hardware
> > > > domain memory. I could add a check d->vcpus to be safe.
> > > > 
> > > > > 
> > > > > Also, what about vcpus configured with alternative views?
> > > > 
> > > > It is not important because the underlying call is get_page_from_gfn
> > > > that does
> > > > not care about the alternative view (that function take a domain in
> > > > parameter). I can update the comment.
> > > Since this is a new function, would it make sense to take a struct
> > > vcpu* as parameter, instead of a struct domain* ?
> > 
> > Well, I suggested this patch this way because likely everyone will use with
> > d->vcpus[0]. And then you would have to wonder why d->vcpus[0] and not
> > d->vcpus[1]...
> 
> Thinking a bit more to this, it might be better/safer to pass either a domain
> or a vCPU to copy_guest. I can see 2 solutions:
> 	1# Introduce a union that use the same parameter:
> 		union
> 		{
> 			struct
> 			{
> 				struct domain *d;
> 			} ipa;
> 			struct
> 			{
> 				struct vcpu *v;
> 			} gva;
> 		}
> 	  The structure here would be to ensure that it is clear that only
> domain (resp. vcpu) should be used with ipa (resp. gva).
> 
> 	2# Have 2 parameters, vcpu and domain.
> 
> Any opinions?

I think that would be clearer. You could also add a paddr_t/vaddr_t
correspondingly inside the union maybe.


* Re: [PATCH for-next 07/16] xen/arm: Introduce copy_to_guest_phys_flush_dcache
  2017-12-08 22:26             ` Stefano Stabellini
@ 2017-12-08 22:30               ` Julien Grall
  2017-12-08 22:43                 ` Stefano Stabellini
  0 siblings, 1 reply; 49+ messages in thread
From: Julien Grall @ 2017-12-08 22:30 UTC (permalink / raw)
  To: Stefano Stabellini; +Cc: Andrew Cooper, andre.przywara, Xen-devel


On 8 Dec 2017 22:26, "Stefano Stabellini" <sstabellini@kernel.org> wrote:

On Fri, 8 Dec 2017, Julien Grall wrote:
> On 06/12/17 12:27, Julien Grall wrote:
> > On 12/06/2017 01:26 AM, Stefano Stabellini wrote:
> > > On Thu, 23 Nov 2017, Julien Grall wrote:
> > > > Hi Andrew,
> > > >
> > > > On 23/11/17 18:49, Andrew Cooper wrote:
> > > > > On 23/11/17 18:32, Julien Grall wrote:
> > > > > > This new function will be used in a follow-up patch to copy data to
> > > > > > the guest
> > > > > > using the IPA (aka guest physical address) and then clean the cache.
> > > > > >
> > > > > > Signed-off-by: Julien Grall <julien.grall@linaro.org>
> > > > > > ---
> > > > > >    xen/arch/arm/guestcopy.c           | 10 ++++++++++
> > > > > >    xen/include/asm-arm/guest_access.h |  6 ++++++
> > > > > >    2 files changed, 16 insertions(+)
> > > > > >
> > > > > > diff --git a/xen/arch/arm/guestcopy.c b/xen/arch/arm/guestcopy.c
> > > > > > index be53bee559..7958663970 100644
> > > > > > --- a/xen/arch/arm/guestcopy.c
> > > > > > +++ b/xen/arch/arm/guestcopy.c
> > > > > > @@ -110,6 +110,16 @@ unsigned long raw_copy_from_guest(void *to,
> > > > > > const
> > > > > > void __user *from, unsigned le
> > > > > >                          COPY_from_guest | COPY_linear);
> > > > > >    }
> > > > > >    +unsigned long copy_to_guest_phys_flush_dcache(struct domain *d,
> > > > > > +                                              paddr_t gpa,
> > > > > > +                                              void *buf,
> > > > > > +                                              unsigned int len)
> > > > > > +{
> > > > > > +    /* P2M is shared between all vCPUs, so the vCPU used does not
> > > > > > matter.
> > > > > > */
> > > > >
> > > > > Be very careful with this line of thinking.  It is only works after
> > > > > DOMCTL_max_vcpus has succeeded, and before that point, it is a latent
> > > > > NULL pointer dereference.
> > > >
> > > > I really don't expect that function been used before DOMCT_max_vcpus is
> > > > set.
> > > > It is only used for hardware emulation or Xen loading image into the
> > > > hardware
> > > > domain memory. I could add a check d->vcpus to be safe.
> > > >
> > > > >
> > > > > Also, what about vcpus configured with alternative views?
> > > >
> > > > It is not important because the underlying call is get_page_from_gfn
> > > > that does
> > > > not care about the alternative view (that function take a domain in
> > > > parameter). I can update the comment.
> > > Since this is a new function, would it make sense to take a struct
> > > vcpu* as parameter, instead of a struct domain* ?
> >
> > Well, I suggested this patch this way because likely everyone will use with
> > d->vcpus[0]. And then you would have to wonder why d->vcpus[0] and not
> > d->vcpus[1]...
>
> Thinking a bit more to this, it might be better/safer to pass either a domain
> or a vCPU to copy_guest. I can see 2 solutions:
>       1# Introduce a union that use the same parameter:
>               union
>               {
>                       struct
>                       {
>                               struct domain *d;
>                       } ipa;
>                       struct
>                       {
>                               struct vcpu *v;
>                       } gva;
>               }
>         The structure here would be to ensure that it is clear that only
> domain (resp. vcpu) should be used with ipa (resp. gva).
>
>       2# Have 2 parameters, vcpu and domain.
>
> Any opinions?

I think that would be clearer. You could also add a paddr_t/vaddr_t
correspondingly inside the union maybe.


Well, you will have name clashes happening, I think.


And vaddr_t and paddr_t are confusing because you don't know which address
you are speaking about (guest vs hypervisor). At least ipa/gpa and gva are
well-known names.

Cheers,


* Re: [PATCH for-next 07/16] xen/arm: Introduce copy_to_guest_phys_flush_dcache
  2017-12-08 22:30               ` Julien Grall
@ 2017-12-08 22:43                 ` Stefano Stabellini
  2017-12-12  0:16                   ` Julien Grall
  0 siblings, 1 reply; 49+ messages in thread
From: Stefano Stabellini @ 2017-12-08 22:43 UTC (permalink / raw)
  To: Julien Grall; +Cc: Andrew Cooper, Stefano Stabellini, andre.przywara, Xen-devel

On Fri, 8 Dec 2017, Julien Grall wrote:
> On 8 Dec 2017 22:26, "Stefano Stabellini" <sstabellini@kernel.org> wrote:
>       On Fri, 8 Dec 2017, Julien Grall wrote:
>       > On 06/12/17 12:27, Julien Grall wrote:
>       > > On 12/06/2017 01:26 AM, Stefano Stabellini wrote:
>       > > > On Thu, 23 Nov 2017, Julien Grall wrote:
>       > > > > Hi Andrew,
>       > > > >
>       > > > > On 23/11/17 18:49, Andrew Cooper wrote:
>       > > > > > On 23/11/17 18:32, Julien Grall wrote:
>       > > > > > > This new function will be used in a follow-up patch to copy data to
>       > > > > > > the
>       > > > > > > guest
>       > > > > > > using the IPA (aka guest physical address) and then clean the cache.
>       > > > > > >
>       > > > > > > Signed-off-by: Julien Grall <julien.grall@linaro.org>
>       > > > > > > ---
>       > > > > > >    xen/arch/arm/guestcopy.c           | 10 ++++++++++
>       > > > > > >    xen/include/asm-arm/guest_access.h |  6 ++++++
>       > > > > > >    2 files changed, 16 insertions(+)
>       > > > > > >
>       > > > > > > diff --git a/xen/arch/arm/guestcopy.c b/xen/arch/arm/guestcopy.c
>       > > > > > > index be53bee559..7958663970 100644
>       > > > > > > --- a/xen/arch/arm/guestcopy.c
>       > > > > > > +++ b/xen/arch/arm/guestcopy.c
>       > > > > > > @@ -110,6 +110,16 @@ unsigned long raw_copy_from_guest(void *to,
>       > > > > > > const
>       > > > > > > void __user *from, unsigned le
>       > > > > > >                          COPY_from_guest | COPY_linear);
>       > > > > > >    }
>       > > > > > >    +unsigned long copy_to_guest_phys_flush_dcache(struct domain *d,
>       > > > > > > +                                              paddr_t gpa,
>       > > > > > > +                                              void *buf,
>       > > > > > > +                                              unsigned int len)
>       > > > > > > +{
>       > > > > > > +    /* P2M is shared between all vCPUs, so the vCPU used does not
>       > > > > > > matter.
>       > > > > > > */
>       > > > > >
>       > > > > > Be very careful with this line of thinking.  It is only works after
>       > > > > > DOMCTL_max_vcpus has succeeded, and before that point, it is a latent
>       > > > > > NULL pointer dereference.
>       > > > >
>       > > > > I really don't expect that function been used before DOMCT_max_vcpus is
>       > > > > set.
>       > > > > It is only used for hardware emulation or Xen loading image into the
>       > > > > hardware
>       > > > > domain memory. I could add a check d->vcpus to be safe.
>       > > > >
>       > > > > >
>       > > > > > Also, what about vcpus configured with alternative views?
>       > > > >
>       > > > > It is not important because the underlying call is get_page_from_gfn
>       > > > > that does
>       > > > > not care about the alternative view (that function take a domain in
>       > > > > parameter). I can update the comment.
>       > > > Since this is a new function, would it make sense to take a struct
>       > > > vcpu* as parameter, instead of a struct domain* ?
>       > >
>       > > Well, I suggested this patch this way because likely everyone will use with
>       > > d->vcpus[0]. And then you would have to wonder why d->vcpus[0] and not
>       > > d->vcpus[1]...
>       >
>       > Thinking a bit more to this, it might be better/safer to pass either a domain
>       > or a vCPU to copy_guest. I can see 2 solutions:
>       >       1# Introduce a union that use the same parameter:
>       >               union
>       >               {
>       >                       struct
>       >                       {
>       >                               struct domain *d;
>       >                       } ipa;
>       >                       struct
>       >                       {
>       >                               struct vcpu *v;
>       >                       } gva;
>       >               }
>       >         The structure here would be to ensure that it is clear that only
>       > domain (resp. vcpu) should be used with ipa (resp. gva).
>       >
>       >       2# Have 2 parameters, vcpu and domain.
>       >
>       > Any opinions?
> 
> I think that would be clearer. You could also add a paddr_t/vaddr_t
> correspondingly inside the union maybe.
> 
> 
> Well, you will have nameclash happening I think.
> 
> 
> And vaddr_t and paddr_t are confusing because you don't know which address you speak about (guest vs hypervisor). At least ipa/gpa, gva are known naming.

That's not what I meant. ipa and gva are good names.

I was suggesting to put an additional address field inside the union to
avoid the issue with paddr_t and vaddr_t discussed elsewhere
(alpine.DEB.2.10.1712081313070.8052@sstabellini-ThinkPad-X260).

I am happy either way, it was just a suggestion.


* Re: [PATCH for-next 07/16] xen/arm: Introduce copy_to_guest_phys_flush_dcache
  2017-12-08 22:43                 ` Stefano Stabellini
@ 2017-12-12  0:16                   ` Julien Grall
  2017-12-12  0:28                     ` Stefano Stabellini
  0 siblings, 1 reply; 49+ messages in thread
From: Julien Grall @ 2017-12-12  0:16 UTC (permalink / raw)
  To: Stefano Stabellini; +Cc: Andrew Cooper, andre.przywara, Xen-devel

Hi Stefano,

On 12/08/2017 10:43 PM, Stefano Stabellini wrote:
> On Fri, 8 Dec 2017, Julien Grall wrote:
>> On 8 Dec 2017 22:26, "Stefano Stabellini" <sstabellini@kernel.org> wrote:
>>        On Fri, 8 Dec 2017, Julien Grall wrote:
>>        > On 06/12/17 12:27, Julien Grall wrote:
>>        > > On 12/06/2017 01:26 AM, Stefano Stabellini wrote:
>>        > > > On Thu, 23 Nov 2017, Julien Grall wrote:
>>        > > > > Hi Andrew,
>>        > > > >
>>        > > > > On 23/11/17 18:49, Andrew Cooper wrote:
>>        > > > > > On 23/11/17 18:32, Julien Grall wrote:
>>        > > > > > > This new function will be used in a follow-up patch to copy data to
>>        > > > > > > the
>>        > > > > > > guest
>>        > > > > > > using the IPA (aka guest physical address) and then clean the cache.
>>        > > > > > >
>>        > > > > > > Signed-off-by: Julien Grall <julien.grall@linaro.org>
>>        > > > > > > ---
>>        > > > > > >    xen/arch/arm/guestcopy.c           | 10 ++++++++++
>>        > > > > > >    xen/include/asm-arm/guest_access.h |  6 ++++++
>>        > > > > > >    2 files changed, 16 insertions(+)
>>        > > > > > >
>>        > > > > > > diff --git a/xen/arch/arm/guestcopy.c b/xen/arch/arm/guestcopy.c
>>        > > > > > > index be53bee559..7958663970 100644
>>        > > > > > > --- a/xen/arch/arm/guestcopy.c
>>        > > > > > > +++ b/xen/arch/arm/guestcopy.c
>>        > > > > > > @@ -110,6 +110,16 @@ unsigned long raw_copy_from_guest(void *to,
>>        > > > > > > const
>>        > > > > > > void __user *from, unsigned le
>>        > > > > > >                          COPY_from_guest | COPY_linear);
>>        > > > > > >    }
>>        > > > > > >    +unsigned long copy_to_guest_phys_flush_dcache(struct domain *d,
>>        > > > > > > +                                              paddr_t gpa,
>>        > > > > > > +                                              void *buf,
>>        > > > > > > +                                              unsigned int len)
>>        > > > > > > +{
>>        > > > > > > +    /* P2M is shared between all vCPUs, so the vCPU used does not
>>        > > > > > > matter.
>>        > > > > > > */
>>        > > > > >
>>        > > > > > Be very careful with this line of thinking.  It only works after
>>        > > > > > DOMCTL_max_vcpus has succeeded, and before that point, it is a latent
>>        > > > > > NULL pointer dereference.
>>        > > > >
>>        > > > > I really don't expect that function to be used before DOMCTL_max_vcpus is
>>        > > > > set.
>>        > > > > It is only used for hardware emulation or Xen loading an image into the
>>        > > > > hardware
>>        > > > > domain memory. I could add a check on d->vcpus to be safe.
>>        > > > >
>>        > > > > >
>>        > > > > > Also, what about vcpus configured with alternative views?
>>        > > > >
>>        > > > > It is not important because the underlying call is get_page_from_gfn
>>        > > > > that does
>>        > > > > not care about the alternative view (that function takes a domain as
>>        > > > > parameter). I can update the comment.
>>        > > > Since this is a new function, would it make sense to take a struct
>>        > > > vcpu* as parameter, instead of a struct domain* ?
>>        > >
>>        > > Well, I suggested this patch this way because everyone will likely use it with
>>        > > d->vcpus[0]. And then you would have to wonder why d->vcpus[0] and not
>>        > > d->vcpus[1]...
>>        >
>>        > Thinking a bit more about this, it might be better/safer to pass either a domain
>>        > or a vCPU to copy_guest. I can see 2 solutions:
>>        >       1# Introduce a union that uses the same parameter:
>>        >               union
>>        >               {
>>        >                       struct
>>        >                       {
>>        >                               struct domain *d;
>>        >                       } ipa;
>>        >                       struct
>>        >                       {
>>        >                               struct vcpu *v;
>>        >                       } gva;
>>        >               }
>>        >         The structure here would ensure that it is clear that only a
>>        > domain (resp. vcpu) should be used with ipa (resp. gva).
>>        >
>>        >       2# Have 2 parameters, vcpu and domain.
>>        >
>>        > Any opinions?
>>
>> I think that would be clearer. You could also add a paddr_t/vaddr_t
>> correspondingly inside the union maybe.
>>
>>
>> Well, you will have a name clash happening, I think.
>>
>>
>> And vaddr_t and paddr_t are confusing because you don't know which address you are speaking about (guest vs hypervisor). At least ipa/gpa and gva are well-known names.
> 
> That's not what I meant. ipa and gva are good names.
> 
> I was suggesting to put an additional address field inside the union to
> avoid the issue with paddr_t and vaddr_t discussed elsewhere
> (alpine.DEB.2.10.1712081313070.8052@sstabellini-ThinkPad-X260).
> 
> I am happy either way, it was just a suggestion.

Actually, looking at it, it will not be that handy to have the 
paddr_t/vaddr_t inside the union, mostly because of the common code 
using the address parameter.

I could add another union for the address, but I don't much like it.
As you are happy either way, I will use uint64_t.
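
For readers following along, here is a rough, self-contained sketch of what option 1# combined with a plain uint64_t address could look like. All type, field, and function names below are illustrative stand-ins, not taken from the actual Xen patch, and the translation logic is stubbed out:

```c
#include <assert.h>
#include <stdint.h>

/* Stubbed stand-ins for the real Xen structures, for illustration only. */
struct domain { int domain_id; };
struct vcpu   { struct domain *domain; };

#define COPY_ipa    (1u << 0)   /* address is a guest physical address (IPA) */
#define COPY_linear (1u << 1)   /* address is a guest virtual address (GVA)  */

/*
 * Option 1# from the thread: a single union parameter carrying either a
 * domain (for IPA accesses) or a vCPU (for GVA accesses).  The inner
 * structs make it obvious which member goes with which flag.
 */
typedef union
{
    struct { struct domain *d; } ipa;
    struct { struct vcpu *v;   } gva;
} copy_info_t;

/*
 * The address stays a plain uint64_t rather than living in the union:
 * paddr_t/vaddr_t would be ambiguous (guest vs hypervisor address), and
 * the common code wants a single address parameter.
 * Returns 0 on success, or the number of bytes left uncopied.
 */
static unsigned long copy_guest(void *buf, uint64_t addr, unsigned int len,
                                copy_info_t info, unsigned int flags)
{
    (void)buf;
    (void)addr;

    if ( flags & COPY_ipa )
        /* Would translate addr through the domain's (shared) P2M. */
        return info.ipa.d ? 0 : len;

    /* Would walk stage-1 using this particular vCPU's context. */
    return info.gva.v ? 0 : len;
}

/* A typed wrapper makes it hard to pass the wrong union member. */
static unsigned long copy_to_guest_phys(struct domain *d, uint64_t gpa,
                                        void *buf, unsigned int len)
{
    copy_info_t info = { .ipa.d = d };
    return copy_guest(buf, gpa, len, info, COPY_ipa);
}
```

The wrapper is the design point being debated above: callers doing physical-address copies pass a domain and never see a vCPU, so nobody has to wonder why d->vcpus[0] rather than d->vcpus[1].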

Cheers,

-- 
Julien Grall





* Re: [PATCH for-next 07/16] xen/arm: Introduce copy_to_guest_phys_flush_dcache
  2017-12-12  0:16                   ` Julien Grall
@ 2017-12-12  0:28                     ` Stefano Stabellini
  0 siblings, 0 replies; 49+ messages in thread
From: Stefano Stabellini @ 2017-12-12  0:28 UTC (permalink / raw)
  To: Julien Grall; +Cc: Andrew Cooper, Stefano Stabellini, andre.przywara, Xen-devel


On Tue, 12 Dec 2017, Julien Grall wrote:
> Hi Stefano,
> 
> On 12/08/2017 10:43 PM, Stefano Stabellini wrote:
> > On Fri, 8 Dec 2017, Julien Grall wrote:
> > > On 8 Dec 2017 22:26, "Stefano Stabellini" <sstabellini@kernel.org> wrote:
> > >        On Fri, 8 Dec 2017, Julien Grall wrote:
> > >        > On 06/12/17 12:27, Julien Grall wrote:
> > >        > > On 12/06/2017 01:26 AM, Stefano Stabellini wrote:
> > >        > > > On Thu, 23 Nov 2017, Julien Grall wrote:
> > >        > > > > Hi Andrew,
> > >        > > > >
> > >        > > > > On 23/11/17 18:49, Andrew Cooper wrote:
> > >        > > > > > On 23/11/17 18:32, Julien Grall wrote:
> > >        > > > > > > This new function will be used in a follow-up patch to
> > > copy data to
> > >        > > > > > > the
> > >        > > > > > > guest
> > >        > > > > > > using the IPA (aka guest physical address) and then
> > > clean the cache.
> > >        > > > > > >
> > >        > > > > > > Signed-off-by: Julien Grall <julien.grall@linaro.org>
> > >        > > > > > > ---
> > >        > > > > > >    xen/arch/arm/guestcopy.c           | 10 ++++++++++
> > >        > > > > > >    xen/include/asm-arm/guest_access.h |  6 ++++++
> > >        > > > > > >    2 files changed, 16 insertions(+)
> > >        > > > > > >
> > >        > > > > > > diff --git a/xen/arch/arm/guestcopy.c
> > > b/xen/arch/arm/guestcopy.c
> > >        > > > > > > index be53bee559..7958663970 100644
> > >        > > > > > > --- a/xen/arch/arm/guestcopy.c
> > >        > > > > > > +++ b/xen/arch/arm/guestcopy.c
> > >        > > > > > > @@ -110,6 +110,16 @@ unsigned long
> > > raw_copy_from_guest(void *to,
> > >        > > > > > > const
> > >        > > > > > > void __user *from, unsigned le
> > >        > > > > > >                          COPY_from_guest |
> > > COPY_linear);
> > >        > > > > > >    }
> > >        > > > > > >    +unsigned long
> > > copy_to_guest_phys_flush_dcache(struct domain *d,
> > >        > > > > > > +                                              paddr_t
> > > gpa,
> > >        > > > > > > +                                              void
> > > *buf,
> > >        > > > > > > +                                              unsigned
> > > int len)
> > >        > > > > > > +{
> > >        > > > > > > +    /* P2M is shared between all vCPUs, so the vCPU
> > > used does not
> > >        > > > > > > matter.
> > >        > > > > > > */
> > >        > > > > >
> > >        > > > > > Be very careful with this line of thinking.  It only
> > > works after
> > >        > > > > > DOMCTL_max_vcpus has succeeded, and before that point, it
> > > is a latent
> > >        > > > > > NULL pointer dereference.
> > >        > > > >
> > >        > > > > I really don't expect that function to be used before
> > > DOMCTL_max_vcpus is
> > >        > > > > set.
> > >        > > > > It is only used for hardware emulation or Xen loading an image
> > > into the
> > >        > > > > hardware
> > >        > > > > domain memory. I could add a check on d->vcpus to be safe.
> > >        > > > >
> > >        > > > > >
> > >        > > > > > Also, what about vcpus configured with alternative views?
> > >        > > > >
> > >        > > > > It is not important because the underlying call is
> > > get_page_from_gfn
> > >        > > > > that does
> > >        > > > > not care about the alternative view (that function takes a
> > > domain as
> > >        > > > > parameter). I can update the comment.
> > >        > > > Since this is a new function, would it make sense to take a
> > > struct
> > >        > > > vcpu* as parameter, instead of a struct domain* ?
> > >        > >
> > >        > > Well, I suggested this patch this way because everyone
> > > will likely use it with
> > >        > > d->vcpus[0]. And then you would have to wonder why d->vcpus[0]
> > > and not
> > >        > > d->vcpus[1]...
> > >        >
> > >        > Thinking a bit more about this, it might be better/safer to pass
> > > either a domain
> > >        > or a vCPU to copy_guest. I can see 2 solutions:
> > >        >       1# Introduce a union that uses the same parameter:
> > >        >               union
> > >        >               {
> > >        >                       struct
> > >        >                       {
> > >        >                               struct domain *d;
> > >        >                       } ipa;
> > >        >                       struct
> > >        >                       {
> > >        >                               struct vcpu *v;
> > >        >                       } gva;
> > >        >               }
> > >        >         The structure here would ensure that it is clear
> > > that only a
> > >        > domain (resp. vcpu) should be used with ipa (resp. gva).
> > >        >
> > >        >       2# Have 2 parameters, vcpu and domain.
> > >        >
> > >        > Any opinions?
> > > 
> > > I think that would be clearer. You could also add a paddr_t/vaddr_t
> > > correspondingly inside the union maybe.
> > > 
> > > 
> > > Well, you will have a name clash happening, I think.
> > > 
> > > 
> > > And vaddr_t and paddr_t are confusing because you don't know which address
> > > you are speaking about (guest vs hypervisor). At least ipa/gpa and gva are
> > > well-known names.
> > 
> > That's not what I meant. ipa and gva are good names.
> > 
> > I was suggesting to put an additional address field inside the union to
> > avoid the issue with paddr_t and vaddr_t discussed elsewhere
> > (alpine.DEB.2.10.1712081313070.8052@sstabellini-ThinkPad-X260).
> > 
> > I am happy either way, it was just a suggestion.
> 
> Actually, looking at it, it will not be that handy to have the paddr_t/vaddr_t
> inside the union, mostly because of the common code using the address
> parameter.
> 
> I could add another union for the address, but I don't much like it.
> As you are happy either way, I will use uint64_t.
 
Sounds good




end of thread

Thread overview: 49+ messages
2017-11-23 18:31 [PATCH for-next 00/16] xen/arm: Stage-2 handling cleanup Julien Grall
2017-11-23 18:31 ` [PATCH for-next 01/16] xen/arm: raw_copy_to_guest_helper: Rename flush_dcache to flags Julien Grall
2017-12-06  0:51   ` Stefano Stabellini
2017-11-23 18:31 ` [PATCH for-next 02/16] xen/arm: raw_copy_to_guest_helper: Rework the prototype and rename it Julien Grall
2017-12-06  1:04   ` Stefano Stabellini
2017-11-23 18:31 ` [PATCH for-next 03/16] xen/arm: Extend copy_to_guest to support copying from guest VA and use it Julien Grall
2017-12-06  1:05   ` Stefano Stabellini
2017-11-23 18:31 ` [PATCH for-next 04/16] xen/arm: Extend copy_to_guest to support zeroing " Julien Grall
2017-12-06  1:08   ` Stefano Stabellini
2017-11-23 18:31 ` [PATCH for-next 05/16] xen/arm: guest_copy: Extend the prototype to pass the vCPU Julien Grall
2017-12-06  1:11   ` Stefano Stabellini
2017-11-23 18:32 ` [PATCH for-next 06/16] xen/arm: Extend copy_to_guest to support copying from/to guest physical address Julien Grall
2017-12-06  1:22   ` Stefano Stabellini
2017-12-06  1:24     ` Stefano Stabellini
2017-12-06 12:22     ` Julien Grall
2017-12-07 23:01       ` Stefano Stabellini
2017-12-08 15:24         ` Julien Grall
2017-12-08 21:13           ` Stefano Stabellini
2017-11-23 18:32 ` [PATCH for-next 07/16] xen/arm: Introduce copy_to_guest_phys_flush_dcache Julien Grall
2017-11-23 18:49   ` Andrew Cooper
2017-11-23 19:02     ` Julien Grall
2017-12-06  1:26       ` Stefano Stabellini
2017-12-06 12:27         ` Julien Grall
2017-12-08 15:34           ` Julien Grall
2017-12-08 22:26             ` Stefano Stabellini
2017-12-08 22:30               ` Julien Grall
2017-12-08 22:43                 ` Stefano Stabellini
2017-12-12  0:16                   ` Julien Grall
2017-12-12  0:28                     ` Stefano Stabellini
2017-11-23 18:32 ` [PATCH for-next 08/16] xen/arm: kernel: Rework kernel_zimage_load to use the generic copy helper Julien Grall
2017-12-06  1:38   ` Stefano Stabellini
2017-12-06  1:42     ` Stefano Stabellini
2017-11-23 18:32 ` [PATCH for-next 09/16] xen/arm: domain_build: Rework initrd_load " Julien Grall
2017-12-06  1:56   ` Stefano Stabellini
2017-11-23 18:32 ` [PATCH for-next 10/16] xen/arm: domain_build: Use copy_to_guest_phys_flush_dcache in dtb_load Julien Grall
2017-12-06  1:59   ` Stefano Stabellini
2017-11-23 18:32 ` [PATCH for-next 11/16] xen/arm: p2m: Rename p2m_flush_tlb and p2m_flush_tlb_sync Julien Grall
2017-12-07 22:10   ` Stefano Stabellini
2017-11-23 18:32 ` [PATCH for-next 12/16] xen/arm: p2m: Introduce p2m_tlb_flush_sync, export it and use it Julien Grall
2017-12-07 22:14   ` Stefano Stabellini
2017-11-23 18:32 ` [PATCH for-next 13/16] xen/arm: p2m: Fold p2m_tlb_flush into p2m_force_tlb_flush_sync Julien Grall
2017-12-07 22:26   ` Stefano Stabellini
2017-11-23 18:32 ` [PATCH for-next 14/16] xen/arm: traps: Remove the field gva from mmio_info_t Julien Grall
2017-12-07 22:29   ` Stefano Stabellini
2017-11-23 18:32 ` [PATCH for-next 15/16] xen/arm: traps: Move the definition of mmio_info_t in try_handle_mmio Julien Grall
2017-12-07 22:43   ` Stefano Stabellini
2017-11-23 18:32 ` [PATCH for-next 16/16] xen/arm: traps: Merge do_trap_instr_abort_guest and do_trap_data_abort_guest Julien Grall
2017-12-07 22:43   ` Stefano Stabellini
2017-12-07 22:57     ` Julien Grall
