* [RFC PATCH 0/4] arm/mem_access: Walk guest page tables in SW if mem_access is active
@ 2017-04-30 19:48 Sergej Proskurin
  2017-04-30 19:48 ` [RFC PATCH 1/4] arm/mem_access: Move TTBCR_SZ_MASK to processor.h Sergej Proskurin
                   ` (3 more replies)
  0 siblings, 4 replies; 27+ messages in thread
From: Sergej Proskurin @ 2017-04-30 19:48 UTC (permalink / raw)
  To: xen-devel; +Cc: Sergej Proskurin

Hi all,

The function p2m_mem_access_check_and_get_page is called from the function
get_page_from_gva if mem_access is active and the hardware-assisted
translation of the given guest virtual address (gva) into a machine address
fails. That is, if the stage-2 translation tables restrict access to the
guest's page tables, hardware-assisted translation will fail. The idea of
p2m_mem_access_check_and_get_page is thus to translate the given gva and
check the requested access rights in software. However, as the current
implementation of p2m_mem_access_check_and_get_page itself relies on the
hardware-assisted gva to ipa translation, that translation can fail for the
same reason; the issue will become equally relevant for the altp2m
implementation on ARM. As such, we provide a software guest translation
table walk to address the above mentioned issue. We submit this patch
series as an RFC to discuss the appropriate location for the code and the
further functionality required to fix the above concerns.

Please note: The current implementation supports 64-bit and 32-bit guest
domains that use the long-descriptor translation table format. A software
page table walk for 32-bit guests using the short-descriptor translation
table format is not yet supported.

The following patch series can be found on Github[0].

Cheers,
~Sergej

[0] https://github.com/sergej-proskurin/xen (branch arm-gpt-walk-v1)

Sergej Proskurin (4):
  arm/mem_access: Move TTBCR_SZ_MASK to processor.h
  arm/mem_access: Change value of TTBCR_SZ_MASK
  arm/mem_access: Add further TCR_EL1/TTBCR defines
  arm/mem_access: Add software guest-page-table walk

 xen/arch/arm/mem_access.c          | 140 ++++++++++++++++++++++++++++++++++++-
 xen/drivers/passthrough/arm/smmu.c |   1 -
 xen/include/asm-arm/processor.h    |   6 ++
 3 files changed, 145 insertions(+), 2 deletions(-)

-- 
2.12.2


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 27+ messages in thread

* [RFC PATCH 1/4] arm/mem_access: Move TTBCR_SZ_MASK to processor.h
  2017-04-30 19:48 [RFC PATCH 0/4] arm/mem_access: Walk guest page tables in SW if mem_access is active Sergej Proskurin
@ 2017-04-30 19:48 ` Sergej Proskurin
  2017-05-02 11:47   ` Julien Grall
  2017-04-30 19:48 ` [RFC PATCH 2/4] arm/mem_access: Change value of TTBCR_SZ_MASK Sergej Proskurin
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 27+ messages in thread
From: Sergej Proskurin @ 2017-04-30 19:48 UTC (permalink / raw)
  To: xen-devel; +Cc: Sergej Proskurin, Julien Grall, Stefano Stabellini

This commit moves the define TTBCR_SZ_MASK from smmu.c to processor.h as
it will be additionally used in mem_access.c as shown in one of the
following commits.

Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
---
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Julien Grall <julien.grall@arm.com>
---
 xen/drivers/passthrough/arm/smmu.c | 1 -
 xen/include/asm-arm/processor.h    | 2 ++
 2 files changed, 2 insertions(+), 1 deletion(-)

diff --git a/xen/drivers/passthrough/arm/smmu.c b/xen/drivers/passthrough/arm/smmu.c
index 1082fcf8ab..e2f86e917e 100644
--- a/xen/drivers/passthrough/arm/smmu.c
+++ b/xen/drivers/passthrough/arm/smmu.c
@@ -535,7 +535,6 @@ static struct iommu_group *iommu_group_get(struct device *dev)
 
 #define TTBCR_T1SZ_SHIFT		16
 #define TTBCR_T0SZ_SHIFT		0
-#define TTBCR_SZ_MASK			0xf
 
 #define TTBCR2_SEP_SHIFT		15
 #define TTBCR2_SEP_MASK			0x7
diff --git a/xen/include/asm-arm/processor.h b/xen/include/asm-arm/processor.h
index 855ded1b07..4fdf86070b 100644
--- a/xen/include/asm-arm/processor.h
+++ b/xen/include/asm-arm/processor.h
@@ -94,6 +94,8 @@
 #define TTBCR_N_2KB  _AC(0x03,U)
 #define TTBCR_N_1KB  _AC(0x04,U)
 
+#define TTBCR_SZ_MASK   0xf
+
 /* SCTLR System Control Register. */
 /* HSCTLR is a subset of this. */
 #define SCTLR_TE        (_AC(1,U)<<30)
-- 
2.12.2



* [RFC PATCH 2/4] arm/mem_access: Change value of TTBCR_SZ_MASK
  2017-04-30 19:48 [RFC PATCH 0/4] arm/mem_access: Walk guest page tables in SW if mem_access is active Sergej Proskurin
  2017-04-30 19:48 ` [RFC PATCH 1/4] arm/mem_access: Move TTBCR_SZ_MASK to processor.h Sergej Proskurin
@ 2017-04-30 19:48 ` Sergej Proskurin
  2017-05-02 11:56   ` Julien Grall
  2017-04-30 19:48 ` [RFC PATCH 3/4] arm/mem_access: Add further TCR_EL1/TTBCR defines Sergej Proskurin
  2017-04-30 19:48 ` [RFC PATCH 4/4] arm/mem_access: Add software guest-page-table walk Sergej Proskurin
  3 siblings, 1 reply; 27+ messages in thread
From: Sergej Proskurin @ 2017-04-30 19:48 UTC (permalink / raw)
  To: xen-devel; +Cc: Sergej Proskurin, Julien Grall, Stefano Stabellini

The TTBCR_SZ holds only 3 bits and thus must be masked with the value
0x7 instead of the previously used value 0xf.

Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
---
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Julien Grall <julien.grall@arm.com>
---
 xen/include/asm-arm/processor.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/include/asm-arm/processor.h b/xen/include/asm-arm/processor.h
index 4fdf86070b..c8b8cff311 100644
--- a/xen/include/asm-arm/processor.h
+++ b/xen/include/asm-arm/processor.h
@@ -94,7 +94,7 @@
 #define TTBCR_N_2KB  _AC(0x03,U)
 #define TTBCR_N_1KB  _AC(0x04,U)
 
-#define TTBCR_SZ_MASK   0xf
+#define TTBCR_SZ_MASK   _AC(0x7,UL)
 
 /* SCTLR System Control Register. */
 /* HSCTLR is a subset of this. */
-- 
2.12.2



* [RFC PATCH 3/4] arm/mem_access: Add further TCR_EL1/TTBCR defines
  2017-04-30 19:48 [RFC PATCH 0/4] arm/mem_access: Walk guest page tables in SW if mem_access is active Sergej Proskurin
  2017-04-30 19:48 ` [RFC PATCH 1/4] arm/mem_access: Move TTBCR_SZ_MASK to processor.h Sergej Proskurin
  2017-04-30 19:48 ` [RFC PATCH 2/4] arm/mem_access: Change value of TTBCR_SZ_MASK Sergej Proskurin
@ 2017-04-30 19:48 ` Sergej Proskurin
  2017-05-02 12:01   ` Julien Grall
  2017-04-30 19:48 ` [RFC PATCH 4/4] arm/mem_access: Add software guest-page-table walk Sergej Proskurin
  3 siblings, 1 reply; 27+ messages in thread
From: Sergej Proskurin @ 2017-04-30 19:48 UTC (permalink / raw)
  To: xen-devel; +Cc: Sergej Proskurin, Julien Grall, Stefano Stabellini

This commit adds further TCR_EL1/TTBCR defines to simplify access to the
respective register contents.

Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
---
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Julien Grall <julien.grall@arm.com>
---
 xen/include/asm-arm/processor.h | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/xen/include/asm-arm/processor.h b/xen/include/asm-arm/processor.h
index c8b8cff311..8cf442c73c 100644
--- a/xen/include/asm-arm/processor.h
+++ b/xen/include/asm-arm/processor.h
@@ -157,6 +157,8 @@
 /* TCR: Stage 1 Translation Control */
 
 #define TCR_T0SZ(x)     ((x)<<0)
+#define TCR_T0SZ_SHIFT  0
+#define TCR_T1SZ_SHIFT  16
 
 #define TCR_IRGN0_NC    (_AC(0x0,UL)<<8)
 #define TCR_IRGN0_WBWA  (_AC(0x1,UL)<<8)
@@ -183,6 +185,8 @@
 
 #define TCR_RES1        (_AC(1,UL)<<31|_AC(1,UL)<<23)
 
+#define TCR_SZ_MASK     (_AC(0x3f,UL)<<0)
+
 #else
 
 #define TCR_RES1        (_AC(1,UL)<<31)
-- 
2.12.2



* [RFC PATCH 4/4] arm/mem_access: Add software guest-page-table walk
  2017-04-30 19:48 [RFC PATCH 0/4] arm/mem_access: Walk guest page tables in SW if mem_access is active Sergej Proskurin
                   ` (2 preceding siblings ...)
  2017-04-30 19:48 ` [RFC PATCH 3/4] arm/mem_access: Add further TCR_EL1/TTBCR defines Sergej Proskurin
@ 2017-04-30 19:48 ` Sergej Proskurin
  2017-05-01 14:38   ` Razvan Cojocaru
  2017-05-02 15:17   ` Julien Grall
  3 siblings, 2 replies; 27+ messages in thread
From: Sergej Proskurin @ 2017-04-30 19:48 UTC (permalink / raw)
  To: xen-devel
  Cc: Sergej Proskurin, Julien Grall, Tamas K Lengyel,
	Stefano Stabellini, Razvan Cojocaru

The function p2m_mem_access_check_and_get_page in mem_access.c
translates a gva to an ipa by means of the hardware functionality
implemented in the function gva_to_ipa. If mem_access is active,
hardware-based gva to ipa translation might fail, as gva_to_ipa uses the
guest's translation tables, access to which might be restricted by the
active VTTBR. To address this issue, we perform the gva to ipa
translation in software.

Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
---
Cc: Razvan Cojocaru <rcojocaru@bitdefender.com>
Cc: Tamas K Lengyel <tamas@tklengyel.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Julien Grall <julien.grall@arm.com>
---
 xen/arch/arm/mem_access.c | 140 +++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 139 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/mem_access.c b/xen/arch/arm/mem_access.c
index 04b1506b00..352eb6073f 100644
--- a/xen/arch/arm/mem_access.c
+++ b/xen/arch/arm/mem_access.c
@@ -20,6 +20,7 @@
 #include <xen/monitor.h>
 #include <xen/sched.h>
 #include <xen/vm_event.h>
+#include <xen/domain_page.h>
 #include <public/vm_event.h>
 #include <asm/event.h>
 
@@ -90,6 +91,129 @@ static int __p2m_get_mem_access(struct domain *d, gfn_t gfn,
     return 0;
 }
 
+static int
+p2m_gva_to_ipa(struct p2m_domain *p2m, vaddr_t gva,
+               paddr_t *ipa, unsigned int flags)
+{
+    int level=0, t0_sz, t1_sz;
+    unsigned long t0_max, t1_min;
+    lpae_t pte, *table;
+    mfn_t root_mfn;
+    uint64_t ttbr;
+    uint32_t sctlr = READ_SYSREG(SCTLR_EL1);
+    register_t ttbcr = READ_SYSREG(TCR_EL1);
+    struct domain *d = p2m->domain;
+
+    const unsigned int offsets[4] = {
+#ifdef CONFIG_ARM_64
+        zeroeth_table_offset(gva),
+#endif
+        first_table_offset(gva),
+        second_table_offset(gva),
+        third_table_offset(gva)
+    };
+
+    const paddr_t masks[4] = {
+#ifdef CONFIG_ARM_64
+        ZEROETH_SIZE - 1,
+#endif
+        FIRST_SIZE - 1,
+        SECOND_SIZE - 1,
+        THIRD_SIZE - 1
+    };
+
+    /* If the MMU is disabled, there is no need to translate the gva. */
+    if ( !(sctlr & SCTLR_M) )
+    {
+        *ipa = gva;
+
+        return 0;
+    }
+
+    if ( is_32bit_domain(d) )
+    {
+        /*
+         * XXX: We do not support 32-bit domain translation table walks for
+         * domains using the short-descriptor translation table format, yet.
+         */
+        if ( !(ttbcr & TTBCR_EAE) )
+            return -EFAULT;
+
+#ifdef CONFIG_ARM_64
+        level = 1;
+#endif
+    }
+
+#ifdef CONFIG_ARM_64
+    if ( is_64bit_domain(d) )
+    {
+        /* Get the max GVA that can be translated by TTBR0. */
+        t0_sz = (ttbcr >> TCR_T0SZ_SHIFT) & TCR_SZ_MASK;
+        t0_max = (1UL << (64 - t0_sz)) - 1;
+
+        /* Get the min GVA that can be translated by TTBR1. */
+        t1_sz = (ttbcr >> TCR_T1SZ_SHIFT) & TCR_SZ_MASK;
+        t1_min = ~0UL - (1UL << (64 - t1_sz)) + 1;
+    }
+    else
+#endif
+    {
+        /* Get the max GVA that can be translated by TTBR0. */
+        t0_sz = (ttbcr >> TCR_T0SZ_SHIFT) & TTBCR_SZ_MASK;
+        t0_max = (1U << (32 - t0_sz)) - 1;
+
+        /* Get the min GVA that can be translated by TTBR1. */
+        t1_sz = (ttbcr >> TCR_T1SZ_SHIFT) & TTBCR_SZ_MASK;
+        t1_min = ~0U - (1U << (32 - t1_sz)) + 1;
+    }
+
+    if ( t0_max >= gva )
+        /* Use TTBR0 for GVA to IPA translation. */
+        ttbr = READ_SYSREG64(TTBR0_EL1);
+    else if ( t1_min <= gva )
+        /* Use TTBR1 for GVA to IPA translation. */
+        ttbr = READ_SYSREG64(TTBR1_EL1);
+    else
+        /* GVA out of bounds of TTBR(0|1). */
+        return -EFAULT;
+
+    /* Bits [63..48] might be used by an ASID. */
+    root_mfn = p2m_lookup(d, _gfn(paddr_to_pfn(ttbr & ((1ULL<<48)-1))), NULL);
+
+    /* Check, whether TTBR holds a valid address. */
+    if ( mfn_eq(root_mfn, INVALID_MFN) )
+        return -EFAULT;
+
+    table = map_domain_page(root_mfn);
+
+    for ( ; ; level++ )
+    {
+        pte = table[offsets[level]];
+
+        if ( level == 3 || !pte.walk.valid || !pte.walk.table )
+            break;
+
+        unmap_domain_page(table);
+
+        root_mfn = p2m_lookup(d, _gfn(pte.walk.base), NULL);
+        table = map_domain_page(root_mfn);
+    }
+
+    unmap_domain_page(table);
+
+    if ( !pte.walk.valid )
+        return -EFAULT;
+
+    /* Make sure the entry holds the requested access attributes. */
+    if ( ((flags & GV2M_WRITE) == GV2M_WRITE) && pte.pt.ro )
+        return -EFAULT;
+
+    *ipa = pfn_to_paddr(pte.walk.base) | (gva & masks[level]);
+
+    return 0;
+}
+
+
 /*
  * If mem_access is in use it might have been the reason why get_page_from_gva
  * failed to fetch the page, as it uses the MMU for the permission checking.
@@ -109,9 +233,23 @@ p2m_mem_access_check_and_get_page(vaddr_t gva, unsigned long flag,
     struct page_info *page = NULL;
     struct p2m_domain *p2m = &v->domain->arch.p2m;
 
+    ASSERT(p2m->mem_access_enabled);
+
     rc = gva_to_ipa(gva, &ipa, flag);
+
+    /*
+     * In case mem_access is active, hardware-based gva_to_ipa translation
+     * might fail. Since gva_to_ipa uses the guest's translation tables, access
+     * to which might be restricted by the active VTTBR, we perform a gva to
+     * ipa translation in software.
+     */
     if ( rc < 0 )
-        goto err;
+        if ( p2m_gva_to_ipa(p2m, gva, &ipa, flag) < 0 )
+            /*
+             * The software gva to ipa translation can still fail if the
+             * gva is not mapped or does not hold the requested access rights.
+             */
+            goto err;
 
     gfn = _gfn(paddr_to_pfn(ipa));
 
-- 
2.12.2



* Re: [RFC PATCH 4/4] arm/mem_access: Add software guest-page-table walk
  2017-04-30 19:48 ` [RFC PATCH 4/4] arm/mem_access: Add software guest-page-table walk Sergej Proskurin
@ 2017-05-01 14:38   ` Razvan Cojocaru
  2017-05-01 15:39     ` Sergej Proskurin
  2017-05-02 15:17   ` Julien Grall
  1 sibling, 1 reply; 27+ messages in thread
From: Razvan Cojocaru @ 2017-05-01 14:38 UTC (permalink / raw)
  To: Sergej Proskurin, xen-devel
  Cc: Julien Grall, Tamas K Lengyel, Stefano Stabellini

On 04/30/2017 10:48 PM, Sergej Proskurin wrote:
> The function p2m_mem_access_check_and_get_page in mem_access.c
> translates a gva to an ipa by means of the hardware functionality
> implemented in the function gva_to_ipa. If mem_access is active,
> hardware-based gva to ipa translation might fail, as gva_to_ipa uses the
> guest's translation tables, access to which might be restricted by the
> active VTTBR. To address this issue, we perform the gva to ipa
> translation in software.
> 
> Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
> ---
> Cc: Razvan Cojocaru <rcojocaru@bitdefender.com>
> Cc: Tamas K Lengyel <tamas@tklengyel.com>
> Cc: Stefano Stabellini <sstabellini@kernel.org>
> Cc: Julien Grall <julien.grall@arm.com>
> ---
>  xen/arch/arm/mem_access.c | 140 +++++++++++++++++++++++++++++++++++++++++++++-
>  1 file changed, 139 insertions(+), 1 deletion(-)

My ARM knowledge is scant to say the least, and I have no way to test
this code, so I'll leave it to Tamas who has done some ARM work in the
past. In any case - to state the obvious - the main acks here I believe
are Julien and Stefano.


Thanks,
Razvan


* Re: [RFC PATCH 4/4] arm/mem_access: Add software guest-page-table walk
  2017-05-01 14:38   ` Razvan Cojocaru
@ 2017-05-01 15:39     ` Sergej Proskurin
  0 siblings, 0 replies; 27+ messages in thread
From: Sergej Proskurin @ 2017-05-01 15:39 UTC (permalink / raw)
  To: Razvan Cojocaru, xen-devel
  Cc: Julien Grall, Tamas K Lengyel, Stefano Stabellini

Hi Razvan,

Sure thing. Thanks anyway :)

Cheers,

~Sergej


On 05/01/2017 04:38 PM, Razvan Cojocaru wrote:
> On 04/30/2017 10:48 PM, Sergej Proskurin wrote:
>> The function p2m_mem_access_check_and_get_page in mem_access.c
>> translates a gva to an ipa by means of the hardware functionality
>> implemented in the function gva_to_ipa. If mem_access is active,
>> hardware-based gva to ipa translation might fail, as gva_to_ipa uses the
>> guest's translation tables, access to which might be restricted by the
>> active VTTBR. To address this issue, we perform the gva to ipa
>> translation in software.
>>
>> Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
>> ---
>> Cc: Razvan Cojocaru <rcojocaru@bitdefender.com>
>> Cc: Tamas K Lengyel <tamas@tklengyel.com>
>> Cc: Stefano Stabellini <sstabellini@kernel.org>
>> Cc: Julien Grall <julien.grall@arm.com>
>> ---
>>  xen/arch/arm/mem_access.c | 140 +++++++++++++++++++++++++++++++++++++++++++++-
>>  1 file changed, 139 insertions(+), 1 deletion(-)
> My ARM knowledge is scant to say the least, and I have no way to test
> this code, so I'll leave it to Tamas who has done some ARM work in the
> past. In any case - to state the obvious - the main acks here I believe
> are Julien and Stefano.
>
>
> Thanks,
> Razvan

-- 
Sergej Proskurin, M.Sc.
Wissenschaftlicher Mitarbeiter

Technische Universität München
Fakultät für Informatik
Lehrstuhl für Sicherheit in der Informatik

Boltzmannstraße 3
85748 Garching (bei München)

Tel. +49 (0)89 289-18592
Fax +49 (0)89 289-18579

proskurin@sec.in.tum.de
www.sec.in.tum.de



* Re: [RFC PATCH 1/4] arm/mem_access: Move TTBCR_SZ_MASK to processor.h
  2017-04-30 19:48 ` [RFC PATCH 1/4] arm/mem_access: Move TTBCR_SZ_MASK to processor.h Sergej Proskurin
@ 2017-05-02 11:47   ` Julien Grall
  0 siblings, 0 replies; 27+ messages in thread
From: Julien Grall @ 2017-05-02 11:47 UTC (permalink / raw)
  To: Sergej Proskurin, xen-devel; +Cc: Stefano Stabellini

Hi Sergej,

On 30/04/17 20:48, Sergej Proskurin wrote:
> This commit moves the define TTBCR_SZ_MASK from smmu.c to processor.h as
> it will be additionally used in mem_access.c as shown in one of the
> following commits.

The SMMU driver has been imported from Linux, and so has TTBCR_SZ_MASK.
We want to keep the driver as close to Linux as possible, so I am not in
favor of this change.

If you need to re-define TTBCR_SZ_MASK, you can #undef it at the
beginning of the SMMU file (see how we handle SCTLR_AFE).

Cheers,

>
> Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
> ---
> Cc: Stefano Stabellini <sstabellini@kernel.org>
> Cc: Julien Grall <julien.grall@arm.com>
> ---
>  xen/drivers/passthrough/arm/smmu.c | 1 -
>  xen/include/asm-arm/processor.h    | 2 ++
>  2 files changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/xen/drivers/passthrough/arm/smmu.c b/xen/drivers/passthrough/arm/smmu.c
> index 1082fcf8ab..e2f86e917e 100644
> --- a/xen/drivers/passthrough/arm/smmu.c
> +++ b/xen/drivers/passthrough/arm/smmu.c
> @@ -535,7 +535,6 @@ static struct iommu_group *iommu_group_get(struct device *dev)
>
>  #define TTBCR_T1SZ_SHIFT		16
>  #define TTBCR_T0SZ_SHIFT		0
> -#define TTBCR_SZ_MASK			0xf
>
>  #define TTBCR2_SEP_SHIFT		15
>  #define TTBCR2_SEP_MASK			0x7
> diff --git a/xen/include/asm-arm/processor.h b/xen/include/asm-arm/processor.h
> index 855ded1b07..4fdf86070b 100644
> --- a/xen/include/asm-arm/processor.h
> +++ b/xen/include/asm-arm/processor.h
> @@ -94,6 +94,8 @@
>  #define TTBCR_N_2KB  _AC(0x03,U)
>  #define TTBCR_N_1KB  _AC(0x04,U)
>
> +#define TTBCR_SZ_MASK   0xf
> +
>  /* SCTLR System Control Register. */
>  /* HSCTLR is a subset of this. */
>  #define SCTLR_TE        (_AC(1,U)<<30)
>

-- 
Julien Grall


* Re: [RFC PATCH 2/4] arm/mem_access: Change value of TTBCR_SZ_MASK
  2017-04-30 19:48 ` [RFC PATCH 2/4] arm/mem_access: Change value of TTBCR_SZ_MASK Sergej Proskurin
@ 2017-05-02 11:56   ` Julien Grall
  2017-05-02 12:01     ` Julien Grall
  2017-05-08  6:25     ` Sergej Proskurin
  0 siblings, 2 replies; 27+ messages in thread
From: Julien Grall @ 2017-05-02 11:56 UTC (permalink / raw)
  To: Sergej Proskurin, xen-devel; +Cc: Stefano Stabellini

Hi Sergej,

On 30/04/17 20:48, Sergej Proskurin wrote:
> The TTBCR_SZ holds only 3 bits and thus must be masked with the value
> 0x7 instead of the previously used value 0xf.

Please quote the spec (paragraph + version) when you make such a change.

TTBCR_* flags are used for both TCR_EL1 (AArch64) and TTBCR (AArch32).
Looking at the spec (ARM DDI 0487A.k_iss10775), TCR_EL1.{T0SZ,T1SZ} is
encoded on 6 bits and TTBCR_EL1.{T0SZ,T1SZ} is encoded on 3 bits, with
the following 3 bits RES0.

So the mask here should be 0x3f.

Cheers,

>
> Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
> ---
> Cc: Stefano Stabellini <sstabellini@kernel.org>
> Cc: Julien Grall <julien.grall@arm.com>
> ---
>  xen/include/asm-arm/processor.h | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/xen/include/asm-arm/processor.h b/xen/include/asm-arm/processor.h
> index 4fdf86070b..c8b8cff311 100644
> --- a/xen/include/asm-arm/processor.h
> +++ b/xen/include/asm-arm/processor.h
> @@ -94,7 +94,7 @@
>  #define TTBCR_N_2KB  _AC(0x03,U)
>  #define TTBCR_N_1KB  _AC(0x04,U)
>
> -#define TTBCR_SZ_MASK   0xf
> +#define TTBCR_SZ_MASK   _AC(0x7,UL)
>
>  /* SCTLR System Control Register. */
>  /* HSCTLR is a subset of this. */
>

-- 
Julien Grall


* Re: [RFC PATCH 2/4] arm/mem_access: Change value of TTBCR_SZ_MASK
  2017-05-02 11:56   ` Julien Grall
@ 2017-05-02 12:01     ` Julien Grall
  2017-05-08  6:40       ` Sergej Proskurin
  2017-05-08  6:25     ` Sergej Proskurin
  1 sibling, 1 reply; 27+ messages in thread
From: Julien Grall @ 2017-05-02 12:01 UTC (permalink / raw)
  To: Sergej Proskurin, xen-devel; +Cc: Stefano Stabellini



On 02/05/17 12:56, Julien Grall wrote:
> Hi Sergej,
>
> On 30/04/17 20:48, Sergej Proskurin wrote:
>> The TTBCR_SZ holds only 3 bits and thus must be masked with the value
>> 0x7 instead of the previously used value 0xf.
>
> Please quote the spec (paragraph + version) when you make such a change.
>
> TTBCR_* flags are used for both TCR_EL1 (AArch64) and TTBCR (AArch32).
> Looking at the spec (ARM DDI 0487A.k_iss10775), TCR_EL1.{T0SZ,T1SZ} is
> encoded on 6 bits and TTBCR_EL1.{T0SZ,T1SZ} is encoded on 3 bits, with
> the following 3 bits RES0.
>
> So the mask here should be 0x3f.

Hmmm, I have just noticed that we use a mix of TTBCR_* and TCR_* in the
code. I would prefer that we use only TCR_* everywhere.

Cheers,

-- 
Julien Grall


* Re: [RFC PATCH 3/4] arm/mem_access: Add further TCR_EL1/TTBCR defines
  2017-04-30 19:48 ` [RFC PATCH 3/4] arm/mem_access: Add further TCR_EL1/TTBCR defines Sergej Proskurin
@ 2017-05-02 12:01   ` Julien Grall
  0 siblings, 0 replies; 27+ messages in thread
From: Julien Grall @ 2017-05-02 12:01 UTC (permalink / raw)
  To: Sergej Proskurin, xen-devel; +Cc: Stefano Stabellini

Hi Sergej,

On 30/04/17 20:48, Sergej Proskurin wrote:
> This commit adds further TCR_EL1/TTBCR defines to simplify access to the
> respective register contents.
>
> Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
> ---
> Cc: Stefano Stabellini <sstabellini@kernel.org>
> Cc: Julien Grall <julien.grall@arm.com>
> ---
>  xen/include/asm-arm/processor.h | 4 ++++
>  1 file changed, 4 insertions(+)
>
> diff --git a/xen/include/asm-arm/processor.h b/xen/include/asm-arm/processor.h
> index c8b8cff311..8cf442c73c 100644
> --- a/xen/include/asm-arm/processor.h
> +++ b/xen/include/asm-arm/processor.h
> @@ -157,6 +157,8 @@
>  /* TCR: Stage 1 Translation Control */
>
>  #define TCR_T0SZ(x)     ((x)<<0)
> +#define TCR_T0SZ_SHIFT  0
> +#define TCR_T1SZ_SHIFT  16
>
>  #define TCR_IRGN0_NC    (_AC(0x0,UL)<<8)
>  #define TCR_IRGN0_WBWA  (_AC(0x1,UL)<<8)
> @@ -183,6 +185,8 @@
>
>  #define TCR_RES1        (_AC(1,UL)<<31|_AC(1,UL)<<23)
>
> +#define TCR_SZ_MASK     (_AC(0x3f,UL)<<0)
> +

See my comment on patch #2.

>  #else
>
>  #define TCR_RES1        (_AC(1,UL)<<31)
>

Cheers,

-- 
Julien Grall


* Re: [RFC PATCH 4/4] arm/mem_access: Add software guest-page-table walk
  2017-04-30 19:48 ` [RFC PATCH 4/4] arm/mem_access: Add software guest-page-table walk Sergej Proskurin
  2017-05-01 14:38   ` Razvan Cojocaru
@ 2017-05-02 15:17   ` Julien Grall
  2017-05-08  9:22     ` Sergej Proskurin
  1 sibling, 1 reply; 27+ messages in thread
From: Julien Grall @ 2017-05-02 15:17 UTC (permalink / raw)
  To: Sergej Proskurin, xen-devel
  Cc: Tamas K Lengyel, Stefano Stabellini, Razvan Cojocaru

Hi Sergej,

On 30/04/17 20:48, Sergej Proskurin wrote:
> The function p2m_mem_access_check_and_get_page in mem_access.c
> translates a gva to an ipa by means of the hardware functionality
> implemented in the function gva_to_ipa. If mem_access is active,
> hardware-based gva to ipa translation might fail, as gva_to_ipa uses the
> guest's translation tables, access to which might be restricted by the
> active VTTBR. To address this issue, we perform the gva to ipa
> translation in software.
>
> Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
> ---
> Cc: Razvan Cojocaru <rcojocaru@bitdefender.com>
> Cc: Tamas K Lengyel <tamas@tklengyel.com>
> Cc: Stefano Stabellini <sstabellini@kernel.org>
> Cc: Julien Grall <julien.grall@arm.com>
> ---
>  xen/arch/arm/mem_access.c | 140 +++++++++++++++++++++++++++++++++++++++++++++-
>  1 file changed, 139 insertions(+), 1 deletion(-)
>
> diff --git a/xen/arch/arm/mem_access.c b/xen/arch/arm/mem_access.c
> index 04b1506b00..352eb6073f 100644
> --- a/xen/arch/arm/mem_access.c
> +++ b/xen/arch/arm/mem_access.c
> @@ -20,6 +20,7 @@
>  #include <xen/monitor.h>
>  #include <xen/sched.h>
>  #include <xen/vm_event.h>
> +#include <xen/domain_page.h>
>  #include <public/vm_event.h>
>  #include <asm/event.h>
>
> @@ -90,6 +91,129 @@ static int __p2m_get_mem_access(struct domain *d, gfn_t gfn,
>      return 0;
>  }
>
> +static int
> +p2m_gva_to_ipa(struct p2m_domain *p2m, vaddr_t gva,
> +               paddr_t *ipa, unsigned int flags)

I don't think this function belongs in mem_access.c. It would be better
placed in p2m.c. Also, the name is a bit confusing: a user would not know
the difference between p2m_gva_to_ipa and gva_to_ipa.

> +{
> +    int level=0, t0_sz, t1_sz;

Coding style: level = 0.

Also please use unsigned int for level.

t0_sz and t1_sz should stay signed.

> +    unsigned long t0_max, t1_min;
> +    lpae_t pte, *table;
> +    mfn_t root_mfn;
> +    uint64_t ttbr;
> +    uint32_t sctlr = READ_SYSREG(SCTLR_EL1);
> +    register_t ttbcr = READ_SYSREG(TCR_EL1);

So you are assuming that the current vCPU is running, right? If so, this 
should be clarified in the commit message and in a comment above the 
function.

Also s/ttbcr/tcr/

> +    struct domain *d = p2m->domain;
> +
> +    const unsigned int offsets[4] = {
> +#ifdef CONFIG_ARM_64
> +        zeroeth_table_offset(gva),
> +#endif
> +        first_table_offset(gva),
> +        second_table_offset(gva),
> +        third_table_offset(gva)
> +    };

Offsets are based on the granularity of Xen (currently 4KB). However,
guests can use 4KB, 16KB, or 64KB granules. Please handle all of them
correctly.

> +
> +    const paddr_t masks[4] = {
> +#ifdef CONFIG_ARM_64
> +        ZEROETH_SIZE - 1,
> +#endif
> +        FIRST_SIZE - 1,
> +        SECOND_SIZE - 1,
> +        THIRD_SIZE - 1
> +    };
> +
> +    /* If the MMU is disabled, there is no need to translate the gva. */
> +    if ( !(sctlr & SCTLR_M) )
> +    {
> +        *ipa = gva;
> +
> +        return 0;
> +    }
> +
> +    if ( is_32bit_domain(d) )
> +    {
> +        /*
> +         * XXX: We do not support 32-bit domain translation table walks for
> +         * domains using the short-descriptor translation table format, yet.
> +         */

Debian ARM 32-bit uses short-descriptor translation tables. So are you
suggesting that mem_access will not work correctly with Debian guests?

> +        if ( !(ttbcr & TTBCR_EAE) )

See my comment on patch #2 about the naming convention.

> +            return -EFAULT;
> +
> +#ifdef CONFIG_ARM_64
> +        level = 1;

This sounds wrong to me. The first lookup level is determined by the 
value of T0SZ and T1SZ (see B3-1352 in ARM DDI 0406C.c).

For the AArch64 version see D4.2.5 in ARM DDI 0487A.k_iss10775.

> +#endif
> +    }
> +
> +#ifdef CONFIG_ARM_64
> +    if ( is_64bit_domain(d) )
> +    {
> +        /* Get the max GVA that can be translated by TTBR0. */
> +        t0_sz = (ttbcr >> TCR_T0SZ_SHIFT) & TCR_SZ_MASK;
> +        t0_max = (1UL << (64 - t0_sz)) - 1;
> +
> +        /* Get the min GVA that can be translated by TTBR1. */
> +        t1_sz = (ttbcr >> TCR_T1SZ_SHIFT) & TCR_SZ_MASK;
> +        t1_min = ~0UL - (1UL << (64 - t1_sz)) + 1;
> +    }
> +    else
> +#endif
> +    {
> +        /* Get the max GVA that can be translated by TTBR0. */
> +        t0_sz = (ttbcr >> TCR_T0SZ_SHIFT) & TTBCR_SZ_MASK;

See my comment on patch #2 for the naming convention.

> +        t0_max = (1U << (32 - t0_sz)) - 1;
> +
> +        /* Get the min GVA that can be translated by TTBR1. */
> +        t1_sz = (ttbcr >> TCR_T1SZ_SHIFT) & TTBCR_SZ_MASK;
> +        t1_min = ~0U - (1U << (32 - t1_sz)) + 1;

1U << (32 - t1_sz) will not fit in an unsigned long if t1_sz = 0.

Also, this code looks wrong to me. Looking at B3.6.4 in DDI 0406C.c, you
don't properly handle t0_max and t1_min when T0SZ or T1SZ is 0.

I think it would be worth your while to have a look at the pseudo-code in
the ARM ARM for TranslationTableWalk (see B3.19.3 in ARM DDI 0406C.c and
J1-5295 in ARM DDI 0487A.k_iss10775), which gives you all the details for
doing a translation table walk properly.

> +    }
> +
> +    if ( t0_max >= gva )
> +        /* Use TTBR0 for GVA to IPA translation. */
> +        ttbr = READ_SYSREG64(TTBR0_EL1);
> +    else if ( t1_min <= gva )
> +        /* Use TTBR1 for GVA to IPA translation. */
> +        ttbr = READ_SYSREG64(TTBR1_EL1);
> +    else
> +        /* GVA out of bounds of TTBR(0|1). */
> +        return -EFAULT;
> +
> +    /* Bits [63..48] might be used by an ASID. */
> +    root_mfn = p2m_lookup(d, _gfn(paddr_to_pfn(ttbr & ((1ULL<<48)-1))), NULL);

Please don't hardcode the mask.
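
As a sketch of the alternative (the macro names here are invented for 
illustration; the architected width of the TTBR base-address field should 
be taken from the spec, not from this example):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative macros, not Xen's: name the TTBR base-address width once
 * and derive the mask from it, instead of open-coding (1ULL << 48) - 1. */
#define TTBR_BADDR_WIDTH  48
#define TTBR_BADDR_MASK   ((1ULL << TTBR_BADDR_WIDTH) - 1)

static uint64_t ttbr_to_baddr(uint64_t ttbr)
{
    /* Strip the ASID (and any other high bits) from the register value. */
    return ttbr & TTBR_BADDR_MASK;
}
```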

> +
> +    /* Check, whether TTBR holds a valid address. */
> +    if ( mfn_eq(root_mfn, INVALID_MFN) )
> +        return -EFAULT;
> +
> +    table = map_domain_page(root_mfn);

Nothing prevents the guest from giving back the page table whilst you 
are browsing it. You may want to have a look at the patch "ARM: introduce 
vgic_access_guest_memory()" [1] to generalize that function and avoid 
re-inventing the wheel.

> +
> +    for ( ; ; level++ )
> +    {
> +        pte = table[offsets[level]];

You likely want to use ACCESS_ONCE here to prevent the compiler from 
reading the pte twice from memory.
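
For reference, a simplified model of what such a macro does (this is not 
the actual Xen/Linux definition, which carries extra machinery; it only 
shows the volatile-cast idea, and the helper name is made up):

```c
#include <assert.h>
#include <stdint.h>

/* Simplified model of the kernel-style ACCESS_ONCE(): the volatile cast
 * forces exactly one load from memory, so the compiler cannot re-read a
 * pte that the guest may be changing concurrently. */
#define ACCESS_ONCE(x) (*(volatile __typeof__(x) *)&(x))

static uint64_t read_pte_once(const uint64_t *entry)
{
    /* Take one snapshot; all later checks must use this local copy. */
    return ACCESS_ONCE(*entry);
}
```

In the patch the entry is an lpae_t (a 64-bit union in Xen), so taking 
one 64-bit snapshot and then decoding the fields of the copy achieves 
the same effect.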

> +
> +        if ( level == 3 || !pte.walk.valid || !pte.walk.table )
> +            break;
> +
> +        unmap_domain_page(table);
> +
> +        root_mfn = p2m_lookup(d, _gfn(pte.walk.base), NULL);

No check on root_mfn?

> +        table = map_domain_page(root_mfn);
> +    }
> +
> +    unmap_domain_page(table);
> +
> +    if ( !pte.walk.valid )
> +        return -EFAULT;
> +
> +    /* Make sure the entry holds the requested access attributes. */
> +    if ( ((flags & GV2M_WRITE) == GV2M_WRITE) && pte.pt.ro )
> +        return -EFAULT;

IMHO, this function should return the access attributes of the page and 
let the caller decide what to do. This would make the function more 
generic.
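
A possible shape for such a generic interface, sketched with invented 
names (these are not Xen's types):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/*
 * Hypothetical result of a generic guest-page-table walk: report the
 * translation and the stage-1 access attributes, and leave the policy
 * decision (e.g. "is a write allowed?") to the caller.
 */
struct gpt_walk_result {
    uint64_t ipa;       /* translated address        */
    bool readonly;      /* stage-1 AP[2] / pt.ro     */
    bool xn;            /* stage-1 execute-never     */
};

/* Caller-side policy check corresponding to the GV2M_WRITE test above. */
static bool walk_allows(const struct gpt_walk_result *res, bool want_write)
{
    return !(want_write && res->readonly);
}
```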

> +
> +    *ipa = pfn_to_paddr(pte.walk.base) | (gva & masks[level]);
> +
> +    return 0;
> +}
> +
> +
>  /*
>   * If mem_access is in use it might have been the reason why get_page_from_gva
>   * failed to fetch the page, as it uses the MMU for the permission checking.
> @@ -109,9 +233,23 @@ p2m_mem_access_check_and_get_page(vaddr_t gva, unsigned long flag,
>      struct page_info *page = NULL;
>      struct p2m_domain *p2m = &v->domain->arch.p2m;
>
> +    ASSERT(p2m->mem_access_enabled);
> +

Why has this ASSERT been added?

>      rc = gva_to_ipa(gva, &ipa, flag);
> +
> +    /*
> +     * In case mem_access is active, hardware-based gva_to_ipa translation
> +     * might fail. Since gva_to_ipa uses the guest's translation tables, access
> +     * to which might be restricted by the active VTTBR, we perform a gva to
> +     * ipa translation in software.
> +     */
>      if ( rc < 0 )
> -        goto err;
> +        if ( p2m_gva_to_ipa(p2m, gva, &ipa, flag) < 0 )
> +            /*
> +             * The software gva to ipa translation can still fail, if the
> +             * gva is not mapped or does not hold the requested access rights.
> +             */
> +            goto err;

Rather than falling back, why don't we do a software page table walk every time?

>
>      gfn = _gfn(paddr_to_pfn(ipa));
>
>

Cheers,

[1] https://patchwork.kernel.org/patch/9676291/

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [RFC PATCH 2/4] arm/mem_access: Change value of TTBCR_SZ_MASK
  2017-05-02 11:56   ` Julien Grall
  2017-05-02 12:01     ` Julien Grall
@ 2017-05-08  6:25     ` Sergej Proskurin
  1 sibling, 0 replies; 27+ messages in thread
From: Sergej Proskurin @ 2017-05-08  6:25 UTC (permalink / raw)
  To: Julien Grall, xen-devel; +Cc: Stefano Stabellini

Hi Julien,


On 05/02/2017 01:56 PM, Julien Grall wrote:
> Hi Sergej,
>
> On 30/04/17 20:48, Sergej Proskurin wrote:
>> The TTBCR_SZ holds only 3 bits and thus must be masked with the value
>> 0x7 instead of the previously used value 0xf.
>
> Please quote the spec (paragraph + version) when you make such a change.
>
> TTBCR_* flags are used for both TCR_EL1 (AArch64) and TTBCR (AArch32).
> Looking at the spec (ARM DDI 0487A.k_iss10775) TCR_EL1.{T0SZ,T1SZ} is
> encoded on 6 bits and TTBCR_EL1.{T0SZ,T1SZ} is encoded on 3 bits, with
> the following 3 bits RES0.
>
> So the mask here should be 0x3f.
>

That's fair, thanks. It is already part of my v2 patch.

Cheers,
~Sergej


* Re: [RFC PATCH 2/4] arm/mem_access: Change value of TTBCR_SZ_MASK
  2017-05-02 12:01     ` Julien Grall
@ 2017-05-08  6:40       ` Sergej Proskurin
  2017-05-08 11:31         ` Julien Grall
  0 siblings, 1 reply; 27+ messages in thread
From: Sergej Proskurin @ 2017-05-08  6:40 UTC (permalink / raw)
  To: Julien Grall, xen-devel; +Cc: Stefano Stabellini

Hi Julien,


On 05/02/2017 02:01 PM, Julien Grall wrote:
>
>
> On 02/05/17 12:56, Julien Grall wrote:
>> Hi Sergej,
>>
>> On 30/04/17 20:48, Sergej Proskurin wrote:
>>> The TTBCR_SZ holds only 3 bits and thus must be masked with the value
>>> 0x7 instead of the previously used value 0xf.
>>
>> Please quote the spec (paragraph + version) when you make such a change.
>>
>> TTBCR_* flags are used for both TCR_EL1 (AArch64) and TTBCR (AArch32).
>> Looking at the spec (ARM DDI 0487A.k_iss10775) TCR_EL1.{T0SZ,T1SZ} is
>> encoded on 6 bits and TTBCR_EL1.{T0SZ,T1SZ} is encoded on 3 bits, with
>> the following 3 bits RES0.
>>
>> So the mask here should be 0x3f.
>
> Hmmm, I have just noticed we do a mix of TTBCR and TCR in the code. I
> would prefer if we use only TCR_* everywhere.
>

Thank you. I have adapted my current implementation so that it uses
solely the TCR_* defines.

This is fine for the current implementation, as it supports only AArch64
and the long-descriptor translation table format of AArch32/ARMv7. Yet,
as soon as we provide support for the short-descriptor translation table
format, I believe it would make sense to make use of the TTBCR_*
defines as well. Otherwise, the implementation would mix up the TCR_*
defines with, e.g., TTBCR_N_MASK. Because of this, I would suggest
using the TTBCR_* defines when it comes to the short-descriptor format.
What do you think about that?

Also, in order to reduce the complexity of the gpt-walk function, it
would probably make sense to move all short-descriptor translation table
related operations out of the function suggested in patch 4/4.

Cheers,
~Sergej


* Re: [RFC PATCH 4/4] arm/mem_access: Add software guest-page-table walk
  2017-05-02 15:17   ` Julien Grall
@ 2017-05-08  9:22     ` Sergej Proskurin
  2017-05-08 11:44       ` Julien Grall
  0 siblings, 1 reply; 27+ messages in thread
From: Sergej Proskurin @ 2017-05-08  9:22 UTC (permalink / raw)
  To: Julien Grall, xen-devel
  Cc: Tamas K Lengyel, Stefano Stabellini, Razvan Cojocaru

Hi Julien,


On 05/02/2017 05:17 PM, Julien Grall wrote:
> Hi Sergej,
>
> On 30/04/17 20:48, Sergej Proskurin wrote:
>> The function p2m_mem_access_check_and_get_page in mem_access.c
>> translates a gva to an ipa by means of the hardware functionality
>> implemented in the function gva_to_ipa. If mem_access is active,
>> hardware-based gva to ipa translation might fail, as gva_to_ipa uses the
>> guest's translation tables, access to which might be restricted by the
>> active VTTBR. To address this issue, we perform the gva to ipa
>> translation in software.
>>
>> Signed-off-by: Sergej Proskurin <proskurin@sec.in.tum.de>
>> ---
>> Cc: Razvan Cojocaru <rcojocaru@bitdefender.com>
>> Cc: Tamas K Lengyel <tamas@tklengyel.com>
>> Cc: Stefano Stabellini <sstabellini@kernel.org>
>> Cc: Julien Grall <julien.grall@arm.com>
>> ---
>>  xen/arch/arm/mem_access.c | 140
>> +++++++++++++++++++++++++++++++++++++++++++++-
>>  1 file changed, 139 insertions(+), 1 deletion(-)
>>
>> diff --git a/xen/arch/arm/mem_access.c b/xen/arch/arm/mem_access.c
>> index 04b1506b00..352eb6073f 100644
>> --- a/xen/arch/arm/mem_access.c
>> +++ b/xen/arch/arm/mem_access.c
>> @@ -20,6 +20,7 @@
>>  #include <xen/monitor.h>
>>  #include <xen/sched.h>
>>  #include <xen/vm_event.h>
>> +#include <xen/domain_page.h>
>>  #include <public/vm_event.h>
>>  #include <asm/event.h>
>>
>> @@ -90,6 +91,129 @@ static int __p2m_get_mem_access(struct domain *d,
>> gfn_t gfn,
>>      return 0;
>>  }
>>
>> +static int
>> +p2m_gva_to_ipa(struct p2m_domain *p2m, vaddr_t gva,
>> +               paddr_t *ipa, unsigned int flags)
>
> I don't think this function belongs to mem_access.c. It would be
> better in p2m.c. Also, the name is a bit confusing, a user would not
> know the difference between p2m_gva_to_ipa and gva_to_ipa.

Fair enough, thank you.

>
>> +{
>> +    int level=0, t0_sz, t1_sz;
>
> Coding style level = 0.
>
> Also please use unsigned int for level.
>
> t0_sz and t1_sz should stay signed.
>
>> +    unsigned long t0_max, t1_min;
>> +    lpae_t pte, *table;
>> +    mfn_t root_mfn;
>> +    uint64_t ttbr;
>> +    uint32_t sctlr = READ_SYSREG(SCTLR_EL1);
>> +    register_t ttbcr = READ_SYSREG(TCR_EL1);
>
> So you are assuming that the current vCPU is running, right? If so,
> this should be clarified in the commit message and in a comment above
> the function.

I will state that in a comment, thank you.

>
> Also s/ttbcr/tcr/
>
>> +    struct domain *d = p2m->domain;
>> +
>> +    const unsigned int offsets[4] = {
>> +#ifdef CONFIG_ARM_64
>> +        zeroeth_table_offset(gva),
>> +#endif
>> +        first_table_offset(gva),
>> +        second_table_offset(gva),
>> +        third_table_offset(gva)
>> +    };
>
> Offsets are based on the granularity of Xen (currently 4KB). However
> the guests can support 4KB, 16KB, 64KB. Please handle everything
> correctly.

True, ARM is quite flexible. Yet, I have not seen an OS implementation
that is supported by Xen and makes use of page sizes other than 4KB and
their supersets (2/4MB, 1GB) (please let me know if you know of any),
which is why I doubt that we need that. Why do you think we need that
kind of flexibility in software?

If you nevertheless insist on that functionality, I would include it in
the next patch and try not to blow up the code too much.
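
For what it's worth, parameterizing the offsets by granule size does not 
have to blow up the code much. A rough sketch for one level (the helper 
name is invented, this is not Xen code):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Illustrative only: extract the level-3 table index of a VA for a
 * given granule. With grainsize = log2(granule size) (12, 14 or 16 for
 * 4KB, 16KB, 64KB), each level resolves (grainsize - 3) bits, since a
 * long-descriptor entry is 8 bytes.
 */
static unsigned int l3_table_offset(uint64_t va, unsigned int grainsize)
{
    unsigned int bits_per_level = grainsize - 3;

    return (unsigned int)((va >> grainsize) & ((1u << bits_per_level) - 1));
}
```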

>
>> +
>> +    const paddr_t masks[4] = {
>> +#ifdef CONFIG_ARM_64
>> +        ZEROETH_SIZE - 1,
>> +#endif
>> +        FIRST_SIZE - 1,
>> +        SECOND_SIZE - 1,
>> +        THIRD_SIZE - 1
>> +    };
>> +
>> +    /* If the MMU is disabled, there is no need to translate the
>> gva. */
>> +    if ( !(sctlr & SCTLR_M) )
>> +    {
>> +        *ipa = gva;
>> +
>> +        return 0;
>> +    }
>> +
>> +    if ( is_32bit_domain(d) )
>> +    {
>> +        /*
>> +         * XXX: We do not support 32-bit domain translation table
>> walks for
>> +         * domains using the short-descriptor translation table
>> format, yet.
>> +         */
>
> Debian ARM 32-bit is using the short-descriptor translation table. So are
> you suggesting that memaccess will not work correctly with Debian guests?
>

Yes, as stated in the comment, the current implementation does not
support the short-descriptor translation table format. As this is an RFC
patch, I wanted to see your opinion on that functionality in general
before implementing all corner cases of the ARM architecture.

As mentioned in my previous reply (on patch 2/4), I would prefer to
separate the long-descriptor from the short-descriptor translation in
the future implementation.

>> +        if ( !(ttbcr & TTBCR_EAE) )
>
> See my comment on patch #2 about the naming convention.
>

Done, thanks.

>> +            return -EFAULT;
>> +
>> +#ifdef CONFIG_ARM_64
>> +        level = 1;
>
> This sounds wrong to me. The first lookup level is determined by the
> value of T0SZ and T1SZ (see B3-1352 in ARM DDI 0406C.c).
>
> For the AArch64 version see D4.2.5 in ARM DDI 0487A.k_iss10775.
>

Thank you for the hint.

>> +#endif
>> +    }
>> +
>> +#ifdef CONFIG_ARM_64
>> +    if ( is_64bit_domain(d) )
>> +    {
>> +        /* Get the max GVA that can be translated by TTBR0. */
>> +        t0_sz = (ttbcr >> TCR_T0SZ_SHIFT) & TCR_SZ_MASK;
>> +        t0_max = (1UL << (64 - t0_sz)) - 1;
>> +
>> +        /* Get the min GVA that can be translated by TTBR1. */
>> +        t1_sz = (ttbcr >> TCR_T1SZ_SHIFT) & TCR_SZ_MASK;
>> +        t1_min = ~0UL - (1UL << (64 - t1_sz)) + 1;
>> +    }
>> +    else
>> +#endif
>> +    {
>> +        /* Get the max GVA that can be translated by TTBR0. */
>> +        t0_sz = (ttbcr >> TCR_T0SZ_SHIFT) & TTBCR_SZ_MASK;
>
> See my comment on patch #2 for the naming convention.
>
>> +        t0_max = (1U << (32 - t0_sz)) - 1;
>> +
>> +        /* Get the min GVA that can be translated by TTBR1. */
>> +        t1_sz = (ttbcr >> TCR_T1SZ_SHIFT) & TTBCR_SZ_MASK;
>> +        t1_min = ~0U - (1U << (32 - t1_sz)) + 1;
>
> 1U << (32 - t1_sz) will not fit in an unsigned long if t1_sz = 0.

An unsigned long long should fix that (however not in the 64-bit case),
thanks.

>
> Also, this code looks wrong to me. Looking at B3.6.4 in DDI 0406C.c,
> you don't handle t0_max and t1_min properly when T0SZ or T1SZ is 0.
>
> I think it would be worth having a look at the pseudo-code in
> the ARM ARM for TranslationTableWalk (see B3.19.3 in ARM DDI 0406C.c
> and J1-5295 in ARM DDI 0487A.k_iss10775), which gives you all the
> details for doing the translation table walk properly.

I will have a deeper look and adapt parts of the given pseudocode in my
implementation. You are right: there might be an off-by-one issue in the
computation above.

>
>> +    }
>> +
>> +    if ( t0_max >= gva )
>> +        /* Use TTBR0 for GVA to IPA translation. */
>> +        ttbr = READ_SYSREG64(TTBR0_EL1);
>> +    else if ( t1_min <= gva )
>> +        /* Use TTBR1 for GVA to IPA translation. */
>> +        ttbr = READ_SYSREG64(TTBR1_EL1);
>> +    else
>> +        /* GVA out of bounds of TTBR(0|1). */
>> +        return -EFAULT;
>> +
>> +    /* Bits [63..48] might be used by an ASID. */
>> +    root_mfn = p2m_lookup(d, _gfn(paddr_to_pfn(ttbr &
>> ((1ULL<<48)-1))), NULL);
>
> Please don't hardcode the mask.
>
>> +
>> +    /* Check, whether TTBR holds a valid address. */
>> +    if ( mfn_eq(root_mfn, INVALID_MFN) )
>> +        return -EFAULT;
>> +
>> +    table = map_domain_page(root_mfn);
>
> Nothing prevents the guest from giving back the page table whilst you
> are browsing it. You may want to have a look at the patch "ARM: introduce
> vgic_access_guest_memory()" [1] to generalize that function and avoid
> re-inventing the wheel.
>

I will have a look at this commit, thanks.

>> +
>> +    for ( ; ; level++ )
>> +    {
>> +        pte = table[offsets[level]];
>
> You likely want to use ACCESS_ONCE here to prevent the compiler from
> reading the pte twice from memory.

I tried that; however, the compiler did not seem to like it. I will try
again and let you know if I get it to work.

>
>> +
>> +        if ( level == 3 || !pte.walk.valid || !pte.walk.table )
>> +            break;
>> +
>> +        unmap_domain_page(table);
>> +
>> +        root_mfn = p2m_lookup(d, _gfn(pte.walk.base), NULL);
>
> No check on root_mfn?

Done, thanks.

>
>> +        table = map_domain_page(root_mfn);
>> +    }
>> +
>> +    unmap_domain_page(table);
>> +
>> +    if ( !pte.walk.valid )
>> +        return -EFAULT;
>> +
>> +    /* Make sure the entry holds the requested access attributes. */
>> +    if ( ((flags & GV2M_WRITE) == GV2M_WRITE) && pte.pt.ro )
>> +        return -EFAULT;
>
> IMHO, this function should return the access attributes of the page and
> let the caller decide what to do. This would make the function more
> generic.
>

Makes sense; this would indeed make the function more generic. Since
there are already multiple gpt-walk stubs throughout the codebase, we
could consolidate them into a single one.

>> +
>> +    *ipa = pfn_to_paddr(pte.walk.base) | (gva & masks[level]);
>> +
>> +    return 0;
>> +}
>> +
>> +
>>  /*
>>   * If mem_access is in use it might have been the reason why
>> get_page_from_gva
>>   * failed to fetch the page, as it uses the MMU for the permission
>> checking.
>> @@ -109,9 +233,23 @@ p2m_mem_access_check_and_get_page(vaddr_t gva,
>> unsigned long flag,
>>      struct page_info *page = NULL;
>>      struct p2m_domain *p2m = &v->domain->arch.p2m;
>>
>> +    ASSERT(p2m->mem_access_enabled);
>> +
>
> Why has this ASSERT been added?

The function p2m_mem_access_check_and_get_page is called only if
get_page_from_gva fails and mem_access is active. I can add a comment
and move this part into an individual commit.

>
>>      rc = gva_to_ipa(gva, &ipa, flag);
>> +
>> +    /*
>> +     * In case mem_access is active, hardware-based gva_to_ipa
>> translation
>> +     * might fail. Since gva_to_ipa uses the guest's translation
>> tables, access
>> +     * to which might be restricted by the active VTTBR, we perform
>> a gva to
>> +     * ipa translation in software.
>> +     */
>>      if ( rc < 0 )
>> -        goto err;
>> +        if ( p2m_gva_to_ipa(p2m, gva, &ipa, flag) < 0 )
>> +            /*
>> +             * The software gva to ipa translation can still fail,
>> if the
>> +             * gva is not mapped or does not hold the requested
>> access rights.
>> +             */
>> +            goto err;
>
> Rather than falling back, why don't we do a software page table walk every time?

A software page table walk would (presumably) take more time to
translate the gva. Do we want that? Also, I am not sure what you meant
by "falling back" at this point. Thank you.

>
>>
>>      gfn = _gfn(paddr_to_pfn(ipa));
>>
>>
>
> Cheers,
>
> [1] https://patchwork.kernel.org/patch/9676291/
>

Cheers,
~Sergej


* Re: [RFC PATCH 2/4] arm/mem_access: Change value of TTBCR_SZ_MASK
  2017-05-08  6:40       ` Sergej Proskurin
@ 2017-05-08 11:31         ` Julien Grall
  0 siblings, 0 replies; 27+ messages in thread
From: Julien Grall @ 2017-05-08 11:31 UTC (permalink / raw)
  To: Sergej Proskurin, xen-devel; +Cc: Stefano Stabellini



On 08/05/17 07:40, Sergej Proskurin wrote:
> Hi Julien,
>
>
> On 05/02/2017 02:01 PM, Julien Grall wrote:
>>
>>
>> On 02/05/17 12:56, Julien Grall wrote:
>>> Hi Sergej,
>>>
>>> On 30/04/17 20:48, Sergej Proskurin wrote:
>>>> The TTBCR_SZ holds only 3 bits and thus must be masked with the value
>>>> 0x7 instead of the previously used value 0xf.
>>>
>>> Please quote the spec (paragaph + version) when you do a such change.
>>>
>>> TTBCR_* flags are used for both TCR_EL1 (AArch64) and TTBCR (AArch32).
>>> Looking at the spec (ARM DDI 0487A.k_iss10775) TCR_EL1.{T0SZ,T1SZ) is
>>> encoded on 6 bits and TTBCR_EL1.{T0SZ,T1SZ} is encoded on 3 bits, with
>>> the following 3 bits RES0.
>>>
>>> So the mask here should be 0x3f.
>>
>> Hmmm, I have just noticed we do a mix of TTBCR and TCR in the code. I
>> would prefer if we use only TCR_* everywhere.
>>
>
> Thank you. I have adopted my current implementation so that it uses
> solely TCR_* defines.
>
> This is fine for the current implementation as it supports only Aarch64
> and the long-descriptor translation table format of Aarch32/ARMv7. Yet,
> as soon as we provide support for the short-descriptor translation table
> format, I believe it would make sense to make use of the TTBCR_*
> defines, as well. Otherwise, the implementation would mixup the TCR_*
> with, e.g., the TTBCR_N_MASK defines. Because of this, I would suggest
> to use the TTBCR_* when it comes to the short-descriptor format. What do
> you think about that?
>
> Also, in order to reduce the complexity of the gpt-walk function, it
> would probably make sense to move all short-descriptor translation table
> related operations out of the suggested function in the patch 4/4.

It would indeed be better. But in that case, the LPAE page walk should 
be moved into a separate function too. The current helper would then just 
select the correct one to call.

Cheers,

-- 
Julien Grall


* Re: [RFC PATCH 4/4] arm/mem_access: Add software guest-page-table walk
  2017-05-08  9:22     ` Sergej Proskurin
@ 2017-05-08 11:44       ` Julien Grall
  2017-05-08 19:42         ` Tamas K Lengyel
  0 siblings, 1 reply; 27+ messages in thread
From: Julien Grall @ 2017-05-08 11:44 UTC (permalink / raw)
  To: Sergej Proskurin, xen-devel
  Cc: Tamas K Lengyel, Stefano Stabellini, Razvan Cojocaru

Hi,

On 08/05/17 10:22, Sergej Proskurin wrote:
> On 05/02/2017 05:17 PM, Julien Grall wrote:
>> On 30/04/17 20:48, Sergej Proskurin wrote:
>> Also s/ttbcr/tcr/
>>
>>> +    struct domain *d = p2m->domain;
>>> +
>>> +    const unsigned int offsets[4] = {
>>> +#ifdef CONFIG_ARM_64
>>> +        zeroeth_table_offset(gva),
>>> +#endif
>>> +        first_table_offset(gva),
>>> +        second_table_offset(gva),
>>> +        third_table_offset(gva)
>>> +    };
>>
>> Offsets are based on the granularity of Xen (currently 4KB). However
>> the guests can support 4KB, 16KB, 64KB. Please handle everything
>> correctly.
>
> True, ARM is quite flexible. Yet, I have not seen an OS implementation
> that is supported by Xen and makes use of page sizes other than 4KB and
> their supersets (2/4MB, 1GB) (please let me know, if you know some),
> which is why I doubt that we need that. Please let me know why you think
> we need that kind of flexibility in software?
>
> If you should nevertheless insist on that functionality, I would include
> it in the next patch and try not to blow up the code too much.

Linux is able to support 4KB, 16KB and 64KB page granularity for AArch64. 
CentOS and Red Hat are shipped with 64KB page granularity only.

>
>>
>>> +
>>> +    const paddr_t masks[4] = {
>>> +#ifdef CONFIG_ARM_64
>>> +        ZEROETH_SIZE - 1,
>>> +#endif
>>> +        FIRST_SIZE - 1,
>>> +        SECOND_SIZE - 1,
>>> +        THIRD_SIZE - 1
>>> +    };
>>> +
>>> +    /* If the MMU is disabled, there is no need to translate the
>>> gva. */
>>> +    if ( !(sctlr & SCTLR_M) )
>>> +    {
>>> +        *ipa = gva;
>>> +
>>> +        return 0;
>>> +    }
>>> +
>>> +    if ( is_32bit_domain(d) )
>>> +    {
>>> +        /*
>>> +         * XXX: We do not support 32-bit domain translation table
>>> walks for
>>> +         * domains using the short-descriptor translation table
>>> format, yet.
>>> +         */
>>
>> Debian ARM 32-bit is using the short-descriptor translation table. So are
>> you suggesting that memaccess will not work correctly with Debian guests?
>>
>
> Yes, as stated in the comment, the current implementation does not
> support the short-descriptor translation table format. As this is an RFC
> patch, I wanted to see your opinion on that functionality in general
> before implementing all corner cases of the ARM architecture.
>
> As mentioned in my previous reply in patch (patch 2/4), I would prefer
> to separate the long-descriptor from the short-descriptor translation in
> the future implementation.

I agree here. See my answer on patch #2 about how I would like to see 
the implementation.

[...]

>>> +
>>> +    *ipa = pfn_to_paddr(pte.walk.base) | (gva & masks[level]);
>>> +
>>> +    return 0;
>>> +}
>>> +
>>> +
>>>  /*
>>>   * If mem_access is in use it might have been the reason why
>>> get_page_from_gva
>>>   * failed to fetch the page, as it uses the MMU for the permission
>>> checking.
>>> @@ -109,9 +233,23 @@ p2m_mem_access_check_and_get_page(vaddr_t gva,
>>> unsigned long flag,
>>>      struct page_info *page = NULL;
>>>      struct p2m_domain *p2m = &v->domain->arch.p2m;
>>>
>>> +    ASSERT(p2m->mem_access_enabled);
>>> +
>>
>> Why has this ASSERT been added?
>
> The function p2m_mem_access_check_and_get_page is called only if
> get_page_from_gva fails and mem_access is active. I can add a comment
> and move this part into an individual commit.

Whilst I agree it is dumb to call this code without mem_access enabled, 
this code is able to cope with it. So why do you need this ASSERT?

>
>>
>>>      rc = gva_to_ipa(gva, &ipa, flag);
>>> +
>>> +    /*
>>> +     * In case mem_access is active, hardware-based gva_to_ipa
>>> translation
>>> +     * might fail. Since gva_to_ipa uses the guest's translation
>>> tables, access
>>> +     * to which might be restricted by the active VTTBR, we perform
>>> a gva to
>>> +     * ipa translation in software.
>>> +     */
>>>      if ( rc < 0 )
>>> -        goto err;
>>> +        if ( p2m_gva_to_ipa(p2m, gva, &ipa, flag) < 0 )
>>> +            /*
>>> +             * The software gva to ipa translation can still fail,
>>> if the the
>>> +             * gva is not mapped or does not hold the requested
>>> access rights.
>>> +             */
>>> +            goto err;
>>
>> Rather than falling back, why don't we do a software page table walk every time?
>
> A software page table walk would (presumably) take more time to
> translate the gva. Do we want that? Also, I am not sure what you meant
> by "falling back" at this point. Thank you.

What you currently do is try gva_to_ipa and, if it does not work, call 
p2m_gva_to_ipa. This sounds a bit pointless to me and a waste of time 
if the underlying memory of the stage-1 page tables is protected.

Before saying this is taking much more time, I would like to see actual 
numbers.

Cheers,

-- 
Julien Grall


* Re: [RFC PATCH 4/4] arm/mem_access: Add software guest-page-table walk
  2017-05-08 11:44       ` Julien Grall
@ 2017-05-08 19:42         ` Tamas K Lengyel
  2017-05-08 20:53           ` Julien Grall
  2017-05-09  7:17           ` Sergej Proskurin
  0 siblings, 2 replies; 27+ messages in thread
From: Tamas K Lengyel @ 2017-05-08 19:42 UTC (permalink / raw)
  To: Julien Grall
  Cc: Sergej Proskurin, Stefano Stabellini, Razvan Cojocaru, Xen-devel

On Mon, May 8, 2017 at 5:44 AM, Julien Grall <julien.grall@arm.com> wrote:
> Hi,
>
> On 08/05/17 10:22, Sergej Proskurin wrote:
>>
>> On 05/02/2017 05:17 PM, Julien Grall wrote:
>>>
>>> On 30/04/17 20:48, Sergej Proskurin wrote:
>>> Also s/ttbcr/tcr/
>>>
>>>> +    struct domain *d = p2m->domain;
>>>> +
>>>> +    const unsigned int offsets[4] = {
>>>> +#ifdef CONFIG_ARM_64
>>>> +        zeroeth_table_offset(gva),
>>>> +#endif
>>>> +        first_table_offset(gva),
>>>> +        second_table_offset(gva),
>>>> +        third_table_offset(gva)
>>>> +    };
>>>
>>>
>>> Offsets are based on the granularity of Xen (currently 4KB). However
>>> the guests can support 4KB, 16KB, 64KB. Please handle everything
>>> correctly.
>>
>>
>> True, ARM is quite flexible. Yet, I have not seen an OS implementation
>> that is supported by Xen and makes use of page sizes other than 4KB and
>> their supersets (2/4MB, 1GB) (please let me know, if you know some),
>> which is why I doubt that we need that. Please let me know why you think
>> we need that kind of flexibility in software?
>>
>> If you should nevertheless insist on that functionality, I would include
>> it in the next patch and try not to blow up the code too much.
>
>
> Linux is able to support 4KB, 16KB and 64KB page granularity for AArch64.
> CentOS and Red Hat are shipped with 64KB page granularity only.
>
>
>>
>>>
>>>> +
>>>> +    const paddr_t masks[4] = {
>>>> +#ifdef CONFIG_ARM_64
>>>> +        ZEROETH_SIZE - 1,
>>>> +#endif
>>>> +        FIRST_SIZE - 1,
>>>> +        SECOND_SIZE - 1,
>>>> +        THIRD_SIZE - 1
>>>> +    };
>>>> +
>>>> +    /* If the MMU is disabled, there is no need to translate the
>>>> gva. */
>>>> +    if ( !(sctlr & SCTLR_M) )
>>>> +    {
>>>> +        *ipa = gva;
>>>> +
>>>> +        return 0;
>>>> +    }
>>>> +
>>>> +    if ( is_32bit_domain(d) )
>>>> +    {
>>>> +        /*
>>>> +         * XXX: We do not support 32-bit domain translation table
>>>> walks for
>>>> +         * domains using the short-descriptor translation table
>>>> format, yet.
>>>> +         */
>>>
>>>
>>> Debian ARM 32-bit is using the short-descriptor translation table. So are
>>> you suggesting that memaccess will not work correctly with Debian guests?
>>>
>>
>> Yes, as stated in the comment, the current implementation does not
>> support the short-descriptor translation table format. As this is an RFC
>> patch, I wanted to see your opinion on that functionality in general
>> before implementing all corner cases of the ARM architecture.
>>
>> As mentioned in my previous reply in patch (patch 2/4), I would prefer
>> to separate the long-descriptor from the short-descriptor translation in
>> the future implementation.
>
>
> I agree here. See my answer on patch #2 about how I would like to see the
> implementation.
>
> [...]
>
>>>> +
>>>> +    *ipa = pfn_to_paddr(pte.walk.base) | (gva & masks[level]);
>>>> +
>>>> +    return 0;
>>>> +}
>>>> +
>>>> +
>>>>  /*
>>>>   * If mem_access is in use it might have been the reason why
>>>> get_page_from_gva
>>>>   * failed to fetch the page, as it uses the MMU for the permission
>>>> checking.
>>>> @@ -109,9 +233,23 @@ p2m_mem_access_check_and_get_page(vaddr_t gva,
>>>> unsigned long flag,
>>>>      struct page_info *page = NULL;
>>>>      struct p2m_domain *p2m = &v->domain->arch.p2m;
>>>>
>>>> +    ASSERT(p2m->mem_access_enabled);
>>>> +
>>>
>>>
>>> Why has this ASSERT been added?
>>
>>
>> The function p2m_mem_access_check_and_get_page is called only if
>> get_page_from_gva fails and mem_access is active. I can add a comment
>> and move this part into an individual commit.
>
>
> Whilst I agree it is dumb to call this code without mem_access enabled, this
> code is able to cope with it. So why do you need this ASSERT?
>
>
>>
>>>
>>>>      rc = gva_to_ipa(gva, &ipa, flag);
>>>> +
>>>> +    /*
>>>> +     * In case mem_access is active, hardware-based gva_to_ipa
>>>> translation
>>>> +     * might fail. Since gva_to_ipa uses the guest's translation
>>>> tables, access
>>>> +     * to which might be restricted by the active VTTBR, we perform
>>>> a gva to
>>>> +     * ipa translation in software.
>>>> +     */
>>>>      if ( rc < 0 )
>>>> -        goto err;
>>>> +        if ( p2m_gva_to_ipa(p2m, gva, &ipa, flag) < 0 )
>>>> +            /*
>>>> +             * The software gva to ipa translation can still fail,
>>>> if the
>>>> +             * gva is not mapped or does not hold the requested
>>>> access rights.
>>>> +             */
>>>> +            goto err;
>>>
>>>
>>> Rather than falling back, why don't we do a software page table walk every time?
>>
>>
>> A software page table walk would (presumably) take more time to
>> translate the gva. Do we want that? Also, I am not sure what you meant
>> by "falling back" at this point. Thank you.
>
>
> What you currently do is try gva_to_ipa and, if it does not work, you
> call p2m_gva_to_ipa. This sounds a bit pointless to me and a waste of time if
> the underlying memory of the stage-1 page tables is protected.

But we don't know that the stage-1 page table is protected until the
hardware-based lookup fails. I can turn your argument around and say
that doing the software-based lookup is pointless and a waste of time
when the stage-1 table is not protected, which is, by the way, what I
would expect to see in most cases.

> Before saying this is taking much more time, I would like to see actual
> numbers.

Whether the performance is measurably different is going to be very
use-case specific. If the TLBs are already loaded with the translation,
then the hardware lookup will be a lot faster. Setting up a test case
for this is just a PITA, and I don't really see why you are against
using the hardware lookup to start with.

Tamas


* Re: [RFC PATCH 4/4] arm/mem_access: Add software guest-page-table walk
  2017-05-08 19:42         ` Tamas K Lengyel
@ 2017-05-08 20:53           ` Julien Grall
  2017-05-08 21:22             ` Tamas K Lengyel
  2017-05-09  7:17           ` Sergej Proskurin
  1 sibling, 1 reply; 27+ messages in thread
From: Julien Grall @ 2017-05-08 20:53 UTC (permalink / raw)
  To: Tamas K Lengyel
  Cc: Sergej Proskurin, Stefano Stabellini, Razvan Cojocaru, Xen-devel



On 05/08/2017 08:42 PM, Tamas K Lengyel wrote:
> On Mon, May 8, 2017 at 5:44 AM, Julien Grall <julien.grall@arm.com> wrote:
> Whether the performance is measurable different is going to be very
> usecase specific. If the TLBs are already loaded with the translation
> then the hardware lookup will be a lot faster. Setting up a test-case
> for this is just a PITA and I don't really see why you are against
> using the hardware lookup to start with.

Well, if you read my e-mail you would have noticed I am not against it. 
I just find it a bit pointless to do both and was asking if you did some 
benchmark before taking this decision.

But it sounds like you are not willing to even consider it. Anyway I am 
not going to argue on that. It is not something I really care for now.

Cheers,

-- 
Julien Grall

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [RFC PATCH 4/4] arm/mem_access: Add software guest-page-table walk
  2017-05-08 20:53           ` Julien Grall
@ 2017-05-08 21:22             ` Tamas K Lengyel
  0 siblings, 0 replies; 27+ messages in thread
From: Tamas K Lengyel @ 2017-05-08 21:22 UTC (permalink / raw)
  To: Julien Grall
  Cc: Sergej Proskurin, Stefano Stabellini, Razvan Cojocaru, Xen-devel

On Mon, May 8, 2017 at 2:53 PM, Julien Grall <julien.grall@arm.com> wrote:
>
>
> On 05/08/2017 08:42 PM, Tamas K Lengyel wrote:
>>
>> On Mon, May 8, 2017 at 5:44 AM, Julien Grall <julien.grall@arm.com> wrote:
>> Whether the performance is measurable different is going to be very
>> usecase specific. If the TLBs are already loaded with the translation
>> then the hardware lookup will be a lot faster. Setting up a test-case
>> for this is just a PITA and I don't really see why you are against
>> using the hardware lookup to start with.
>
>
> Well, if you read my e-mail you would have noticed I am not against it. I
> just find a bit pointless to do both and was asking if you did some
> benchmark before taking this decision.
>
> But it sounds like you are not willing to even consider it. Anyway I am not
> going to argue on that. It is not something I really care for now.

It did sound to me like you were pushing for it. I would prefer doing
hardware translation first and only falling back to software when
needed unless we have some good incentive to do otherwise.

Tamas

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [RFC PATCH 4/4] arm/mem_access: Add software guest-page-table walk
  2017-05-08 19:42         ` Tamas K Lengyel
  2017-05-08 20:53           ` Julien Grall
@ 2017-05-09  7:17           ` Sergej Proskurin
  2017-05-09  8:09             ` Julien Grall
  1 sibling, 1 reply; 27+ messages in thread
From: Sergej Proskurin @ 2017-05-09  7:17 UTC (permalink / raw)
  To: Tamas K Lengyel, Julien Grall
  Cc: Xen-devel, Stefano Stabellini, Razvan Cojocaru


Hi,

>> What you currently do is try gva_to_ipa and if it does not work
>> you will call p2m_gva_to_ipa. This sounds a bit pointless to me and
>> waste of time if the underlying memory of stage-1 page table is
>> protected.
>
> But we don't know that the stage-1 page table is protected until the
> hardware based lookup fails. I can turn your argument around and say
> doing the software based lookup is pointless and a waste of time
> when the stage-1 table is not protected. Which is by the way what I
> would expect to see in most cases.

I agree with Tamas: I also believe that in most cases the stage-1
translation table won't be protected. So, in my opinion, falling back to
software (which will presumably be a rare operation) is the better
approach.

Cheers,
~Sergej


^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [RFC PATCH 4/4] arm/mem_access: Add software guest-page-table walk
  2017-05-09  7:17           ` Sergej Proskurin
@ 2017-05-09  8:09             ` Julien Grall
  2017-05-09 16:04               ` Tamas K Lengyel
  0 siblings, 1 reply; 27+ messages in thread
From: Julien Grall @ 2017-05-09  8:09 UTC (permalink / raw)
  To: Sergej Proskurin, Tamas K Lengyel
  Cc: Xen-devel, Stefano Stabellini, Razvan Cojocaru



On 05/09/2017 08:17 AM, Sergej Proskurin wrote:
> Hi,
>
>>> What you currently do is try gva_to_ipa and if it does not work
>>> you will call p2m_gva_to_ipa. This sounds a bit pointless to me and
>>> waste of time if the underlying memory of stage-1 page table is
>>> protected.
>>
>> But we don't know that the stage-1 page table is protected until the
>> hardware based lookup fails. I can turn your argument around and say
>> doing the software based lookup is pointless and a waste of time
>> when the stage-1 table is not protected. Which is by the way what I
>> would expect to see in most cases.
>
> I agree with Tamas: I also believe that in most cases the stage-1
> translation table won't be protected. So, in my opinion, falling back to
> software (which will be presumablly a rare operation) is the better
> approach.

Well, you both consider that it is better to do the fallback by assuming 
the TLBs will always cache the intermediate translations (e.g VA -> IPA) 
and not only the full S1 -> S2 translation (e.g VA -> PA).

Cheers,

-- 
Julien Grall

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [RFC PATCH 4/4] arm/mem_access: Add software guest-page-table walk
  2017-05-09  8:09             ` Julien Grall
@ 2017-05-09 16:04               ` Tamas K Lengyel
  2017-05-09 16:22                 ` Julien Grall
  0 siblings, 1 reply; 27+ messages in thread
From: Tamas K Lengyel @ 2017-05-09 16:04 UTC (permalink / raw)
  To: Julien Grall
  Cc: Sergej Proskurin, Stefano Stabellini, Razvan Cojocaru, Xen-devel

On Tue, May 9, 2017 at 2:09 AM, Julien Grall <julien.grall@arm.com> wrote:
>
>
> On 05/09/2017 08:17 AM, Sergej Proskurin wrote:
>>
>> Hi,
>>
>>>> What you currently do is try gva_to_ipa and if it does not work
>>>> you will call p2m_gva_to_ipa. This sounds a bit pointless to me and
>>>> waste of time if the underlying memory of stage-1 page table is
>>>> protected.
>>>
>>> But we don't know that the stage-1 page table is protected until the
>>> hardware based lookup fails. I can turn your argument around and say
>>> doing the software based lookup is pointless and a waste of time
>>> when the stage-1 table is not protected. Which is by the way what I
>>> would expect to see in most cases.
>>
>> I agree with Tamas: I also believe that in most cases the stage-1
>> translation table won't be protected. So, in my opinion, falling back to
>> software (which will be presumablly a rare operation) is the better
>> approach.
>
>
> Well, you both consider that it is better to do the fallback by assuming the
> TLBs will always cache the intermediate translations (e.g VA -> IPA) and not
> only the full S1 -> S2 translation (e.g VA -> PA).

No, I was just pointing out that if the TLB has it cached it is
guaranteed to be faster. Whether that is the case is probably usecase
dependent. But even if the TLB is not loaded, I would assume the
hardware lookup is at least as fast as the software based one, but I
would bet it is faster. Without numbers this is just theoretical, but I
would be very surprised if the hardware lookup would ever be slower
than the software based one... And since this is a corner-case that
most application will never encounter, forcing them all to use a
slower approach doesn't sound right.

Tamas

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [RFC PATCH 4/4] arm/mem_access: Add software guest-page-table walk
  2017-05-09 16:04               ` Tamas K Lengyel
@ 2017-05-09 16:22                 ` Julien Grall
  2017-05-09 16:45                   ` Tamas K Lengyel
  0 siblings, 1 reply; 27+ messages in thread
From: Julien Grall @ 2017-05-09 16:22 UTC (permalink / raw)
  To: Tamas K Lengyel
  Cc: Sergej Proskurin, Stefano Stabellini, Razvan Cojocaru, Xen-devel



On 09/05/17 17:04, Tamas K Lengyel wrote:
> On Tue, May 9, 2017 at 2:09 AM, Julien Grall <julien.grall@arm.com> wrote:
>>
>>
>> On 05/09/2017 08:17 AM, Sergej Proskurin wrote:
>>>
>>> Hi,
>>>
>>>>> What you currently do is try gva_to_ipa and if it does not work
>>>>> you will call p2m_gva_to_ipa. This sounds a bit pointless to me and
>>>>> waste of time if the underlying memory of stage-1 page table is
>>>>> protected.
>>>>
>>>> But we don't know that the stage-1 page table is protected until the
>>>> hardware based lookup fails. I can turn your argument around and say
>>>> doing the software based lookup is pointless and a waste of time
>>>> when the stage-1 table is not protected. Which is by the way what I
>>>> would expect to see in most cases.
>>>
>>> I agree with Tamas: I also believe that in most cases the stage-1
>>> translation table won't be protected. So, in my opinion, falling back to
>>> software (which will be presumablly a rare operation) is the better
>>> approach.
>>
>>
>> Well, you both consider that it is better to do the fallback by assuming the
>> TLBs will always cache the intermediate translations (e.g VA -> IPA) and not
>> only the full S1 -> S2 translation (e.g VA -> PA).
>
> No, I was just pointing out that if the TLB has it cached it is
> guaranteed to be faster. Whether that is the case is probably usecase
> dependent. But even if the TLB is not loaded, I would assume the
> hardware lookup is at least as fast as the software based one, but I
> would bet it is faster. Without number this is just theoretical but I
> would be very surprised if the hardware lookup would ever be slower
> then the software based one... And since this is a corner-case that
> most application will never encounter, forcing them all to use a
> slower approach doesn't sound right.

What you miss in my point is that the TLB can be designed to never 
cache intermediate translations. It is not even about whether the TLBs 
are loaded or not.

I am struggling to understand why this would be a corner case as it 
would be valid to protect the page table to inspect what the guest is 
doing...

I am not saying translation would be slower in hardware than in 
software. I am just saying that maybe it would not be a huge performance 
hit to always do the software walk rather than fall back to it and take 
a worse hit from time to time (or often?).

Also, if you use altp2m (when it gets implemented), you would likely 
need to either do some context switch to use the right p2m or do 
software lookup.

Cheers,

-- 
Julien Grall

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [RFC PATCH 4/4] arm/mem_access: Add software guest-page-table walk
  2017-05-09 16:22                 ` Julien Grall
@ 2017-05-09 16:45                   ` Tamas K Lengyel
  2017-05-09 18:55                     ` Julien Grall
  0 siblings, 1 reply; 27+ messages in thread
From: Tamas K Lengyel @ 2017-05-09 16:45 UTC (permalink / raw)
  To: Julien Grall
  Cc: Sergej Proskurin, Stefano Stabellini, Razvan Cojocaru, Xen-devel

On Tue, May 9, 2017 at 10:22 AM, Julien Grall <julien.grall@arm.com> wrote:
>
>
> On 09/05/17 17:04, Tamas K Lengyel wrote:
>>
>> On Tue, May 9, 2017 at 2:09 AM, Julien Grall <julien.grall@arm.com> wrote:
>>>
>>>
>>>
>>> On 05/09/2017 08:17 AM, Sergej Proskurin wrote:
>>>>
>>>>
>>>> Hi,
>>>>
>>>>>> What you currently do is try gva_to_ipa and if it does not work
>>>>>> you will call p2m_gva_to_ipa. This sounds a bit pointless to me and
>>>>>> waste of time if the underlying memory of stage-1 page table is
>>>>>> protected.
>>>>>
>>>>> But we don't know that the stage-1 page table is protected until the
>>>>> hardware based lookup fails. I can turn your argument around and say
>>>>> doing the software based lookup is pointless and a waste of time
>>>>> when the stage-1 table is not protected. Which is by the way what I
>>>>> would expect to see in most cases.
>>>>
>>>> I agree with Tamas: I also believe that in most cases the stage-1
>>>> translation table won't be protected. So, in my opinion, falling back to
>>>> software (which will be presumablly a rare operation) is the better
>>>> approach.
>>>
>>>
>>>
>>> Well, you both consider that it is better to do the fallback by assuming
>>> the
>>> TLBs will always cache the intermediate translations (e.g VA -> IPA) and
>>> not
>>> only the full S1 -> S2 translation (e.g VA -> PA).
>>
>>
>> No, I was just pointing out that if the TLB has it cached it is
>> guaranteed to be faster. Whether that is the case is probably usecase
>> dependent. But even if the TLB is not loaded, I would assume the
>> hardware lookup is at least as fast as the software based one, but I
>> would bet it is faster. Without number this is just theoretical but I
>> would be very surprised if the hardware lookup would ever be slower
>> then the software based one... And since this is a corner-case that
>> most application will never encounter, forcing them all to use a
>> slower approach doesn't sound right.
>
>
> What you miss in my point is TLB can be designed to never cache intermediate
> translation. It is not even about whether the TLB are loaded or not.
>
> I am struggling to understand why this would be a corner case as it would be
> valid to protect the page table to inspect what the guest is doing...

It is valid, yes, but nothing requires the user to block read access
to that specific page. Most applications only monitor a handful of
pages at a time. Applications may not even use read-access
restrictions at all (doing only write or execute tracing). If we do
software based lookups all the time we penalize all of those
applications when it is really only a very small subset of cases where
this will be needed.

>
> I am not saying translation would be slower in hardware than in software. I
> am just saying that maybe it would not be a huge performance hit to always
> do software rather than fallback and having a worse one time to time (or
> often?).

It probably wouldn't be a big hit, but why take any hit if not necessary?

Tamas

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [RFC PATCH 4/4] arm/mem_access: Add software guest-page-table walk
  2017-05-09 16:45                   ` Tamas K Lengyel
@ 2017-05-09 18:55                     ` Julien Grall
  2017-05-09 18:56                       ` Julien Grall
  0 siblings, 1 reply; 27+ messages in thread
From: Julien Grall @ 2017-05-09 18:55 UTC (permalink / raw)
  To: Tamas K Lengyel
  Cc: Sergej Proskurin, nd, Stefano Stabellini, Razvan Cojocaru, Xen-devel



On 09/05/2017 17:45, Tamas K Lengyel wrote:
> On Tue, May 9, 2017 at 10:22 AM, Julien Grall <julien.grall@arm.com> wrote:
>>
>>
>> On 09/05/17 17:04, Tamas K Lengyel wrote:
>>>
>>> On Tue, May 9, 2017 at 2:09 AM, Julien Grall <julien.grall@arm.com> wrote:
>>>>
>>>>
>>>>
>>>> On 05/09/2017 08:17 AM, Sergej Proskurin wrote:
>>>>>
>>>>>
>>>>> Hi,
>>>>>
>>>>>>> What you currently do is try gva_to_ipa and if it does not work
>>>>>>> you will call p2m_gva_to_ipa. This sounds a bit pointless to me and
>>>>>>> waste of time if the underlying memory of stage-1 page table is
>>>>>>> protected.
>>>>>>
>>>>>> But we don't know that the stage-1 page table is protected until the
>>>>>> hardware based lookup fails. I can turn your argument around and say
>>>>>> doing the software based lookup is pointless and a waste of time
>>>>>> when the stage-1 table is not protected. Which is by the way what I
>>>>>> would expect to see in most cases.
>>>>>
>>>>> I agree with Tamas: I also believe that in most cases the stage-1
>>>>> translation table won't be protected. So, in my opinion, falling back to
>>>>> software (which will be presumablly a rare operation) is the better
>>>>> approach.
>>>>
>>>>
>>>>
>>>> Well, you both consider that it is better to do the fallback by assuming
>>>> the
>>>> TLBs will always cache the intermediate translations (e.g VA -> IPA) and
>>>> not
>>>> only the full S1 -> S2 translation (e.g VA -> PA).
>>>
>>>
>>> No, I was just pointing out that if the TLB has it cached it is
>>> guaranteed to be faster. Whether that is the case is probably usecase
>>> dependent. But even if the TLB is not loaded, I would assume the
>>> hardware lookup is at least as fast as the software based one, but I
>>> would bet it is faster. Without number this is just theoretical but I
>>> would be very surprised if the hardware lookup would ever be slower
>>> then the software based one... And since this is a corner-case that
>>> most application will never encounter, forcing them all to use a
>>> slower approach doesn't sound right.
>>
>>
>> What you miss in my point is TLB can be designed to never cache intermediate
>> translation. It is not even about whether the TLB are loaded or not.
>>
>> I am struggling to understand why this would be a corner case as it would be
>> valid to protect the page table to inspect what the guest is doing...
>
> It is valid, yes, but nothing requires the user to block read access
> to that specific page. Most applications only monitor a handful of
> pages at a time. Applications may not even use read-access
> restrictions at all (doing only write or execute tracing). If we do
> software based lookups all the time we penalize all of those
> applications when it is really only a very small subset of cases where
> this will be needed.
>
>>
>> I am not saying translation would be slower in hardware than in software. I
>> am just saying that maybe it would not be a huge performance hit to always
>> do software rather than fallback and having a worse one time to time (or
>> often?).
>
> It probably wouldn't be a big hit, but why take any hit if not necessary?

You still miss my point... Anyway, it is nothing something I really care 
for now. So I am not going to argue.

Cheers,

-- 
Julien Grall

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [RFC PATCH 4/4] arm/mem_access: Add software guest-page-table walk
  2017-05-09 18:55                     ` Julien Grall
@ 2017-05-09 18:56                       ` Julien Grall
  0 siblings, 0 replies; 27+ messages in thread
From: Julien Grall @ 2017-05-09 18:56 UTC (permalink / raw)
  To: Tamas K Lengyel
  Cc: Sergej Proskurin, nd, Stefano Stabellini, Razvan Cojocaru, Xen-devel



On 09/05/2017 19:55, Julien Grall wrote:
>
>
> On 09/05/2017 17:45, Tamas K Lengyel wrote:
>> On Tue, May 9, 2017 at 10:22 AM, Julien Grall <julien.grall@arm.com>
>> wrote:
>>>
>>>
>>> On 09/05/17 17:04, Tamas K Lengyel wrote:
>>>>
>>>> On Tue, May 9, 2017 at 2:09 AM, Julien Grall <julien.grall@arm.com>
>>>> wrote:
>>>>>
>>>>>
>>>>>
>>>>> On 05/09/2017 08:17 AM, Sergej Proskurin wrote:
>>>>>>
>>>>>>
>>>>>> Hi,
>>>>>>
>>>>>>>> What you currently do is try gva_to_ipa and if it does not work
>>>>>>>> you will call p2m_gva_to_ipa. This sounds a bit pointless to me and
>>>>>>>> waste of time if the underlying memory of stage-1 page table is
>>>>>>>> protected.
>>>>>>>
>>>>>>> But we don't know that the stage-1 page table is protected until the
>>>>>>> hardware based lookup fails. I can turn your argument around and say
>>>>>>> doing the software based lookup is pointless and a waste of time
>>>>>>> when the stage-1 table is not protected. Which is by the way what I
>>>>>>> would expect to see in most cases.
>>>>>>
>>>>>> I agree with Tamas: I also believe that in most cases the stage-1
>>>>>> translation table won't be protected. So, in my opinion, falling
>>>>>> back to
>>>>>> software (which will be presumablly a rare operation) is the better
>>>>>> approach.
>>>>>
>>>>>
>>>>>
>>>>> Well, you both consider that it is better to do the fallback by
>>>>> assuming
>>>>> the
>>>>> TLBs will always cache the intermediate translations (e.g VA ->
>>>>> IPA) and
>>>>> not
>>>>> only the full S1 -> S2 translation (e.g VA -> PA).
>>>>
>>>>
>>>> No, I was just pointing out that if the TLB has it cached it is
>>>> guaranteed to be faster. Whether that is the case is probably usecase
>>>> dependent. But even if the TLB is not loaded, I would assume the
>>>> hardware lookup is at least as fast as the software based one, but I
>>>> would bet it is faster. Without number this is just theoretical but I
>>>> would be very surprised if the hardware lookup would ever be slower
>>>> then the software based one... And since this is a corner-case that
>>>> most application will never encounter, forcing them all to use a
>>>> slower approach doesn't sound right.
>>>
>>>
>>> What you miss in my point is TLB can be designed to never cache
>>> intermediate
>>> translation. It is not even about whether the TLB are loaded or not.
>>>
>>> I am struggling to understand why this would be a corner case as it
>>> would be
>>> valid to protect the page table to inspect what the guest is doing...
>>
>> It is valid, yes, but nothing requires the user to block read access
>> to that specific page. Most applications only monitor a handful of
>> pages at a time. Applications may not even use read-access
>> restrictions at all (doing only write or execute tracing). If we do
>> software based lookups all the time we penalize all of those
>> applications when it is really only a very small subset of cases where
>> this will be needed.
>>
>>>
>>> I am not saying translation would be slower in hardware than in
>>> software. I
>>> am just saying that maybe it would not be a huge performance hit to
>>> always
>>> do software rather than fallback and having a worse one time to time (or
>>> often?).
>>
>> It probably wouldn't be a big hit, but why take any hit if not necessary?
>
> You still miss my point... Anyway, it is nothing something I really care

* it is not something

> for now. So I am not going to argue.
>
> Cheers,
>

-- 
Julien Grall

^ permalink raw reply	[flat|nested] 27+ messages in thread

end of thread, other threads:[~2017-05-09 18:56 UTC | newest]

Thread overview: 27+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-04-30 19:48 [RFC PATCH 0/4] arm/mem_access: Walk guest page tables in SW if mem_access is active Sergej Proskurin
2017-04-30 19:48 ` [RFC PATCH 1/4] arm/mem_access: Move TTBCR_SZ_MASK to processor.h Sergej Proskurin
2017-05-02 11:47   ` Julien Grall
2017-04-30 19:48 ` [RFC PATCH 2/4] arm/mem_access: Change value of TTBCR_SZ_MASK Sergej Proskurin
2017-05-02 11:56   ` Julien Grall
2017-05-02 12:01     ` Julien Grall
2017-05-08  6:40       ` Sergej Proskurin
2017-05-08 11:31         ` Julien Grall
2017-05-08  6:25     ` Sergej Proskurin
2017-04-30 19:48 ` [RFC PATCH 3/4] arm/mem_access: Add further TCR_EL1/TTBCR defines Sergej Proskurin
2017-05-02 12:01   ` Julien Grall
2017-04-30 19:48 ` [RFC PATCH 4/4] arm/mem_access: Add software guest-page-table walk Sergej Proskurin
2017-05-01 14:38   ` Razvan Cojocaru
2017-05-01 15:39     ` Sergej Proskurin
2017-05-02 15:17   ` Julien Grall
2017-05-08  9:22     ` Sergej Proskurin
2017-05-08 11:44       ` Julien Grall
2017-05-08 19:42         ` Tamas K Lengyel
2017-05-08 20:53           ` Julien Grall
2017-05-08 21:22             ` Tamas K Lengyel
2017-05-09  7:17           ` Sergej Proskurin
2017-05-09  8:09             ` Julien Grall
2017-05-09 16:04               ` Tamas K Lengyel
2017-05-09 16:22                 ` Julien Grall
2017-05-09 16:45                   ` Tamas K Lengyel
2017-05-09 18:55                     ` Julien Grall
2017-05-09 18:56                       ` Julien Grall
