* [PATCH] xen/arm: guest_walk: Only generate necessary offsets/masks
@ 2021-04-05 16:20 Julien Grall
2021-04-15 22:32 ` Stefano Stabellini
2021-04-16 16:04 ` Bertrand Marquis
0 siblings, 2 replies; 5+ messages in thread
From: Julien Grall @ 2021-04-05 16:20 UTC (permalink / raw)
To: xen-devel
Cc: bertrand.marquis, Julien Grall, Stefano Stabellini, Julien Grall,
Volodymyr Babchuk, Julien Grall
From: Julien Grall <jgrall@amazon.com>
At the moment, we are computing offsets/masks for each level and
granularity. This is a bit of a waste given that we only need to
know the offsets/masks for the granularity used by the guest.
All the LPAE information can easily be inferred with just the
page shift for a given granularity and the level.
So rather than providing a set of helpers per granularity, we can
provide a single set that takes the granularity and the level as
parameters.
With the new helpers in place, we can rework guest_walk_ld() to
only compute necessary information.
Signed-off-by: Julien Grall <julien@amazon.com>
---
xen/arch/arm/guest_walk.c | 37 ++---------------
xen/include/asm-arm/lpae.h | 82 +++++++++++++-------------------------
2 files changed, 30 insertions(+), 89 deletions(-)
diff --git a/xen/arch/arm/guest_walk.c b/xen/arch/arm/guest_walk.c
index b4496c4c86c6..87de40d0cb68 100644
--- a/xen/arch/arm/guest_walk.c
+++ b/xen/arch/arm/guest_walk.c
@@ -372,38 +372,6 @@ static bool guest_walk_ld(const struct vcpu *v,
register_t tcr = READ_SYSREG(TCR_EL1);
struct domain *d = v->domain;
-#define OFFSETS(gva, gran) \
-{ \
- zeroeth_table_offset_##gran(gva), \
- first_table_offset_##gran(gva), \
- second_table_offset_##gran(gva), \
- third_table_offset_##gran(gva) \
-}
-
- const paddr_t offsets[3][4] = {
- OFFSETS(gva, 4K),
- OFFSETS(gva, 16K),
- OFFSETS(gva, 64K)
- };
-
-#undef OFFSETS
-
-#define MASKS(gran) \
-{ \
- zeroeth_size(gran) - 1, \
- first_size(gran) - 1, \
- second_size(gran) - 1, \
- third_size(gran) - 1 \
-}
-
- static const paddr_t masks[3][4] = {
- MASKS(4K),
- MASKS(16K),
- MASKS(64K)
- };
-
-#undef MASKS
-
static const unsigned int grainsizes[3] = {
PAGE_SHIFT_4K,
PAGE_SHIFT_16K,
@@ -519,7 +487,7 @@ static bool guest_walk_ld(const struct vcpu *v,
* Add offset given by the GVA to the translation table base address.
* Shift the offset by 3 as it is 8-byte aligned.
*/
- paddr |= offsets[gran][level] << 3;
+ paddr |= LPAE_TABLE_INDEX_GS(grainsizes[gran], level, gva) << 3;
/* Access the guest's memory to read only one PTE. */
ret = access_guest_memory_by_ipa(d, paddr, &pte, sizeof(lpae_t), false);
@@ -572,7 +540,8 @@ static bool guest_walk_ld(const struct vcpu *v,
/* Make sure that the lower bits of the PTE's base address are zero. */
mask = GENMASK_ULL(47, grainsizes[gran]);
- *ipa = (pfn_to_paddr(pte.walk.base) & mask) | (gva & masks[gran][level]);
+ *ipa = (pfn_to_paddr(pte.walk.base) & mask) |
+ (gva & (LEVEL_SIZE_GS(grainsizes[gran], level) - 1));
/*
* Set permissions so that the caller can check the flags by herself. Note
diff --git a/xen/include/asm-arm/lpae.h b/xen/include/asm-arm/lpae.h
index 4797f9cee494..e94de2e7d8e8 100644
--- a/xen/include/asm-arm/lpae.h
+++ b/xen/include/asm-arm/lpae.h
@@ -160,63 +160,35 @@ static inline bool lpae_is_superpage(lpae_t pte, unsigned int level)
#define lpae_set_mfn(pte, mfn) ((pte).walk.base = mfn_x(mfn))
/*
- * AArch64 supports pages with different sizes (4K, 16K, and 64K). To enable
- * page table walks for various configurations, the following helpers enable
- * walking the translation table with varying page size granularities.
+ * AArch64 supports pages with different sizes (4K, 16K, and 64K).
+ * Provide a set of generic helpers that will compute various
+ * information based on the page granularity.
+ *
+ * Note that the parameter 'gs' is the page shift of the granularity used.
+ * Some macros will evaluate 'gs' twice rather than storing it in a
+ * variable. This is to allow using the macros in assembly.
+ */
+
+/*
+ * Granularity | PAGE_SHIFT | LPAE_SHIFT
+ * -------------------------------------
+ * 4K | 12 | 9
+ * 16K | 14 | 11
+ * 64K | 16 | 13
+ *
+ * This is equivalent to LPAE_SHIFT = PAGE_SHIFT - 3
*/
+#define LPAE_SHIFT_GS(gs) ((gs) - 3)
+#define LPAE_ENTRIES_GS(gs) (_AC(1, U) << LPAE_SHIFT_GS(gs))
+#define LPAE_ENTRIES_MASK_GS(gs) (LPAE_ENTRIES_GS(gs) - 1)
+
+#define LEVEL_ORDER_GS(gs, lvl) ((3 - (lvl)) * LPAE_SHIFT_GS(gs))
+#define LEVEL_SHIFT_GS(gs, lvl) (LEVEL_ORDER_GS(gs, lvl) + (gs))
+#define LEVEL_SIZE_GS(gs, lvl) (_AT(paddr_t, 1) << LEVEL_SHIFT_GS(gs, lvl))
-#define LPAE_SHIFT_4K (9)
-#define LPAE_SHIFT_16K (11)
-#define LPAE_SHIFT_64K (13)
-
-#define lpae_entries(gran) (_AC(1,U) << LPAE_SHIFT_##gran)
-#define lpae_entry_mask(gran) (lpae_entries(gran) - 1)
-
-#define third_shift(gran) (PAGE_SHIFT_##gran)
-#define third_size(gran) ((paddr_t)1 << third_shift(gran))
-
-#define second_shift(gran) (third_shift(gran) + LPAE_SHIFT_##gran)
-#define second_size(gran) ((paddr_t)1 << second_shift(gran))
-
-#define first_shift(gran) (second_shift(gran) + LPAE_SHIFT_##gran)
-#define first_size(gran) ((paddr_t)1 << first_shift(gran))
-
-/* Note that there is no zeroeth lookup level with a 64K granule size. */
-#define zeroeth_shift(gran) (first_shift(gran) + LPAE_SHIFT_##gran)
-#define zeroeth_size(gran) ((paddr_t)1 << zeroeth_shift(gran))
-
-#define TABLE_OFFSET(offs, gran) (offs & lpae_entry_mask(gran))
-#define TABLE_OFFSET_HELPERS(gran) \
-static inline paddr_t third_table_offset_##gran##K(paddr_t va) \
-{ \
- return TABLE_OFFSET((va >> third_shift(gran##K)), gran##K); \
-} \
- \
-static inline paddr_t second_table_offset_##gran##K(paddr_t va) \
-{ \
- return TABLE_OFFSET((va >> second_shift(gran##K)), gran##K); \
-} \
- \
-static inline paddr_t first_table_offset_##gran##K(paddr_t va) \
-{ \
- return TABLE_OFFSET((va >> first_shift(gran##K)), gran##K); \
-} \
- \
-static inline paddr_t zeroeth_table_offset_##gran##K(paddr_t va) \
-{ \
- /* Note that there is no zeroeth lookup level with 64K granule sizes. */\
- if ( gran == 64 ) \
- return 0; \
- else \
- return TABLE_OFFSET((va >> zeroeth_shift(gran##K)), gran##K); \
-} \
-
-TABLE_OFFSET_HELPERS(4);
-TABLE_OFFSET_HELPERS(16);
-TABLE_OFFSET_HELPERS(64);
-
-#undef TABLE_OFFSET
-#undef TABLE_OFFSET_HELPERS
+/* Offset in the table at level 'lvl' */
+#define LPAE_TABLE_INDEX_GS(gs, lvl, addr) \
+ (((addr) >> LEVEL_SHIFT_GS(gs, lvl)) & LPAE_ENTRIES_MASK_GS(gs))
/* Generate an array @var containing the offset for each level from @addr */
#define DECLARE_OFFSETS(var, addr) \
--
2.17.1
* Re: [PATCH] xen/arm: guest_walk: Only generate necessary offsets/masks
2021-04-05 16:20 [PATCH] xen/arm: guest_walk: Only generate necessary offsets/masks Julien Grall
@ 2021-04-15 22:32 ` Stefano Stabellini
2021-04-16 16:04 ` Bertrand Marquis
1 sibling, 0 replies; 5+ messages in thread
From: Stefano Stabellini @ 2021-04-15 22:32 UTC (permalink / raw)
To: Julien Grall
Cc: xen-devel, bertrand.marquis, Julien Grall, Stefano Stabellini,
Volodymyr Babchuk, Julien Grall
On Mon, 5 Apr 2021, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
>
> At the moment, we are computing offsets/masks for each level and
> granularity. This is a bit of a waste given that we only need to
> know the offsets/masks for the granularity used by the guest.
>
> All the LPAE information can easily be inferred with just the
> page shift for a given granularity and the level.
>
> So rather than providing a set of helpers per granularity, we can
> provide a single set that takes the granularity and the level as
> parameters.
>
> With the new helpers in place, we can rework guest_walk_ld() to
> only compute necessary information.
>
> Signed-off-by: Julien Grall <julien@amazon.com>
Very nice!!
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
* Re: [PATCH] xen/arm: guest_walk: Only generate necessary offsets/masks
2021-04-05 16:20 [PATCH] xen/arm: guest_walk: Only generate necessary offsets/masks Julien Grall
2021-04-15 22:32 ` Stefano Stabellini
@ 2021-04-16 16:04 ` Bertrand Marquis
2021-04-18 18:16 ` Julien Grall
1 sibling, 1 reply; 5+ messages in thread
From: Bertrand Marquis @ 2021-04-16 16:04 UTC (permalink / raw)
To: Julien Grall
Cc: xen-devel, Julien Grall, Stefano Stabellini, Volodymyr Babchuk,
Julien Grall
Hi Julien,
> On 5 Apr 2021, at 17:20, Julien Grall <julien@xen.org> wrote:
>
> From: Julien Grall <jgrall@amazon.com>
>
> At the moment, we are computing offsets/masks for each level and
> granularity. This is a bit of waste given that we only need to
> know the offsets/masks for the granularity used by the guest.
>
> All the LPAE information can easily be inferred with just the
> page shift for a given granularity and the level.
>
> So rather than providing a set of helpers per granularity, we can
> provide a single set that takes the granularity and the level as
> parameters.
>
> With the new helpers in place, we can rework guest_walk_ld() to
> only compute necessary information.
>
> Signed-off-by: Julien Grall <julien@amazon.com>
Very nice cleanup.
Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
Tested-by: Bertrand Marquis <bertrand.marquis@arm.com>
I validated the changes on arm64 but not on arm32.
Regards
Bertrand
* Re: [PATCH] xen/arm: guest_walk: Only generate necessary offsets/masks
2021-04-16 16:04 ` Bertrand Marquis
@ 2021-04-18 18:16 ` Julien Grall
2021-04-19 7:50 ` Bertrand Marquis
0 siblings, 1 reply; 5+ messages in thread
From: Julien Grall @ 2021-04-18 18:16 UTC (permalink / raw)
To: Bertrand Marquis
Cc: xen-devel, Julien Grall, Stefano Stabellini, Volodymyr Babchuk,
Julien Grall
Hi Bertrand,
On 16/04/2021 17:04, Bertrand Marquis wrote:
>> On 5 Apr 2021, at 17:20, Julien Grall <julien@xen.org> wrote:
>>
>> From: Julien Grall <jgrall@amazon.com>
>>
>> At the moment, we are computing offsets/masks for each level and
>> granularity. This is a bit of a waste given that we only need to
>> know the offsets/masks for the granularity used by the guest.
>>
>> All the LPAE information can easily be inferred with just the
>> page shift for a given granularity and the level.
>>
>> So rather than providing a set of helpers per granularity, we can
>> provide a single set that takes the granularity and the level as
>> parameters.
>>
>> With the new helpers in place, we can rework guest_walk_ld() to
>> only compute necessary information.
>>
>> Signed-off-by: Julien Grall <julien@amazon.com>
>
> Very nice cleanup.
I have a couple more cleanups in that area to send. So far, we are
using a completely different set of macros for Xen page-tables. I would
like to introduce a new set that will just pass PAGE_SHIFT to the
existing helper.
The nice part is this means it will be easier to support other page
granularity if we wanted :).
>
> Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
> Tested-by: Bertrand Marquis <bertrand.marquis@arm.com>
Thanks! I have committed the patch.
Cheers,
--
Julien Grall
* Re: [PATCH] xen/arm: guest_walk: Only generate necessary offsets/masks
2021-04-18 18:16 ` Julien Grall
@ 2021-04-19 7:50 ` Bertrand Marquis
0 siblings, 0 replies; 5+ messages in thread
From: Bertrand Marquis @ 2021-04-19 7:50 UTC (permalink / raw)
To: Julien Grall
Cc: xen-devel, Julien Grall, Stefano Stabellini, Volodymyr Babchuk,
Julien Grall
Hi Julien,
> On 18 Apr 2021, at 19:16, Julien Grall <julien@xen.org> wrote:
>
> Hi Bertrand,
>
> On 16/04/2021 17:04, Bertrand Marquis wrote:
>>> On 5 Apr 2021, at 17:20, Julien Grall <julien@xen.org> wrote:
>>>
>>> From: Julien Grall <jgrall@amazon.com>
>>>
>>> At the moment, we are computing offsets/masks for each level and
>>> granularity. This is a bit of a waste given that we only need to
>>> know the offsets/masks for the granularity used by the guest.
>>>
>>> All the LPAE information can easily be inferred with just the
>>> page shift for a given granularity and the level.
>>>
>>> So rather than providing a set of helpers per granularity, we can
>>> provide a single set that takes the granularity and the level as
>>> parameters.
>>>
>>> With the new helpers in place, we can rework guest_walk_ld() to
>>> only compute necessary information.
>>>
>>> Signed-off-by: Julien Grall <julien@amazon.com>
>> Very nice cleanup.
>
> I have a couple more cleanups in that area to send. So far, we are using a completely different set of macros for Xen page-tables. I would like to introduce a new set that will just pass PAGE_SHIFT to the existing helper.
>
> The nice part is this means it will be easier to support other page granularity if we wanted :).
That would be awesome.
Do not hesitate to ping me if you need some help.
>
>> Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
>> Tested-by: Bertrand Marquis <bertrand.marquis@arm.com>
>
> Thanks! I have committed the patch.
Great,
Cheers
Bertrand
>
> Cheers,
>
> --
> Julien Grall