* [PATCH] KVM: arm64: pkvm: Use the mm_ops indirection for cache maintenance
From: Marc Zyngier @ 2022-01-14 12:50 UTC (permalink / raw)
To: kvmarm, kvm, linux-arm-kernel
Cc: James Morse, Suzuki K Poulose, Alexandru Elisei, Will Deacon,
Fuad Tabba, Quentin Perret
CMOs issued from EL2 cannot directly use the kernel helpers,
as EL2 doesn't have a mapping of the guest pages. Oops.
Instead, use the mm_ops indirection to call helpers that will
perform a mapping at EL2, allowing the CMO to be effective.
Fixes: 25aa28691bb9 ("KVM: arm64: Move guest CMOs to the fault handlers")
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/kvm/hyp/pgtable.c | 18 ++++++------------
1 file changed, 6 insertions(+), 12 deletions(-)
diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index 844a6f003fd5..2cb3867eb7c2 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -983,13 +983,9 @@ static int stage2_unmap_walker(u64 addr, u64 end, u32 level, kvm_pte_t *ptep,
*/
stage2_put_pte(ptep, mmu, addr, level, mm_ops);
- if (need_flush) {
- kvm_pte_t *pte_follow = kvm_pte_follow(pte, mm_ops);
-
- dcache_clean_inval_poc((unsigned long)pte_follow,
- (unsigned long)pte_follow +
- kvm_granule_size(level));
- }
+ if (need_flush && mm_ops->dcache_clean_inval_poc)
+ mm_ops->dcache_clean_inval_poc(kvm_pte_follow(pte, mm_ops),
+ kvm_granule_size(level));
if (childp)
mm_ops->put_page(childp);
@@ -1151,15 +1147,13 @@ static int stage2_flush_walker(u64 addr, u64 end, u32 level, kvm_pte_t *ptep,
struct kvm_pgtable *pgt = arg;
struct kvm_pgtable_mm_ops *mm_ops = pgt->mm_ops;
kvm_pte_t pte = *ptep;
- kvm_pte_t *pte_follow;
if (!kvm_pte_valid(pte) || !stage2_pte_cacheable(pgt, pte))
return 0;
- pte_follow = kvm_pte_follow(pte, mm_ops);
- dcache_clean_inval_poc((unsigned long)pte_follow,
- (unsigned long)pte_follow +
- kvm_granule_size(level));
+ if (mm_ops->dcache_clean_inval_poc)
+ mm_ops->dcache_clean_inval_poc(kvm_pte_follow(pte, mm_ops),
+ kvm_granule_size(level));
return 0;
}
--
2.30.2
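The shape of the fix is a classic function-pointer indirection: instead of calling the kernel helper `dcache_clean_inval_poc()` directly, the walker goes through an optional `mm_ops` callback, so the EL2 backend can substitute a helper that maps the page first. The following is a minimal user-space sketch of that pattern; the struct and function names mirror the kernel's, but the bodies are stand-ins, not the real hypervisor code.

```c
#include <assert.h>
#include <stddef.h>

/* Counts invocations so the sketch is observable; the real helper
 * would issue DC CVAC/CIVAC instructions over the range instead. */
static int flush_calls;

/* Trimmed-down analogue of the kernel's struct kvm_pgtable_mm_ops:
 * only the callback relevant to this patch is shown. */
struct kvm_pgtable_mm_ops {
	/* When non-NULL, points at a context-appropriate CMO helper:
	 * the plain kernel helper when running at EL1, or an EL2
	 * helper that first maps the guest page into the hypervisor's
	 * address space before performing the maintenance. */
	void (*dcache_clean_inval_poc)(void *addr, size_t size);
};

/* Stand-in for the host-side cache maintenance operation. */
static void host_dcache_clean_inval_poc(void *addr, size_t size)
{
	(void)addr;
	(void)size;
	flush_calls++;
}

/* Mirrors the pattern the patch introduces in the stage-2 walkers:
 * the CMO is only ever issued through the indirection, and only when
 * the backend actually provides a helper. */
static void stage2_flush_range(struct kvm_pgtable_mm_ops *mm_ops,
			       void *addr, size_t size)
{
	if (mm_ops->dcache_clean_inval_poc)
		mm_ops->dcache_clean_inval_poc(addr, size);
}
```

The NULL check matters: a backend that never needs post-walk cache maintenance can simply leave the callback unset, and the walker degrades to a no-op rather than crashing.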
* Re: [PATCH] KVM: arm64: pkvm: Use the mm_ops indirection for cache maintenance
From: Quentin Perret @ 2022-01-14 13:49 UTC (permalink / raw)
To: Marc Zyngier
Cc: kvmarm, kvm, linux-arm-kernel, James Morse, Suzuki K Poulose,
Alexandru Elisei, Will Deacon, Fuad Tabba
On Friday 14 Jan 2022 at 12:50:38 (+0000), Marc Zyngier wrote:
> CMOs issued from EL2 cannot directly use the kernel helpers,
> as EL2 doesn't have a mapping of the guest pages. Oops.
>
> Instead, use the mm_ops indirection to call helpers that will
> perform a mapping at EL2, allowing the CMO to be effective.
Right, we were clearly lucky not to use those paths at EL2 _yet_, but
that's going to change soon and this is better for consistency, so:
Reviewed-by: Quentin Perret <qperret@google.com>
> Fixes: 25aa28691bb9 ("KVM: arm64: Move guest CMOs to the fault handlers")
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
> arch/arm64/kvm/hyp/pgtable.c | 18 ++++++------------
> 1 file changed, 6 insertions(+), 12 deletions(-)
>
> diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
> index 844a6f003fd5..2cb3867eb7c2 100644
> --- a/arch/arm64/kvm/hyp/pgtable.c
> +++ b/arch/arm64/kvm/hyp/pgtable.c
> @@ -983,13 +983,9 @@ static int stage2_unmap_walker(u64 addr, u64 end, u32 level, kvm_pte_t *ptep,
> */
> stage2_put_pte(ptep, mmu, addr, level, mm_ops);
>
> - if (need_flush) {
> - kvm_pte_t *pte_follow = kvm_pte_follow(pte, mm_ops);
> -
> - dcache_clean_inval_poc((unsigned long)pte_follow,
> - (unsigned long)pte_follow +
> - kvm_granule_size(level));
> - }
> + if (need_flush && mm_ops->dcache_clean_inval_poc)
> + mm_ops->dcache_clean_inval_poc(kvm_pte_follow(pte, mm_ops),
> + kvm_granule_size(level));
>
> if (childp)
> mm_ops->put_page(childp);
> @@ -1151,15 +1147,13 @@ static int stage2_flush_walker(u64 addr, u64 end, u32 level, kvm_pte_t *ptep,
> struct kvm_pgtable *pgt = arg;
> struct kvm_pgtable_mm_ops *mm_ops = pgt->mm_ops;
> kvm_pte_t pte = *ptep;
> - kvm_pte_t *pte_follow;
>
> if (!kvm_pte_valid(pte) || !stage2_pte_cacheable(pgt, pte))
> return 0;
>
> - pte_follow = kvm_pte_follow(pte, mm_ops);
> - dcache_clean_inval_poc((unsigned long)pte_follow,
> - (unsigned long)pte_follow +
> - kvm_granule_size(level));
> + if (mm_ops->dcache_clean_inval_poc)
> + mm_ops->dcache_clean_inval_poc(kvm_pte_follow(pte, mm_ops),
> + kvm_granule_size(level));
> return 0;
> }
>
> --
> 2.30.2
>