From: Marc Zyngier <maz@kernel.org>
To: Quentin Perret <qperret@google.com>
Cc: linux-arch@vger.kernel.org, Will Deacon <will@kernel.org>,
	Catalin Marinas <catalin.marinas@arm.com>,
	kernel-team@android.com, kvmarm@lists.cs.columbia.edu,
	linux-arm-kernel@lists.infradead.org,
	Jade Alglave <jade.alglave@arm.com>
Subject: Re: [PATCH 3/4] KVM: arm64: Convert the host S2 over to __load_guest_stage2()
Date: Fri, 20 Aug 2021 09:01:41 +0100	[thread overview]
Message-ID: <87tujkr1cq.wl-maz@kernel.org> (raw)
In-Reply-To: <YQ07sPoa4ACizYrp@google.com>

On Fri, 06 Aug 2021 14:40:00 +0100,
Quentin Perret <qperret@google.com> wrote:
> 
> On Friday 06 Aug 2021 at 12:31:07 (+0100), Will Deacon wrote:
> > From: Marc Zyngier <maz@kernel.org>
> > 
> > The protected mode relies on a separate helper to load the
> > S2 context. Move over to the __load_guest_stage2() helper
> > instead.
> > 
> > Cc: Catalin Marinas <catalin.marinas@arm.com>
> > Cc: Jade Alglave <jade.alglave@arm.com>
> > Cc: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
> > Signed-off-by: Marc Zyngier <maz@kernel.org>
> > Signed-off-by: Will Deacon <will@kernel.org>
> > ---
> >  arch/arm64/include/asm/kvm_mmu.h              | 11 +++--------
> >  arch/arm64/kvm/hyp/include/nvhe/mem_protect.h |  2 +-
> >  arch/arm64/kvm/hyp/nvhe/mem_protect.c         |  2 +-
> >  3 files changed, 5 insertions(+), 10 deletions(-)
> > 
> > diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
> > index 05e089653a1a..934ef0deff9f 100644
> > --- a/arch/arm64/include/asm/kvm_mmu.h
> > +++ b/arch/arm64/include/asm/kvm_mmu.h
> > @@ -267,9 +267,10 @@ static __always_inline u64 kvm_get_vttbr(struct kvm_s2_mmu *mmu)
> >   * Must be called from hyp code running at EL2 with an updated VTTBR
> >   * and interrupts disabled.
> >   */
> > -static __always_inline void __load_stage2(struct kvm_s2_mmu *mmu, unsigned long vtcr)
> > +static __always_inline void __load_guest_stage2(struct kvm_s2_mmu *mmu,
> > +						struct kvm_arch *arch)
> >  {
> > -	write_sysreg(vtcr, vtcr_el2);
> > +	write_sysreg(arch->vtcr, vtcr_el2);
> >  	write_sysreg(kvm_get_vttbr(mmu), vttbr_el2);
> >  
> >  	/*
> > @@ -280,12 +281,6 @@ static __always_inline void __load_stage2(struct kvm_s2_mmu *mmu, unsigned long
> >  	asm(ALTERNATIVE("nop", "isb", ARM64_WORKAROUND_SPECULATIVE_AT));
> >  }
> >  
> > -static __always_inline void __load_guest_stage2(struct kvm_s2_mmu *mmu,
> > -						struct kvm_arch *arch)
> > -{
> > -	__load_stage2(mmu, arch->vtcr);
> > -}
> > -
> >  static inline struct kvm *kvm_s2_mmu_to_kvm(struct kvm_s2_mmu *mmu)
> >  {
> >  	return container_of(mmu->arch, struct kvm, arch);
> > diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
> > index 9c227d87c36d..a910648bc71b 100644
> > --- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
> > +++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
> > @@ -29,7 +29,7 @@ void handle_host_mem_abort(struct kvm_cpu_context *host_ctxt);
> >  static __always_inline void __load_host_stage2(void)
> >  {
> >  	if (static_branch_likely(&kvm_protected_mode_initialized))
> > -		__load_stage2(&host_kvm.arch.mmu, host_kvm.arch.vtcr);
> > +		__load_guest_stage2(&host_kvm.arch.mmu, &host_kvm.arch);
> >  	else
> >  		write_sysreg(0, vttbr_el2);
> >  }
> > diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
> > index d938ce95d3bd..d4e74ca7f876 100644
> > --- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
> > +++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
> > @@ -126,7 +126,7 @@ int __pkvm_prot_finalize(void)
> >  	kvm_flush_dcache_to_poc(params, sizeof(*params));
> >  
> >  	write_sysreg(params->hcr_el2, hcr_el2);
> > -	__load_stage2(&host_kvm.arch.mmu, host_kvm.arch.vtcr);
> > +	__load_guest_stage2(&host_kvm.arch.mmu, &host_kvm.arch);
> 
> Nit: clearly we're not loading a guest stage-2 here, so maybe the
> function should take a more generic name?

How about we rename __load_guest_stage2() to __load_stage2() instead,
with the same parameters?

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.
_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

  reply	other threads:[~2021-08-20  8:01 UTC|newest]

Thread overview: 51+ messages / expand[flat|nested]  mbox.gz  Atom feed  top
2021-08-06 11:31 [PATCH 0/4] Fix racing TLBI with ASID/VMID reallocation Will Deacon
2021-08-06 11:31 ` [PATCH 1/4] arm64: mm: Fix TLBI vs ASID rollover Will Deacon
2021-08-06 11:59   ` Catalin Marinas
2021-08-06 12:42     ` Will Deacon
2021-08-06 12:49       ` Catalin Marinas
2021-08-06 11:31 ` [PATCH] of: restricted dma: Don't fail device probe on rmem init failure Will Deacon
2021-08-06 11:34   ` Will Deacon
2021-08-06 11:31 ` [PATCH 2/4] KVM: arm64: Move kern_hyp_va() usage in __load_guest_stage2() into the callers Will Deacon
2021-08-06 11:31 ` [PATCH 3/4] KVM: arm64: Convert the host S2 over to __load_guest_stage2() Will Deacon
2021-08-06 13:40   ` Quentin Perret
2021-08-20  8:01     ` Marc Zyngier [this message]
2021-08-20  9:08       ` Quentin Perret
2021-08-06 11:31 ` [PATCH 4/4] KVM: arm64: Upgrade VMID accesses to {READ,WRITE}_ONCE Will Deacon
2021-08-06 14:24   ` Quentin Perret
2021-08-06 16:04 ` (subset) [PATCH 0/4] Fix racing TLBI with ASID/VMID reallocation Catalin Marinas
2021-09-10  9:06 ` Shameerali Kolothum Thodi
2021-09-10  9:45   ` Catalin Marinas
