KVM ARM Archive on lore.kernel.org
From: Marc Zyngier <maz@kernel.org>
To: David Brazdil <dbrazdil@google.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>,
	linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	Will Deacon <will@kernel.org>,
	kvmarm@lists.cs.columbia.edu
Subject: Re: [PATCH 05/15] arm64: kvm: Build hyp-entry.S separately for VHE/nVHE
Date: Thu, 07 May 2020 16:07:54 +0100
Message-ID: <87imh72379.wl-maz@kernel.org>
In-Reply-To: <20200430144831.59194-6-dbrazdil@google.com>

On Thu, 30 Apr 2020 15:48:21 +0100,
David Brazdil <dbrazdil@google.com> wrote:
> 
> This patch is part of a series which builds KVM's non-VHE hyp code separately
> from VHE and the rest of the kernel.

This sentence should be part of your cover letter, and not in the 5th
patch.

> 
> hyp-entry.S contains implementation of KVM hyp vectors. This code is mostly
> shared between VHE/nVHE, therefore compile it under both VHE and nVHE build
> rules, with small differences hidden behind '#ifdef __HYPERVISOR__'. These are:
>   * only nVHE should handle host HVCs, VHE will now panic,

That's not true. VHE does handle HVCs from the guest. If you make VHE
panic on guest exit, I'll come after you! ;-)

>   * only nVHE needs kvm_hcall_table, so move host_hypcall.c to nvhe/,
>   * __smccc_workaround_1_smc is not needed by nVHE, only cpu_errata.c in
>     kernel proper.

How come? You certainly need to be able to use the generated code,
don't you? Or do you actually mean that the assembly code doesn't need
to live in the file that contains the vectors themselves (which I'd
agree with)?

> 
> Adjust code which selects which KVM hyp vecs to install to choose the correct
> VHE/nVHE symbol.
> 
> Signed-off-by: David Brazdil <dbrazdil@google.com>
> ---
>  arch/arm64/include/asm/kvm_asm.h              |  7 +++++
>  arch/arm64/include/asm/kvm_mmu.h              | 13 +++++----
>  arch/arm64/include/asm/mmu.h                  |  7 -----
>  arch/arm64/kernel/cpu_errata.c                |  2 +-
>  arch/arm64/kernel/image-vars.h                | 28 +++++++++++++++++++
>  arch/arm64/kvm/hyp/Makefile                   |  2 +-
>  arch/arm64/kvm/hyp/hyp-entry.S                | 27 ++++++++++++------
>  arch/arm64/kvm/hyp/nvhe/Makefile              |  2 +-
>  .../arm64/kvm/hyp/{ => nvhe}/host_hypercall.c |  0
>  arch/arm64/kvm/va_layout.c                    |  2 +-
>  10 files changed, 65 insertions(+), 25 deletions(-)
>  rename arch/arm64/kvm/hyp/{ => nvhe}/host_hypercall.c (100%)
> 
> diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
> index 99ab204519ca..cdaf3df8085d 100644
> --- a/arch/arm64/include/asm/kvm_asm.h
> +++ b/arch/arm64/include/asm/kvm_asm.h
> @@ -71,6 +71,13 @@ extern char __kvm_hyp_init[];
>  extern char __kvm_hyp_init_end[];
>  
>  extern char __kvm_hyp_vector[];
> +extern char kvm_nvhe_sym(__kvm_hyp_vector)[];

This is becoming pretty ugly. I'd rather we have a helper that emits
the declaration for both symbols. Or something.

> +
> +#ifdef CONFIG_KVM_INDIRECT_VECTORS
> +extern char __bp_harden_hyp_vecs[];
> +extern char kvm_nvhe_sym(__bp_harden_hyp_vecs)[];
> +extern atomic_t arm64_el2_vector_last_slot;
> +#endif
>  
>  extern void __kvm_flush_vm_context(void);
>  extern void __kvm_tlb_flush_vmid_ipa(struct kvm *kvm, phys_addr_t ipa);
> diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
> index 30b0e8d6b895..0a5fa033422c 100644
> --- a/arch/arm64/include/asm/kvm_mmu.h
> +++ b/arch/arm64/include/asm/kvm_mmu.h
> @@ -468,7 +468,7 @@ static inline int kvm_write_guest_lock(struct kvm *kvm, gpa_t gpa,
>   * VHE, as we don't have hypervisor-specific mappings. If the system
>   * is VHE and yet selects this capability, it will be ignored.
>   */
> -#include <asm/mmu.h>
> +#include <asm/kvm_asm.h>
>  
>  extern void *__kvm_bp_vect_base;
>  extern int __kvm_harden_el2_vector_slot;
> @@ -477,11 +477,11 @@ extern int __kvm_harden_el2_vector_slot;
>  static inline void *kvm_get_hyp_vector(void)
>  {
>  	struct bp_hardening_data *data = arm64_get_bp_hardening_data();
> -	void *vect = kern_hyp_va(kvm_ksym_ref(__kvm_hyp_vector));
> +	void *vect = kern_hyp_va(kvm_hyp_ref(__kvm_hyp_vector));

I find it pretty annoying (again) that I have to know where a symbol
lives (kernel or nVHE-EL2) to know which kvm_*_ref() helper to use.
Maybe there is no good solution to this, but still...

>  	int slot = -1;
>  
>  	if (cpus_have_const_cap(ARM64_HARDEN_BRANCH_PREDICTOR) && data->fn) {
> -		vect = kern_hyp_va(kvm_ksym_ref(__bp_harden_hyp_vecs));
> +		vect = kern_hyp_va(kvm_hyp_ref(__bp_harden_hyp_vecs));
>  		slot = data->hyp_vectors_slot;
>  	}
>  
> @@ -510,12 +510,13 @@ static inline int kvm_map_vectors(void)
>  	 *  HBP +  HEL2 -> use hardened vertors and use exec mapping
>  	 */
>  	if (cpus_have_const_cap(ARM64_HARDEN_BRANCH_PREDICTOR)) {
> -		__kvm_bp_vect_base = kvm_ksym_ref(__bp_harden_hyp_vecs);
> +		__kvm_bp_vect_base = kvm_hyp_ref(__bp_harden_hyp_vecs);
>  		__kvm_bp_vect_base = kern_hyp_va(__kvm_bp_vect_base);
>  	}
>  
>  	if (cpus_have_const_cap(ARM64_HARDEN_EL2_VECTORS)) {
> -		phys_addr_t vect_pa = __pa_symbol(__bp_harden_hyp_vecs);
> +		phys_addr_t vect_pa =
> +			__pa_symbol(kvm_nvhe_sym(__bp_harden_hyp_vecs));

Please keep the assignment on a single line (and screw checkpatch).

>  		unsigned long size = __BP_HARDEN_HYP_VECS_SZ;
>  
>  		/*
> @@ -534,7 +535,7 @@ static inline int kvm_map_vectors(void)
>  #else
>  static inline void *kvm_get_hyp_vector(void)
>  {
> -	return kern_hyp_va(kvm_ksym_ref(__kvm_hyp_vector));
> +	return kern_hyp_va(kvm_hyp_ref(__kvm_hyp_vector));
>  }
>  
>  static inline int kvm_map_vectors(void)
> diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
> index 68140fdd89d6..4d913f6dd366 100644
> --- a/arch/arm64/include/asm/mmu.h
> +++ b/arch/arm64/include/asm/mmu.h
> @@ -42,13 +42,6 @@ struct bp_hardening_data {
>  	bp_hardening_cb_t	fn;
>  };
>  
> -#if (defined(CONFIG_HARDEN_BRANCH_PREDICTOR) ||	\
> -     defined(CONFIG_HARDEN_EL2_VECTORS))
> -
> -extern char __bp_harden_hyp_vecs[];
> -extern atomic_t arm64_el2_vector_last_slot;
> -#endif  /* CONFIG_HARDEN_BRANCH_PREDICTOR || CONFIG_HARDEN_EL2_VECTORS */
> -
>  #ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
>  DECLARE_PER_CPU_READ_MOSTLY(struct bp_hardening_data, bp_hardening_data);
>  
> diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
> index a102321fc8a2..b4b41552c6df 100644
> --- a/arch/arm64/kernel/cpu_errata.c
> +++ b/arch/arm64/kernel/cpu_errata.c
> @@ -117,7 +117,7 @@ DEFINE_PER_CPU_READ_MOSTLY(struct bp_hardening_data, bp_hardening_data);
>  static void __copy_hyp_vect_bpi(int slot, const char *hyp_vecs_start,
>  				const char *hyp_vecs_end)
>  {
> -	void *dst = lm_alias(__bp_harden_hyp_vecs + slot * SZ_2K);
> +	void *dst = lm_alias(kvm_hyp_sym(__bp_harden_hyp_vecs) + slot * SZ_2K);
>  	int i;
>  
>  	for (i = 0; i < SZ_2K; i += 0x80)
> diff --git a/arch/arm64/kernel/image-vars.h b/arch/arm64/kernel/image-vars.h
> index 13850134fc28..59c53566eceb 100644
> --- a/arch/arm64/kernel/image-vars.h
> +++ b/arch/arm64/kernel/image-vars.h
> @@ -61,6 +61,34 @@ __efistub__ctype		= _ctype;
>   * memory mappings.
>   */
>  
> +__hyp_text___guest_exit = __guest_exit;
> +__hyp_text___kvm_enable_ssbs = __kvm_enable_ssbs;
> +__hyp_text___kvm_flush_vm_context = __kvm_flush_vm_context;
> +__hyp_text___kvm_get_mdcr_el2 = __kvm_get_mdcr_el2;
> +__hyp_text___kvm_handle_stub_hvc = __kvm_handle_stub_hvc;
> +__hyp_text___kvm_timer_set_cntvoff = __kvm_timer_set_cntvoff;
> +__hyp_text___kvm_tlb_flush_local_vmid = __kvm_tlb_flush_local_vmid;
> +__hyp_text___kvm_tlb_flush_vmid = __kvm_tlb_flush_vmid;
> +__hyp_text___kvm_tlb_flush_vmid_ipa = __kvm_tlb_flush_vmid_ipa;
> +__hyp_text___kvm_vcpu_run_nvhe = __kvm_vcpu_run_nvhe;
> +__hyp_text___vgic_v3_get_ich_vtr_el2 = __vgic_v3_get_ich_vtr_el2;
> +__hyp_text___vgic_v3_init_lrs = __vgic_v3_init_lrs;
> +__hyp_text___vgic_v3_read_vmcr = __vgic_v3_read_vmcr;
> +__hyp_text___vgic_v3_restore_aprs = __vgic_v3_restore_aprs;
> +__hyp_text___vgic_v3_save_aprs = __vgic_v3_save_aprs;
> +__hyp_text___vgic_v3_write_vmcr = __vgic_v3_write_vmcr;
> +__hyp_text_abort_guest_exit_end = abort_guest_exit_end;
> +__hyp_text_abort_guest_exit_start = abort_guest_exit_start;
> +__hyp_text_arm64_enable_wa2_handling = arm64_enable_wa2_handling;
> +__hyp_text_arm64_ssbd_callback_required = arm64_ssbd_callback_required;
> +__hyp_text_hyp_panic = hyp_panic;
> +__hyp_text_kimage_voffset = kimage_voffset;
> +__hyp_text_kvm_host_data = kvm_host_data;
> +__hyp_text_kvm_patch_vector_branch = kvm_patch_vector_branch;
> +__hyp_text_kvm_update_va_mask = kvm_update_va_mask;
> +__hyp_text_panic = panic;
> +__hyp_text_physvirt_offset = physvirt_offset;
> +

Err... What on Earth is this? Why can't this be generated, one way or
another? Safety is our goal ("But let's be realistic here" ;-), so
having to manually maintain this symbol list is unlikely to work long
term. And are all these symbols required at this stage in the series?

>  #endif /* CONFIG_KVM */
>  
>  #endif /* __ARM64_KERNEL_IMAGE_VARS_H */
> diff --git a/arch/arm64/kvm/hyp/Makefile b/arch/arm64/kvm/hyp/Makefile
> index 79bf822a484b..a6883f4fed9e 100644
> --- a/arch/arm64/kvm/hyp/Makefile
> +++ b/arch/arm64/kvm/hyp/Makefile
> @@ -9,7 +9,7 @@ ccflags-y += -fno-stack-protector -DDISABLE_BRANCH_PROFILING \
>  obj-$(CONFIG_KVM) += vhe.o nvhe/
>  
>  vhe-y := vgic-v3-sr.o timer-sr.o aarch32.o vgic-v2-cpuif-proxy.o sysreg-sr.o \
> -	 debug-sr.o entry.o switch.o fpsimd.o tlb.o host_hypercall.o hyp-entry.o
> +	 debug-sr.o entry.o switch.o fpsimd.o tlb.o hyp-entry.o
>  
>  # KVM code is run at a different exception code with a different map, so
>  # compiler instrumentation that inserts callbacks or checks into the code may
> diff --git a/arch/arm64/kvm/hyp/hyp-entry.S b/arch/arm64/kvm/hyp/hyp-entry.S
> index 1a51a0958504..5986e1d78d3f 100644
> --- a/arch/arm64/kvm/hyp/hyp-entry.S
> +++ b/arch/arm64/kvm/hyp/hyp-entry.S
> @@ -36,12 +36,6 @@
>  	ldr	lr, [sp], #16
>  .endm
>  
> -/* kern_hyp_va(lm_alias(ksym)) */
> -.macro ksym_hyp_va ksym, lm_offset
> -	sub	\ksym, \ksym, \lm_offset
> -	kern_hyp_va	\ksym
> -.endm
> -
>  el1_sync:				// Guest trapped into EL2
>  
>  	mrs	x0, esr_el2
> @@ -53,7 +47,15 @@ el1_sync:				// Guest trapped into EL2
>  	mrs	x1, vttbr_el2		// If vttbr is valid, the guest
>  	cbnz	x1, el1_hvc_guest	// called HVC
>  
> -	/* Here, we're pretty sure the host called HVC. */
> +#ifdef __HYPERVISOR__

nit: We may need a better symbol than __HYPERVISOR__.
__nVHE_HYPERVISOR__, maybe?

> +
> +/* kern_hyp_va(lm_alias(ksym)) */
> +.macro ksym_hyp_va ksym, lm_offset
> +	sub	\ksym, \ksym, \lm_offset
> +	kern_hyp_va	\ksym
> +.endm
> +
> +	/* nVHE: Here, we're pretty sure the host called HVC. */
>  	ldp	x0, x1, [sp], #16
>  
>  	/* Check for a stub HVC call */
> @@ -107,6 +109,13 @@ el1_sync:				// Guest trapped into EL2
>  	eret
>  	sb
>  
> +#else /* __HYPERVISOR__ */
> +
> +	/* VHE: Guest called HVC but vttbr is not valid. */
> +	b	__hyp_panic

The comment doesn't quite describe the reality of this case. If
VTTBR_EL2 is zero, you cannot run the guest at all, let alone have it
execute an HVC. Where are its S2 PTs? And it cannot come from the
host either, as an HVC would go via the local EL vector.

I honestly don't see how you can realistically reach this code. But
maybe the whole point is to have a "catch the unexpected" branch.

> +
> +#endif /* __HYPERVISOR__ */
> +
>  el1_hvc_guest:
>  	/*
>  	 * Fastest possible path for ARM_SMCCC_ARCH_WORKAROUND_1.
> @@ -354,6 +363,7 @@ SYM_CODE_END(__bp_harden_hyp_vecs)
>  
>  	.popsection
>  
> +#ifndef __HYPERVISOR__
>  SYM_CODE_START(__smccc_workaround_1_smc)
>  	esb
>  	sub	sp, sp, #(8 * 4)
> @@ -367,4 +377,5 @@ SYM_CODE_START(__smccc_workaround_1_smc)
>  1:	.org __smccc_workaround_1_smc + __SMCCC_WORKAROUND_1_SMC_SZ
>  	.org 1b
>  SYM_CODE_END(__smccc_workaround_1_smc)
> -#endif
> +#endif /* __HYPERVISOR__ */

Instead of wrapping this in yet another layer of ifdefs, how about
moving it to a different object file altogether?

> +#endif /* CONFIG_KVM_INDIRECT_VECTORS */
> diff --git a/arch/arm64/kvm/hyp/nvhe/Makefile b/arch/arm64/kvm/hyp/nvhe/Makefile
> index 873d3ab0fd68..fa20b2723652 100644
> --- a/arch/arm64/kvm/hyp/nvhe/Makefile
> +++ b/arch/arm64/kvm/hyp/nvhe/Makefile
> @@ -7,7 +7,7 @@ asflags-y := -D__HYPERVISOR__
>  ccflags-y := -D__HYPERVISOR__ -fno-stack-protector -DDISABLE_BRANCH_PROFILING \
>  	     $(DISABLE_STACKLEAK_PLUGIN)
>  
> -obj-y :=
> +obj-y := host_hypercall.o ../hyp-entry.o

"The return of the vengeance of relative paths in KVM...". Oh well.

>  
>  obj-y := $(patsubst %.o,%.hyp.o,$(obj-y))
>  extra-y := $(patsubst %.hyp.o,%.hyp.tmp.o,$(obj-y))
> diff --git a/arch/arm64/kvm/hyp/host_hypercall.c b/arch/arm64/kvm/hyp/nvhe/host_hypercall.c
> similarity index 100%
> rename from arch/arm64/kvm/hyp/host_hypercall.c
> rename to arch/arm64/kvm/hyp/nvhe/host_hypercall.c
> diff --git a/arch/arm64/kvm/va_layout.c b/arch/arm64/kvm/va_layout.c
> index a4f48c1ac28c..8d941d19085d 100644
> --- a/arch/arm64/kvm/va_layout.c
> +++ b/arch/arm64/kvm/va_layout.c
> @@ -150,7 +150,7 @@ void kvm_patch_vector_branch(struct alt_instr *alt,
>  	/*
>  	 * Compute HYP VA by using the same computation as kern_hyp_va()
>  	 */
> -	addr = (uintptr_t)kvm_ksym_ref(__kvm_hyp_vector);
> +	addr = (uintptr_t)kvm_hyp_ref(__kvm_hyp_vector);
>  	addr &= va_mask;
>  	addr |= tag_val << tag_lsb;
>  
> -- 
> 2.26.1
> 
> 

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.

Thread overview: 33+ messages
2020-04-30 14:48 [PATCH 00/15] Split off nVHE hyp code David Brazdil
2020-04-30 14:48 ` [PATCH 01/15] arm64: kvm: Unify users of HVC instruction David Brazdil
2020-05-07 14:01   ` Marc Zyngier
2020-05-07 14:36     ` Quentin Perret
2020-05-13 10:30       ` David Brazdil
2020-04-30 14:48 ` [PATCH 02/15] arm64: kvm: Formalize host-hyp hypcall ABI David Brazdil
2020-05-07 13:10   ` Marc Zyngier
2020-05-07 13:33     ` Quentin Perret
2020-05-10 10:16       ` Marc Zyngier
2020-05-13 10:48         ` David Brazdil
2020-04-30 14:48 ` [PATCH 03/15] arm64: kvm: Fix symbol dependency in __hyp_call_panic_nvhe David Brazdil
2020-05-07 13:22   ` Marc Zyngier
2020-05-07 14:36     ` David Brazdil
2020-05-10 11:14       ` Marc Zyngier
2020-04-30 14:48 ` [PATCH 04/15] arm64: kvm: Add build rules for separate nVHE object files David Brazdil
2020-05-07 13:46   ` Marc Zyngier
2020-04-30 14:48 ` [PATCH 05/15] arm64: kvm: Build hyp-entry.S separately for VHE/nVHE David Brazdil
2020-05-07 15:07   ` Marc Zyngier [this message]
2020-05-07 15:15     ` Marc Zyngier
2020-04-30 14:48 ` [PATCH 06/15] arm64: kvm: Move __smccc_workaround_1_smc to .rodata David Brazdil
2020-05-11 10:04   ` Marc Zyngier
2020-05-11 10:29     ` Will Deacon
2020-04-30 14:48 ` [PATCH 07/15] arm64: kvm: Split hyp/tlb.c to VHE/nVHE David Brazdil
2020-04-30 14:48 ` [PATCH 08/15] arm64: kvm: Split hyp/switch.c " David Brazdil
2020-04-30 14:48 ` [PATCH 09/15] arm64: kvm: Split hyp/debug-sr.c " David Brazdil
2020-04-30 14:48 ` [PATCH 10/15] arm64: kvm: Split hyp/sysreg-sr.c " David Brazdil
2020-04-30 14:48 ` [PATCH 11/15] arm64: kvm: Split hyp/timer-sr.c " David Brazdil
2020-04-30 14:48 ` [PATCH 12/15] arm64: kvm: Compile remaining hyp/ files for both VHE/nVHE David Brazdil
2020-04-30 14:48 ` [PATCH 13/15] arm64: kvm: Add comments around __hyp_text_ symbol aliases David Brazdil
2020-04-30 14:48 ` [PATCH 14/15] arm64: kvm: Remove __hyp_text macro, use build rules instead David Brazdil
2020-04-30 14:48 ` [PATCH 15/15] arm64: kvm: Lift instrumentation restrictions on VHE David Brazdil
2020-04-30 17:53 ` [PATCH 00/15] Split off nVHE hyp code Marc Zyngier
2020-04-30 18:57   ` David Brazdil
