From: Will Deacon <will@kernel.org>
To: kvmarm@lists.linux.dev
Cc: "Will Deacon" <will@kernel.org>,
    "Sean Christopherson" <seanjc@google.com>,
    "Vincent Donnefort" <vdonnefort@google.com>,
    "Alexandru Elisei" <alexandru.elisei@arm.com>,
    "Catalin Marinas" <catalin.marinas@arm.com>,
    "Philippe Mathieu-Daudé" <philmd@linaro.org>,
    "James Morse" <james.morse@arm.com>,
    "Chao Peng" <chao.p.peng@linux.intel.com>,
    "Quentin Perret" <qperret@google.com>,
    "Suzuki K Poulose" <suzuki.poulose@arm.com>,
    "Mark Rutland" <mark.rutland@arm.com>,
    "Fuad Tabba" <tabba@google.com>,
    "Oliver Upton" <oliver.upton@linux.dev>,
    "Marc Zyngier" <maz@kernel.org>,
    kernel-team@android.com, kvm@vger.kernel.org,
    linux-arm-kernel@lists.infradead.org
Subject: [RFC PATCH v5 25/25] KVM: arm64: Use the pKVM hyp vCPU structure in handle___kvm_vcpu_run()
Date: Thu, 20 Oct 2022 14:38:27 +0100
Message-ID: <20221020133827.5541-26-will@kernel.org>
In-Reply-To: <20221020133827.5541-1-will@kernel.org>

As a stepping stone towards deprivileging the host's access to the
guest's vCPU structures, introduce some naive flush/sync routines to
copy most of the host vCPU into the hyp vCPU on vCPU run and back
again on return to EL1.

This allows us to run using the pKVM hyp structures when KVM is
initialised in protected mode.

Tested-by: Vincent Donnefort <vdonnefort@google.com>
Co-developed-by: Fuad Tabba <tabba@google.com>
Signed-off-by: Fuad Tabba <tabba@google.com>
Signed-off-by: Will Deacon <will@kernel.org>
---
 arch/arm64/kvm/hyp/include/nvhe/pkvm.h |  4 ++
 arch/arm64/kvm/hyp/nvhe/hyp-main.c     | 79 +++++++++++++++++++++++++-
 arch/arm64/kvm/hyp/nvhe/pkvm.c         | 28 +++++++++
 3 files changed, 109 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/kvm/hyp/include/nvhe/pkvm.h b/arch/arm64/kvm/hyp/include/nvhe/pkvm.h
index d14dfbcb7da1..82b3d62538a6 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/pkvm.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/pkvm.h
@@ -61,4 +61,8 @@ int __pkvm_init_vcpu(pkvm_handle_t handle, struct kvm_vcpu *host_vcpu,
                      unsigned long vcpu_hva);
 int __pkvm_teardown_vm(pkvm_handle_t handle);
 
+struct pkvm_hyp_vcpu *pkvm_load_hyp_vcpu(pkvm_handle_t handle,
+                                         unsigned int vcpu_idx);
+void pkvm_put_hyp_vcpu(struct pkvm_hyp_vcpu *hyp_vcpu);
+
 #endif /* __ARM64_KVM_NVHE_PKVM_H__ */
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index b5f3fcfe9135..728e01d4536b 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -22,11 +22,86 @@ DEFINE_PER_CPU(struct kvm_nvhe_init_params, kvm_init_params);
 
 void __kvm_hyp_host_forward_smc(struct kvm_cpu_context *host_ctxt);
 
+static void flush_hyp_vcpu(struct pkvm_hyp_vcpu *hyp_vcpu)
+{
+        struct kvm_vcpu *host_vcpu = hyp_vcpu->host_vcpu;
+
+        hyp_vcpu->vcpu.arch.ctxt = host_vcpu->arch.ctxt;
+
+        hyp_vcpu->vcpu.arch.sve_state = kern_hyp_va(host_vcpu->arch.sve_state);
+        hyp_vcpu->vcpu.arch.sve_max_vl = host_vcpu->arch.sve_max_vl;
+
+        hyp_vcpu->vcpu.arch.hw_mmu = host_vcpu->arch.hw_mmu;
+
+        hyp_vcpu->vcpu.arch.hcr_el2 = host_vcpu->arch.hcr_el2;
+        hyp_vcpu->vcpu.arch.mdcr_el2 = host_vcpu->arch.mdcr_el2;
+        hyp_vcpu->vcpu.arch.cptr_el2 = host_vcpu->arch.cptr_el2;
+
+        hyp_vcpu->vcpu.arch.iflags = host_vcpu->arch.iflags;
+        hyp_vcpu->vcpu.arch.fp_state = host_vcpu->arch.fp_state;
+
+        hyp_vcpu->vcpu.arch.debug_ptr = kern_hyp_va(host_vcpu->arch.debug_ptr);
+        hyp_vcpu->vcpu.arch.host_fpsimd_state = host_vcpu->arch.host_fpsimd_state;
+
+        hyp_vcpu->vcpu.arch.vsesr_el2 = host_vcpu->arch.vsesr_el2;
+
+        hyp_vcpu->vcpu.arch.vgic_cpu.vgic_v3 = host_vcpu->arch.vgic_cpu.vgic_v3;
+}
+
+static void sync_hyp_vcpu(struct pkvm_hyp_vcpu *hyp_vcpu)
+{
+        struct kvm_vcpu *host_vcpu = hyp_vcpu->host_vcpu;
+        struct vgic_v3_cpu_if *hyp_cpu_if = &hyp_vcpu->vcpu.arch.vgic_cpu.vgic_v3;
+        struct vgic_v3_cpu_if *host_cpu_if = &host_vcpu->arch.vgic_cpu.vgic_v3;
+        unsigned int i;
+
+        host_vcpu->arch.ctxt = hyp_vcpu->vcpu.arch.ctxt;
+
+        host_vcpu->arch.hcr_el2 = hyp_vcpu->vcpu.arch.hcr_el2;
+        host_vcpu->arch.cptr_el2 = hyp_vcpu->vcpu.arch.cptr_el2;
+
+        host_vcpu->arch.fault = hyp_vcpu->vcpu.arch.fault;
+
+        host_vcpu->arch.iflags = hyp_vcpu->vcpu.arch.iflags;
+        host_vcpu->arch.fp_state = hyp_vcpu->vcpu.arch.fp_state;
+
+        host_cpu_if->vgic_hcr = hyp_cpu_if->vgic_hcr;
+        for (i = 0; i < hyp_cpu_if->used_lrs; ++i)
+                host_cpu_if->vgic_lr[i] = hyp_cpu_if->vgic_lr[i];
+}
+
 static void handle___kvm_vcpu_run(struct kvm_cpu_context *host_ctxt)
 {
-        DECLARE_REG(struct kvm_vcpu *, vcpu, host_ctxt, 1);
+        DECLARE_REG(struct kvm_vcpu *, host_vcpu, host_ctxt, 1);
+        int ret;
+
+        host_vcpu = kern_hyp_va(host_vcpu);
+
+        if (unlikely(is_protected_kvm_enabled())) {
+                struct pkvm_hyp_vcpu *hyp_vcpu;
+                struct kvm *host_kvm;
+
+                host_kvm = kern_hyp_va(host_vcpu->kvm);
+                hyp_vcpu = pkvm_load_hyp_vcpu(host_kvm->arch.pkvm.handle,
+                                              host_vcpu->vcpu_idx);
+                if (!hyp_vcpu) {
+                        ret = -EINVAL;
+                        goto out;
+                }
+
+                flush_hyp_vcpu(hyp_vcpu);
+
+                ret = __kvm_vcpu_run(&hyp_vcpu->vcpu);
+
+                sync_hyp_vcpu(hyp_vcpu);
+                pkvm_put_hyp_vcpu(hyp_vcpu);
+        } else {
+                /* The host is fully trusted, run its vCPU directly. */
+                ret = __kvm_vcpu_run(host_vcpu);
+        }
 
-        cpu_reg(host_ctxt, 1) = __kvm_vcpu_run(kern_hyp_va(vcpu));
+out:
+        cpu_reg(host_ctxt, 1) = ret;
 }
 
 static void handle___kvm_adjust_pc(struct kvm_cpu_context *host_ctxt)
diff --git a/arch/arm64/kvm/hyp/nvhe/pkvm.c b/arch/arm64/kvm/hyp/nvhe/pkvm.c
index 0504f2678bd4..6f78a92679f4 100644
--- a/arch/arm64/kvm/hyp/nvhe/pkvm.c
+++ b/arch/arm64/kvm/hyp/nvhe/pkvm.c
@@ -241,6 +241,33 @@ static struct pkvm_hyp_vm *get_vm_by_handle(pkvm_handle_t handle)
         return vm_table[idx];
 }
 
+struct pkvm_hyp_vcpu *pkvm_load_hyp_vcpu(pkvm_handle_t handle,
+                                         unsigned int vcpu_idx)
+{
+        struct pkvm_hyp_vcpu *hyp_vcpu = NULL;
+        struct pkvm_hyp_vm *hyp_vm;
+
+        hyp_spin_lock(&vm_table_lock);
+        hyp_vm = get_vm_by_handle(handle);
+        if (!hyp_vm || hyp_vm->nr_vcpus <= vcpu_idx)
+                goto unlock;
+
+        hyp_vcpu = hyp_vm->vcpus[vcpu_idx];
+        hyp_page_ref_inc(hyp_virt_to_page(hyp_vm));
+unlock:
+        hyp_spin_unlock(&vm_table_lock);
+        return hyp_vcpu;
+}
+
+void pkvm_put_hyp_vcpu(struct pkvm_hyp_vcpu *hyp_vcpu)
+{
+        struct pkvm_hyp_vm *hyp_vm = pkvm_hyp_vcpu_to_hyp_vm(hyp_vcpu);
+
+        hyp_spin_lock(&vm_table_lock);
+        hyp_page_ref_dec(hyp_virt_to_page(hyp_vm));
+        hyp_spin_unlock(&vm_table_lock);
+}
+
 static void unpin_host_vcpu(struct kvm_vcpu *host_vcpu)
 {
         if (host_vcpu)
@@ -286,6 +313,7 @@ static int init_pkvm_hyp_vcpu(struct pkvm_hyp_vcpu *hyp_vcpu,
 
         hyp_vcpu->vcpu.vcpu_idx = vcpu_idx;
         hyp_vcpu->vcpu.arch.hw_mmu = &hyp_vm->kvm.arch.mmu;
+        hyp_vcpu->vcpu.arch.cflags = READ_ONCE(host_vcpu->arch.cflags);
 done:
         if (ret)
                 unpin_host_vcpu(host_vcpu);
-- 
2.38.0.413.g74048e4d9e-goog
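
The flush/sync discipline introduced above can be modelled outside the
kernel. The following is a minimal userspace sketch, not part of the
patch: "struct toy_vcpu" and its fields are invented stand-ins for
struct kvm_vcpu, and toy_run() stands in for __kvm_vcpu_run(). It
demonstrates the trust boundary the commit message describes: everything
the guest run needs is copied into the hypervisor-private copy
beforehand, but only a restricted subset is copied back, so state the
host has no business seeing never flows out.

#include <stdio.h>

/* Simplified stand-in for the vCPU state; field names are invented. */
struct toy_vcpu {
        unsigned long regs;     /* guest register context */
        unsigned long hcr;      /* trap configuration */
        unsigned long hyp_only; /* state the host must never see */
};

static struct toy_vcpu host_vcpu; /* host-writable copy */
static struct toy_vcpu hyp_vcpu;  /* hypervisor-private copy */

/* Host -> hyp on vCPU run: take a fresh snapshot of the host's view. */
static void toy_flush(void)
{
        hyp_vcpu.regs = host_vcpu.regs;
        hyp_vcpu.hcr = host_vcpu.hcr;
}

/* Hyp -> host on return to EL1: write back only what the host needs. */
static void toy_sync(void)
{
        host_vcpu.regs = hyp_vcpu.regs;
        host_vcpu.hcr = hyp_vcpu.hcr;
        /* hyp_vcpu.hyp_only is deliberately not copied back. */
}

/* Stand-in for __kvm_vcpu_run(): mutates the private copy only. */
static void toy_run(void)
{
        hyp_vcpu.regs++;
        hyp_vcpu.hyp_only = 0xbeef;
}

int main(void)
{
        host_vcpu.regs = 41;
        toy_flush();
        toy_run();
        toy_sync();
        /* Prints "regs=42 hyp_only=0": the private field never leaked. */
        printf("regs=%lu hyp_only=%lu\n", host_vcpu.regs, host_vcpu.hyp_only);
        return 0;
}

Note that, as in the patch, sync copies back strictly fewer fields than
flush copies in (e.g. mdcr_el2 and sve_state only flow host-to-hyp),
and the real routines additionally translate host pointers with
kern_hyp_va() before storing them, since EL2 maps the kernel's linear
map at its own virtual addresses.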
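
Similarly, pkvm_load_hyp_vcpu() and pkvm_put_hyp_vcpu() are an instance
of the usual lookup-and-pin pattern: the VM table is only consulted
while vm_table_lock is held, and the reference taken under the lock
keeps the hyp VM alive while the vCPU runs with the lock dropped. Below
is a userspace sketch of the same pattern, again not part of the patch,
assuming POSIX mutexes in place of hyp spinlocks and a plain integer in
place of the hyp page refcount; the toy_* names are invented for the
example.

#include <pthread.h>
#include <stdio.h>

struct toy_vm;

struct toy_vcpu {
        struct toy_vm *vm;
};

struct toy_vm {
        int refcount;           /* stands in for the hyp page refcount */
        unsigned int nr_vcpus;
        struct toy_vcpu *vcpus[4];
};

static pthread_mutex_t table_lock = PTHREAD_MUTEX_INITIALIZER;

/* Models pkvm_load_hyp_vcpu(): look up and pin under the table lock. */
static struct toy_vcpu *toy_load_vcpu(struct toy_vm *vm, unsigned int vcpu_idx)
{
        struct toy_vcpu *vcpu = NULL;

        pthread_mutex_lock(&table_lock);
        if (vm && vcpu_idx < vm->nr_vcpus) {
                vcpu = vm->vcpus[vcpu_idx];
                vm->refcount++; /* VM can't be torn down while pinned */
        }
        pthread_mutex_unlock(&table_lock);
        return vcpu;
}

/* Models pkvm_put_hyp_vcpu(): drop the pin under the same lock. */
static void toy_put_vcpu(struct toy_vcpu *vcpu)
{
        pthread_mutex_lock(&table_lock);
        vcpu->vm->refcount--;
        pthread_mutex_unlock(&table_lock);
}

int main(void)
{
        struct toy_vcpu vcpu0;
        struct toy_vm vm = { .refcount = 1, .nr_vcpus = 1, .vcpus = { &vcpu0 } };
        struct toy_vcpu *vcpu;

        vcpu0.vm = &vm;

        vcpu = toy_load_vcpu(&vm, 0);
        if (vcpu) {
                /* ... the vCPU would run here, with the lock dropped ... */
                toy_put_vcpu(vcpu);
        }

        printf("refcount after load/put: %d\n", vm.refcount); /* prints 1 */
        return 0;
}

As in the patch, the pin is taken on the VM rather than on the
individual vCPU, and a failed lookup simply returns NULL, which
handle___kvm_vcpu_run() maps to -EINVAL.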