From mboxrd@z Thu Jan 1 00:00:00 1970
From: Will Deacon
To: kvmarm@lists.cs.columbia.edu
Cc: Marc Zyngier, kernel-team@android.com, kvm@vger.kernel.org, Andy Lutomirski, linux-arm-kernel@lists.infradead.org, Michael Roth, Catalin Marinas, Chao Peng, Will Deacon
Subject: [PATCH 76/89] KVM: arm64: Factor out vcpu_reset code for core registers and PSCI
Date: Thu, 19 May 2022 14:41:51 +0100
Message-Id: <20220519134204.5379-77-will@kernel.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20220519134204.5379-1-will@kernel.org>
References: <20220519134204.5379-1-will@kernel.org>
List-Id: Where KVM/ARM decisions are made

From: Fuad Tabba

Factor out the logic that resets a vcpu's core registers, including the
additional PSCI handling. This code will be reused when resetting VMs
in protected mode.
Signed-off-by: Fuad Tabba
---
 arch/arm64/include/asm/kvm_emulate.h | 41 +++++++++++++++++++++++++
 arch/arm64/kvm/reset.c               | 45 +++++-----------------------
 2 files changed, 48 insertions(+), 38 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
index 82515b015eb4..2a79c861b8e0 100644
--- a/arch/arm64/include/asm/kvm_emulate.h
+++ b/arch/arm64/include/asm/kvm_emulate.h
@@ -522,4 +522,45 @@ static inline unsigned long psci_affinity_mask(unsigned long affinity_level)
 	return 0;
 }
 
+/* Reset a vcpu's core registers. */
+static inline void kvm_reset_vcpu_core(struct kvm_vcpu *vcpu)
+{
+	u32 pstate;
+
+	if (test_bit(KVM_ARM_VCPU_EL1_32BIT, vcpu->arch.features)) {
+		pstate = VCPU_RESET_PSTATE_SVC;
+	} else {
+		pstate = VCPU_RESET_PSTATE_EL1;
+	}
+
+	/* Reset core registers */
+	memset(vcpu_gp_regs(vcpu), 0, sizeof(*vcpu_gp_regs(vcpu)));
+	memset(&vcpu->arch.ctxt.fp_regs, 0, sizeof(vcpu->arch.ctxt.fp_regs));
+	vcpu->arch.ctxt.spsr_abt = 0;
+	vcpu->arch.ctxt.spsr_und = 0;
+	vcpu->arch.ctxt.spsr_irq = 0;
+	vcpu->arch.ctxt.spsr_fiq = 0;
+	vcpu_gp_regs(vcpu)->pstate = pstate;
+}
+
+/* PSCI reset handling for a vcpu. */
+static inline void kvm_reset_vcpu_psci(struct kvm_vcpu *vcpu,
+				       struct vcpu_reset_state *reset_state)
+{
+	unsigned long target_pc = reset_state->pc;
+
+	/* Gracefully handle Thumb2 entry point */
+	if (vcpu_mode_is_32bit(vcpu) && (target_pc & 1)) {
+		target_pc &= ~1UL;
+		vcpu_set_thumb(vcpu);
+	}
+
+	/* Propagate caller endianness */
+	if (reset_state->be)
+		kvm_vcpu_set_be(vcpu);
+
+	*vcpu_pc(vcpu) = target_pc;
+	vcpu_set_reg(vcpu, 0, reset_state->r0);
+}
+
 #endif /* __ARM64_KVM_EMULATE_H__ */
diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
index 6bc979aece3c..4d223fae996d 100644
--- a/arch/arm64/kvm/reset.c
+++ b/arch/arm64/kvm/reset.c
@@ -109,7 +109,7 @@ static int kvm_vcpu_finalize_sve(struct kvm_vcpu *vcpu)
 		kfree(buf);
 		return ret;
 	}
-	
+
 	vcpu->arch.sve_state = buf;
 	vcpu->arch.flags |= KVM_ARM64_VCPU_SVE_FINALIZED;
 	return 0;
@@ -202,7 +202,6 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
 	struct vcpu_reset_state reset_state;
 	int ret;
 	bool loaded;
-	u32 pstate;
 
 	mutex_lock(&vcpu->kvm->lock);
 	reset_state = vcpu->arch.reset_state;
@@ -240,29 +239,13 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
 		goto out;
 	}
 
-	switch (vcpu->arch.target) {
-	default:
-		if (test_bit(KVM_ARM_VCPU_EL1_32BIT, vcpu->arch.features)) {
-			pstate = VCPU_RESET_PSTATE_SVC;
-		} else {
-			pstate = VCPU_RESET_PSTATE_EL1;
-		}
-
-		if (kvm_vcpu_has_pmu(vcpu) && !kvm_arm_support_pmu_v3()) {
-			ret = -EINVAL;
-			goto out;
-		}
-		break;
+	if (kvm_vcpu_has_pmu(vcpu) && !kvm_arm_support_pmu_v3()) {
+		ret = -EINVAL;
+		goto out;
 	}
 
 	/* Reset core registers */
-	memset(vcpu_gp_regs(vcpu), 0, sizeof(*vcpu_gp_regs(vcpu)));
-	memset(&vcpu->arch.ctxt.fp_regs, 0, sizeof(vcpu->arch.ctxt.fp_regs));
-	vcpu->arch.ctxt.spsr_abt = 0;
-	vcpu->arch.ctxt.spsr_und = 0;
-	vcpu->arch.ctxt.spsr_irq = 0;
-	vcpu->arch.ctxt.spsr_fiq = 0;
-	vcpu_gp_regs(vcpu)->pstate = pstate;
+	kvm_reset_vcpu_core(vcpu);
 
 	/* Reset system registers */
 	kvm_reset_sys_regs(vcpu);
@@ -271,22 +254,8 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
 	 * Additional reset state handling that PSCI may have imposed on us.
 	 * Must be done after all the sys_reg reset.
 	 */
-	if (reset_state.reset) {
-		unsigned long target_pc = reset_state.pc;
-
-		/* Gracefully handle Thumb2 entry point */
-		if (vcpu_mode_is_32bit(vcpu) && (target_pc & 1)) {
-			target_pc &= ~1UL;
-			vcpu_set_thumb(vcpu);
-		}
-
-		/* Propagate caller endianness */
-		if (reset_state.be)
-			kvm_vcpu_set_be(vcpu);
-
-		*vcpu_pc(vcpu) = target_pc;
-		vcpu_set_reg(vcpu, 0, reset_state.r0);
-	}
+	if (reset_state.reset)
+		kvm_reset_vcpu_psci(vcpu, &reset_state);
 
 	/* Reset timer */
 	ret = kvm_timer_vcpu_reset(vcpu);
-- 
2.36.1.124.g0e6072fb45-goog

_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm