From mboxrd@z Thu Jan 1 00:00:00 1970
From: Marc Zyngier <marc.zyngier@arm.com>
To: linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu,
    kvm@vger.kernel.org
Cc: Andre Przywara, Christoffer Dall, Dave Martin, Jintack Lim,
    Julien Thierry, James Morse, Suzuki K Poulose
Subject: [PATCH 14/59] KVM: arm64: nv: Handle SPSR_EL2 specially
Date: Fri, 21 Jun 2019 10:37:58 +0100
Message-Id: <20190621093843.220980-15-marc.zyngier@arm.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20190621093843.220980-1-marc.zyngier@arm.com>
References: <20190621093843.220980-1-marc.zyngier@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Mailing-List: kvm@vger.kernel.org

SPSR_EL2 needs special attention when running nested on ARMv8.3: if an
exception is taken while running at vEL2 (actually EL1), the HW will
update the SPSR_EL1 register with the EL1 mode. We need to track this in
order to make sure that accesses to the virtual view of SPSR_EL2 are
correct.

To do so, we place an illegal value in SPSR_EL1.M, and patch it
accordingly if required when accessing it.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
 arch/arm64/include/asm/kvm_emulate.h | 45 ++++++++++++++++++++++++++++
 arch/arm64/kvm/sys_regs.c            | 28 ++++++++++++++++-
 2 files changed, 72 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
index f37006b6eec4..2644258e96ba 100644
--- a/arch/arm64/include/asm/kvm_emulate.h
+++ b/arch/arm64/include/asm/kvm_emulate.h
@@ -274,11 +274,51 @@ static inline bool is_hyp_ctxt(const struct kvm_vcpu *vcpu)
 	return __is_hyp_ctxt(&vcpu->arch.ctxt);
 }
 
+static inline u64 __fixup_spsr_el2_write(struct kvm_cpu_context *ctxt, u64 val)
+{
+	if (!__vcpu_el2_e2h_is_set(ctxt)) {
+		/*
+		 * Clear the .M field when writing SPSR to the CPU, so that we
+		 * can detect when the CPU clobbered our SPSR copy during a
+		 * local exception.
+		 */
+		val &= ~0xc;
+	}
+
+	return val;
+}
+
+static inline u64 __fixup_spsr_el2_read(const struct kvm_cpu_context *ctxt, u64 val)
+{
+	if (__vcpu_el2_e2h_is_set(ctxt))
+		return val;
+
+	/*
+	 * SPSR.M == 0 means the CPU has not touched the SPSR, so the
+	 * register still has the value we saved on the last write.
+	 */
+	if ((val & 0xc) == 0)
+		return ctxt->sys_regs[SPSR_EL2];
+
+	/*
+	 * Otherwise there was a "local" exception on the CPU,
+	 * which from the guest's point of view was being taken from
+	 * EL2 to EL2, although it actually happened to be from
+	 * EL1 to EL1.
+	 * So we need to fix the .M field in SPSR, to make it look
+	 * like EL2, which is what the guest would expect.
+	 */
+	return (val & ~0x0c) | CurrentEL_EL2;
+}
+
 static inline unsigned long vcpu_read_spsr(const struct kvm_vcpu *vcpu)
 {
 	if (vcpu_mode_is_32bit(vcpu))
 		return vcpu_read_spsr32(vcpu);
 
+	if (unlikely(vcpu_mode_el2(vcpu)))
+		return vcpu_read_sys_reg(vcpu, SPSR_EL2);
+
 	if (vcpu->arch.sysregs_loaded_on_cpu)
 		return read_sysreg_el1(SYS_SPSR);
 	else
@@ -292,6 +332,11 @@ static inline void vcpu_write_spsr(struct kvm_vcpu *vcpu, unsigned long v)
 		return;
 	}
 
+	if (unlikely(vcpu_mode_el2(vcpu))) {
+		vcpu_write_sys_reg(vcpu, v, SPSR_EL2);
+		return;
+	}
+
 	if (vcpu->arch.sysregs_loaded_on_cpu)
 		write_sysreg_el1(v, SYS_SPSR);
 	else
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index d024114da162..2b8734f75a09 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -184,6 +184,7 @@ const struct el2_sysreg_map *find_el2_sysreg(const struct el2_sysreg_map *map,
 
 u64 vcpu_read_sys_reg(const struct kvm_vcpu *vcpu, int reg)
 {
+	u64 val;
 
 	if (!vcpu->arch.sysregs_loaded_on_cpu)
 		goto immediate_read;
@@ -194,6 +195,12 @@ u64 vcpu_read_sys_reg(const struct kvm_vcpu *vcpu, int reg)
 	if (!is_hyp_ctxt(vcpu))
 		goto immediate_read;
 
+	switch (reg) {
+	case SPSR_EL2:
+		val = read_sysreg_el1(SYS_SPSR);
+		return __fixup_spsr_el2_read(&vcpu->arch.ctxt, val);
+	}
+
 	el2_reg = find_el2_sysreg(nested_sysreg_map, reg);
 	if (el2_reg) {
 		/*
@@ -267,6 +274,13 @@ void vcpu_write_sys_reg(struct kvm_vcpu *vcpu, u64 val, int reg)
 		/* Store the EL2 version in the sysregs array. */
 		__vcpu_sys_reg(vcpu, reg) = val;
 
+		switch (reg) {
+		case SPSR_EL2:
+			val = __fixup_spsr_el2_write(&vcpu->arch.ctxt, val);
+			write_sysreg_el1(val, SYS_SPSR);
+			return;
+		}
+
 		el2_reg = find_el2_sysreg(nested_sysreg_map, reg);
 		if (el2_reg) {
 			/* Does this register have an EL1 counterpart? */
@@ -1556,6 +1570,18 @@ static bool access_sp_el1(struct kvm_vcpu *vcpu,
 	return true;
 }
 
+static bool access_spsr_el2(struct kvm_vcpu *vcpu,
+			    struct sys_reg_params *p,
+			    const struct sys_reg_desc *r)
+{
+	if (p->is_write)
+		vcpu_write_sys_reg(vcpu, p->regval, SPSR_EL2);
+	else
+		p->regval = vcpu_read_sys_reg(vcpu, SPSR_EL2);
+
+	return true;
+}
+
 /*
  * Architected system registers.
  * Important: Must be sorted ascending by Op0, Op1, CRn, CRm, Op2
@@ -1866,7 +1892,7 @@ static const struct sys_reg_desc sys_reg_descs[] = {
 	{ SYS_DESC(SYS_VTCR_EL2), access_rw, reset_val, VTCR_EL2, 0 },
 
 	{ SYS_DESC(SYS_DACR32_EL2), NULL, reset_unknown, DACR32_EL2 },
-	{ SYS_DESC(SYS_SPSR_EL2), access_rw, reset_val, SPSR_EL2, 0 },
+	{ SYS_DESC(SYS_SPSR_EL2), access_spsr_el2, reset_val, SPSR_EL2, 0 },
 	{ SYS_DESC(SYS_ELR_EL2), access_rw, reset_val, ELR_EL2, 0 },
 
 	{ SYS_DESC(SYS_SP_EL1), access_sp_el1},
-- 
2.20.1
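[Editorial note on the mode-field arithmetic used above] Bits [3:2] of SPSR.M
encode the exception level (EL1 modes are 0x4/0x5, EL2 modes are 0x8/0x9, and
CurrentEL_EL2 is 0x8), so clearing those bits leaves an encoding that no
legitimate exception entry would produce; this is the "illegal value" the
commit message refers to. The stand-alone C sketch below mirrors that
arithmetic only; it is not part of the patch, the constants are local to the
sketch (they follow the architectural encodings, not any particular kernel
header), and the HCR_EL2.E2H == 1 case, in which the patch applies no fixup at
all, is deliberately left out.

/*
 * Stand-alone sketch of the SPSR_EL2.M write/read fixup.
 * Constants follow the architectural PSTATE.M encodings.
 */
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

#define MODE_EL1H       0x5ULL          /* EL1 with SP_EL1 */
#define MODE_EL2H       0x9ULL          /* EL2 with SP_EL2 */
#define CURRENT_EL_EL2  0x8ULL          /* EL field (bits [3:2]) == 2 */

/* On write: drop the EL field so a later hardware update is detectable. */
static uint64_t fixup_write(uint64_t val)
{
        return val & ~0xcULL;
}

/*
 * On read: M[3:2] == 0 means the CPU never touched the register, so the
 * saved copy is still valid; otherwise an EL1 exception clobbered it and
 * the guest must see the mode as EL2.
 */
static uint64_t fixup_read(uint64_t hw_val, uint64_t saved)
{
        if ((hw_val & 0xcULL) == 0)
                return saved;
        return (hw_val & ~0xcULL) | CURRENT_EL_EL2;
}

int main(void)
{
        uint64_t saved = MODE_EL2H | (1ULL << 8);       /* vEL2h, A bit set */
        uint64_t on_cpu = fixup_write(saved);           /* EL field cleared */

        /* No local exception: the clean value round-trips unchanged. */
        assert(fixup_read(on_cpu, saved) == saved);

        /* A local EL1 exception rewrote SPSR_EL1 with an EL1h mode ... */
        uint64_t clobbered = MODE_EL1H | (1ULL << 8);
        /* ... which the guest reads back as EL2h. */
        assert(fixup_read(clobbered, saved) == (MODE_EL2H | (1ULL << 8)));

        printf("mode fixup behaves as expected\n");
        return 0;
}

Compiling and running the sketch exercises both paths: the untouched case
returns the saved copy, and the clobbered EL1h value reads back as EL2h.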