Subject: Re: [PATCH v4 09/66] KVM: arm64: nv: Support virtual EL2 exceptions
From: Zenghui Yu
To: Marc Zyngier
Cc: Andre Przywara, James Morse, Suzuki K Poulose, Alexandru Elisei,
 kvm@vger.kernel.org
Date: Thu, 20 May 2021 20:55:48 +0800
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.9.0
References: <20210510165920.1913477-1-maz@kernel.org>
 <20210510165920.1913477-10-maz@kernel.org>
In-Reply-To: <20210510165920.1913477-10-maz@kernel.org>
Content-Type: text/plain; charset="utf-8"; format=flowed
X-Mailing-List: kvm@vger.kernel.org

On 2021/5/11 0:58, Marc Zyngier wrote:
> From: Jintack Lim
>
> Support injecting exceptions and performing exception returns to and
> from virtual EL2. This must be done entirely in software except when
> taking an exception from vEL0 to vEL2 when the virtual HCR_EL2.{E2H,TGE}
> == {1,1} (a VHE guest hypervisor).
>
> Signed-off-by: Jintack Lim
> Signed-off-by: Christoffer Dall
> [maz: switch to common exception injection framework]
> Signed-off-by: Marc Zyngier
> ---
>  arch/arm64/include/asm/kvm_arm.h     |  17 +++
>  arch/arm64/include/asm/kvm_emulate.h |  10 ++
>  arch/arm64/kvm/Makefile              |   2 +-
>  arch/arm64/kvm/emulate-nested.c      | 176 +++++++++++++++++++++++++++
>  arch/arm64/kvm/hyp/exception.c       |  45 +++++--
>  arch/arm64/kvm/inject_fault.c        |  63 ++++++++--
>  arch/arm64/kvm/trace_arm.h           |  59 +++++++++
>  7 files changed, 354 insertions(+), 18 deletions(-)
>  create mode 100644 arch/arm64/kvm/emulate-nested.c

[...]
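(As a side note on the commit message: the {E2H,TGE} == {1,1} special case
boils down to a check along the lines of the sketch below. This is purely
an illustration; the helper names are assumed from the accessors this
series adds to kvm_emulate.h and may differ.)

/*
 * Sketch only: a guest hypervisor running with virtual HCR_EL2.E2H == 1
 * and HCR_EL2.TGE == 1 behaves like a VHE host, so an exception taken
 * from vEL0 can be delivered to vEL2 by the hardware and needs no
 * software emulation; every other case has to be emulated by KVM.
 */
static bool vel0_exception_reaches_vel2_in_hw(struct kvm_vcpu *vcpu)
{
        return vcpu_el2_e2h_is_set(vcpu) && vcpu_el2_tge_is_set(vcpu);
}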
>  static void inject_abt64(struct kvm_vcpu *vcpu, bool is_iabt, unsigned long addr)
>  {
>          unsigned long cpsr = *vcpu_cpsr(vcpu);
>          bool is_aarch32 = vcpu_mode_is_32bit(vcpu);
>          u32 esr = 0;
>
> -        vcpu->arch.flags |= (KVM_ARM64_EXCEPT_AA64_EL1 |
> -                             KVM_ARM64_EXCEPT_AA64_ELx_SYNC |
> -                             KVM_ARM64_PENDING_EXCEPTION);
> -
> -        vcpu_write_sys_reg(vcpu, addr, FAR_EL1);
> +        pend_sync_exception(vcpu);
>
>          /*
>           * Build an {i,d}abort, depending on the level and the
> @@ -45,16 +79,22 @@ static void inject_abt64(struct kvm_vcpu *vcpu, bool is_iabt, unsigned long addr
>          if (!is_iabt)
>                  esr |= ESR_ELx_EC_DABT_LOW << ESR_ELx_EC_SHIFT;
>
> -        vcpu_write_sys_reg(vcpu, esr | ESR_ELx_FSC_EXTABT, ESR_EL1);
> +        esr |= ESR_ELx_FSC_EXTABT;
> +
> +        if (vcpu->arch.flags & KVM_ARM64_EXCEPT_AA64_EL1) {

This isn't the right way to pick between EL1 and EL2: since
KVM_ARM64_EXCEPT_AA64_EL1 is (0 << 11), this test is always false and we
will never be able to inject an abort to EL1 this way (a sketch of a
working check is at the end of this mail).

> +                vcpu_write_sys_reg(vcpu, addr, FAR_EL1);
> +                vcpu_write_sys_reg(vcpu, esr, ESR_EL1);
> +        } else {
> +                vcpu_write_sys_reg(vcpu, addr, FAR_EL2);
> +                vcpu_write_sys_reg(vcpu, esr, ESR_EL2);
> +        }
>  }
>
>  static void inject_undef64(struct kvm_vcpu *vcpu)
>  {
>          u32 esr = (ESR_ELx_EC_UNKNOWN << ESR_ELx_EC_SHIFT);
>
> -        vcpu->arch.flags |= (KVM_ARM64_EXCEPT_AA64_EL1 |
> -                             KVM_ARM64_EXCEPT_AA64_ELx_SYNC |
> -                             KVM_ARM64_PENDING_EXCEPTION);
> +        pend_sync_exception(vcpu);
>
>          /*
>           * Build an unknown exception, depending on the instruction
> @@ -63,7 +103,10 @@ static void inject_undef64(struct kvm_vcpu *vcpu)
>          if (kvm_vcpu_trap_il_is32bit(vcpu))
>                  esr |= ESR_ELx_IL;
>
> -        vcpu_write_sys_reg(vcpu, esr, ESR_EL1);
> +        if (vcpu->arch.flags & KVM_ARM64_EXCEPT_AA64_EL1)
> +                vcpu_write_sys_reg(vcpu, esr, ESR_EL1);
> +        else
> +                vcpu_write_sys_reg(vcpu, esr, ESR_EL2);

Same here.

Thanks,
Zenghui
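For reference, a minimal sketch of a check that actually distinguishes the
two target ELs, assuming the flag encoding this series adds (EL1 as the
all-zero (0 << 11) encoding and EL2 as the non-zero (1 << 11) bit); the
helper names below are illustrative, not taken from the patch:

/*
 * Sketch only: because KVM_ARM64_EXCEPT_AA64_EL1 is the all-zero
 * encoding, the target EL has to be derived from the (non-zero) EL2
 * bit rather than by ANDing with the EL1 constant.
 */
static bool vcpu_except_target_is_el2(struct kvm_vcpu *vcpu)
{
        return vcpu->arch.flags & KVM_ARM64_EXCEPT_AA64_EL2;
}

static void write_abort_regs(struct kvm_vcpu *vcpu, unsigned long addr, u32 esr)
{
        if (vcpu_except_target_is_el2(vcpu)) {
                vcpu_write_sys_reg(vcpu, addr, FAR_EL2);
                vcpu_write_sys_reg(vcpu, esr, ESR_EL2);
        } else {
                vcpu_write_sys_reg(vcpu, addr, FAR_EL1);
                vcpu_write_sys_reg(vcpu, esr, ESR_EL1);
        }
}

inject_undef64() could use the same helper for its ESR_EL1/ESR_EL2 choice.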