From: Marc Zyngier <maz@kernel.org>
To: linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu,
	kvm@vger.kernel.org
Cc: Andre Przywara <andre.przywara@arm.com>,
	Christoffer Dall <christoffer.dall@arm.com>,
	Jintack Lim <jintack@cs.columbia.edu>,
	Haibo Xu <haibo.xu@linaro.org>,
	Ganapatrao Kulkarni <gankulkarni@os.amperecomputing.com>,
	James Morse <james.morse@arm.com>,
	Suzuki K Poulose <suzuki.poulose@arm.com>,
	Alexandru Elisei <alexandru.elisei@arm.com>,
	kernel-team@android.com
Subject: [PATCH v5 35/69] KVM: arm64: nv: Only toggle cache for virtual EL2 when SCTLR_EL2 changes
Date: Mon, 29 Nov 2021 20:01:16 +0000
Message-Id: <20211129200150.351436-36-maz@kernel.org>
In-Reply-To: <20211129200150.351436-1-maz@kernel.org>
References: <20211129200150.351436-1-maz@kernel.org>

From: Christoffer Dall <christoffer.dall@arm.com>

So far we were flushing almost the entire universe whenever a VM would
load/unload SCTLR_EL1 and the two versions of that register had
different MMU-enable settings. This turned out to be so slow that it
prevented forward progress for a nested VM, because a scheduler timer
tick interrupt would always be pending by the time we reached the
nested VM.
To avoid this problem, we consider SCTLR_EL2 when evaluating whether
caches are on or off on entry to virtual EL2 (because this is the value
that we end up shadowing onto the hardware EL1 register).

Signed-off-by: Christoffer Dall <christoffer.dall@arm.com>
Signed-off-by: Jintack Lim <jintack@cs.columbia.edu>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/include/asm/kvm_mmu.h | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index 02d378887743..c018c7b40761 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -115,6 +115,7 @@ alternative_cb_end
 #include <asm/cacheflush.h>
 #include <asm/mmu_context.h>
 #include <asm/kvm_host.h>
+#include <asm/kvm_emulate.h>
 
 void kvm_update_va_mask(struct alt_instr *alt,
 			__le32 *origptr, __le32 *updptr, int nr_inst);
@@ -185,7 +186,10 @@ struct kvm;
 
 static inline bool vcpu_has_cache_enabled(struct kvm_vcpu *vcpu)
 {
-	return (vcpu_read_sys_reg(vcpu, SCTLR_EL1) & 0b101) == 0b101;
+	if (vcpu_mode_el2(vcpu))
+		return (__vcpu_sys_reg(vcpu, SCTLR_EL2) & 0b101) == 0b101;
+	else
+		return (vcpu_read_sys_reg(vcpu, SCTLR_EL1) & 0b101) == 0b101;
 }
 
 static inline void __clean_dcache_guest_page(void *va, size_t size)
-- 
2.30.2
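
For reference, the 0b101 mask checks bits 0 and 2 of SCTLR_ELx: M (MMU
enable, bit 0) and C (data cacheability, bit 2). A more self-documenting
way to spell the same check, using the SCTLR_ELx_M/SCTLR_ELx_C
definitions from <asm/sysreg.h>, could look like the sketch below; this
is illustrative only, not part of the patch:

	static inline bool vcpu_has_cache_enabled(struct kvm_vcpu *vcpu)
	{
		/* Caches count as "on" only with both MMU (M) and D-cache (C) enabled */
		const u64 mask = SCTLR_ELx_M | SCTLR_ELx_C;	/* BIT(0) | BIT(2) == 0b101 */
		u64 sctlr;

		/*
		 * In virtual EL2, the guest hypervisor's SCTLR_EL2 is the
		 * value that gets shadowed onto the hardware EL1 register,
		 * so it is the one that determines the cache state.
		 */
		if (vcpu_mode_el2(vcpu))
			sctlr = __vcpu_sys_reg(vcpu, SCTLR_EL2);
		else
			sctlr = vcpu_read_sys_reg(vcpu, SCTLR_EL1);

		return (sctlr & mask) == mask;
	}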