Date: Wed, 18 Aug 2021 11:06:28 +0100
Message-ID: <87sfz7rrrv.wl-maz@kernel.org>
From: Marc Zyngier <maz@kernel.org>
To: Oliver Upton <oupton@google.com>
Cc: kvm@vger.kernel.org, kvmarm@lists.cs.columbia.edu,
	Peter Shier <pshier@google.com>,
	Ricardo Koller <ricarkol@google.com>,
	Jing Zhang <jingzhangos@google.com>,
	Raghavendra Rao Anata <rananta@google.com>,
	James Morse <james.morse@arm.com>,
	Alexandru Elisei <alexandru.elisei@arm.com>,
	Suzuki K Poulose <suzuki.poulose@arm.com>
Subject: Re: [PATCH 1/4] KVM: arm64: Fix read-side race on updates to vcpu reset state
In-Reply-To: <20210818085047.1005285-2-oupton@google.com>
References: <20210818085047.1005285-1-oupton@google.com>
	<20210818085047.1005285-2-oupton@google.com>

On Wed, 18 Aug 2021 09:50:44 +0100,
Oliver Upton <oupton@google.com> wrote:
> 
> KVM correctly serializes writes to a vCPU's reset state, however since
> we do not take the KVM lock on the read side it is entirely possible to
> read state from two different reset requests.
> 
> Cure the race for now by taking the KVM lock when reading the
> reset_state structure.
> 
> Fixes: 358b28f09f0a ("arm/arm64: KVM: Allow a VCPU to fully reset itself")
> Signed-off-by: Oliver Upton <oupton@google.com>
> ---
>  arch/arm64/kvm/reset.c | 16 ++++++++++------
>  1 file changed, 10 insertions(+), 6 deletions(-)
> 
> diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
> index 18ffc6ad67b8..3507e64ff8ad 100644
> --- a/arch/arm64/kvm/reset.c
> +++ b/arch/arm64/kvm/reset.c
> @@ -210,10 +210,16 @@ static bool vcpu_allowed_register_width(struct kvm_vcpu *vcpu)
>   */
>  int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
>  {
> +	struct vcpu_reset_state reset_state;
>  	int ret;
>  	bool loaded;
>  	u32 pstate;
>  
> +	mutex_lock(&vcpu->kvm->lock);
> +	memcpy(&reset_state, &vcpu->arch.reset_state, sizeof(reset_state));

nit: "reset_state = vcpu->arch.reset_state;" should do the trick.
> +	vcpu->arch.reset_state.reset = false;

We should probably upgrade this to a WRITE_ONCE() to match the PSCI
side.

> +	mutex_unlock(&vcpu->kvm->lock);
> +
>  	/* Reset PMU outside of the non-preemptible section */
>  	kvm_pmu_vcpu_reset(vcpu);
>  
> @@ -276,8 +282,8 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
>  	 * Additional reset state handling that PSCI may have imposed on us.
>  	 * Must be done after all the sys_reg reset.
>  	 */
> -	if (vcpu->arch.reset_state.reset) {
> -		unsigned long target_pc = vcpu->arch.reset_state.pc;
> +	if (reset_state.reset) {
> +		unsigned long target_pc = reset_state.pc;
>  
>  		/* Gracefully handle Thumb2 entry point */
>  		if (vcpu_mode_is_32bit(vcpu) && (target_pc & 1)) {
> @@ -286,13 +292,11 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
>  		}
>  
>  		/* Propagate caller endianness */
> -		if (vcpu->arch.reset_state.be)
> +		if (reset_state.be)
>  			kvm_vcpu_set_be(vcpu);
>  
>  		*vcpu_pc(vcpu) = target_pc;
> -		vcpu_set_reg(vcpu, 0, vcpu->arch.reset_state.r0);
> -
> -		vcpu->arch.reset_state.reset = false;
> +		vcpu_set_reg(vcpu, 0, reset_state.r0);
>  	}
>  
>  	/* Reset timer */

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.