From: Paolo Bonzini
Subject: Re: [PATCH v3 7/8] kvm: x86: mmu: Lockless access tracking for Intel CPUs without EPT A bits.
Date: Wed, 14 Dec 2016 18:35:21 -0500 (EST)
Message-ID: <1605983479.4075127.1481758521178.JavaMail.zimbra@redhat.com>
In-Reply-To: <3256158.yty7x4WoCC@js-desktop.mtv.corp.google.com>
References: <1481071577-40250-8-git-send-email-junaids@google.com>
 <24585621-5485-3213-17d9-20225f836443@redhat.com>
 <3256158.yty7x4WoCC@js-desktop.mtv.corp.google.com>
To: Junaid Shahid
Cc: kvm@vger.kernel.org, andreslc@google.com, pfeiner@google.com, guangrong xiao

----- Original Message -----
> From: "Junaid Shahid"
> To: "Paolo Bonzini"
> Cc: kvm@vger.kernel.org, andreslc@google.com, pfeiner@google.com, "guangrong xiao"
> Sent: Wednesday, December 14, 2016 11:36:37 PM
> Subject: Re: [PATCH v3 7/8] kvm: x86: mmu: Lockless access tracking for Intel CPUs without EPT A bits.
>
> On Wednesday, December 14, 2016 05:28:48 PM Paolo Bonzini wrote:
> >
> > Just a few changes bordering on aesthetics...
> >
> > Please review and let me know if you agree:
> >
> > diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> > index 6ba62200530a..6b5d8ff66026 100644
> > --- a/arch/x86/kvm/mmu.c
> > +++ b/arch/x86/kvm/mmu.c
> > @@ -213,8 +213,8 @@ void kvm_mmu_set_mmio_spte_mask(u64 mmio_mask)
> >
> >  static inline bool is_access_track_spte(u64 spte)
> >  {
> > -	return shadow_acc_track_mask != 0 &&
> > -	       (spte & shadow_acc_track_mask) == shadow_acc_track_value;
> > +	/* Always false if shadow_acc_track_mask is zero. */
> > +	return (spte & shadow_acc_track_mask) == shadow_acc_track_value;
> >  }
>
> This looks good.
>
> >
> >  /*
> > @@ -526,23 +526,24 @@ static bool spte_can_locklessly_be_made_writable(u64 spte)
> >
> >  static bool spte_has_volatile_bits(u64 spte)
> >  {
> > +	if (is_shadow_present_pte(spte))
> > +		return false;
> > +
>
> This should be !is_shadow_present_pte

Caught me sending the wrong patch. :(

> >  	/*
> >  	 * Always atomically update spte if it can be updated
> >  	 * out of mmu-lock, it can ensure dirty bit is not lost,
> >  	 * also, it can help us to get a stable is_writable_pte()
> >  	 * to ensure tlb flush is not missed.
> >  	 */
> > -	if (spte_can_locklessly_be_made_writable(spte))
> > +	if (spte_can_locklessly_be_made_writable(spte) ||
> > +	    is_access_track_spte(spte))
> >  		return true;
> >
> > -	if (!is_shadow_present_pte(spte))
> > -		return false;
> > -
> > -	if (!shadow_accessed_mask)
> > -		return is_access_track_spte(spte);
> > +	if ((spte & shadow_accessed_mask) == 0 ||
> > +	    (is_writable_pte(spte) && (spte & shadow_dirty_mask) == 0))
> > +		return true;
>
> We also need a shadow_accessed_mask != 0 check here, otherwise it will
> always return true when shadow_accessed_mask is 0.

... but this is not the wrong patch, it's a genuine mistake. Thanks!

Paolo
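
For reference, here is a standalone sketch of spte_has_volatile_bits() folding in both corrections discussed above (the inverted is_shadow_present_pte() test and the missing shadow_accessed_mask != 0 guard). The mask values, bit positions, and the simplified helpers are illustrative stand-ins for this sketch only; the real masks are configured at runtime by the KVM MMU setup code and the real helpers live in arch/x86/kvm/mmu.c:

```c
#include <stdint.h>
#include <stdbool.h>

typedef uint64_t u64;

/* Illustrative values only; KVM configures these at init time.
 * shadow_acc_track_mask == 0 models a CPU with EPT A/D bits,
 * where access tracking is not used. */
static u64 shadow_accessed_mask  = 1ULL << 8;
static u64 shadow_dirty_mask     = 1ULL << 9;
static u64 shadow_acc_track_mask  = 0;
static u64 shadow_acc_track_value = 0;

#define PT_PRESENT_MASK    (1ULL << 0)
#define PT_WRITABLE_MASK   (1ULL << 1)
#define SPTE_MMU_WRITEABLE (1ULL << 53)	/* illustrative bit position */

static bool is_shadow_present_pte(u64 spte)
{
	return spte & PT_PRESENT_MASK;
}

static bool is_writable_pte(u64 spte)
{
	return spte & PT_WRITABLE_MASK;
}

static bool is_access_track_spte(u64 spte)
{
	/* Explicit mask check kept so the sketch is safe when the
	 * mask is zero, as in this standalone configuration. */
	return shadow_acc_track_mask != 0 &&
	       (spte & shadow_acc_track_mask) == shadow_acc_track_value;
}

static bool spte_can_locklessly_be_made_writable(u64 spte)
{
	return spte & SPTE_MMU_WRITEABLE;
}

static bool spte_has_volatile_bits(u64 spte)
{
	/* Fix 1: a non-present spte has no volatile bits. */
	if (!is_shadow_present_pte(spte))
		return false;

	/*
	 * Always atomically update spte if it can be updated out of
	 * mmu-lock: this keeps the dirty bit from being lost and gives
	 * a stable is_writable_pte() so a tlb flush is not missed.
	 */
	if (spte_can_locklessly_be_made_writable(spte) ||
	    is_access_track_spte(spte))
		return true;

	/* Fix 2: only consider A/D bits volatile when the hardware
	 * actually has an accessed mask; with a zero mask the first
	 * clause below would otherwise always be true. */
	if (shadow_accessed_mask != 0 &&
	    ((spte & shadow_accessed_mask) == 0 ||
	     (is_writable_pte(spte) && (spte & shadow_dirty_mask) == 0)))
		return true;

	return false;
}
```

With the illustrative masks above: a non-present spte is never volatile, a present spte with the accessed bit clear is volatile, and a present read-only spte with the accessed bit set is not.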