From: Junaid Shahid
Subject: Re: [PATCH v3 7/8] kvm: x86: mmu: Lockless access tracking for Intel CPUs without EPT A bits.
Date: Wed, 14 Dec 2016 14:36:37 -0800
Message-ID: <3256158.yty7x4WoCC@js-desktop.mtv.corp.google.com>
References: <1481071577-40250-8-git-send-email-junaids@google.com>
 <24585621-5485-3213-17d9-20225f836443@redhat.com>
To: Paolo Bonzini
Cc: kvm@vger.kernel.org, andreslc@google.com, pfeiner@google.com, guangrong.xiao@linux.intel.com
In-Reply-To: <24585621-5485-3213-17d9-20225f836443@redhat.com>

On Wednesday, December 14, 2016 05:28:48 PM Paolo Bonzini wrote:
> 
> Just a few changes bordering on aesthetics...
> 
> Please review and let me know if you agree:
> 
> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> index 6ba62200530a..6b5d8ff66026 100644
> --- a/arch/x86/kvm/mmu.c
> +++ b/arch/x86/kvm/mmu.c
> @@ -213,8 +213,8 @@ void kvm_mmu_set_mmio_spte_mask(u64 mmio_mask)
> 
>  static inline bool is_access_track_spte(u64 spte)
>  {
> -	return shadow_acc_track_mask != 0 &&
> -	       (spte & shadow_acc_track_mask) == shadow_acc_track_value;
> +	/* Always false if shadow_acc_track_mask is zero. */
> +	return (spte & shadow_acc_track_mask) == shadow_acc_track_value;
>  }

This looks good.
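For what it's worth, the simplification works because shadow_acc_track_value is nonzero, so the comparison can never hold once the mask is cleared. A self-contained sketch of that invariant, using made-up bit positions rather than the real mmu.c definitions:

```c
#include <stdbool.h>
#include <stdint.h>

typedef uint64_t u64;

/* Hypothetical stand-in values, only for illustration; the real
 * definitions live in arch/x86/kvm/mmu.c. */
static u64 shadow_acc_track_mask;                     /* 0 => tracking disabled */
static const u64 shadow_acc_track_value = 1ull << 62; /* must stay nonzero */

static inline bool is_access_track_spte(u64 spte)
{
	/*
	 * Always false if shadow_acc_track_mask is zero: (spte & 0) == 0
	 * can never equal the nonzero shadow_acc_track_value.
	 */
	return (spte & shadow_acc_track_mask) == shadow_acc_track_value;
}
```

With the mask cleared, is_access_track_spte() rejects every SPTE, including ones that carry the marker bit; with the mask set, only SPTEs matching the marker value are treated as access-track SPTEs.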
> 
>  /*
> @@ -526,23 +526,24 @@ static bool spte_can_locklessly_be_made_writable(u64 spte)
> 
>  static bool spte_has_volatile_bits(u64 spte)
>  {
> +	if (is_shadow_present_pte(spte))
> +		return false;
> +

This should be !is_shadow_present_pte, otherwise every present SPTE would
be reported as non-volatile.

> 	/*
> 	 * Always atomically update spte if it can be updated
> 	 * out of mmu-lock, it can ensure dirty bit is not lost,
> 	 * also, it can help us to get a stable is_writable_pte()
> 	 * to ensure tlb flush is not missed.
> 	 */
> -	if (spte_can_locklessly_be_made_writable(spte))
> +	if (spte_can_locklessly_be_made_writable(spte) ||
> +	    is_access_track_spte(spte))
> 		return true;
> 
> -	if (!is_shadow_present_pte(spte))
> -		return false;
> -
> -	if (!shadow_accessed_mask)
> -		return is_access_track_spte(spte);
> +	if ((spte & shadow_accessed_mask) == 0 ||
> +	    (is_writable_pte(spte) && (spte & shadow_dirty_mask) == 0))
> +		return true;

We also need a shadow_accessed_mask != 0 check here, otherwise it will
always return true when shadow_accessed_mask is 0.

> 
> -	return (spte & shadow_accessed_mask) == 0 ||
> -	       (is_writable_pte(spte) && (spte & shadow_dirty_mask) == 0);
> +	return false;
>  }
> 
>  static bool is_accessed_spte(u64 spte)
> @@ -726,16 +727,13 @@ static bool mmu_spte_age(u64 *sptep)
>  {
>  	u64 spte = mmu_spte_get_lockless(sptep);
> 
> -	if (spte & shadow_accessed_mask) {
> +	if (!is_accessed_spte(spte))
> +		return false;
> +
> +	if (shadow_accessed_mask) {
>  		clear_bit((ffs(shadow_accessed_mask) - 1),
>  			  (unsigned long *)sptep);
> -		return true;
> -	}
> -
> -	if (shadow_accessed_mask == 0) {
> -		if (is_access_track_spte(spte))
> -			return false;
> -
> +	} else {
>  		/*
>  		 * Capture the dirty status of the page, so that it doesn't get
>  		 * lost when the SPTE is marked for access tracking.
> @@ -745,10 +743,9 @@ static bool mmu_spte_age(u64 *sptep)
> 
>  		spte = mark_spte_for_access_track(spte);
>  		mmu_spte_update_no_track(sptep, spte);
> -		return true;
>  	}
> 
> -	return false;
> +	return true;
>  }
> 

This looks good as well.

Thanks,
Junaid
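P.S. To make sure we are on the same page, here is how I would expect
spte_has_volatile_bits() to read with both of the fixes above folded into
your restructured version. This is only a standalone sketch with made-up
bit positions and stubbed-out helpers, so the control flow can be checked
outside the kernel:

```c
#include <stdbool.h>
#include <stdint.h>

typedef uint64_t u64;

/* Hypothetical stand-in bit layout, only for illustration. */
#define PT_PRESENT_MASK    (1ull << 0)
#define PT_WRITABLE_MASK   (1ull << 1)
#define SPTE_MMU_WRITEABLE (1ull << 11)
static u64 shadow_accessed_mask = 1ull << 5;  /* 0 when there is no EPT A bit */
static const u64 shadow_dirty_mask      = 1ull << 6;
static const u64 shadow_acc_track_mask  = 1ull << 62;
static const u64 shadow_acc_track_value = 1ull << 62;

/* Stubbed-out helpers standing in for the real mmu.c predicates. */
static bool is_shadow_present_pte(u64 spte) { return spte & PT_PRESENT_MASK; }
static bool is_writable_pte(u64 spte)       { return spte & PT_WRITABLE_MASK; }
static bool spte_can_locklessly_be_made_writable(u64 spte)
{
	return spte & SPTE_MMU_WRITEABLE;
}
static bool is_access_track_spte(u64 spte)
{
	return (spte & shadow_acc_track_mask) == shadow_acc_track_value;
}

static bool spte_has_volatile_bits(u64 spte)
{
	if (!is_shadow_present_pte(spte))	/* note the '!' */
		return false;

	if (spte_can_locklessly_be_made_writable(spte) ||
	    is_access_track_spte(spte))
		return true;

	/* Guard the A/D-bit checks so they can't fire when the mask is 0. */
	if (shadow_accessed_mask &&
	    ((spte & shadow_accessed_mask) == 0 ||
	     (is_writable_pte(spte) && (spte & shadow_dirty_mask) == 0)))
		return true;

	return false;
}
```

With shadow_accessed_mask set, a present SPTE whose accessed bit is clear
is still reported volatile; with the mask zeroed, a plain present SPTE is
no longer unconditionally treated as volatile, which is the case the
missing check would have broken.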