From mboxrd@z Thu Jan 1 00:00:00 1970
From: Paolo Bonzini
Subject: Re: [PATCH v3 7/8] kvm: x86: mmu: Lockless access tracking for Intel CPUs without EPT A bits.
Date: Wed, 14 Dec 2016 17:28:48 +0100
Message-ID: <24585621-5485-3213-17d9-20225f836443@redhat.com>
References: <1481071577-40250-1-git-send-email-junaids@google.com>
 <1481071577-40250-8-git-send-email-junaids@google.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Cc: andreslc@google.com, pfeiner@google.com, guangrong.xiao@linux.intel.com
To: Junaid Shahid, kvm@vger.kernel.org
Return-path:
Received: from mail-wm0-f68.google.com ([74.125.82.68]:33358 "EHLO mail-wm0-f68.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1756082AbcLNQm3 (ORCPT ); Wed, 14 Dec 2016 11:42:29 -0500
Received: by mail-wm0-f68.google.com with SMTP id u144so345985wmu.0 for ; Wed, 14 Dec 2016 08:42:28 -0800 (PST)
In-Reply-To: <1481071577-40250-8-git-send-email-junaids@google.com>
Sender: kvm-owner@vger.kernel.org
List-ID:

On 07/12/2016 01:46, Junaid Shahid wrote:
> This change implements lockless access tracking for Intel CPUs without EPT
> A bits. This is achieved by marking the PTEs as not-present (but not
> completely clearing them) when clear_flush_young() is called after marking
> the pages as accessed. When an EPT Violation is generated as a result of
> the VM accessing those pages, the PTEs are restored to their original values.
>
> Signed-off-by: Junaid Shahid

Just a few changes bordering on aesthetics... Please review and let me
know if you agree (a standalone sketch of the access-tracking idea is
appended after the diff):

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 6ba62200530a..6b5d8ff66026 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -213,8 +213,8 @@ void kvm_mmu_set_mmio_spte_mask(u64 mmio_mask)
 
 static inline bool is_access_track_spte(u64 spte)
 {
-	return shadow_acc_track_mask != 0 &&
-	       (spte & shadow_acc_track_mask) == shadow_acc_track_value;
+	/* Always false if shadow_acc_track_mask is zero. */
+	return (spte & shadow_acc_track_mask) == shadow_acc_track_value;
 }
 
 /*
@@ -526,23 +526,24 @@ static bool spte_can_locklessly_be_made_writable(u64 spte)
 
 static bool spte_has_volatile_bits(u64 spte)
 {
+	if (!is_shadow_present_pte(spte))
+		return false;
+
 	/*
 	 * Always atomically update spte if it can be updated
 	 * out of mmu-lock, it can ensure dirty bit is not lost,
 	 * also, it can help us to get a stable is_writable_pte()
 	 * to ensure tlb flush is not missed.
 	 */
-	if (spte_can_locklessly_be_made_writable(spte))
+	if (spte_can_locklessly_be_made_writable(spte) ||
+	    is_access_track_spte(spte))
 		return true;
 
-	if (!is_shadow_present_pte(spte))
-		return false;
-
-	if (!shadow_accessed_mask)
-		return is_access_track_spte(spte);
+	if ((spte & shadow_accessed_mask) == 0 ||
+	    (is_writable_pte(spte) && (spte & shadow_dirty_mask) == 0))
+		return true;
 
-	return (spte & shadow_accessed_mask) == 0 ||
-		(is_writable_pte(spte) && (spte & shadow_dirty_mask) == 0);
+	return false;
 }
 
 static bool is_accessed_spte(u64 spte)
@@ -726,16 +727,13 @@ static bool mmu_spte_age(u64 *sptep)
 {
 	u64 spte = mmu_spte_get_lockless(sptep);
 
-	if (spte & shadow_accessed_mask) {
+	if (!is_accessed_spte(spte))
+		return false;
+
+	if (shadow_accessed_mask) {
 		clear_bit((ffs(shadow_accessed_mask) - 1),
 			  (unsigned long *)sptep);
-		return true;
-	}
-
-	if (shadow_accessed_mask == 0) {
-		if (is_access_track_spte(spte))
-			return false;
-
+	} else {
 		/*
 		 * Capture the dirty status of the page, so that it doesn't get
 		 * lost when the SPTE is marked for access tracking.
@@ -745,10 +743,9 @@ static bool mmu_spte_age(u64 *sptep)
 
 		spte = mark_spte_for_access_track(spte);
 		mmu_spte_update_no_track(sptep, spte);
-		return true;
 	}
 
-	return false;
+	return true;
 }
 
 static void walk_shadow_page_lockless_begin(struct kvm_vcpu *vcpu)

Thanks,

Paolo
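P.S. For anyone who wants a standalone illustration of the access-tracking
idea described in the commit message, here is a toy model. It is not the
patch's code: the names and bit positions (RWX_MASK, SAVED_BITS_SHIFT,
ACC_TRACK_MARKER) are invented for the example, and the real patch saves
only a subset of the permission bits. It only shows the general trick of
making a PTE non-present while keeping enough state in unused bits to
restore it on the resulting EPT violation.

#include <assert.h>
#include <stdint.h>

#define RWX_MASK          0x7ull        /* low R/W/X permission bits      */
#define SAVED_BITS_SHIFT  54            /* software-available high bits   */
#define ACC_TRACK_MARKER  (1ull << 62)  /* tags an access-tracked PTE     */

/* clear_flush_young() path: make the PTE non-present but recoverable. */
static uint64_t mark_for_access_track(uint64_t pte)
{
	pte |= (pte & RWX_MASK) << SAVED_BITS_SHIFT;  /* stash permissions  */
	pte &= ~RWX_MASK;                             /* now faults on use  */
	return pte | ACC_TRACK_MARKER;                /* mark as tracked    */
}

/* EPT-violation path: the guest touched the page, so restore it. */
static uint64_t restore_from_access_track(uint64_t pte)
{
	pte |= (pte >> SAVED_BITS_SHIFT) & RWX_MASK;  /* bring back perms   */
	pte &= ~((RWX_MASK << SAVED_BITS_SHIFT) | ACC_TRACK_MARKER);
	return pte;
}

int main(void)
{
	uint64_t pte = 0x12345007ull;   /* a present, R/W/X PTE (toy value) */
	uint64_t tracked = mark_for_access_track(pte);

	assert((tracked & RWX_MASK) == 0);            /* access will fault  */
	assert(restore_from_access_track(tracked) == pte);
	return 0;
}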