From: Junaid Shahid
Subject: [PATCH v3 2/8] kvm: x86: mmu: Rename spte_is_locklessly_modifiable()
Date: Tue, 6 Dec 2016 16:46:11 -0800
Message-ID: <1481071577-40250-3-git-send-email-junaids@google.com>
In-Reply-To: <1481071577-40250-1-git-send-email-junaids@google.com>
References: <1481071577-40250-1-git-send-email-junaids@google.com>
To: kvm@vger.kernel.org
Cc: andreslc@google.com, pfeiner@google.com, pbonzini@redhat.com,
	guangrong.xiao@linux.intel.com

This change renames spte_is_locklessly_modifiable() to
spte_can_locklessly_be_made_writable() to distinguish it from other
forms of lockless modifications. The full set of lockless modifications
is covered by spte_has_volatile_bits().

Signed-off-by: Junaid Shahid
Reviewed-by: Paolo Bonzini
---
 arch/x86/kvm/mmu.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 7012de4..4d33275 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -473,7 +473,7 @@ static u64 __get_spte_lockless(u64 *sptep)
 }
 #endif
 
-static bool spte_is_locklessly_modifiable(u64 spte)
+static bool spte_can_locklessly_be_made_writable(u64 spte)
 {
 	return (spte & (SPTE_HOST_WRITEABLE | SPTE_MMU_WRITEABLE)) ==
 		(SPTE_HOST_WRITEABLE | SPTE_MMU_WRITEABLE);
@@ -487,7 +487,7 @@ static bool spte_has_volatile_bits(u64 spte)
 	 * also, it can help us to get a stable is_writable_pte()
 	 * to ensure tlb flush is not missed.
 	 */
-	if (spte_is_locklessly_modifiable(spte))
+	if (spte_can_locklessly_be_made_writable(spte))
 		return true;
 
 	if (!shadow_accessed_mask)
@@ -556,7 +556,7 @@ static bool mmu_spte_update(u64 *sptep, u64 new_spte)
 	 * we always atomically update it, see the comments in
 	 * spte_has_volatile_bits().
 	 */
-	if (spte_is_locklessly_modifiable(old_spte) &&
+	if (spte_can_locklessly_be_made_writable(old_spte) &&
 	      !is_writable_pte(new_spte))
 		ret = true;
 
@@ -1212,7 +1212,7 @@ static bool spte_write_protect(u64 *sptep, bool pt_protect)
 	u64 spte = *sptep;
 
 	if (!is_writable_pte(spte) &&
-	      !(pt_protect && spte_is_locklessly_modifiable(spte)))
+	      !(pt_protect && spte_can_locklessly_be_made_writable(spte)))
 		return false;
 
 	rmap_printk("rmap_write_protect: spte %p %llx\n", sptep, *sptep);
@@ -2965,7 +2965,7 @@ static bool fast_page_fault(struct kvm_vcpu *vcpu, gva_t gva, int level,
 	 * Currently, to simplify the code, only the spte write-protected
 	 * by dirty-log can be fast fixed.
 	 */
-	if (!spte_is_locklessly_modifiable(spte))
+	if (!spte_can_locklessly_be_made_writable(spte))
 		goto exit;
 
 	/*
-- 
2.8.0.rc3.226.g39d4020