From mboxrd@z Thu Jan 1 00:00:00 1970
From: Paolo Bonzini
Subject: Re: [PATCH v2 2/5] kvm: x86: mmu: Rename spte_is_locklessly_modifiable()
Date: Mon, 21 Nov 2016 14:07:11 +0100
Message-ID: 
References: <1478646030-101103-1-git-send-email-junaids@google.com>
 <1478646030-101103-3-git-send-email-junaids@google.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Cc: andreslc@google.com, pfeiner@google.com, guangrong.xiao@linux.intel.com
To: Junaid Shahid , kvm@vger.kernel.org
Return-path: 
Received: from mx1.redhat.com ([209.132.183.28]:51524 "EHLO mx1.redhat.com"
 rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S932332AbcKUNHU
 (ORCPT ); Mon, 21 Nov 2016 08:07:20 -0500
In-Reply-To: <1478646030-101103-3-git-send-email-junaids@google.com>
Sender: kvm-owner@vger.kernel.org
List-ID: 

On 09/11/2016 00:00, Junaid Shahid wrote:
> This change renames spte_is_locklessly_modifiable() to
> spte_can_locklessly_be_made_writable() to distinguish it from other
> forms of lockless modifications. The full set of lockless modifications
> is covered by spte_has_volatile_bits().
> 
> Signed-off-by: Junaid Shahid 
> ---
>  arch/x86/kvm/mmu.c | 10 +++++-----
>  1 file changed, 5 insertions(+), 5 deletions(-)
> 
> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> index d9c7e98..e580134 100644
> --- a/arch/x86/kvm/mmu.c
> +++ b/arch/x86/kvm/mmu.c
> @@ -473,7 +473,7 @@ retry:
>  	}
>  #endif
>  
> -static bool spte_is_locklessly_modifiable(u64 spte)
> +static bool spte_can_locklessly_be_made_writable(u64 spte)
>  {
>  	return (spte & (SPTE_HOST_WRITEABLE | SPTE_MMU_WRITEABLE)) ==
>  		(SPTE_HOST_WRITEABLE | SPTE_MMU_WRITEABLE);
> @@ -487,7 +487,7 @@ static bool spte_has_volatile_bits(u64 spte)
>  	 * also, it can help us to get a stable is_writable_pte()
>  	 * to ensure tlb flush is not missed.
>  	 */
> -	if (spte_is_locklessly_modifiable(spte))
> +	if (spte_can_locklessly_be_made_writable(spte))
>  		return true;
>  
>  	if (!shadow_accessed_mask)
> @@ -556,7 +556,7 @@ static bool mmu_spte_update(u64 *sptep, u64 new_spte)
>  	 * we always atomically update it, see the comments in
>  	 * spte_has_volatile_bits().
>  	 */
> -	if (spte_is_locklessly_modifiable(old_spte) &&
> +	if (spte_can_locklessly_be_made_writable(old_spte) &&
>  	      !is_writable_pte(new_spte))
>  		ret = true;
>  
> @@ -1212,7 +1212,7 @@ static bool spte_write_protect(u64 *sptep, bool pt_protect)
>  	u64 spte = *sptep;
>  
>  	if (!is_writable_pte(spte) &&
> -	      !(pt_protect && spte_is_locklessly_modifiable(spte)))
> +	      !(pt_protect && spte_can_locklessly_be_made_writable(spte)))
>  		return false;
>  
>  	rmap_printk("rmap_write_protect: spte %p %llx\n", sptep, *sptep);
> @@ -2973,7 +2973,7 @@ static bool fast_page_fault(struct kvm_vcpu *vcpu, gva_t gva, int level,
>  	 * Currently, to simplify the code, only the spte write-protected
>  	 * by dirty-log can be fast fixed.
>  	 */
> -	if (!spte_is_locklessly_modifiable(spte))
> +	if (!spte_can_locklessly_be_made_writable(spte))
>  		goto exit;
>  
>  	/*
> 

Reviewed-by: Paolo Bonzini