* [PATCH 0/3] KVM: MMU: Simple mmu cleanups
@ 2013-05-09  6:43 Takuya Yoshikawa
  2013-05-09  6:44 ` [PATCH 1/3] KVM: MMU: Clean up set_spte()'s ACC_WRITE_MASK handling Takuya Yoshikawa
                   ` (2 more replies)
  0 siblings, 3 replies; 16+ messages in thread
From: Takuya Yoshikawa @ 2013-05-09  6:43 UTC (permalink / raw)
  To: gleb, pbonzini, mtosatti; +Cc: kvm

Found while documenting mmu_lock usage for myself.

Takuya Yoshikawa (3):
  KVM: MMU: Clean up set_spte()'s ACC_WRITE_MASK handling
  KVM: MMU: Use kvm_mmu_sync_roots() in kvm_mmu_load()
  KVM: MMU: Consolidate common code in mmu_free_roots()

 arch/x86/kvm/mmu.c |   48 +++++++++++++++++++++---------------------------
 1 files changed, 21 insertions(+), 27 deletions(-)

-- 
1.7.5.4



* [PATCH 1/3] KVM: MMU: Clean up set_spte()'s ACC_WRITE_MASK handling
  2013-05-09  6:43 [PATCH 0/3] KVM: MMU: Simple mmu cleanups Takuya Yoshikawa
@ 2013-05-09  6:44 ` Takuya Yoshikawa
  2013-05-09 10:16   ` Xiao Guangrong
  2013-05-09  6:45 ` [PATCH 2/3] KVM: MMU: Use kvm_mmu_sync_roots() in kvm_mmu_load() Takuya Yoshikawa
  2013-05-09  6:46 ` [PATCH 3/3] KVM: MMU: Consolidate common code in mmu_free_roots() Takuya Yoshikawa
  2 siblings, 1 reply; 16+ messages in thread
From: Takuya Yoshikawa @ 2013-05-09  6:44 UTC (permalink / raw)
  To: gleb, pbonzini, mtosatti; +Cc: kvm

Rather than clearing the ACC_WRITE_MASK bit of pte_access in the
"if (mmu_need_write_protect())" block so that mark_page_dirty() is not
called by the following if statement, simply move the call into the
corresponding else block.

Signed-off-by: Takuya Yoshikawa <yoshikawa_takuya_b1@lab.ntt.co.jp>
---
 arch/x86/kvm/mmu.c |    7 ++-----
 1 files changed, 2 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 004cc87..08119a8 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -2386,14 +2386,11 @@ static int set_spte(struct kvm_vcpu *vcpu, u64 *sptep,
 			pgprintk("%s: found shadow page for %llx, marking ro\n",
 				 __func__, gfn);
 			ret = 1;
-			pte_access &= ~ACC_WRITE_MASK;
 			spte &= ~(PT_WRITABLE_MASK | SPTE_MMU_WRITEABLE);
-		}
+		} else
+			mark_page_dirty(vcpu->kvm, gfn);
 	}
 
-	if (pte_access & ACC_WRITE_MASK)
-		mark_page_dirty(vcpu->kvm, gfn);
-
 set_pte:
 	if (mmu_spte_update(sptep, spte))
 		kvm_flush_remote_tlbs(vcpu->kvm);
-- 
1.7.5.4



* [PATCH 2/3] KVM: MMU: Use kvm_mmu_sync_roots() in kvm_mmu_load()
  2013-05-09  6:43 [PATCH 0/3] KVM: MMU: Simple mmu cleanups Takuya Yoshikawa
  2013-05-09  6:44 ` [PATCH 1/3] KVM: MMU: Clean up set_spte()'s ACC_WRITE_MASK handling Takuya Yoshikawa
@ 2013-05-09  6:45 ` Takuya Yoshikawa
  2013-05-09 10:18   ` Xiao Guangrong
  2013-05-12 11:53   ` Gleb Natapov
  2013-05-09  6:46 ` [PATCH 3/3] KVM: MMU: Consolidate common code in mmu_free_roots() Takuya Yoshikawa
  2 siblings, 2 replies; 16+ messages in thread
From: Takuya Yoshikawa @ 2013-05-09  6:45 UTC (permalink / raw)
  To: gleb, pbonzini, mtosatti; +Cc: kvm

No need to open-code this function.

Signed-off-by: Takuya Yoshikawa <yoshikawa_takuya_b1@lab.ntt.co.jp>
---
 arch/x86/kvm/mmu.c |    4 +---
 1 files changed, 1 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 08119a8..d01f340 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -3761,9 +3761,7 @@ int kvm_mmu_load(struct kvm_vcpu *vcpu)
 	if (r)
 		goto out;
 	r = mmu_alloc_roots(vcpu);
-	spin_lock(&vcpu->kvm->mmu_lock);
-	mmu_sync_roots(vcpu);
-	spin_unlock(&vcpu->kvm->mmu_lock);
+	kvm_mmu_sync_roots(vcpu);
 	if (r)
 		goto out;
 	/* set_cr3() should ensure TLB has been flushed */
-- 
1.7.5.4



* [PATCH 3/3] KVM: MMU: Consolidate common code in mmu_free_roots()
  2013-05-09  6:43 [PATCH 0/3] KVM: MMU: Simple mmu cleanups Takuya Yoshikawa
  2013-05-09  6:44 ` [PATCH 1/3] KVM: MMU: Clean up set_spte()'s ACC_WRITE_MASK handling Takuya Yoshikawa
  2013-05-09  6:45 ` [PATCH 2/3] KVM: MMU: Use kvm_mmu_sync_roots() in kvm_mmu_load() Takuya Yoshikawa
@ 2013-05-09  6:46 ` Takuya Yoshikawa
  2013-05-09 10:11   ` Xiao Guangrong
  2 siblings, 1 reply; 16+ messages in thread
From: Takuya Yoshikawa @ 2013-05-09  6:46 UTC (permalink / raw)
  To: gleb, pbonzini, mtosatti; +Cc: kvm

By making the last three statements common to both if/else cases, the
symmetry between the locking and unlocking becomes clearer. One note
here is that VCPU's root_hpa does not need to be protected by mmu_lock.

Signed-off-by: Takuya Yoshikawa <yoshikawa_takuya_b1@lab.ntt.co.jp>
---
 arch/x86/kvm/mmu.c |   39 +++++++++++++++++++--------------------
 1 files changed, 19 insertions(+), 20 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index d01f340..bf80b46 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -2861,42 +2861,41 @@ out_unlock:
 static void mmu_free_roots(struct kvm_vcpu *vcpu)
 {
 	int i;
+	hpa_t root;
 	struct kvm_mmu_page *sp;
 	LIST_HEAD(invalid_list);
 
 	if (!VALID_PAGE(vcpu->arch.mmu.root_hpa))
 		return;
+
 	spin_lock(&vcpu->kvm->mmu_lock);
+
 	if (vcpu->arch.mmu.shadow_root_level == PT64_ROOT_LEVEL &&
 	    (vcpu->arch.mmu.root_level == PT64_ROOT_LEVEL ||
 	     vcpu->arch.mmu.direct_map)) {
-		hpa_t root = vcpu->arch.mmu.root_hpa;
-
+		root = vcpu->arch.mmu.root_hpa;
 		sp = page_header(root);
 		--sp->root_count;
-		if (!sp->root_count && sp->role.invalid) {
+		if (!sp->root_count && sp->role.invalid)
 			kvm_mmu_prepare_zap_page(vcpu->kvm, sp, &invalid_list);
-			kvm_mmu_commit_zap_page(vcpu->kvm, &invalid_list);
+	} else {
+		for (i = 0; i < 4; ++i) {
+			root = vcpu->arch.mmu.pae_root[i];
+			if (root) {
+				root &= PT64_BASE_ADDR_MASK;
+				sp = page_header(root);
+				--sp->root_count;
+				if (!sp->root_count && sp->role.invalid)
+					kvm_mmu_prepare_zap_page(vcpu->kvm, sp,
+								 &invalid_list);
+			}
+			vcpu->arch.mmu.pae_root[i] = INVALID_PAGE;
 		}
-		vcpu->arch.mmu.root_hpa = INVALID_PAGE;
-		spin_unlock(&vcpu->kvm->mmu_lock);
-		return;
 	}
-	for (i = 0; i < 4; ++i) {
-		hpa_t root = vcpu->arch.mmu.pae_root[i];
 
-		if (root) {
-			root &= PT64_BASE_ADDR_MASK;
-			sp = page_header(root);
-			--sp->root_count;
-			if (!sp->root_count && sp->role.invalid)
-				kvm_mmu_prepare_zap_page(vcpu->kvm, sp,
-							 &invalid_list);
-		}
-		vcpu->arch.mmu.pae_root[i] = INVALID_PAGE;
-	}
 	kvm_mmu_commit_zap_page(vcpu->kvm, &invalid_list);
 	spin_unlock(&vcpu->kvm->mmu_lock);
+
 	vcpu->arch.mmu.root_hpa = INVALID_PAGE;
 }
 
-- 
1.7.5.4



* Re: [PATCH 3/3] KVM: MMU: Consolidate common code in mmu_free_roots()
  2013-05-09  6:46 ` [PATCH 3/3] KVM: MMU: Consolidate common code in mmu_free_roots() Takuya Yoshikawa
@ 2013-05-09 10:11   ` Xiao Guangrong
  2013-05-10  1:05     ` Takuya Yoshikawa
  0 siblings, 1 reply; 16+ messages in thread
From: Xiao Guangrong @ 2013-05-09 10:11 UTC (permalink / raw)
  To: Takuya Yoshikawa; +Cc: gleb, pbonzini, mtosatti, kvm

On 05/09/2013 02:46 PM, Takuya Yoshikawa wrote:
> By making the last three statements common to both if/else cases, the
> symmetry between the locking and unlocking becomes clearer. One note
> here is that VCPU's root_hpa does not need to be protected by mmu_lock.
> 
> Signed-off-by: Takuya Yoshikawa <yoshikawa_takuya_b1@lab.ntt.co.jp>
> ---
>  arch/x86/kvm/mmu.c |   39 +++++++++++++++++++--------------------
>  1 files changed, 19 insertions(+), 20 deletions(-)

I DO NOT think it makes anything better.




* Re: [PATCH 1/3] KVM: MMU: Clean up set_spte()'s ACC_WRITE_MASK handling
  2013-05-09  6:44 ` [PATCH 1/3] KVM: MMU: Clean up set_spte()'s ACC_WRITE_MASK handling Takuya Yoshikawa
@ 2013-05-09 10:16   ` Xiao Guangrong
  2013-05-09 11:18     ` Gleb Natapov
  0 siblings, 1 reply; 16+ messages in thread
From: Xiao Guangrong @ 2013-05-09 10:16 UTC (permalink / raw)
  To: Takuya Yoshikawa; +Cc: gleb, pbonzini, mtosatti, kvm

On 05/09/2013 02:44 PM, Takuya Yoshikawa wrote:
> Rather than clearing the ACC_WRITE_MASK bit of pte_access in the
> "if (mmu_need_write_protect())" block so that mark_page_dirty() is not
> called by the following if statement, simply move the call into the
> corresponding else block.
> 
> Signed-off-by: Takuya Yoshikawa <yoshikawa_takuya_b1@lab.ntt.co.jp>
> ---
>  arch/x86/kvm/mmu.c |    7 ++-----
>  1 files changed, 2 insertions(+), 5 deletions(-)
> 
> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> index 004cc87..08119a8 100644
> --- a/arch/x86/kvm/mmu.c
> +++ b/arch/x86/kvm/mmu.c
> @@ -2386,14 +2386,11 @@ static int set_spte(struct kvm_vcpu *vcpu, u64 *sptep,
>  			pgprintk("%s: found shadow page for %llx, marking ro\n",
>  				 __func__, gfn);
>  			ret = 1;
> -			pte_access &= ~ACC_WRITE_MASK;
>  			spte &= ~(PT_WRITABLE_MASK | SPTE_MMU_WRITEABLE);
> -		}
> +		} else
> +			mark_page_dirty(vcpu->kvm, gfn);
>  	}
> 
> -	if (pte_access & ACC_WRITE_MASK)
> -		mark_page_dirty(vcpu->kvm, gfn);
> -
>  set_pte:
>  	if (mmu_spte_update(sptep, spte))
>  		kvm_flush_remote_tlbs(vcpu->kvm);

That function is really magic, and this change does not really help it.
I posted several patches some months ago to make this kind of code
easier to understand, but I am too tired to update them.



* Re: [PATCH 2/3] KVM: MMU: Use kvm_mmu_sync_roots() in kvm_mmu_load()
  2013-05-09  6:45 ` [PATCH 2/3] KVM: MMU: Use kvm_mmu_sync_roots() in kvm_mmu_load() Takuya Yoshikawa
@ 2013-05-09 10:18   ` Xiao Guangrong
  2013-05-12 11:53   ` Gleb Natapov
  1 sibling, 0 replies; 16+ messages in thread
From: Xiao Guangrong @ 2013-05-09 10:18 UTC (permalink / raw)
  To: Takuya Yoshikawa; +Cc: gleb, pbonzini, mtosatti, kvm

On 05/09/2013 02:45 PM, Takuya Yoshikawa wrote:
> No need to open-code this function.
> 
> Signed-off-by: Takuya Yoshikawa <yoshikawa_takuya_b1@lab.ntt.co.jp>
> ---
>  arch/x86/kvm/mmu.c |    4 +---
>  1 files changed, 1 insertions(+), 3 deletions(-)
> 
> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> index 08119a8..d01f340 100644
> --- a/arch/x86/kvm/mmu.c
> +++ b/arch/x86/kvm/mmu.c
> @@ -3761,9 +3761,7 @@ int kvm_mmu_load(struct kvm_vcpu *vcpu)
>  	if (r)
>  		goto out;
>  	r = mmu_alloc_roots(vcpu);
> -	spin_lock(&vcpu->kvm->mmu_lock);
> -	mmu_sync_roots(vcpu);
> -	spin_unlock(&vcpu->kvm->mmu_lock);
> +	kvm_mmu_sync_roots(vcpu);

Reviewed-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>



* Re: [PATCH 1/3] KVM: MMU: Clean up set_spte()'s ACC_WRITE_MASK handling
  2013-05-09 10:16   ` Xiao Guangrong
@ 2013-05-09 11:18     ` Gleb Natapov
  2013-05-09 12:16       ` Xiao Guangrong
  0 siblings, 1 reply; 16+ messages in thread
From: Gleb Natapov @ 2013-05-09 11:18 UTC (permalink / raw)
  To: Xiao Guangrong; +Cc: Takuya Yoshikawa, pbonzini, mtosatti, kvm

On Thu, May 09, 2013 at 06:16:55PM +0800, Xiao Guangrong wrote:
> On 05/09/2013 02:44 PM, Takuya Yoshikawa wrote:
> > Rather than clearing the ACC_WRITE_MASK bit of pte_access in the
> > "if (mmu_need_write_protect())" block so that mark_page_dirty() is not
> > called by the following if statement, simply move the call into the
> > corresponding else block.
> > 
> > Signed-off-by: Takuya Yoshikawa <yoshikawa_takuya_b1@lab.ntt.co.jp>
> > ---
> >  arch/x86/kvm/mmu.c |    7 ++-----
> >  1 files changed, 2 insertions(+), 5 deletions(-)
> > 
> > diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> > index 004cc87..08119a8 100644
> > --- a/arch/x86/kvm/mmu.c
> > +++ b/arch/x86/kvm/mmu.c
> > @@ -2386,14 +2386,11 @@ static int set_spte(struct kvm_vcpu *vcpu, u64 *sptep,
> >  			pgprintk("%s: found shadow page for %llx, marking ro\n",
> >  				 __func__, gfn);
> >  			ret = 1;
> > -			pte_access &= ~ACC_WRITE_MASK;
> >  			spte &= ~(PT_WRITABLE_MASK | SPTE_MMU_WRITEABLE);
> > -		}
> > +		} else
> > +			mark_page_dirty(vcpu->kvm, gfn);
> >  	}
> > 
> > -	if (pte_access & ACC_WRITE_MASK)
> > -		mark_page_dirty(vcpu->kvm, gfn);
> > -
> >  set_pte:
> >  	if (mmu_spte_update(sptep, spte))
> >  		kvm_flush_remote_tlbs(vcpu->kvm);
> 
> That function is really magic, and this change does not really help it.
> I posted several patches some months ago to make this kind of code
> easier to understand, but I am too tired to update them.
Can you point me to them? Your work is really appreciated, I am sorry
you feel this way.

--
			Gleb.


* Re: [PATCH 1/3] KVM: MMU: Clean up set_spte()'s ACC_WRITE_MASK handling
  2013-05-09 11:18     ` Gleb Natapov
@ 2013-05-09 12:16       ` Xiao Guangrong
  2013-05-10  1:33         ` Takuya Yoshikawa
  0 siblings, 1 reply; 16+ messages in thread
From: Xiao Guangrong @ 2013-05-09 12:16 UTC (permalink / raw)
  To: Gleb Natapov; +Cc: Takuya Yoshikawa, pbonzini, mtosatti, kvm

On 05/09/2013 07:18 PM, Gleb Natapov wrote:
> On Thu, May 09, 2013 at 06:16:55PM +0800, Xiao Guangrong wrote:
>> On 05/09/2013 02:44 PM, Takuya Yoshikawa wrote:
>>> Rather than clearing the ACC_WRITE_MASK bit of pte_access in the
>>> "if (mmu_need_write_protect())" block so that mark_page_dirty() is not
>>> called by the following if statement, simply move the call into the
>>> corresponding else block.
>>>
>>> Signed-off-by: Takuya Yoshikawa <yoshikawa_takuya_b1@lab.ntt.co.jp>
>>> ---
>>>  arch/x86/kvm/mmu.c |    7 ++-----
>>>  1 files changed, 2 insertions(+), 5 deletions(-)
>>>
>>> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
>>> index 004cc87..08119a8 100644
>>> --- a/arch/x86/kvm/mmu.c
>>> +++ b/arch/x86/kvm/mmu.c
>>> @@ -2386,14 +2386,11 @@ static int set_spte(struct kvm_vcpu *vcpu, u64 *sptep,
>>>  			pgprintk("%s: found shadow page for %llx, marking ro\n",
>>>  				 __func__, gfn);
>>>  			ret = 1;
>>> -			pte_access &= ~ACC_WRITE_MASK;
>>>  			spte &= ~(PT_WRITABLE_MASK | SPTE_MMU_WRITEABLE);
>>> -		}
>>> +		} else
>>> +			mark_page_dirty(vcpu->kvm, gfn);
>>>  	}
>>>
>>> -	if (pte_access & ACC_WRITE_MASK)
>>> -		mark_page_dirty(vcpu->kvm, gfn);
>>> -
>>>  set_pte:
>>>  	if (mmu_spte_update(sptep, spte))
>>>  		kvm_flush_remote_tlbs(vcpu->kvm);
>>
>> That function is really magic, and this change does not really help it.
>> I posted several patches some months ago to make this kind of code
>> easier to understand, but I am too tired to update them.
> Can you point me to them? Your work is really appreciated, I am sorry

There are two patches about this set_spte cleanups:

https://lkml.org/lkml/2013/1/23/125
https://lkml.org/lkml/2013/1/23/138

> you feel this way.

It is not your fault, it is mine.

I will update these patches when I finish the zap-all-page and
zap-mmio-sp related things.





* Re: [PATCH 3/3] KVM: MMU: Consolidate common code in mmu_free_roots()
  2013-05-09 10:11   ` Xiao Guangrong
@ 2013-05-10  1:05     ` Takuya Yoshikawa
  2013-05-10  1:43       ` Xiao Guangrong
  0 siblings, 1 reply; 16+ messages in thread
From: Takuya Yoshikawa @ 2013-05-10  1:05 UTC (permalink / raw)
  To: Xiao Guangrong; +Cc: gleb, pbonzini, mtosatti, kvm

On Thu, 09 May 2013 18:11:31 +0800
Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com> wrote:

> On 05/09/2013 02:46 PM, Takuya Yoshikawa wrote:
> > By making the last three statements common to both if/else cases, the
> > symmetry between the locking and unlocking becomes clearer. One note
> > here is that VCPU's root_hpa does not need to be protected by mmu_lock.
> > 
> > Signed-off-by: Takuya Yoshikawa <yoshikawa_takuya_b1@lab.ntt.co.jp>
> > ---
> >  arch/x86/kvm/mmu.c |   39 +++++++++++++++++++--------------------
> >  1 files changed, 19 insertions(+), 20 deletions(-)
> 
> I DO NOT think it makes anything better.
> 

Why do we need to do the same thing differently in two paths?
In particular, one path looks as if it is protecting root_hpa while the
other does not.

This one may be a small issue, but these small issues make it difficult
to see what mmu_lock is protecting.

	Takuya


* Re: [PATCH 1/3] KVM: MMU: Clean up set_spte()'s ACC_WRITE_MASK handling
  2013-05-09 12:16       ` Xiao Guangrong
@ 2013-05-10  1:33         ` Takuya Yoshikawa
  0 siblings, 0 replies; 16+ messages in thread
From: Takuya Yoshikawa @ 2013-05-10  1:33 UTC (permalink / raw)
  To: Xiao Guangrong; +Cc: Gleb Natapov, pbonzini, mtosatti, kvm

On Thu, 09 May 2013 20:16:18 +0800
Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com> wrote:

> >> That function is really magic, and this change does not really help it.
> >> I posted several patches some months ago to make this kind of code
> >> easier to understand, but I am too tired to update them.

Thank you for reviewing my patches.

But please don't express your personal frustration about your pending work
while reviewing patches from others.  "too tired" made me think you would
not send updated patches...

If you want to explain possible conflicts with your patches, I can
understand.  Please say so in that case.

	Takuya

> > Can you point me to them? Your work is really appreciated, I am sorry
> 
> There are two patches about this set_spte cleanups:
> 
> https://lkml.org/lkml/2013/1/23/125
> https://lkml.org/lkml/2013/1/23/138
> 
> > you feel this way.
> 
> It is not your fault, it is mine.
> 
> I will update these patches when I finish the zap-all-page and
> zap-mmio-sp related things.


* Re: [PATCH 3/3] KVM: MMU: Consolidate common code in mmu_free_roots()
  2013-05-10  1:05     ` Takuya Yoshikawa
@ 2013-05-10  1:43       ` Xiao Guangrong
  2013-05-13 11:24         ` Gleb Natapov
  0 siblings, 1 reply; 16+ messages in thread
From: Xiao Guangrong @ 2013-05-10  1:43 UTC (permalink / raw)
  To: Takuya Yoshikawa; +Cc: gleb, pbonzini, mtosatti, kvm

On 05/10/2013 09:05 AM, Takuya Yoshikawa wrote:
> On Thu, 09 May 2013 18:11:31 +0800
> Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com> wrote:
> 
>> On 05/09/2013 02:46 PM, Takuya Yoshikawa wrote:
>>> By making the last three statements common to both if/else cases, the
>>> symmetry between the locking and unlocking becomes clearer. One note
>>> here is that VCPU's root_hpa does not need to be protected by mmu_lock.
>>>
>>> Signed-off-by: Takuya Yoshikawa <yoshikawa_takuya_b1@lab.ntt.co.jp>
>>> ---
>>>  arch/x86/kvm/mmu.c |   39 +++++++++++++++++++--------------------
>>>  1 files changed, 19 insertions(+), 20 deletions(-)
>>
>> I DO NOT think it makes anything better.
>>
> 
> Why do we need to do the same thing differently in two paths?

They are different cases: one is long mode, the other is PAE mode.
Returning from one case and continuing to handle the other is a commonly
used pattern, and it reduces the indentation and makes the code easy to
follow.

Anyway, this is a code style question; using if-else is not better than
if() return.

> In particular, one path looks as if it is protecting root_hpa while the
> other does not.

The simple statement "vcpu->arch.mmu.root_hpa = INVALID_PAGE;" is cheap
enough that keeping it under the lock does not hurt. But moving it out
of the lock is OK.



* Re: [PATCH 2/3] KVM: MMU: Use kvm_mmu_sync_roots() in kvm_mmu_load()
  2013-05-09  6:45 ` [PATCH 2/3] KVM: MMU: Use kvm_mmu_sync_roots() in kvm_mmu_load() Takuya Yoshikawa
  2013-05-09 10:18   ` Xiao Guangrong
@ 2013-05-12 11:53   ` Gleb Natapov
  1 sibling, 0 replies; 16+ messages in thread
From: Gleb Natapov @ 2013-05-12 11:53 UTC (permalink / raw)
  To: Takuya Yoshikawa; +Cc: pbonzini, mtosatti, kvm

On Thu, May 09, 2013 at 03:45:12PM +0900, Takuya Yoshikawa wrote:
> No need to open-code this function.
> 
> Signed-off-by: Takuya Yoshikawa <yoshikawa_takuya_b1@lab.ntt.co.jp>
> ---
>  arch/x86/kvm/mmu.c |    4 +---
>  1 files changed, 1 insertions(+), 3 deletions(-)
> 
Applied, thanks.

> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> index 08119a8..d01f340 100644
> --- a/arch/x86/kvm/mmu.c
> +++ b/arch/x86/kvm/mmu.c
> @@ -3761,9 +3761,7 @@ int kvm_mmu_load(struct kvm_vcpu *vcpu)
>  	if (r)
>  		goto out;
>  	r = mmu_alloc_roots(vcpu);
> -	spin_lock(&vcpu->kvm->mmu_lock);
> -	mmu_sync_roots(vcpu);
> -	spin_unlock(&vcpu->kvm->mmu_lock);
> +	kvm_mmu_sync_roots(vcpu);
>  	if (r)
>  		goto out;
>  	/* set_cr3() should ensure TLB has been flushed */
> -- 
> 1.7.5.4

--
			Gleb.


* Re: [PATCH 3/3] KVM: MMU: Consolidate common code in mmu_free_roots()
  2013-05-10  1:43       ` Xiao Guangrong
@ 2013-05-13 11:24         ` Gleb Natapov
  2013-05-13 13:02           ` Xiao Guangrong
  0 siblings, 1 reply; 16+ messages in thread
From: Gleb Natapov @ 2013-05-13 11:24 UTC (permalink / raw)
  To: Xiao Guangrong; +Cc: Takuya Yoshikawa, pbonzini, mtosatti, kvm

On Fri, May 10, 2013 at 09:43:50AM +0800, Xiao Guangrong wrote:
> On 05/10/2013 09:05 AM, Takuya Yoshikawa wrote:
> > On Thu, 09 May 2013 18:11:31 +0800
> > Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com> wrote:
> > 
> >> On 05/09/2013 02:46 PM, Takuya Yoshikawa wrote:
> >>> By making the last three statements common to both if/else cases, the
> >>> symmetry between the locking and unlocking becomes clearer. One note
> >>> here is that VCPU's root_hpa does not need to be protected by mmu_lock.
> >>>
> >>> Signed-off-by: Takuya Yoshikawa <yoshikawa_takuya_b1@lab.ntt.co.jp>
> >>> ---
> >>>  arch/x86/kvm/mmu.c |   39 +++++++++++++++++++--------------------
> >>>  1 files changed, 19 insertions(+), 20 deletions(-)
> >>
> >> I DO NOT think it makes anything better.
> >>
> > 
> > Why do we need to do the same thing differently in two paths?
> 
> They are different cases: one is long mode, the other is PAE mode.
> Returning from one case and continuing to handle the other is a commonly
> used pattern, and it reduces the indentation and makes the code easy to
> follow.
> 
I agree that this is mostly a code style issue, and with Takuya's patch
the indentation is deeper. Also, the structure of mmu_free_roots()
currently resembles mmu_alloc_shadow_roots(); the latter has the same
if(){return} pattern. I also agree with Takuya that the locking symmetry
can be improved. What about something like this (untested):

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 40d7b2d..f8ca2f3 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -2869,22 +2869,25 @@ static void mmu_free_roots(struct kvm_vcpu *vcpu)
 
 	if (!VALID_PAGE(vcpu->arch.mmu.root_hpa))
 		return;
-	spin_lock(&vcpu->kvm->mmu_lock);
+
 	if (vcpu->arch.mmu.shadow_root_level == PT64_ROOT_LEVEL &&
 	    (vcpu->arch.mmu.root_level == PT64_ROOT_LEVEL ||
 	     vcpu->arch.mmu.direct_map)) {
 		hpa_t root = vcpu->arch.mmu.root_hpa;
 
+		spin_lock(&vcpu->kvm->mmu_lock);
 		sp = page_header(root);
 		--sp->root_count;
 		if (!sp->root_count && sp->role.invalid) {
 			kvm_mmu_prepare_zap_page(vcpu->kvm, sp, &invalid_list);
 			kvm_mmu_commit_zap_page(vcpu->kvm, &invalid_list);
 		}
-		vcpu->arch.mmu.root_hpa = INVALID_PAGE;
 		spin_unlock(&vcpu->kvm->mmu_lock);
+		vcpu->arch.mmu.root_hpa = INVALID_PAGE;
 		return;
 	}
+
+	spin_lock(&vcpu->kvm->mmu_lock);
 	for (i = 0; i < 4; ++i) {
 		hpa_t root = vcpu->arch.mmu.pae_root[i];
 

--
			Gleb.


* Re: [PATCH 3/3] KVM: MMU: Consolidate common code in mmu_free_roots()
  2013-05-13 11:24         ` Gleb Natapov
@ 2013-05-13 13:02           ` Xiao Guangrong
  2013-05-13 13:20             ` Takuya Yoshikawa
  0 siblings, 1 reply; 16+ messages in thread
From: Xiao Guangrong @ 2013-05-13 13:02 UTC (permalink / raw)
  To: Gleb Natapov; +Cc: Takuya Yoshikawa, pbonzini, mtosatti, kvm

On 05/13/2013 07:24 PM, Gleb Natapov wrote:
> On Fri, May 10, 2013 at 09:43:50AM +0800, Xiao Guangrong wrote:
>> On 05/10/2013 09:05 AM, Takuya Yoshikawa wrote:
>>> On Thu, 09 May 2013 18:11:31 +0800
>>> Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com> wrote:
>>>
>>>> On 05/09/2013 02:46 PM, Takuya Yoshikawa wrote:
>>>>> By making the last three statements common to both if/else cases, the
>>>>> symmetry between the locking and unlocking becomes clearer. One note
>>>>> here is that VCPU's root_hpa does not need to be protected by mmu_lock.
>>>>>
>>>>> Signed-off-by: Takuya Yoshikawa <yoshikawa_takuya_b1@lab.ntt.co.jp>
>>>>> ---
>>>>>  arch/x86/kvm/mmu.c |   39 +++++++++++++++++++--------------------
>>>>>  1 files changed, 19 insertions(+), 20 deletions(-)
>>>>
>>>> I DO NOT think it makes anything better.
>>>>
>>>
>>> Why do we need to do the same thing differently in two paths?
>>
>> They are different cases: one is long mode, the other is PAE mode.
>> Returning from one case and continuing to handle the other is a commonly
>> used pattern, and it reduces the indentation and makes the code easy to
>> follow.
>>
> I agree that this is mostly a code style issue, and with Takuya's patch
> the indentation is deeper. Also, the structure of mmu_free_roots()
> currently resembles mmu_alloc_shadow_roots(); the latter has the same
> if(){return} pattern. I also agree with Takuya that the locking symmetry
> can be improved. What about something like this (untested):

It looks good to me! ;)



* Re: [PATCH 3/3] KVM: MMU: Consolidate common code in mmu_free_roots()
  2013-05-13 13:02           ` Xiao Guangrong
@ 2013-05-13 13:20             ` Takuya Yoshikawa
  0 siblings, 0 replies; 16+ messages in thread
From: Takuya Yoshikawa @ 2013-05-13 13:20 UTC (permalink / raw)
  To: Xiao Guangrong; +Cc: Gleb Natapov, Takuya Yoshikawa, pbonzini, mtosatti, kvm

On Mon, 13 May 2013 21:02:10 +0800
Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com> wrote:

> On 05/13/2013 07:24 PM, Gleb Natapov wrote:

> > I agree that this is mostly a code style issue, and with Takuya's patch
> > the indentation is deeper. Also, the structure of mmu_free_roots()
> > currently resembles mmu_alloc_shadow_roots(); the latter has the same
> > if(){return} pattern. I also agree with Takuya that the locking symmetry
> > can be improved. What about something like this (untested):
> 
> It looks good to me! ;)
> 

I agree with you two!  Thank you for reviewing.

Gleb, could you please make it your official patch?

	Takuya


