From: Marc Zyngier <maz@kernel.org>
To: Jing Zhang <jingzhangos@google.com>
Cc: KVM <kvm@vger.kernel.org>, KVMARM <kvmarm@lists.cs.columbia.edu>,
	Will Deacon <will@kernel.org>,
	Paolo Bonzini <pbonzini@redhat.com>,
	David Matlack <dmatlack@google.com>,
	Oliver Upton <oupton@google.com>,
	Reiji Watanabe <reijiw@google.com>,
	Ricardo Koller <ricarkol@google.com>,
	Raghavendra Rao Ananta <rananta@google.com>
Subject: Re: [PATCH v1 2/3] KVM: arm64: Add fast path to handle permission relaxation during dirty logging
Date: Sun, 16 Jan 2022 11:14:46 +0000	[thread overview]
Message-ID: <87wnj0x789.wl-maz@kernel.org> (raw)
In-Reply-To: <20220113221829.2785604-3-jingzhangos@google.com>

On Thu, 13 Jan 2022 22:18:28 +0000,
Jing Zhang <jingzhangos@google.com> wrote:
> 
> To reduce MMU lock contention during dirty logging, all permission
> relaxation operations are performed under the read lock.
> 
> Signed-off-by: Jing Zhang <jingzhangos@google.com>
> ---
>  arch/arm64/kvm/mmu.c | 20 ++++++++++++++++++--
>  1 file changed, 18 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> index cafd5813c949..15393cb61a3f 100644
> --- a/arch/arm64/kvm/mmu.c
> +++ b/arch/arm64/kvm/mmu.c
> @@ -1084,6 +1084,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>  	unsigned long vma_pagesize, fault_granule;
>  	enum kvm_pgtable_prot prot = KVM_PGTABLE_PROT_R;
>  	struct kvm_pgtable *pgt;
> +	bool use_mmu_readlock = false;

Group this with the rest of the flags. It would also be better if it
described the condition it represents rather than what we use it for.
For example, 'perm_fault_while_logging', or something along those
lines.
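
Something like this, for example (untested, and assuming the booleans
already declared at the top of user_mem_abort()):

	bool write_fault, writable, force_pte = false;
	bool perm_fault_while_logging = false;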

>  
>  	fault_granule = 1UL << ARM64_HW_PGTABLE_LEVEL_SHIFT(fault_level);
>  	write_fault = kvm_is_write_fault(vcpu);
> @@ -1212,7 +1213,19 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>  	if (exec_fault && device)
>  		return -ENOEXEC;
>  
> -	write_lock(&kvm->mmu_lock);
> +	if (fault_status == FSC_PERM && fault_granule == PAGE_SIZE
> +				     && logging_active && write_fault)
> +		use_mmu_readlock = true;

This looks a bit clumsy; it would be better to keep this together
with the rest of the logging_active==true code. Something like:

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index bc2aba953299..59b1d5f46b06 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1114,6 +1114,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	if (logging_active) {
 		force_pte = true;
 		vma_shift = PAGE_SHIFT;
+		use_readlock = (fault_status == FSC_PERM && write_fault);
 	} else {
 		vma_shift = get_vma_page_shift(vma, hva);
 	}

I don't think we have to check fault_granule here, as I don't see
how you could get a permission fault for anything other than a
page-sized mapping.

> +	/*
> +	 * To reduce MMU contention and enhance concurrency during dirty
> +	 * logging, only acquire the read lock for permission relaxation.
> +	 * This fast path greatly reduces the performance degradation of
> +	 * guest workloads.
> +	 */

This comment makes more sense with the previous hunk. Drop the last
sentence though, as it doesn't bring much information.
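
Moved there, it could read as (again untested):

	if (logging_active) {
		force_pte = true;
		vma_shift = PAGE_SHIFT;
		/*
		 * Only acquire the read lock for permission relaxation,
		 * to reduce MMU lock contention during dirty logging.
		 */
		use_readlock = (fault_status == FSC_PERM && write_fault);
	} else {
		vma_shift = get_vma_page_shift(vma, hva);
	}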

> +	if (use_mmu_readlock)
> +		read_lock(&kvm->mmu_lock);
> +	else
> +		write_lock(&kvm->mmu_lock);
>  	pgt = vcpu->arch.hw_mmu->pgt;
>  	if (mmu_notifier_retry(kvm, mmu_seq))
>  		goto out_unlock;
> @@ -1271,7 +1284,10 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>  	}
>  
>  out_unlock:
> -	write_unlock(&kvm->mmu_lock);
> +	if (use_mmu_readlock)
> +		read_unlock(&kvm->mmu_lock);
> +	else
> +		write_unlock(&kvm->mmu_lock);
>  	kvm_set_pfn_accessed(pfn);
>  	kvm_release_pfn_clean(pfn);
>  	return ret != -EAGAIN ? ret : 0;
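
With that in place, both lock sites simply key off the flag (just a
sketch, reusing the 'use_readlock' name from the hunk above):

	if (use_readlock)
		read_lock(&kvm->mmu_lock);
	else
		write_lock(&kvm->mmu_lock);
	...
out_unlock:
	if (use_readlock)
		read_unlock(&kvm->mmu_lock);
	else
		write_unlock(&kvm->mmu_lock);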

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.

Thread overview: 6 messages
2022-01-13 22:18 [PATCH v1 0/3] ARM64: Guest performance improvement during dirty logging Jing Zhang
2022-01-13 22:18 ` [PATCH v1 1/3] KVM: arm64: Use read/write spin lock for MMU protection Jing Zhang
2022-01-13 22:18 ` [PATCH v1 2/3] KVM: arm64: Add fast path to handle permission relaxation during dirty logging Jing Zhang
2022-01-16 11:14   ` Marc Zyngier [this message]
2022-01-17  3:23     ` Jing Zhang
2022-01-13 22:18 ` [PATCH v1 3/3] KVM: selftests: Add vgic initialization for dirty log perf test for ARM Jing Zhang
