From: Paolo Bonzini <pbonzini@redhat.com>
To: lantianyu1986@gmail.com
Cc: Lan Tianyu <Tianyu.Lan@microsoft.com>,
	christoffer.dall@arm.com, marc.zyngier@arm.com,
	linux@armlinux.org.uk, catalin.marinas@arm.com,
	will.deacon@arm.com, jhogan@kernel.org, ralf@linux-mips.org,
	paul.burton@mips.com, paulus@ozlabs.org,
	benh@kernel.crashing.org, mpe@ellerman.id.au, rkrcmar@redhat.com,
	tglx@linutronix.de, mingo@redhat.com, bp@alien8.de,
	hpa@zytor.com, x86@kernel.org,
	linux-arm-kernel@lists.infradead.org,
	kvmarm@lists.cs.columbia.edu, linux-kernel@vger.kernel.org,
	linux-mips@vger.kernel.org, kvm-ppc@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org, kvm@vger.kernel.org,
	michael.h.kelley@microsoft.com, kys@microsoft.com,
	vkuznets@redhat.com
Subject: Re: [PATCH 9/11] KVM/MMU: Flush tlb in the kvm_mmu_write_protect_pt_masked()
Date: Mon, 7 Jan 2019 17:26:35 +0100
Message-ID: <7eb0cde4-9436-9719-dd13-caf4ab5083a2@redhat.com>
In-Reply-To: <20190104085405.40356-10-Tianyu.Lan@microsoft.com>

On 04/01/19 09:54, lantianyu1986@gmail.com wrote:
>  		rmap_head = __gfn_to_rmap(slot->base_gfn + gfn_offset + __ffs(mask),
>  					  PT_PAGE_TABLE_LEVEL, slot);
> -		__rmap_write_protect(kvm, rmap_head, false);
> +		flush |= __rmap_write_protect(kvm, rmap_head, false);
>  
>  		/* clear the first set bit */
>  		mask &= mask - 1;
>  	}
> +
> +	if (flush && kvm_available_flush_tlb_with_range()) {
> +		kvm_flush_remote_tlbs_with_address(kvm,
> +				slot->base_gfn + gfn_offset,
> +				hweight_long(mask));

Mask is zero here, so this probably won't work.
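
For context, here is a minimal sketch of the loop this hunk sits in and why
the flush sees mask == 0.  The saved start_mask below is purely hypothetical,
shown only to illustrate one way the range information could be preserved;
it is not the patch author's eventual fix:

	/*
	 * Sketch only, not the actual patch: the loop consumes 'mask' bit by
	 * bit, so any use of 'mask' after the loop sees zero.
	 */
	unsigned long start_mask = mask;	/* hypothetical snapshot */

	while (mask) {
		rmap_head = __gfn_to_rmap(slot->base_gfn + gfn_offset + __ffs(mask),
					  PT_PAGE_TABLE_LEVEL, slot);
		flush |= __rmap_write_protect(kvm, rmap_head, false);

		/* clear the first set bit */
		mask &= mask - 1;	/* after the last iteration, mask == 0 */
	}

	if (flush && kvm_available_flush_tlb_with_range()) {
		/*
		 * hweight_long(mask) would always be 0 here; a contiguous
		 * span covering the originally set bits is one possible
		 * replacement.
		 */
		kvm_flush_remote_tlbs_with_address(kvm,
				slot->base_gfn + gfn_offset + __ffs(start_mask),
				__fls(start_mask) - __ffs(start_mask) + 1);
		flush = false;
	}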

In addition, I suspect calling the hypercall once for every 64 pages is
not very efficient.  Passing a flush list into
kvm_mmu_write_protect_pt_masked, and flushing in
kvm_arch_mmu_enable_log_dirty_pt_masked, isn't efficient either because
kvm_arch_mmu_enable_log_dirty_pt_masked is also called once per word.
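
To make the "once per word" point concrete, here is a rough, simplified sketch
of the generic loop in kvm_clear_dirty_log_protect() that drives the arch hook
(the range_list mentioned in the comments is hypothetical, not an existing
parameter):

	/*
	 * Simplified sketch, not the real code: the dirty bitmap is walked
	 * one unsigned long at a time, so the arch hook runs once per 64
	 * pages no matter where the flush itself is issued.
	 */
	for (i = 0, offset = 0; i < n / sizeof(long); i++, offset += BITS_PER_LONG) {
		unsigned long mask = dirty_bitmap_buffer[i];

		if (!mask)
			continue;

		flush = true;
		/* a hypothetical &range_list argument could collect ranges here... */
		kvm_arch_mmu_enable_log_dirty_pt_masked(kvm, memslot, offset, mask);
		/* ...but flushing here, or inside the hook, is still per word */
	}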

I don't have any good ideas, except for moving the whole
kvm_clear_dirty_log_protect loop into architecture-specific code (which
is not the direction we want---architectures should share more code, not
less).

Paolo

> +		flush = false;
> +	}
> +


Thread overview:
2019-01-04  8:53 [PATCH 00/11] X86/KVM/Hyper-V: Add HV ept tlb range list flush support in KVM lantianyu1986
2019-01-04  8:53 ` [PATCH 1/11] X86/Hyper-V: Add parameter offset for hyperv_fill_flush_guest_mapping_list() lantianyu1986
2019-01-04  8:53 ` [PATCH 2/11] KVM/VMX: Fill range list in kvm_fill_hv_flush_list_func() lantianyu1986
2019-01-04  8:53 ` [PATCH 3/11] KVM: Add spte's point in the struct kvm_mmu_page lantianyu1986
2019-01-07 16:34   ` Paolo Bonzini
2019-01-04  8:53 ` [PATCH 4/11] KVM/MMU: Introduce tlb flush with range list lantianyu1986
2019-01-07 16:39   ` Paolo Bonzini
2019-01-04  8:53 ` [PATCH 5/11] KVM/MMU: Flush tlb directly in the kvm_mmu_slot_gfn_write_protect() lantianyu1986
2019-01-04  8:54 ` [PATCH 6/11] KVM/MMU: Flush tlb with range list in sync_page() lantianyu1986
2019-01-04 16:30   ` Sean Christopherson
2019-01-07  5:13     ` Tianyu Lan
2019-01-07 16:07     ` Paolo Bonzini
2019-01-04  8:54 ` [PATCH 7/11] KVM: Remove redundant check in the kvm_get_dirty_log_protect() lantianyu1986
2019-01-04 15:50   ` Sean Christopherson
2019-01-04 21:27     ` Sean Christopherson
2019-01-07 16:20     ` Paolo Bonzini
2019-01-04  8:54 ` [PATCH 8/11] KVM: Make kvm_arch_mmu_enable_log_dirty_pt_masked() return value lantianyu1986
2019-01-04  8:54 ` [PATCH 9/11] KVM/MMU: Flush tlb in the kvm_mmu_write_protect_pt_masked() lantianyu1986
2019-01-07 16:26   ` Paolo Bonzini [this message]
2019-01-10  9:06     ` Tianyu Lan
2019-01-04  8:54 ` [PATCH 10/11] KVM: Add flush parameter for kvm_age_hva() lantianyu1986
2019-01-04  8:54 ` [PATCH 11/11] KVM/MMU: Flush tlb in the kvm_age_rmapp() lantianyu1986
     [not found]   ` <20190104161235.GB11288@linux.intel.com>
2019-01-07  3:42     ` Tianyu Lan
2019-01-07 16:31       ` Paolo Bonzini
2019-01-08  3:42         ` Tianyu Lan
2019-01-08 11:52           ` Paolo Bonzini
