From: Paolo Bonzini <pbonzini@redhat.com>
To: David Woodhouse <dwmw2@infradead.org>, kvm <kvm@vger.kernel.org>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
Joao Martins <joao.m.martins@oracle.com>,
"jmattson @ google . com" <jmattson@google.com>,
"wanpengli @ tencent . com" <wanpengli@tencent.com>,
"seanjc @ google . com" <seanjc@google.com>,
"vkuznets @ redhat . com" <vkuznets@redhat.com>,
"mtosatti @ redhat . com" <mtosatti@redhat.com>,
"joro @ 8bytes . org" <joro@8bytes.org>,
karahmed@amazon.com
Subject: Re: [PATCH 08/11] KVM: Kill kvm_map_gfn() / kvm_unmap_gfn() and gfn_to_pfn_cache
Date: Tue, 16 Nov 2021 11:21:11 +0100 [thread overview]
Message-ID: <b76e0ae5-bc5f-7101-9dd9-e8d0fba792bc@redhat.com> (raw)
In-Reply-To: <20211115165030.7422-8-dwmw2@infradead.org>
On 11/15/21 17:50, David Woodhouse wrote:
> From: David Woodhouse <dwmw@amazon.co.uk>
>
> In commit 7e2175ebd695 ("KVM: x86: Fix recording of guest steal time /
> preempted status") I removed the only user of these functions because
> it was basically impossible to use them safely.
>
> There are two stages to the GFN → PFN mapping; first through the KVM
> memslots to a userspace HVA and then through the page tables to
> translate that HVA to an underlying PFN. Invalidations of the former
> were being handled correctly, but no attempt was made to use the MMU
> notifiers to invalidate the cache when the HVA→PFN mapping changed.
>
> As a prelude to reinventing the gfn_to_pfn_cache with more usable
> semantics, rip it out entirely and untangle the implementation of
> the unsafe kvm_vcpu_map()/kvm_vcpu_unmap() functions from it.
>
> All current users of kvm_vcpu_map() also look broken right now, and
> will be dealt with separately. They broadly fall into two classes:
>
> • Those which map, access the data and immediately unmap. This is
> mostly gratuitous and could just as well use the existing user
> HVA, and could probably benefit from a gfn_to_hva_cache as they
> do so.
>
> • Those which keep the mapping around for a longer time, perhaps
> even using the PFN directly from the guest. These will need to
> be converted to the new gfn_to_pfn_cache and then kvm_vcpu_map()
> can be removed too.
>
> Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
> ---
> include/linux/kvm_host.h | 6 +--
> include/linux/kvm_types.h | 7 ---
> virt/kvm/kvm_main.c | 100 +++++---------------------------------
> 3 files changed, 12 insertions(+), 101 deletions(-)
>
> diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> index 9e0667e3723e..c310648cc8f1 100644
> --- a/include/linux/kvm_host.h
> +++ b/include/linux/kvm_host.h
> @@ -874,7 +874,7 @@ void kvm_release_pfn_dirty(kvm_pfn_t pfn);
> void kvm_set_pfn_dirty(kvm_pfn_t pfn);
> void kvm_set_pfn_accessed(kvm_pfn_t pfn);
>
> -void kvm_release_pfn(kvm_pfn_t pfn, bool dirty, struct gfn_to_pfn_cache *cache);
> +void kvm_release_pfn(kvm_pfn_t pfn, bool dirty);
> int kvm_read_guest_page(struct kvm *kvm, gfn_t gfn, void *data, int offset,
> int len);
> int kvm_read_guest(struct kvm *kvm, gpa_t gpa, void *data, unsigned long len);
> @@ -950,12 +950,8 @@ struct kvm_memory_slot *kvm_vcpu_gfn_to_memslot(struct kvm_vcpu *vcpu, gfn_t gfn
> kvm_pfn_t kvm_vcpu_gfn_to_pfn_atomic(struct kvm_vcpu *vcpu, gfn_t gfn);
> kvm_pfn_t kvm_vcpu_gfn_to_pfn(struct kvm_vcpu *vcpu, gfn_t gfn);
> int kvm_vcpu_map(struct kvm_vcpu *vcpu, gpa_t gpa, struct kvm_host_map *map);
> -int kvm_map_gfn(struct kvm_vcpu *vcpu, gfn_t gfn, struct kvm_host_map *map,
> - struct gfn_to_pfn_cache *cache, bool atomic);
> struct page *kvm_vcpu_gfn_to_page(struct kvm_vcpu *vcpu, gfn_t gfn);
> void kvm_vcpu_unmap(struct kvm_vcpu *vcpu, struct kvm_host_map *map, bool dirty);
> -int kvm_unmap_gfn(struct kvm_vcpu *vcpu, struct kvm_host_map *map,
> - struct gfn_to_pfn_cache *cache, bool dirty, bool atomic);
> unsigned long kvm_vcpu_gfn_to_hva(struct kvm_vcpu *vcpu, gfn_t gfn);
> unsigned long kvm_vcpu_gfn_to_hva_prot(struct kvm_vcpu *vcpu, gfn_t gfn, bool *writable);
> int kvm_vcpu_read_guest_page(struct kvm_vcpu *vcpu, gfn_t gfn, void *data, int offset,
> diff --git a/include/linux/kvm_types.h b/include/linux/kvm_types.h
> index 2237abb93ccd..234eab059839 100644
> --- a/include/linux/kvm_types.h
> +++ b/include/linux/kvm_types.h
> @@ -53,13 +53,6 @@ struct gfn_to_hva_cache {
> struct kvm_memory_slot *memslot;
> };
>
> -struct gfn_to_pfn_cache {
> - u64 generation;
> - gfn_t gfn;
> - kvm_pfn_t pfn;
> - bool dirty;
> -};
> -
> #ifdef KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE
> /*
> * Memory caches are used to preallocate memory ahead of various MMU flows,
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index d31724500501..9646bb9112c1 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -2548,72 +2548,36 @@ struct page *gfn_to_page(struct kvm *kvm, gfn_t gfn)
> }
> EXPORT_SYMBOL_GPL(gfn_to_page);
>
> -void kvm_release_pfn(kvm_pfn_t pfn, bool dirty, struct gfn_to_pfn_cache *cache)
> +void kvm_release_pfn(kvm_pfn_t pfn, bool dirty)
> {
> if (pfn == 0)
> return;
>
> - if (cache)
> - cache->pfn = cache->gfn = 0;
> -
> if (dirty)
> kvm_release_pfn_dirty(pfn);
> else
> kvm_release_pfn_clean(pfn);
> }
>
> -static void kvm_cache_gfn_to_pfn(struct kvm_memory_slot *slot, gfn_t gfn,
> - struct gfn_to_pfn_cache *cache, u64 gen)
> -{
> - kvm_release_pfn(cache->pfn, cache->dirty, cache);
> -
> - cache->pfn = gfn_to_pfn_memslot(slot, gfn);
> - cache->gfn = gfn;
> - cache->dirty = false;
> - cache->generation = gen;
> -}
> -
> -static int __kvm_map_gfn(struct kvm_memslots *slots, gfn_t gfn,
> - struct kvm_host_map *map,
> - struct gfn_to_pfn_cache *cache,
> - bool atomic)
> +int kvm_vcpu_map(struct kvm_vcpu *vcpu, gfn_t gfn, struct kvm_host_map *map)
> {
> kvm_pfn_t pfn;
> void *hva = NULL;
> struct page *page = KVM_UNMAPPED_PAGE;
> - struct kvm_memory_slot *slot = __gfn_to_memslot(slots, gfn);
> - u64 gen = slots->generation;
>
> if (!map)
> return -EINVAL;
>
> - if (cache) {
> - if (!cache->pfn || cache->gfn != gfn ||
> - cache->generation != gen) {
> - if (atomic)
> - return -EAGAIN;
> - kvm_cache_gfn_to_pfn(slot, gfn, cache, gen);
> - }
> - pfn = cache->pfn;
> - } else {
> - if (atomic)
> - return -EAGAIN;
> - pfn = gfn_to_pfn_memslot(slot, gfn);
> - }
> + pfn = gfn_to_pfn(vcpu->kvm, gfn);
> if (is_error_noslot_pfn(pfn))
> return -EINVAL;
>
> if (pfn_valid(pfn)) {
> page = pfn_to_page(pfn);
> - if (atomic)
> - hva = kmap_atomic(page);
> - else
> - hva = kmap(page);
> + hva = kmap(page);
> #ifdef CONFIG_HAS_IOMEM
> - } else if (!atomic) {
> - hva = memremap(pfn_to_hpa(pfn), PAGE_SIZE, MEMREMAP_WB);
> } else {
> - return -EINVAL;
> + hva = memremap(pfn_to_hpa(pfn), PAGE_SIZE, MEMREMAP_WB);
> #endif
> }
>
> @@ -2627,27 +2591,9 @@ static int __kvm_map_gfn(struct kvm_memslots *slots, gfn_t gfn,
>
> return 0;
> }
> -
> -int kvm_map_gfn(struct kvm_vcpu *vcpu, gfn_t gfn, struct kvm_host_map *map,
> - struct gfn_to_pfn_cache *cache, bool atomic)
> -{
> - return __kvm_map_gfn(kvm_memslots(vcpu->kvm), gfn, map,
> - cache, atomic);
> -}
> -EXPORT_SYMBOL_GPL(kvm_map_gfn);
> -
> -int kvm_vcpu_map(struct kvm_vcpu *vcpu, gfn_t gfn, struct kvm_host_map *map)
> -{
> - return __kvm_map_gfn(kvm_vcpu_memslots(vcpu), gfn, map,
> - NULL, false);
> -}
> EXPORT_SYMBOL_GPL(kvm_vcpu_map);
>
> -static void __kvm_unmap_gfn(struct kvm *kvm,
> - struct kvm_memory_slot *memslot,
> - struct kvm_host_map *map,
> - struct gfn_to_pfn_cache *cache,
> - bool dirty, bool atomic)
> +void kvm_vcpu_unmap(struct kvm_vcpu *vcpu, struct kvm_host_map *map, bool dirty)
> {
> if (!map)
> return;
> @@ -2655,45 +2601,21 @@ static void __kvm_unmap_gfn(struct kvm *kvm,
> if (!map->hva)
> return;
>
> - if (map->page != KVM_UNMAPPED_PAGE) {
> - if (atomic)
> - kunmap_atomic(map->hva);
> - else
> - kunmap(map->page);
> - }
> + if (map->page != KVM_UNMAPPED_PAGE)
> + kunmap(map->page);
> #ifdef CONFIG_HAS_IOMEM
> - else if (!atomic)
> - memunmap(map->hva);
> else
> - WARN_ONCE(1, "Unexpected unmapping in atomic context");
> + memunmap(map->hva);
> #endif
>
> if (dirty)
> - mark_page_dirty_in_slot(kvm, memslot, map->gfn);
> + kvm_vcpu_mark_page_dirty(vcpu, map->gfn);
>
> - if (cache)
> - cache->dirty |= dirty;
> - else
> - kvm_release_pfn(map->pfn, dirty, NULL);
> + kvm_release_pfn(map->pfn, dirty);
>
> map->hva = NULL;
> map->page = NULL;
> }
> -
> -int kvm_unmap_gfn(struct kvm_vcpu *vcpu, struct kvm_host_map *map,
> - struct gfn_to_pfn_cache *cache, bool dirty, bool atomic)
> -{
> - __kvm_unmap_gfn(vcpu->kvm, gfn_to_memslot(vcpu->kvm, map->gfn), map,
> - cache, dirty, atomic);
> - return 0;
> -}
> -EXPORT_SYMBOL_GPL(kvm_unmap_gfn);
> -
> -void kvm_vcpu_unmap(struct kvm_vcpu *vcpu, struct kvm_host_map *map, bool dirty)
> -{
> - __kvm_unmap_gfn(vcpu->kvm, kvm_vcpu_gfn_to_memslot(vcpu, map->gfn),
> - map, NULL, dirty, false);
> -}
> EXPORT_SYMBOL_GPL(kvm_vcpu_unmap);
>
> struct page *kvm_vcpu_gfn_to_page(struct kvm_vcpu *vcpu, gfn_t gfn)
>
Queued patches 2-8 as well.
Paolo