From mboxrd@z Thu Jan  1 00:00:00 1970
From: David Woodhouse
To: kvm@vger.kernel.org, Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, Joao Martins, Boris Ostrovsky, Metin Kaya,
	Paul Durrant
Subject: [PATCH v0 02/15] KVM: x86/xen: Use gfn_to_pfn_cache for runstate area
Date: Thu, 10 Feb 2022 00:27:08 +0000
Message-Id: <20220210002721.273608-3-dwmw2@infradead.org>
X-Mailer: git-send-email 2.33.1
In-Reply-To: <20220210002721.273608-1-dwmw2@infradead.org>
References: <20220210002721.273608-1-dwmw2@infradead.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: David Woodhouse

Signed-off-by: David Woodhouse
---
 arch/x86/include/asm/kvm_host.h |   3 +-
 arch/x86/kvm/x86.c              |   1 +
 arch/x86/kvm/xen.c              | 111 ++++++++++++++++----------------
 arch/x86/kvm/xen.h              |   6 +-
 4 files changed, 62 insertions(+), 59 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 6e7c545bc7ee..1e73053fd2bf 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -603,10 +603,9 @@ struct kvm_vcpu_xen {
 	u32 current_runstate;
 	bool vcpu_info_set;
 	bool vcpu_time_info_set;
-	bool runstate_set;
 	struct gfn_to_hva_cache vcpu_info_cache;
 	struct gfn_to_hva_cache vcpu_time_info_cache;
-	struct gfn_to_hva_cache runstate_cache;
+	struct gfn_to_pfn_cache runstate_cache;
 	u64 last_steal;
 	u64 runstate_entry_time;
 	u64 runstate_times[4];
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 74b53a16f38a..5d0191bf30b3 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -11195,6 +11195,7 @@ void kvm_arch_vcpu_destroy(struct kvm_vcpu *vcpu)
 	free_cpumask_var(vcpu->arch.wbinvd_dirty_mask);
 	fpu_free_guest_fpstate(&vcpu->arch.guest_fpu);
 
+	kvm_xen_destroy_vcpu(vcpu);
 	kvm_hv_vcpu_uninit(vcpu);
 	kvm_pmu_destroy(vcpu);
 	kfree(vcpu->arch.mce_banks);
diff --git a/arch/x86/kvm/xen.c b/arch/x86/kvm/xen.c
index 39b319f428bc..5d40d6521440 100644
--- a/arch/x86/kvm/xen.c
+++ b/arch/x86/kvm/xen.c
@@ -133,27 +133,37 @@ static void kvm_xen_update_runstate(struct kvm_vcpu *v, int state)
 void kvm_xen_update_runstate_guest(struct kvm_vcpu *v, int state)
 {
 	struct kvm_vcpu_xen *vx = &v->arch.xen;
-	struct gfn_to_hva_cache *ghc = &vx->runstate_cache;
-	struct kvm_memslots *slots = kvm_memslots(v->kvm);
-	bool atomic = (state == RUNSTATE_runnable);
-	uint64_t state_entry_time;
-	int __user *user_state;
-	uint64_t __user *user_times;
+	struct gfn_to_pfn_cache *gpc = &vx->runstate_cache;
+	uint64_t *user_times;
+	unsigned long flags;
+	size_t user_len;
+	int *user_state;
 
 	kvm_xen_update_runstate(v, state);
 
-	if (!vx->runstate_set)
+	if (!vx->runstate_cache.active)
 		return;
 
-	if (unlikely(slots->generation != ghc->generation || kvm_is_error_hva(ghc->hva)) &&
-	    kvm_gfn_to_hva_cache_init(v->kvm, ghc, ghc->gpa, ghc->len))
-		return;
+	if (IS_ENABLED(CONFIG_64BIT) && v->kvm->arch.xen.long_mode)
+		user_len = sizeof(struct vcpu_runstate_info);
+	else
+		user_len = sizeof(struct compat_vcpu_runstate_info);
 
-	/* We made sure it fits in a single page */
-	BUG_ON(!ghc->memslot);
+	read_lock_irqsave(&gpc->lock, flags);
+	while (!kvm_gfn_to_pfn_cache_check(v->kvm, gpc, gpc->gpa,
+					   user_len)) {
+		read_unlock_irqrestore(&gpc->lock, flags);
 
-	if (atomic)
-		pagefault_disable();
+		/* When invoked from kvm_sched_out() we cannot sleep */
+		if (state == RUNSTATE_runnable)
+			return;
+
+		if (kvm_gfn_to_pfn_cache_refresh(v->kvm, gpc, gpc->gpa,
+						 user_len, false))
+			return;
+
+		read_lock_irqsave(&gpc->lock, flags);
+	}
 
 	/*
 	 * The only difference between 32-bit and 64-bit versions of the
@@ -167,37 +177,32 @@ void kvm_xen_update_runstate_guest(struct kvm_vcpu *v, int state)
 	 */
 	BUILD_BUG_ON(offsetof(struct vcpu_runstate_info, state) != 0);
 	BUILD_BUG_ON(offsetof(struct compat_vcpu_runstate_info, state) != 0);
-	user_state = (int __user *)ghc->hva;
-
 	BUILD_BUG_ON(sizeof(struct compat_vcpu_runstate_info) != 0x2c);
-
-	user_times = (uint64_t __user *)(ghc->hva +
-					 offsetof(struct compat_vcpu_runstate_info,
-						  state_entry_time));
 #ifdef CONFIG_X86_64
 	BUILD_BUG_ON(offsetof(struct vcpu_runstate_info, state_entry_time) !=
		     offsetof(struct compat_vcpu_runstate_info, state_entry_time) + 4);
 	BUILD_BUG_ON(offsetof(struct vcpu_runstate_info, time) !=
 		     offsetof(struct compat_vcpu_runstate_info, time) + 4);
-
-	if (v->kvm->arch.xen.long_mode)
-		user_times = (uint64_t __user *)(ghc->hva +
-						 offsetof(struct vcpu_runstate_info,
-							  state_entry_time));
 #endif
+
+	user_state = gpc->khva;
+
+	if (IS_ENABLED(CONFIG_64BIT) && v->kvm->arch.xen.long_mode)
+		user_times = gpc->khva + offsetof(struct vcpu_runstate_info,
+						  state_entry_time);
+	else
+		user_times = gpc->khva + offsetof(struct compat_vcpu_runstate_info,
+						  state_entry_time);
+
 	/*
 	 * First write the updated state_entry_time to the guest area.
 	 */
-	state_entry_time = vx->runstate_entry_time;
-	state_entry_time |= XEN_RUNSTATE_UPDATE;
-
 	BUILD_BUG_ON(sizeof_field(struct vcpu_runstate_info, state_entry_time) !=
-		     sizeof(state_entry_time));
+		     sizeof(user_times[0]));
 	BUILD_BUG_ON(sizeof_field(struct compat_vcpu_runstate_info, state_entry_time) !=
-		     sizeof(state_entry_time));
+		     sizeof(user_times[0]));
 
-	if (__put_user(state_entry_time, user_times))
-		goto out;
+	user_times[0] = vx->runstate_entry_time | XEN_RUNSTATE_UPDATE;
 	smp_wmb();
 
 	/*
@@ -209,8 +214,7 @@ void kvm_xen_update_runstate_guest(struct kvm_vcpu *v, int state)
 	BUILD_BUG_ON(sizeof_field(struct compat_vcpu_runstate_info, state) !=
 		     sizeof(vx->current_runstate));
 
-	if (__put_user(vx->current_runstate, user_state))
-		goto out;
+	*user_state = vx->current_runstate;
 
 	/*
 	 * Write the actual runstate times immediately after the
@@ -225,23 +229,19 @@ void kvm_xen_update_runstate_guest(struct kvm_vcpu *v, int state)
 	BUILD_BUG_ON(sizeof_field(struct vcpu_runstate_info, time) !=
 		     sizeof(vx->runstate_times));
 
-	if (__copy_to_user(user_times + 1, vx->runstate_times, sizeof(vx->runstate_times)))
-		goto out;
+	memcpy(user_times + 1, vx->runstate_times, sizeof(vx->runstate_times));
 	smp_wmb();
 
 	/*
 	 * Finally, clear the XEN_RUNSTATE_UPDATE bit in the guest's
 	 * runstate_entry_time field.
	 */
-	state_entry_time &= ~XEN_RUNSTATE_UPDATE;
-	__put_user(state_entry_time, user_times);
+	user_times[0] &= ~XEN_RUNSTATE_UPDATE;
 	smp_wmb();
 
- out:
-	mark_page_dirty_in_slot(v->kvm, ghc->memslot, ghc->gpa >> PAGE_SHIFT);
+	read_unlock_irqrestore(&gpc->lock, flags);
 
-	if (atomic)
-		pagefault_enable();
+	mark_page_dirty_in_slot(v->kvm, gpc->memslot, gpc->gpa >> PAGE_SHIFT);
 }
 
 int __kvm_xen_has_interrupt(struct kvm_vcpu *v)
@@ -504,24 +504,17 @@ int kvm_xen_vcpu_set_attr(struct kvm_vcpu *vcpu, struct kvm_xen_vcpu_attr *data)
 			break;
 		}
 		if (data->u.gpa == GPA_INVALID) {
-			vcpu->arch.xen.runstate_set = false;
+			kvm_gfn_to_pfn_cache_destroy(vcpu->kvm,
+						     &vcpu->arch.xen.runstate_cache);
 			r = 0;
 			break;
 		}
 
-		/* It must fit within a single page */
-		if ((data->u.gpa & ~PAGE_MASK) + sizeof(struct vcpu_runstate_info) > PAGE_SIZE) {
-			r = -EINVAL;
-			break;
-		}
-
-		r = kvm_gfn_to_hva_cache_init(vcpu->kvm,
+		r = kvm_gfn_to_pfn_cache_init(vcpu->kvm,
 					      &vcpu->arch.xen.runstate_cache,
-					      data->u.gpa,
-					      sizeof(struct vcpu_runstate_info));
-		if (!r) {
-			vcpu->arch.xen.runstate_set = true;
-		}
+					      NULL, false, true, data->u.gpa,
+					      sizeof(struct vcpu_runstate_info),
+					      false);
 		break;
 
 	case KVM_XEN_VCPU_ATTR_TYPE_RUNSTATE_CURRENT:
@@ -656,7 +649,7 @@ int kvm_xen_vcpu_get_attr(struct kvm_vcpu *vcpu, struct kvm_xen_vcpu_attr *data)
 			r = -EOPNOTSUPP;
 			break;
 		}
-		if (vcpu->arch.xen.runstate_set) {
+		if (vcpu->arch.xen.runstate_cache.active) {
 			data->u.gpa = vcpu->arch.xen.runstate_cache.gpa;
 			r = 0;
 		}
@@ -1054,3 +1047,9 @@ int kvm_xen_setup_evtchn(struct kvm *kvm,
 
 	return 0;
 }
+
+void kvm_xen_destroy_vcpu(struct kvm_vcpu *vcpu)
+{
+	kvm_gfn_to_pfn_cache_destroy(vcpu->kvm,
+				     &vcpu->arch.xen.runstate_cache);
+}
diff --git a/arch/x86/kvm/xen.h b/arch/x86/kvm/xen.h
index adbcc9ed59db..54b2bf4c3001 100644
--- a/arch/x86/kvm/xen.h
+++ b/arch/x86/kvm/xen.h
@@ -23,7 +23,7 @@ int kvm_xen_write_hypercall_page(struct kvm_vcpu *vcpu, u64 data);
 int kvm_xen_hvm_config(struct kvm *kvm, struct kvm_xen_hvm_config *xhc);
 void kvm_xen_init_vm(struct kvm *kvm);
 void kvm_xen_destroy_vm(struct kvm *kvm);
-
+void kvm_xen_destroy_vcpu(struct kvm_vcpu *vcpu);
 int kvm_xen_set_evtchn_fast(struct kvm_kernel_irq_routing_entry *e,
 			    struct kvm *kvm);
 int kvm_xen_setup_evtchn(struct kvm *kvm,
@@ -65,6 +65,10 @@ static inline void kvm_xen_destroy_vm(struct kvm *kvm)
 {
 }
 
+static inline void kvm_xen_destroy_vcpu(struct kvm_vcpu *vcpu)
+{
+}
+
 static inline bool kvm_xen_msr_enabled(struct kvm *kvm)
 {
 	return false;
-- 
2.33.1
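
A note on the write ordering in kvm_xen_update_runstate_guest(): the stores
to user_times[0], each followed by smp_wmb(), form a seqlock-style protocol
with the guest. state_entry_time acts as the sequence word and
XEN_RUNSTATE_UPDATE (bit 63) as its "update in progress" flag, so the guest
is expected to retry until it observes a stable value with the bit clear.
Roughly like the following sketch (illustrative only, not part of this
patch; it assumes the 64-bit vcpu_runstate_info layout and x86 barrier
semantics, and runstate_snapshot is a made-up name):

	#include <stdint.h>
	#include <string.h>

	/* Provided by the guest environment; a compiler barrier suffices on x86. */
	#define smp_rmb()	__asm__ __volatile__("" ::: "memory")

	#define XEN_RUNSTATE_UPDATE	(1ULL << 63)

	/* Simplified 64-bit layout of the shared runstate area. */
	struct vcpu_runstate_info {
		int state;
		uint64_t state_entry_time;
		uint64_t time[4];
	};

	/*
	 * Snapshot the runstate times, retrying while the hypervisor has
	 * XEN_RUNSTATE_UPDATE set or state_entry_time changes under us.
	 */
	static void runstate_snapshot(volatile struct vcpu_runstate_info *ri,
				      uint64_t times[4])
	{
		uint64_t entry_time;

		do {
			entry_time = ri->state_entry_time;
			smp_rmb();	/* pairs with the hypervisor's smp_wmb() */
			memcpy(times, (const void *)ri->time, sizeof(uint64_t) * 4);
			smp_rmb();
		} while ((entry_time & XEN_RUNSTATE_UPDATE) ||
			 entry_time != ri->state_entry_time);
	}

Given that retry loop on the guest side, the hypervisor only has to keep its
three stores ordered; it can clear the bit at the end with a plain store.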
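
The argument list of the new kvm_gfn_to_pfn_cache_init() call is easy to
misread in the diff. Against the gfn_to_pfn_cache API introduced earlier in
this series, my reading of the call is roughly as annotated below (the
parameter names are an assumption from that API, not taken from this patch):

	r = kvm_gfn_to_pfn_cache_init(vcpu->kvm,
				      &vcpu->arch.xen.runstate_cache,
				      NULL,	/* vcpu: none to kick on invalidation */
				      false,	/* guest_uses_pa: guest never sees the PA */
				      true,	/* kernel_map: populate gpc->khva */
				      data->u.gpa,
				      sizeof(struct vcpu_runstate_info),
				      false);	/* dirty: caller marks the page itself */

If that reading is right, kernel_map=true is what makes the plain gpc->khva
accesses in kvm_xen_update_runstate_guest() possible, replacing the old
__put_user()/__copy_to_user() calls on the gfn_to_hva_cache, and the trailing
false matches the explicit mark_page_dirty_in_slot() after the update.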