From mboxrd@z Thu Jan  1 00:00:00 1970
From: Gavin Shan <gshan@redhat.com>
To: kvmarm@lists.linux.dev
Cc: kvmarm@lists.cs.columbia.edu, kvm@vger.kernel.org, peterx@redhat.com,
	catalin.marinas@arm.com, bgardon@google.com, shuah@kernel.org,
	andrew.jones@linux.dev, will@kernel.org, dmatlack@google.com,
	pbonzini@redhat.com, zhenyzha@redhat.com, james.morse@arm.com,
	suzuki.poulose@arm.com, alexandru.elisei@arm.com,
	oliver.upton@linux.dev, seanjc@google.com, shan.gavin@gmail.com
Subject: [PATCH v5 1/7] KVM: x86: Introduce KVM_REQ_RING_SOFT_FULL
Date: Wed, 5 Oct 2022 08:41:48 +0800
Message-ID: <20221005004154.83502-2-gshan@redhat.com>
In-Reply-To: <20221005004154.83502-1-gshan@redhat.com>
References: <20221005004154.83502-1-gshan@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Add KVM_REQ_RING_SOFT_FULL, a VCPU request that is raised in
kvm_dirty_ring_push() when the VCPU's dirty ring becomes soft-full.
On the next entry to the guest, the VCPU is forced to exit to
userspace if the request is pending and its dirty ring is still
soft-full. The check and the exit are handled by the newly
introduced helper kvm_dirty_ring_check_request(). With this,
kvm_dirty_ring_soft_full() becomes a private function.
Suggested-by: Marc Zyngier <maz@kernel.org>
Signed-off-by: Gavin Shan <gshan@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
---
 arch/x86/kvm/x86.c             | 15 ++++++---------
 include/linux/kvm_dirty_ring.h |  8 ++------
 include/linux/kvm_host.h       |  1 +
 virt/kvm/dirty_ring.c          | 19 ++++++++++++++++++-
 4 files changed, 27 insertions(+), 16 deletions(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index b0c47b41c264..0dd0d32073e7 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -10260,16 +10260,13 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
 	bool req_immediate_exit = false;
 
-	/* Forbid vmenter if vcpu dirty ring is soft-full */
-	if (unlikely(vcpu->kvm->dirty_ring_size &&
-		     kvm_dirty_ring_soft_full(&vcpu->dirty_ring))) {
-		vcpu->run->exit_reason = KVM_EXIT_DIRTY_RING_FULL;
-		trace_kvm_dirty_ring_exit(vcpu);
-		r = 0;
-		goto out;
-	}
-
 	if (kvm_request_pending(vcpu)) {
+		/* Forbid vmenter if vcpu dirty ring is soft-full */
+		if (kvm_dirty_ring_check_request(vcpu)) {
+			r = 0;
+			goto out;
+		}
+
 		if (kvm_check_request(KVM_REQ_VM_DEAD, vcpu)) {
 			r = -EIO;
 			goto out;
diff --git a/include/linux/kvm_dirty_ring.h b/include/linux/kvm_dirty_ring.h
index 906f899813dc..66508afa0b40 100644
--- a/include/linux/kvm_dirty_ring.h
+++ b/include/linux/kvm_dirty_ring.h
@@ -64,11 +64,6 @@ static inline void kvm_dirty_ring_free(struct kvm_dirty_ring *ring)
 {
 }
 
-static inline bool kvm_dirty_ring_soft_full(struct kvm_dirty_ring *ring)
-{
-	return true;
-}
-
 #else /* CONFIG_HAVE_KVM_DIRTY_RING */
 
 u32 kvm_dirty_ring_get_rsvd_entries(void);
@@ -86,11 +81,12 @@ int kvm_dirty_ring_reset(struct kvm *kvm, struct kvm_dirty_ring *ring);
  */
 void kvm_dirty_ring_push(struct kvm_dirty_ring *ring, u32 slot, u64 offset);
 
+bool kvm_dirty_ring_check_request(struct kvm_vcpu *vcpu);
+
 /* for use in vm_operations_struct */
 struct page *kvm_dirty_ring_get_page(struct kvm_dirty_ring *ring, u32 offset);
 
 void kvm_dirty_ring_free(struct kvm_dirty_ring *ring);
-bool kvm_dirty_ring_soft_full(struct kvm_dirty_ring *ring);
 
 #endif /* CONFIG_HAVE_KVM_DIRTY_RING */
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index f4519d3689e1..53fa3134fee0 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -157,6 +157,7 @@ static inline bool is_error_page(struct page *page)
 #define KVM_REQ_VM_DEAD           (1 | KVM_REQUEST_WAIT | KVM_REQUEST_NO_WAKEUP)
 #define KVM_REQ_UNBLOCK           2
 #define KVM_REQ_UNHALT            3
+#define KVM_REQ_RING_SOFT_FULL    4
 #define KVM_REQUEST_ARCH_BASE     8
 
 /*
diff --git a/virt/kvm/dirty_ring.c b/virt/kvm/dirty_ring.c
index d6fabf238032..f68d75026bc0 100644
--- a/virt/kvm/dirty_ring.c
+++ b/virt/kvm/dirty_ring.c
@@ -26,7 +26,7 @@ static u32 kvm_dirty_ring_used(struct kvm_dirty_ring *ring)
 	return READ_ONCE(ring->dirty_index) - READ_ONCE(ring->reset_index);
 }
 
-bool kvm_dirty_ring_soft_full(struct kvm_dirty_ring *ring)
+static bool kvm_dirty_ring_soft_full(struct kvm_dirty_ring *ring)
 {
 	return kvm_dirty_ring_used(ring) >= ring->soft_limit;
 }
@@ -149,6 +149,7 @@ int kvm_dirty_ring_reset(struct kvm *kvm, struct kvm_dirty_ring *ring)
 
 void kvm_dirty_ring_push(struct kvm_dirty_ring *ring, u32 slot, u64 offset)
 {
+	struct kvm_vcpu *vcpu = container_of(ring, struct kvm_vcpu, dirty_ring);
 	struct kvm_dirty_gfn *entry;
 
 	/* It should never get full */
@@ -166,6 +167,22 @@ void kvm_dirty_ring_push(struct kvm_dirty_ring *ring, u32 slot, u64 offset)
 	kvm_dirty_gfn_set_dirtied(entry);
 	ring->dirty_index++;
 	trace_kvm_dirty_ring_push(ring, slot, offset);
+
+	if (kvm_dirty_ring_soft_full(ring))
+		kvm_make_request(KVM_REQ_RING_SOFT_FULL, vcpu);
+}
+
+bool kvm_dirty_ring_check_request(struct kvm_vcpu *vcpu)
+{
+	if (kvm_check_request(KVM_REQ_RING_SOFT_FULL, vcpu) &&
+	    kvm_dirty_ring_soft_full(&vcpu->dirty_ring)) {
+		kvm_make_request(KVM_REQ_RING_SOFT_FULL, vcpu);
+		vcpu->run->exit_reason = KVM_EXIT_DIRTY_RING_FULL;
+		trace_kvm_dirty_ring_exit(vcpu);
+		return true;
+	}
+
+	return false;
 }
 
 struct page *kvm_dirty_ring_get_page(struct kvm_dirty_ring *ring, u32 offset)
-- 
2.23.0
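
[Editorial note, not part of the patch: for readers coming to the series
cold, the userspace side of the contract is unchanged by this change.
When the forced exit delivers KVM_EXIT_DIRTY_RING_FULL, the VMM harvests
the per-vcpu dirty ring and recycles it with KVM_RESET_DIRTY_RINGS. The
sketch below illustrates that flow under stated assumptions; the names
vcpu_ring, fetch_index, ring_entries and vmm_mark_page_dirty() are
hypothetical VMM-side bookkeeping, while the kvm_dirty_gfn layout, the
KVM_DIRTY_GFN_F_* flags and the ioctls are the documented KVM dirty-ring
UAPI.]

#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Hypothetical VMM bookkeeping for one vcpu's dirty ring. */
struct vcpu_ring {
	struct kvm_dirty_gfn *gfns;	/* mmap()ed from the vcpu fd at
					 * KVM_DIRTY_LOG_PAGE_OFFSET * page_size */
	uint32_t fetch_index;		/* next entry to harvest */
	uint32_t ring_entries;		/* entries in the ring; KVM requires
					 * a power of two, hence the mask below */
};

/* VMM-specific: record the page in the migration dirty bitmap (stub). */
static void vmm_mark_page_dirty(uint32_t slot, uint64_t offset)
{
	(void)slot;
	(void)offset;
}

/* Harvest dirtied GFNs and flag each harvested entry for reset. */
static void dirty_ring_harvest(struct vcpu_ring *ring)
{
	for (;;) {
		struct kvm_dirty_gfn *e =
			&ring->gfns[ring->fetch_index & (ring->ring_entries - 1)];

		/* Acquire pairs with KVM publishing the dirtied entry. */
		if (!(__atomic_load_n(&e->flags, __ATOMIC_ACQUIRE) &
		      KVM_DIRTY_GFN_F_DIRTY))
			break;

		/* e->slot is (as_id << 16) | slot_id; e->offset is the gfn
		 * offset within that memslot. */
		vmm_mark_page_dirty(e->slot, e->offset);

		/* Hand the entry back so KVM_RESET_DIRTY_RINGS can recycle it. */
		__atomic_store_n(&e->flags, KVM_DIRTY_GFN_F_RESET,
				 __ATOMIC_RELEASE);
		ring->fetch_index++;
	}
}

/* In the vcpu run loop, after ioctl(vcpu_fd, KVM_RUN, 0) returns: */
static void handle_exit(int vm_fd, struct kvm_run *run, struct vcpu_ring *ring)
{
	if (run->exit_reason == KVM_EXIT_DIRTY_RING_FULL) {
		dirty_ring_harvest(ring);
		/* VM-wide ioctl: recycle all entries flagged for reset. */
		ioctl(vm_fd, KVM_RESET_DIRTY_RINGS, 0);
	}
}

[Once the ring is reset, kvm_dirty_ring_soft_full() no longer holds, so
the re-raised KVM_REQ_RING_SOFT_FULL is consumed without another exit on
the next kvm_dirty_ring_check_request().]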