From: Oliver Upton
Date: Fri, 30 Jul 2021 07:33:21 -0700
Subject: Re: [PATCH v2 3/3] KVM: arm64: Use generic KVM xfer to guest work function
In-Reply-To: <878s1o2l6j.wl-maz@kernel.org>
References: <20210729220916.1672875-1-oupton@google.com>
 <20210729220916.1672875-4-oupton@google.com>
 <878s1o2l6j.wl-maz@kernel.org>
To: Marc Zyngier
Cc: kvmarm@lists.cs.columbia.edu, kvm@vger.kernel.org,
 linux-kernel@vger.kernel.org, Paolo Bonzini, James Morse,
 Alexandru Elisei, Suzuki K Poulose, Catalin Marinas, Will Deacon,
 Thomas Gleixner, Peter Zijlstra, Andy Lutomirski,
 linux-arm-kernel@lists.infradead.org, Peter Shier, Shakeel Butt,
 Guangyu Shi

Marc,

On Fri, Jul 30, 2021 at 2:41 AM Marc Zyngier wrote:
>
> Hi Oliver,
>
> On Thu, 29 Jul 2021 23:09:16 +0100,
> Oliver Upton wrote:
> >
> > Clean up handling of checks for pending work by switching to the generic
> > infrastructure to do so.
> >
> > We pick up handling for TIF_NOTIFY_RESUME from this switch, meaning that
> > task work will be correctly handled.
> >
> > Signed-off-by: Oliver Upton
> > ---
> >  arch/arm64/kvm/Kconfig |  1 +
> >  arch/arm64/kvm/arm.c   | 27 ++++++++++++++-------------
> >  2 files changed, 15 insertions(+), 13 deletions(-)
> >
> > diff --git a/arch/arm64/kvm/Kconfig b/arch/arm64/kvm/Kconfig
> > index a4eba0908bfa..8bc1fac5fa26 100644
> > --- a/arch/arm64/kvm/Kconfig
> > +++ b/arch/arm64/kvm/Kconfig
> > @@ -26,6 +26,7 @@ menuconfig KVM
> >  	select HAVE_KVM_ARCH_TLB_FLUSH_ALL
> >  	select KVM_MMIO
> >  	select KVM_GENERIC_DIRTYLOG_READ_PROTECT
> > +	select KVM_XFER_TO_GUEST_WORK
> >  	select SRCU
> >  	select KVM_VFIO
> >  	select HAVE_KVM_EVENTFD
> > diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
> > index 60d0a546d7fd..9762e2129813 100644
> > --- a/arch/arm64/kvm/arm.c
> > +++ b/arch/arm64/kvm/arm.c
> > @@ -6,6 +6,7 @@
> >
> >  #include
> >  #include
> > +#include
> >  #include
> >  #include
> >  #include
> > @@ -714,6 +715,13 @@ static bool vcpu_mode_is_bad_32bit(struct kvm_vcpu *vcpu)
> >  		static_branch_unlikely(&arm64_mismatched_32bit_el0);
> >  }
> >
> > +static bool kvm_vcpu_exit_request(struct kvm_vcpu *vcpu)
> > +{
> > +	return kvm_request_pending(vcpu) ||
> > +		need_new_vmid_gen(&vcpu->arch.hw_mmu->vmid) ||
> > +		xfer_to_guest_mode_work_pending();
>
> Here's what xfer_to_guest_mode_work_pending() says:
>
>  * Has to be invoked with interrupts disabled before the transition to
>  * guest mode.
>
> At the point where you call this, we already are in guest mode, at
> least in the KVM sense.

I believe the comment refers to guest mode in the hardware sense, not
KVM's vcpu->mode designation. I went by
arch/x86/kvm/x86.c:vcpu_enter_guest() to infer the author's intent.

> > +}
> > +
> >  /**
> >   * kvm_arch_vcpu_ioctl_run - the main VCPU run function to execute guest code
> >   * @vcpu:	The VCPU pointer
> > @@ -757,7 +765,11 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
> >  		/*
> >  		 * Check conditions before entering the guest
> >  		 */
> > -		cond_resched();
> > +		if (__xfer_to_guest_mode_work_pending()) {
> > +			ret = xfer_to_guest_mode_handle_work(vcpu);
>
> xfer_to_guest_mode_handle_work() already does the exact equivalent of
> __xfer_to_guest_mode_work_pending(). Why do we need to do it twice?

Right, there's no need to do the check twice.
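For illustration, the call site could then collapse to something like
this (sketch only, untested; not necessarily the exact respin), since
xfer_to_guest_mode_handle_work() already returns 0 when no work flags
are set:

	/*
	 * Inside the kvm_arch_vcpu_ioctl_run() loop, before entering
	 * the guest. xfer_to_guest_mode_handle_work() bails out early
	 * (returning 0) when no work is pending, so it can be called
	 * unconditionally here.
	 */
	ret = xfer_to_guest_mode_handle_work(vcpu);
	if (!ret)
		ret = 1;	/* keep the loop's "continue running" convention */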
> > +			if (!ret)
> > +				ret = 1;
> > +		}
> >
> >  		update_vmid(&vcpu->arch.hw_mmu->vmid);
> >
> > @@ -776,16 +788,6 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
> >
> >  		kvm_vgic_flush_hwstate(vcpu);
> >
> > -		/*
> > -		 * Exit if we have a signal pending so that we can deliver the
> > -		 * signal to user space.
> > -		 */
> > -		if (signal_pending(current)) {
> > -			ret = -EINTR;
> > -			run->exit_reason = KVM_EXIT_INTR;
> > -			++vcpu->stat.signal_exits;
> > -		}
> > -
> >  		/*
> >  		 * If we're using a userspace irqchip, then check if we need
> >  		 * to tell a userspace irqchip about timer or PMU level
> > @@ -809,8 +811,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
> >  		 */
> >  		smp_store_mb(vcpu->mode, IN_GUEST_MODE);
> >
> > -		if (ret <= 0 || need_new_vmid_gen(&vcpu->arch.hw_mmu->vmid) ||
> > -		    kvm_request_pending(vcpu)) {
> > +		if (ret <= 0 || kvm_vcpu_exit_request(vcpu)) {
>
> If you are doing this, please move the userspace irqchip handling into
> the helper as well, so that we have a single function dealing with
> collecting exit reasons.

Sure thing.

Thanks for the quick review, Marc!

--
Best,
Oliver
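P.S. One possible shape for the consolidated helper (a hypothetical
sketch only, untested; the respin may well structure this differently,
including how ret is plumbed through):

static bool kvm_vcpu_exit_request(struct kvm_vcpu *vcpu, int *ret)
{
	struct kvm_run *run = vcpu->run;

	/*
	 * Hypothetical consolidation: with a userspace irqchip, report
	 * pending timer/PMU level changes to userspace rather than
	 * entering the guest.
	 */
	if (static_branch_unlikely(&userspace_irqchip_in_use) &&
	    (kvm_timer_should_notify_user(vcpu) ||
	     kvm_pmu_should_notify_user(vcpu))) {
		*ret = -EINTR;
		run->exit_reason = KVM_EXIT_INTR;
		return true;
	}

	return kvm_request_pending(vcpu) ||
		need_new_vmid_gen(&vcpu->arch.hw_mmu->vmid) ||
		xfer_to_guest_mode_work_pending();
}

with the call site then becoming
if (ret <= 0 || kvm_vcpu_exit_request(vcpu, &ret)).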