From mboxrd@z Thu Jan 1 00:00:00 1970
MIME-Version: 1.0
References: <20211009021236.4122790-1-seanjc@google.com> <20211009021236.4122790-14-seanjc@google.com>
In-Reply-To: <20211009021236.4122790-14-seanjc@google.com>
From: Anup Patel
Date: Fri, 22 Oct 2021 20:34:00 +0530
Subject: Re: [PATCH v2 13/43] KVM: Rename kvm_vcpu_block() => kvm_vcpu_halt()
To: Sean Christopherson
Cc: Marc Zyngier , Huacai Chen , Aleksandar Markovic , Paul Mackerras , Anup Patel , Paul Walmsley , Palmer Dabbelt , Albert Ou , Christian Borntraeger , Janosch Frank , Paolo Bonzini , Wanpeng Li , KVM General , David Hildenbrand , "linux-kernel@vger.kernel.org List" , Atish Patra , linux-riscv , Claudio Imbrenda , kvmarm@lists.cs.columbia.edu, Joerg Roedel , kvm-ppc@vger.kernel.org, David Matlack , linux-arm-kernel , Jim Mattson , Cornelia Huck , linux-mips@vger.kernel.org,
kvm-riscv@lists.infradead.org, Vitaly Kuznetsov Content-Type: text/plain; charset="UTF-8" Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org On Sat, Oct 9, 2021 at 7:43 AM Sean Christopherson wrote: > > Rename kvm_vcpu_block() to kvm_vcpu_halt() in preparation for splitting > the actual "block" sequences into a separate helper (to be named > kvm_vcpu_block()). x86 will use the standalone block-only path to handle > non-halt cases where the vCPU is not runnable. > > Rename block_ns to halt_ns to match the new function name. > > No functional change intended. > > Reviewed-by: David Matlack > Reviewed-by: Christian Borntraeger > Signed-off-by: Sean Christopherson > --- For KVM RISC-V: Reviewed-by: Anup Patel Regards, Anup > arch/arm64/kvm/arch_timer.c | 2 +- > arch/arm64/kvm/arm.c | 2 +- > arch/arm64/kvm/handle_exit.c | 2 +- > arch/arm64/kvm/psci.c | 2 +- > arch/mips/kvm/emulate.c | 2 +- > arch/powerpc/kvm/book3s_pr.c | 2 +- > arch/powerpc/kvm/book3s_pr_papr.c | 2 +- > arch/powerpc/kvm/booke.c | 2 +- > arch/powerpc/kvm/powerpc.c | 2 +- > arch/riscv/kvm/vcpu_exit.c | 2 +- > arch/s390/kvm/interrupt.c | 2 +- > arch/x86/kvm/x86.c | 11 +++++++++-- > include/linux/kvm_host.h | 2 +- > virt/kvm/kvm_main.c | 20 +++++++++----------- > 14 files changed, 30 insertions(+), 25 deletions(-) > > diff --git a/arch/arm64/kvm/arch_timer.c b/arch/arm64/kvm/arch_timer.c > index 3df67c127489..7e8396f74010 100644 > --- a/arch/arm64/kvm/arch_timer.c > +++ b/arch/arm64/kvm/arch_timer.c > @@ -467,7 +467,7 @@ static void timer_save_state(struct arch_timer_context *ctx) > } > > /* > - * Schedule the background timer before calling kvm_vcpu_block, so that this > + * Schedule the background timer before calling kvm_vcpu_halt, so that this > * thread is removed from its waitqueue and made runnable when there's a timer > * interrupt to handle. > */ > diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c > index 1346f81b34df..268b1e7bf700 100644 > --- a/arch/arm64/kvm/arm.c > +++ b/arch/arm64/kvm/arm.c > @@ -672,7 +672,7 @@ void kvm_vcpu_wfi(struct kvm_vcpu *vcpu) > vgic_v4_put(vcpu, true); > preempt_enable(); > > - kvm_vcpu_block(vcpu); > + kvm_vcpu_halt(vcpu); > kvm_clear_request(KVM_REQ_UNHALT, vcpu); > > preempt_disable(); > diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c > index 4794563a506b..6d0baf71aa67 100644 > --- a/arch/arm64/kvm/handle_exit.c > +++ b/arch/arm64/kvm/handle_exit.c > @@ -82,7 +82,7 @@ static int handle_no_fpsimd(struct kvm_vcpu *vcpu) > * > * WFE: Yield the CPU and come back to this vcpu when the scheduler > * decides to. > - * WFI: Simply call kvm_vcpu_block(), which will halt execution of > + * WFI: Simply call kvm_vcpu_halt(), which will halt execution of > * world-switches and schedule other host processes until there is an > * incoming IRQ or FIQ to the VM. > */ > diff --git a/arch/arm64/kvm/psci.c b/arch/arm64/kvm/psci.c > index 74c47d420253..e275b2ca08b9 100644 > --- a/arch/arm64/kvm/psci.c > +++ b/arch/arm64/kvm/psci.c > @@ -46,7 +46,7 @@ static unsigned long kvm_psci_vcpu_suspend(struct kvm_vcpu *vcpu) > * specification (ARM DEN 0022A). This means all suspend states > * for KVM will preserve the register state. 
> */ > - kvm_vcpu_block(vcpu); > + kvm_vcpu_halt(vcpu); > kvm_clear_request(KVM_REQ_UNHALT, vcpu); > > return PSCI_RET_SUCCESS; > diff --git a/arch/mips/kvm/emulate.c b/arch/mips/kvm/emulate.c > index 22e745e49b0a..b494d8d39290 100644 > --- a/arch/mips/kvm/emulate.c > +++ b/arch/mips/kvm/emulate.c > @@ -952,7 +952,7 @@ enum emulation_result kvm_mips_emul_wait(struct kvm_vcpu *vcpu) > if (!vcpu->arch.pending_exceptions) { > kvm_vz_lose_htimer(vcpu); > vcpu->arch.wait = 1; > - kvm_vcpu_block(vcpu); > + kvm_vcpu_halt(vcpu); > > /* > * We we are runnable, then definitely go off to user space to > diff --git a/arch/powerpc/kvm/book3s_pr.c b/arch/powerpc/kvm/book3s_pr.c > index 6bc9425acb32..0ced1b16f0e5 100644 > --- a/arch/powerpc/kvm/book3s_pr.c > +++ b/arch/powerpc/kvm/book3s_pr.c > @@ -492,7 +492,7 @@ static void kvmppc_set_msr_pr(struct kvm_vcpu *vcpu, u64 msr) > > if (msr & MSR_POW) { > if (!vcpu->arch.pending_exceptions) { > - kvm_vcpu_block(vcpu); > + kvm_vcpu_halt(vcpu); > kvm_clear_request(KVM_REQ_UNHALT, vcpu); > vcpu->stat.generic.halt_wakeup++; > > diff --git a/arch/powerpc/kvm/book3s_pr_papr.c b/arch/powerpc/kvm/book3s_pr_papr.c > index ac14239f3424..1f10e7dfcdd0 100644 > --- a/arch/powerpc/kvm/book3s_pr_papr.c > +++ b/arch/powerpc/kvm/book3s_pr_papr.c > @@ -376,7 +376,7 @@ int kvmppc_h_pr(struct kvm_vcpu *vcpu, unsigned long cmd) > return kvmppc_h_pr_stuff_tce(vcpu); > case H_CEDE: > kvmppc_set_msr_fast(vcpu, kvmppc_get_msr(vcpu) | MSR_EE); > - kvm_vcpu_block(vcpu); > + kvm_vcpu_halt(vcpu); > kvm_clear_request(KVM_REQ_UNHALT, vcpu); > vcpu->stat.generic.halt_wakeup++; > return EMULATE_DONE; > diff --git a/arch/powerpc/kvm/booke.c b/arch/powerpc/kvm/booke.c > index 977801c83aff..12abffa40cd9 100644 > --- a/arch/powerpc/kvm/booke.c > +++ b/arch/powerpc/kvm/booke.c > @@ -718,7 +718,7 @@ int kvmppc_core_prepare_to_enter(struct kvm_vcpu *vcpu) > > if (vcpu->arch.shared->msr & MSR_WE) { > local_irq_enable(); > - kvm_vcpu_block(vcpu); > + kvm_vcpu_halt(vcpu); > kvm_clear_request(KVM_REQ_UNHALT, vcpu); > hard_irq_disable(); > > diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c > index be22da157569..6a94545b99fc 100644 > --- a/arch/powerpc/kvm/powerpc.c > +++ b/arch/powerpc/kvm/powerpc.c > @@ -236,7 +236,7 @@ int kvmppc_kvm_pv(struct kvm_vcpu *vcpu) > break; > case EV_HCALL_TOKEN(EV_IDLE): > r = EV_SUCCESS; > - kvm_vcpu_block(vcpu); > + kvm_vcpu_halt(vcpu); > kvm_clear_request(KVM_REQ_UNHALT, vcpu); > break; > default: > diff --git a/arch/riscv/kvm/vcpu_exit.c b/arch/riscv/kvm/vcpu_exit.c > index 13bbc3f73713..949bb9828aa5 100644 > --- a/arch/riscv/kvm/vcpu_exit.c > +++ b/arch/riscv/kvm/vcpu_exit.c > @@ -146,7 +146,7 @@ static int system_opcode_insn(struct kvm_vcpu *vcpu, > vcpu->stat.wfi_exit_stat++; > if (!kvm_arch_vcpu_runnable(vcpu)) { > srcu_read_unlock(&vcpu->kvm->srcu, vcpu->arch.srcu_idx); > - kvm_vcpu_block(vcpu); > + kvm_vcpu_halt(vcpu); > vcpu->arch.srcu_idx = srcu_read_lock(&vcpu->kvm->srcu); > kvm_clear_request(KVM_REQ_UNHALT, vcpu); > } > diff --git a/arch/s390/kvm/interrupt.c b/arch/s390/kvm/interrupt.c > index 520450a7956f..10bd648170b7 100644 > --- a/arch/s390/kvm/interrupt.c > +++ b/arch/s390/kvm/interrupt.c > @@ -1335,7 +1335,7 @@ int kvm_s390_handle_wait(struct kvm_vcpu *vcpu) > VCPU_EVENT(vcpu, 4, "enabled wait: %llu ns", sltime); > no_timer: > srcu_read_unlock(&vcpu->kvm->srcu, vcpu->srcu_idx); > - kvm_vcpu_block(vcpu); > + kvm_vcpu_halt(vcpu); > vcpu->valid_wakeup = false; > __unset_cpu_idle(vcpu); > vcpu->srcu_idx = 
srcu_read_lock(&vcpu->kvm->srcu); > diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c > index 9c23ae1d483d..e6c17bbed25c 100644 > --- a/arch/x86/kvm/x86.c > +++ b/arch/x86/kvm/x86.c > @@ -8651,6 +8651,13 @@ void kvm_arch_exit(void) > > static int __kvm_emulate_halt(struct kvm_vcpu *vcpu, int state, int reason) > { > + /* > + * The vCPU has halted, e.g. executed HLT. Update the run state if the > + * local APIC is in-kernel, the run loop will detect the non-runnable > + * state and halt the vCPU. Exit to userspace if the local APIC is > + * managed by userspace, in which case userspace is responsible for > + * handling wake events. > + */ > ++vcpu->stat.halt_exits; > if (lapic_in_kernel(vcpu)) { > vcpu->arch.mp_state = state; > @@ -9892,7 +9899,7 @@ static inline int vcpu_block(struct kvm *kvm, struct kvm_vcpu *vcpu) > if (!kvm_arch_vcpu_runnable(vcpu) && > (!kvm_x86_ops.pre_block || static_call(kvm_x86_pre_block)(vcpu) == 0)) { > srcu_read_unlock(&kvm->srcu, vcpu->srcu_idx); > - kvm_vcpu_block(vcpu); > + kvm_vcpu_halt(vcpu); > vcpu->srcu_idx = srcu_read_lock(&kvm->srcu); > > if (kvm_x86_ops.post_block) > @@ -10126,7 +10133,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu) > r = -EINTR; > goto out; > } > - kvm_vcpu_block(vcpu); > + kvm_vcpu_halt(vcpu); > if (kvm_apic_accept_events(vcpu) < 0) { > r = 0; > goto out; > diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h > index 1ced2914d9ca..c2ea4004553a 100644 > --- a/include/linux/kvm_host.h > +++ b/include/linux/kvm_host.h > @@ -967,7 +967,7 @@ void kvm_vcpu_mark_page_dirty(struct kvm_vcpu *vcpu, gfn_t gfn); > void kvm_sigset_activate(struct kvm_vcpu *vcpu); > void kvm_sigset_deactivate(struct kvm_vcpu *vcpu); > > -void kvm_vcpu_block(struct kvm_vcpu *vcpu); > +void kvm_vcpu_halt(struct kvm_vcpu *vcpu); > void kvm_arch_vcpu_blocking(struct kvm_vcpu *vcpu); > void kvm_arch_vcpu_unblocking(struct kvm_vcpu *vcpu); > bool kvm_vcpu_wake_up(struct kvm_vcpu *vcpu); > diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c > index 227f6bbe0716..c13bf3367fda 100644 > --- a/virt/kvm/kvm_main.c > +++ b/virt/kvm/kvm_main.c > @@ -3223,17 +3223,14 @@ static inline void update_halt_poll_stats(struct kvm_vcpu *vcpu, ktime_t start, > } > } > > -/* > - * The vCPU has executed a HLT instruction with in-kernel mode enabled. > - */ > -void kvm_vcpu_block(struct kvm_vcpu *vcpu) > +void kvm_vcpu_halt(struct kvm_vcpu *vcpu) > { > struct rcuwait *wait = kvm_arch_vcpu_get_wait(vcpu); > bool halt_poll_allowed = !kvm_arch_no_poll(vcpu); > bool do_halt_poll = halt_poll_allowed && vcpu->halt_poll_ns; > ktime_t start, cur, poll_end; > bool waited = false; > - u64 block_ns; > + u64 halt_ns; > > start = cur = poll_end = ktime_get(); > if (do_halt_poll) { > @@ -3275,7 +3272,8 @@ void kvm_vcpu_block(struct kvm_vcpu *vcpu) > ktime_to_ns(cur) - ktime_to_ns(poll_end)); > } > out: > - block_ns = ktime_to_ns(cur) - ktime_to_ns(start); > + /* The total time the vCPU was "halted", including polling time. 
*/ > + halt_ns = ktime_to_ns(cur) - ktime_to_ns(start); > > /* > * Note, halt-polling is considered successful so long as the vCPU was > @@ -3289,24 +3287,24 @@ void kvm_vcpu_block(struct kvm_vcpu *vcpu) > if (!vcpu_valid_wakeup(vcpu)) { > shrink_halt_poll_ns(vcpu); > } else if (vcpu->kvm->max_halt_poll_ns) { > - if (block_ns <= vcpu->halt_poll_ns) > + if (halt_ns <= vcpu->halt_poll_ns) > ; > /* we had a long block, shrink polling */ > else if (vcpu->halt_poll_ns && > - block_ns > vcpu->kvm->max_halt_poll_ns) > + halt_ns > vcpu->kvm->max_halt_poll_ns) > shrink_halt_poll_ns(vcpu); > /* we had a short halt and our poll time is too small */ > else if (vcpu->halt_poll_ns < vcpu->kvm->max_halt_poll_ns && > - block_ns < vcpu->kvm->max_halt_poll_ns) > + halt_ns < vcpu->kvm->max_halt_poll_ns) > grow_halt_poll_ns(vcpu); > } else { > vcpu->halt_poll_ns = 0; > } > } > > - trace_kvm_vcpu_wakeup(block_ns, waited, vcpu_valid_wakeup(vcpu)); > + trace_kvm_vcpu_wakeup(halt_ns, waited, vcpu_valid_wakeup(vcpu)); > } > -EXPORT_SYMBOL_GPL(kvm_vcpu_block); > +EXPORT_SYMBOL_GPL(kvm_vcpu_halt); > > bool kvm_vcpu_wake_up(struct kvm_vcpu *vcpu) > { > -- > 2.33.0.882.g93a45727a2-goog > > _______________________________________________ > kvmarm mailing list > kvmarm@lists.cs.columbia.edu > https://lists.cs.columbia.edu/mailman/listinfo/kvmarm
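
For context on where this rename is headed: the commit message above says the "block" sequences will later be split into a standalone kvm_vcpu_block() helper, with kvm_vcpu_halt() keeping the halt-polling and halt statistics. A minimal sketch of that end state follows; apart from the two function names, the structure and helper usage shown here are assumptions inferred from the stated intent, not code from this series.

/*
 * Illustrative sketch only -- the split the commit message anticipates,
 * not code from this patch.  Everything beyond the two names
 * kvm_vcpu_halt()/kvm_vcpu_block() is an assumption.
 */

/* Bare sleep until a wake event; no halt-polling, no halt statistics. */
static bool kvm_vcpu_block(struct kvm_vcpu *vcpu)
{
	struct rcuwait *wait = kvm_arch_vcpu_get_wait(vcpu);
	bool waited = false;

	kvm_arch_vcpu_blocking(vcpu);
	prepare_to_rcuwait(wait);
	for (;;) {
		set_current_state(TASK_INTERRUPTIBLE);
		if (kvm_vcpu_check_block(vcpu) < 0)
			break;
		waited = true;
		schedule();
	}
	finish_rcuwait(wait);
	kvm_arch_vcpu_unblocking(vcpu);
	return waited;
}

/* Halt path: poll, then block, then account halt_ns as in this patch. */
void kvm_vcpu_halt(struct kvm_vcpu *vcpu)
{
	ktime_t start = ktime_get();
	bool waited = false;
	u64 halt_ns;

	/* ... halt-poll for up to vcpu->halt_poll_ns before sleeping ... */
	if (!kvm_arch_vcpu_runnable(vcpu))
		waited = kvm_vcpu_block(vcpu);

	halt_ns = ktime_to_ns(ktime_get()) - ktime_to_ns(start);
	/* ... grow/shrink vcpu->halt_poll_ns based on halt_ns ... */
	trace_kvm_vcpu_wakeup(halt_ns, waited, vcpu_valid_wakeup(vcpu));
}

Such a split would let x86 call the bare kvm_vcpu_block() directly for non-halt waits (a vCPU that is not runnable but did not execute HLT) without perturbing the halt-polling state, which matches the motivation given in the commit message.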
d=brainfault-org.20210112.gappssmtp.com; s=20210112; h=mime-version:references:in-reply-to:from:date:message-id:subject:to :cc; bh=T30FlJ5UKlkIRPt0AISR9G+L/3Q8enwrJccctT6p1g8=; b=MBtmLDJBwdIuSEGHsWnq13enuOMwfZw4/18uAeEnchCMIre5XYu/Vt8p/tQCYzsMop COAHxIk3w5JxNWzKs0XdOypZWDIIC5sFJ7ESkL1QmlSMivIp/SjRWF4do7Qy5ZEEpQot 2zKwDwHd4DXKkSbzrAcJDcQ0n8+XwO6A2RydmoMn16Ya4tX1ZQ4jT+vP9y1HntYmUdWb uOg+NVoX7R+xMb3qTatR3G99+5P77NeNif+FnN9FZgOaIzhfFWIzrR1Gp+ZYjJLlI3Ow tPdMCc3fKgsHCPGC2/0di1W4P7SRM2YCIKidlaJVgJ5bQTcDh2TawkTyUJQS93SrKM7S +t/w== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:mime-version:references:in-reply-to:from:date :message-id:subject:to:cc; bh=T30FlJ5UKlkIRPt0AISR9G+L/3Q8enwrJccctT6p1g8=; b=RxG03Mxfi2QMoQmNArTr9caAEcdmIQHMWZq1VAtzSFoSq1r/9uYUkoifi4u7qg7Ybm KNPV9cv8LS/GpWjkqFifsZ4SlvhgxBOub6oT0iuhjRGwUf2V8+6c312W9MbLuoSWga/P b0K6LaMeoVEGigIVE7haSM4THVakmdW7FzhLqpXeBbmKgYChEw05lUs/Tk+9b9oXx/Rw lDGctQUfFbFHxMJBFU27O0tI+heKOYdUFye4PEGeCVGbPtSHya8+B7TeZ7H4SKowWGRQ 4UiAHetDzRcNzpXlq4NKPG4RUiNqAxjMzt3zrryc1ouMaICQYOyoSG/8E5k460kfh5wX lYOA== X-Gm-Message-State: AOAM533Vd84Xoq8lVd66nLHhracedHxzwXISQJaB5mc+gx+Wp3budY8H 8JSml2pO1NFSfdCgmf5OpeLIFAf7lrXYsE3GssF2EA== X-Google-Smtp-Source: ABdhPJztGBg8weOQ78E8KVd7ncPE7BUNSaOVfTMsmKjCcFD2swST5/UMCgrVa+WC8Aj7pEdSctUqBqmByiVW+amI3bY= X-Received: by 2002:a05:600c:354c:: with SMTP id i12mr277022wmq.59.1634915051483; Fri, 22 Oct 2021 08:04:11 -0700 (PDT) MIME-Version: 1.0 References: <20211009021236.4122790-1-seanjc@google.com> <20211009021236.4122790-14-seanjc@google.com> In-Reply-To: <20211009021236.4122790-14-seanjc@google.com> From: Anup Patel Date: Fri, 22 Oct 2021 20:34:00 +0530 Message-ID: Subject: Re: [PATCH v2 13/43] KVM: Rename kvm_vcpu_block() => kvm_vcpu_halt() To: Sean Christopherson Cc: Cornelia Huck , Wanpeng Li , KVM General , David Hildenbrand , linux-mips@vger.kernel.org, Paul Mackerras , Atish Patra , linux-riscv , Claudio Imbrenda , kvmarm@lists.cs.columbia.edu, Janosch Frank , Marc Zyngier , Joerg Roedel , Huacai Chen , Christian Borntraeger , Aleksandar Markovic , Albert Ou , kvm-ppc@vger.kernel.org, Paul Walmsley , David Matlack , linux-arm-kernel , Jim Mattson , Anup Patel , "linux-kernel@vger.kernel.org List" , Palmer Dabbelt , kvm-riscv@lists.infradead.org, Paolo Bonzini , Vitaly Kuznetsov X-BeenThere: kvmarm@lists.cs.columbia.edu X-Mailman-Version: 2.1.14 Precedence: list List-Id: Where KVM/ARM decisions are made List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Content-Type: text/plain; charset="us-ascii" Content-Transfer-Encoding: 7bit Errors-To: kvmarm-bounces@lists.cs.columbia.edu Sender: kvmarm-bounces@lists.cs.columbia.edu On Sat, Oct 9, 2021 at 7:43 AM Sean Christopherson wrote: > > Rename kvm_vcpu_block() to kvm_vcpu_halt() in preparation for splitting > the actual "block" sequences into a separate helper (to be named > kvm_vcpu_block()). x86 will use the standalone block-only path to handle > non-halt cases where the vCPU is not runnable. > > Rename block_ns to halt_ns to match the new function name. > > No functional change intended. 
> > Reviewed-by: David Matlack > Reviewed-by: Christian Borntraeger > Signed-off-by: Sean Christopherson > --- For KVM RISC-V: Reviewed-by: Anup Patel Regards, Anup > arch/arm64/kvm/arch_timer.c | 2 +- > arch/arm64/kvm/arm.c | 2 +- > arch/arm64/kvm/handle_exit.c | 2 +- > arch/arm64/kvm/psci.c | 2 +- > arch/mips/kvm/emulate.c | 2 +- > arch/powerpc/kvm/book3s_pr.c | 2 +- > arch/powerpc/kvm/book3s_pr_papr.c | 2 +- > arch/powerpc/kvm/booke.c | 2 +- > arch/powerpc/kvm/powerpc.c | 2 +- > arch/riscv/kvm/vcpu_exit.c | 2 +- > arch/s390/kvm/interrupt.c | 2 +- > arch/x86/kvm/x86.c | 11 +++++++++-- > include/linux/kvm_host.h | 2 +- > virt/kvm/kvm_main.c | 20 +++++++++----------- > 14 files changed, 30 insertions(+), 25 deletions(-) > > diff --git a/arch/arm64/kvm/arch_timer.c b/arch/arm64/kvm/arch_timer.c > index 3df67c127489..7e8396f74010 100644 > --- a/arch/arm64/kvm/arch_timer.c > +++ b/arch/arm64/kvm/arch_timer.c > @@ -467,7 +467,7 @@ static void timer_save_state(struct arch_timer_context *ctx) > } > > /* > - * Schedule the background timer before calling kvm_vcpu_block, so that this > + * Schedule the background timer before calling kvm_vcpu_halt, so that this > * thread is removed from its waitqueue and made runnable when there's a timer > * interrupt to handle. > */ > diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c > index 1346f81b34df..268b1e7bf700 100644 > --- a/arch/arm64/kvm/arm.c > +++ b/arch/arm64/kvm/arm.c > @@ -672,7 +672,7 @@ void kvm_vcpu_wfi(struct kvm_vcpu *vcpu) > vgic_v4_put(vcpu, true); > preempt_enable(); > > - kvm_vcpu_block(vcpu); > + kvm_vcpu_halt(vcpu); > kvm_clear_request(KVM_REQ_UNHALT, vcpu); > > preempt_disable(); > diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c > index 4794563a506b..6d0baf71aa67 100644 > --- a/arch/arm64/kvm/handle_exit.c > +++ b/arch/arm64/kvm/handle_exit.c > @@ -82,7 +82,7 @@ static int handle_no_fpsimd(struct kvm_vcpu *vcpu) > * > * WFE: Yield the CPU and come back to this vcpu when the scheduler > * decides to. > - * WFI: Simply call kvm_vcpu_block(), which will halt execution of > + * WFI: Simply call kvm_vcpu_halt(), which will halt execution of > * world-switches and schedule other host processes until there is an > * incoming IRQ or FIQ to the VM. > */ > diff --git a/arch/arm64/kvm/psci.c b/arch/arm64/kvm/psci.c > index 74c47d420253..e275b2ca08b9 100644 > --- a/arch/arm64/kvm/psci.c > +++ b/arch/arm64/kvm/psci.c > @@ -46,7 +46,7 @@ static unsigned long kvm_psci_vcpu_suspend(struct kvm_vcpu *vcpu) > * specification (ARM DEN 0022A). This means all suspend states > * for KVM will preserve the register state. 
> */ > - kvm_vcpu_block(vcpu); > + kvm_vcpu_halt(vcpu); > kvm_clear_request(KVM_REQ_UNHALT, vcpu); > > return PSCI_RET_SUCCESS; > diff --git a/arch/mips/kvm/emulate.c b/arch/mips/kvm/emulate.c > index 22e745e49b0a..b494d8d39290 100644 > --- a/arch/mips/kvm/emulate.c > +++ b/arch/mips/kvm/emulate.c > @@ -952,7 +952,7 @@ enum emulation_result kvm_mips_emul_wait(struct kvm_vcpu *vcpu) > if (!vcpu->arch.pending_exceptions) { > kvm_vz_lose_htimer(vcpu); > vcpu->arch.wait = 1; > - kvm_vcpu_block(vcpu); > + kvm_vcpu_halt(vcpu); > > /* > * We we are runnable, then definitely go off to user space to > diff --git a/arch/powerpc/kvm/book3s_pr.c b/arch/powerpc/kvm/book3s_pr.c > index 6bc9425acb32..0ced1b16f0e5 100644 > --- a/arch/powerpc/kvm/book3s_pr.c > +++ b/arch/powerpc/kvm/book3s_pr.c > @@ -492,7 +492,7 @@ static void kvmppc_set_msr_pr(struct kvm_vcpu *vcpu, u64 msr) > > if (msr & MSR_POW) { > if (!vcpu->arch.pending_exceptions) { > - kvm_vcpu_block(vcpu); > + kvm_vcpu_halt(vcpu); > kvm_clear_request(KVM_REQ_UNHALT, vcpu); > vcpu->stat.generic.halt_wakeup++; > > diff --git a/arch/powerpc/kvm/book3s_pr_papr.c b/arch/powerpc/kvm/book3s_pr_papr.c > index ac14239f3424..1f10e7dfcdd0 100644 > --- a/arch/powerpc/kvm/book3s_pr_papr.c > +++ b/arch/powerpc/kvm/book3s_pr_papr.c > @@ -376,7 +376,7 @@ int kvmppc_h_pr(struct kvm_vcpu *vcpu, unsigned long cmd) > return kvmppc_h_pr_stuff_tce(vcpu); > case H_CEDE: > kvmppc_set_msr_fast(vcpu, kvmppc_get_msr(vcpu) | MSR_EE); > - kvm_vcpu_block(vcpu); > + kvm_vcpu_halt(vcpu); > kvm_clear_request(KVM_REQ_UNHALT, vcpu); > vcpu->stat.generic.halt_wakeup++; > return EMULATE_DONE; > diff --git a/arch/powerpc/kvm/booke.c b/arch/powerpc/kvm/booke.c > index 977801c83aff..12abffa40cd9 100644 > --- a/arch/powerpc/kvm/booke.c > +++ b/arch/powerpc/kvm/booke.c > @@ -718,7 +718,7 @@ int kvmppc_core_prepare_to_enter(struct kvm_vcpu *vcpu) > > if (vcpu->arch.shared->msr & MSR_WE) { > local_irq_enable(); > - kvm_vcpu_block(vcpu); > + kvm_vcpu_halt(vcpu); > kvm_clear_request(KVM_REQ_UNHALT, vcpu); > hard_irq_disable(); > > diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c > index be22da157569..6a94545b99fc 100644 > --- a/arch/powerpc/kvm/powerpc.c > +++ b/arch/powerpc/kvm/powerpc.c > @@ -236,7 +236,7 @@ int kvmppc_kvm_pv(struct kvm_vcpu *vcpu) > break; > case EV_HCALL_TOKEN(EV_IDLE): > r = EV_SUCCESS; > - kvm_vcpu_block(vcpu); > + kvm_vcpu_halt(vcpu); > kvm_clear_request(KVM_REQ_UNHALT, vcpu); > break; > default: > diff --git a/arch/riscv/kvm/vcpu_exit.c b/arch/riscv/kvm/vcpu_exit.c > index 13bbc3f73713..949bb9828aa5 100644 > --- a/arch/riscv/kvm/vcpu_exit.c > +++ b/arch/riscv/kvm/vcpu_exit.c > @@ -146,7 +146,7 @@ static int system_opcode_insn(struct kvm_vcpu *vcpu, > vcpu->stat.wfi_exit_stat++; > if (!kvm_arch_vcpu_runnable(vcpu)) { > srcu_read_unlock(&vcpu->kvm->srcu, vcpu->arch.srcu_idx); > - kvm_vcpu_block(vcpu); > + kvm_vcpu_halt(vcpu); > vcpu->arch.srcu_idx = srcu_read_lock(&vcpu->kvm->srcu); > kvm_clear_request(KVM_REQ_UNHALT, vcpu); > } > diff --git a/arch/s390/kvm/interrupt.c b/arch/s390/kvm/interrupt.c > index 520450a7956f..10bd648170b7 100644 > --- a/arch/s390/kvm/interrupt.c > +++ b/arch/s390/kvm/interrupt.c > @@ -1335,7 +1335,7 @@ int kvm_s390_handle_wait(struct kvm_vcpu *vcpu) > VCPU_EVENT(vcpu, 4, "enabled wait: %llu ns", sltime); > no_timer: > srcu_read_unlock(&vcpu->kvm->srcu, vcpu->srcu_idx); > - kvm_vcpu_block(vcpu); > + kvm_vcpu_halt(vcpu); > vcpu->valid_wakeup = false; > __unset_cpu_idle(vcpu); > vcpu->srcu_idx = 
srcu_read_lock(&vcpu->kvm->srcu); > diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c > index 9c23ae1d483d..e6c17bbed25c 100644 > --- a/arch/x86/kvm/x86.c > +++ b/arch/x86/kvm/x86.c > @@ -8651,6 +8651,13 @@ void kvm_arch_exit(void) > > static int __kvm_emulate_halt(struct kvm_vcpu *vcpu, int state, int reason) > { > + /* > + * The vCPU has halted, e.g. executed HLT. Update the run state if the > + * local APIC is in-kernel, the run loop will detect the non-runnable > + * state and halt the vCPU. Exit to userspace if the local APIC is > + * managed by userspace, in which case userspace is responsible for > + * handling wake events. > + */ > ++vcpu->stat.halt_exits; > if (lapic_in_kernel(vcpu)) { > vcpu->arch.mp_state = state; > @@ -9892,7 +9899,7 @@ static inline int vcpu_block(struct kvm *kvm, struct kvm_vcpu *vcpu) > if (!kvm_arch_vcpu_runnable(vcpu) && > (!kvm_x86_ops.pre_block || static_call(kvm_x86_pre_block)(vcpu) == 0)) { > srcu_read_unlock(&kvm->srcu, vcpu->srcu_idx); > - kvm_vcpu_block(vcpu); > + kvm_vcpu_halt(vcpu); > vcpu->srcu_idx = srcu_read_lock(&kvm->srcu); > > if (kvm_x86_ops.post_block) > @@ -10126,7 +10133,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu) > r = -EINTR; > goto out; > } > - kvm_vcpu_block(vcpu); > + kvm_vcpu_halt(vcpu); > if (kvm_apic_accept_events(vcpu) < 0) { > r = 0; > goto out; > diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h > index 1ced2914d9ca..c2ea4004553a 100644 > --- a/include/linux/kvm_host.h > +++ b/include/linux/kvm_host.h > @@ -967,7 +967,7 @@ void kvm_vcpu_mark_page_dirty(struct kvm_vcpu *vcpu, gfn_t gfn); > void kvm_sigset_activate(struct kvm_vcpu *vcpu); > void kvm_sigset_deactivate(struct kvm_vcpu *vcpu); > > -void kvm_vcpu_block(struct kvm_vcpu *vcpu); > +void kvm_vcpu_halt(struct kvm_vcpu *vcpu); > void kvm_arch_vcpu_blocking(struct kvm_vcpu *vcpu); > void kvm_arch_vcpu_unblocking(struct kvm_vcpu *vcpu); > bool kvm_vcpu_wake_up(struct kvm_vcpu *vcpu); > diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c > index 227f6bbe0716..c13bf3367fda 100644 > --- a/virt/kvm/kvm_main.c > +++ b/virt/kvm/kvm_main.c > @@ -3223,17 +3223,14 @@ static inline void update_halt_poll_stats(struct kvm_vcpu *vcpu, ktime_t start, > } > } > > -/* > - * The vCPU has executed a HLT instruction with in-kernel mode enabled. > - */ > -void kvm_vcpu_block(struct kvm_vcpu *vcpu) > +void kvm_vcpu_halt(struct kvm_vcpu *vcpu) > { > struct rcuwait *wait = kvm_arch_vcpu_get_wait(vcpu); > bool halt_poll_allowed = !kvm_arch_no_poll(vcpu); > bool do_halt_poll = halt_poll_allowed && vcpu->halt_poll_ns; > ktime_t start, cur, poll_end; > bool waited = false; > - u64 block_ns; > + u64 halt_ns; > > start = cur = poll_end = ktime_get(); > if (do_halt_poll) { > @@ -3275,7 +3272,8 @@ void kvm_vcpu_block(struct kvm_vcpu *vcpu) > ktime_to_ns(cur) - ktime_to_ns(poll_end)); > } > out: > - block_ns = ktime_to_ns(cur) - ktime_to_ns(start); > + /* The total time the vCPU was "halted", including polling time. 
*/ > + halt_ns = ktime_to_ns(cur) - ktime_to_ns(start); > > /* > * Note, halt-polling is considered successful so long as the vCPU was > @@ -3289,24 +3287,24 @@ void kvm_vcpu_block(struct kvm_vcpu *vcpu) > if (!vcpu_valid_wakeup(vcpu)) { > shrink_halt_poll_ns(vcpu); > } else if (vcpu->kvm->max_halt_poll_ns) { > - if (block_ns <= vcpu->halt_poll_ns) > + if (halt_ns <= vcpu->halt_poll_ns) > ; > /* we had a long block, shrink polling */ > else if (vcpu->halt_poll_ns && > - block_ns > vcpu->kvm->max_halt_poll_ns) > + halt_ns > vcpu->kvm->max_halt_poll_ns) > shrink_halt_poll_ns(vcpu); > /* we had a short halt and our poll time is too small */ > else if (vcpu->halt_poll_ns < vcpu->kvm->max_halt_poll_ns && > - block_ns < vcpu->kvm->max_halt_poll_ns) > + halt_ns < vcpu->kvm->max_halt_poll_ns) > grow_halt_poll_ns(vcpu); > } else { > vcpu->halt_poll_ns = 0; > } > } > > - trace_kvm_vcpu_wakeup(block_ns, waited, vcpu_valid_wakeup(vcpu)); > + trace_kvm_vcpu_wakeup(halt_ns, waited, vcpu_valid_wakeup(vcpu)); > } > -EXPORT_SYMBOL_GPL(kvm_vcpu_block); > +EXPORT_SYMBOL_GPL(kvm_vcpu_halt); > > bool kvm_vcpu_wake_up(struct kvm_vcpu *vcpu) > { > -- > 2.33.0.882.g93a45727a2-goog > > _______________________________________________ > kvmarm mailing list > kvmarm@lists.cs.columbia.edu > https://lists.cs.columbia.edu/mailman/listinfo/kvmarm _______________________________________________ kvmarm mailing list kvmarm@lists.cs.columbia.edu https://lists.cs.columbia.edu/mailman/listinfo/kvmarm From mboxrd@z Thu Jan 1 00:00:00 1970 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id C6E2EC433F5 for ; Fri, 22 Oct 2021 15:04:37 +0000 (UTC) Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 8B4916112F for ; Fri, 22 Oct 2021 15:04:37 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.4.1 mail.kernel.org 8B4916112F Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=brainfault.org Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=lists.infradead.org DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:Cc:To:Subject:Message-ID:Date:From: In-Reply-To:References:MIME-Version:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=bDBXJ7ax1yw+vsisuOYjI6lPFmLIdZR/9q2LzJpCDjA=; b=o9xbFFL44yqBV9 n3eDwYPu8k1v3j6LftREfay+AnGh+a8bReaMDg8U/YigH3d8iaVP5HvsvjbmQj/nYHzO1nJO8fKn9 FfqFSfHDQl7NXdQ3I2OVC+yFlKPLmTgmXWr7V14eg4hw8UzIZqME277h+6xrX2G4QBfM+9egYvLGp fx4eiDbaBM5ZHFWXUaM1Qs4v4QPjfcUqk9uPwoUmk0X+XSUfsIWf5K9L0bBclLC4XNCJbFliJkvnt e1/87jVeSJE1KT2nOfHlPbaC4Fzy15IUvZprYB/lx4bKoEhvOnP/OY5aBHNJOpdOtyLbmI46Wf5ec twKZW9Qq/2hcvPsFtIdg==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1mdw5t-00BLGe-CZ; Fri, 22 Oct 2021 15:04:29 +0000 Received: from mail-wm1-x32f.google.com ([2a00:1450:4864:20::32f]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1mdw5d-00BL9t-N5 for 
linux-riscv@lists.infradead.org; Fri, 22 Oct 2021 15:04:16 +0000 Received: by mail-wm1-x32f.google.com with SMTP id 193-20020a1c01ca000000b00327775075f7so3700681wmb.5 for ; Fri, 22 Oct 2021 08:04:12 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=brainfault-org.20210112.gappssmtp.com; s=20210112; h=mime-version:references:in-reply-to:from:date:message-id:subject:to :cc; bh=T30FlJ5UKlkIRPt0AISR9G+L/3Q8enwrJccctT6p1g8=; b=MBtmLDJBwdIuSEGHsWnq13enuOMwfZw4/18uAeEnchCMIre5XYu/Vt8p/tQCYzsMop COAHxIk3w5JxNWzKs0XdOypZWDIIC5sFJ7ESkL1QmlSMivIp/SjRWF4do7Qy5ZEEpQot 2zKwDwHd4DXKkSbzrAcJDcQ0n8+XwO6A2RydmoMn16Ya4tX1ZQ4jT+vP9y1HntYmUdWb uOg+NVoX7R+xMb3qTatR3G99+5P77NeNif+FnN9FZgOaIzhfFWIzrR1Gp+ZYjJLlI3Ow tPdMCc3fKgsHCPGC2/0di1W4P7SRM2YCIKidlaJVgJ5bQTcDh2TawkTyUJQS93SrKM7S +t/w== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:mime-version:references:in-reply-to:from:date :message-id:subject:to:cc; bh=T30FlJ5UKlkIRPt0AISR9G+L/3Q8enwrJccctT6p1g8=; b=GfM9yTX9OdFRVza9ZcvOrjI11w2C2HMl8RPklViQsMFCvygYZId9ByCItAnkio6IHI MGOGHtUR2NaxykmA+AUq/RlHDk/yyNckGm9FWljh/AYkH2FbR7xxsod1lXERkXe1QVzB vVAnryxlY7iAyQ8SWSGeXl/N2eIL6n5Ucm7Ur10NnxlPU/aIKNwsg7jE2/Y15+HPutWj 9JBGUFpSbLrIBTOY74Nw4F7IM25113j5ndCYppMekzayScQrv8PTye8qmVKvM2p0MNJk es3N/OLqlj2XONGRGJxPZuSf/j4ZeuAwsEMqVf6plsCHTT3lfFtkf1XHhWXZElbV/4ve r+fw== X-Gm-Message-State: AOAM533DScnQAskoZAAmHPSAojIugxBVYhj+iz9ZHslLWpofvCLGdXQT HcIEnKq+ZMGo44xdHFtMDWSJYzq5wEoDu8JZ/Z81FA== X-Google-Smtp-Source: ABdhPJztGBg8weOQ78E8KVd7ncPE7BUNSaOVfTMsmKjCcFD2swST5/UMCgrVa+WC8Aj7pEdSctUqBqmByiVW+amI3bY= X-Received: by 2002:a05:600c:354c:: with SMTP id i12mr277022wmq.59.1634915051483; Fri, 22 Oct 2021 08:04:11 -0700 (PDT) MIME-Version: 1.0 References: <20211009021236.4122790-1-seanjc@google.com> <20211009021236.4122790-14-seanjc@google.com> In-Reply-To: <20211009021236.4122790-14-seanjc@google.com> From: Anup Patel Date: Fri, 22 Oct 2021 20:34:00 +0530 Message-ID: Subject: Re: [PATCH v2 13/43] KVM: Rename kvm_vcpu_block() => kvm_vcpu_halt() To: Sean Christopherson Cc: Marc Zyngier , Huacai Chen , Aleksandar Markovic , Paul Mackerras , Anup Patel , Paul Walmsley , Palmer Dabbelt , Albert Ou , Christian Borntraeger , Janosch Frank , Paolo Bonzini , Wanpeng Li , KVM General , David Hildenbrand , "linux-kernel@vger.kernel.org List" , Atish Patra , linux-riscv , Claudio Imbrenda , kvmarm@lists.cs.columbia.edu, Joerg Roedel , kvm-ppc@vger.kernel.org, David Matlack , linux-arm-kernel , Jim Mattson , Cornelia Huck , linux-mips@vger.kernel.org, kvm-riscv@lists.infradead.org, Vitaly Kuznetsov X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20211022_080413_788269_2FD94ED8 X-CRM114-Status: GOOD ( 32.09 ) X-BeenThere: linux-riscv@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Content-Type: text/plain; charset="us-ascii" Content-Transfer-Encoding: 7bit Sender: "linux-riscv" Errors-To: linux-riscv-bounces+linux-riscv=archiver.kernel.org@lists.infradead.org On Sat, Oct 9, 2021 at 7:43 AM Sean Christopherson wrote: > > Rename kvm_vcpu_block() to kvm_vcpu_halt() in preparation for splitting > the actual "block" sequences into a separate helper (to be named > kvm_vcpu_block()). x86 will use the standalone block-only path to handle > non-halt cases where the vCPU is not runnable. > > Rename block_ns to halt_ns to match the new function name. > > No functional change intended. 
> > Reviewed-by: David Matlack > Reviewed-by: Christian Borntraeger > Signed-off-by: Sean Christopherson > --- For KVM RISC-V: Reviewed-by: Anup Patel Regards, Anup > arch/arm64/kvm/arch_timer.c | 2 +- > arch/arm64/kvm/arm.c | 2 +- > arch/arm64/kvm/handle_exit.c | 2 +- > arch/arm64/kvm/psci.c | 2 +- > arch/mips/kvm/emulate.c | 2 +- > arch/powerpc/kvm/book3s_pr.c | 2 +- > arch/powerpc/kvm/book3s_pr_papr.c | 2 +- > arch/powerpc/kvm/booke.c | 2 +- > arch/powerpc/kvm/powerpc.c | 2 +- > arch/riscv/kvm/vcpu_exit.c | 2 +- > arch/s390/kvm/interrupt.c | 2 +- > arch/x86/kvm/x86.c | 11 +++++++++-- > include/linux/kvm_host.h | 2 +- > virt/kvm/kvm_main.c | 20 +++++++++----------- > 14 files changed, 30 insertions(+), 25 deletions(-) > > diff --git a/arch/arm64/kvm/arch_timer.c b/arch/arm64/kvm/arch_timer.c > index 3df67c127489..7e8396f74010 100644 > --- a/arch/arm64/kvm/arch_timer.c > +++ b/arch/arm64/kvm/arch_timer.c > @@ -467,7 +467,7 @@ static void timer_save_state(struct arch_timer_context *ctx) > } > > /* > - * Schedule the background timer before calling kvm_vcpu_block, so that this > + * Schedule the background timer before calling kvm_vcpu_halt, so that this > * thread is removed from its waitqueue and made runnable when there's a timer > * interrupt to handle. > */ > diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c > index 1346f81b34df..268b1e7bf700 100644 > --- a/arch/arm64/kvm/arm.c > +++ b/arch/arm64/kvm/arm.c > @@ -672,7 +672,7 @@ void kvm_vcpu_wfi(struct kvm_vcpu *vcpu) > vgic_v4_put(vcpu, true); > preempt_enable(); > > - kvm_vcpu_block(vcpu); > + kvm_vcpu_halt(vcpu); > kvm_clear_request(KVM_REQ_UNHALT, vcpu); > > preempt_disable(); > diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c > index 4794563a506b..6d0baf71aa67 100644 > --- a/arch/arm64/kvm/handle_exit.c > +++ b/arch/arm64/kvm/handle_exit.c > @@ -82,7 +82,7 @@ static int handle_no_fpsimd(struct kvm_vcpu *vcpu) > * > * WFE: Yield the CPU and come back to this vcpu when the scheduler > * decides to. > - * WFI: Simply call kvm_vcpu_block(), which will halt execution of > + * WFI: Simply call kvm_vcpu_halt(), which will halt execution of > * world-switches and schedule other host processes until there is an > * incoming IRQ or FIQ to the VM. > */ > diff --git a/arch/arm64/kvm/psci.c b/arch/arm64/kvm/psci.c > index 74c47d420253..e275b2ca08b9 100644 > --- a/arch/arm64/kvm/psci.c > +++ b/arch/arm64/kvm/psci.c > @@ -46,7 +46,7 @@ static unsigned long kvm_psci_vcpu_suspend(struct kvm_vcpu *vcpu) > * specification (ARM DEN 0022A). This means all suspend states > * for KVM will preserve the register state. 
> */ > - kvm_vcpu_block(vcpu); > + kvm_vcpu_halt(vcpu); > kvm_clear_request(KVM_REQ_UNHALT, vcpu); > > return PSCI_RET_SUCCESS; > diff --git a/arch/mips/kvm/emulate.c b/arch/mips/kvm/emulate.c > index 22e745e49b0a..b494d8d39290 100644 > --- a/arch/mips/kvm/emulate.c > +++ b/arch/mips/kvm/emulate.c > @@ -952,7 +952,7 @@ enum emulation_result kvm_mips_emul_wait(struct kvm_vcpu *vcpu) > if (!vcpu->arch.pending_exceptions) { > kvm_vz_lose_htimer(vcpu); > vcpu->arch.wait = 1; > - kvm_vcpu_block(vcpu); > + kvm_vcpu_halt(vcpu); > > /* > * We we are runnable, then definitely go off to user space to > diff --git a/arch/powerpc/kvm/book3s_pr.c b/arch/powerpc/kvm/book3s_pr.c > index 6bc9425acb32..0ced1b16f0e5 100644 > --- a/arch/powerpc/kvm/book3s_pr.c > +++ b/arch/powerpc/kvm/book3s_pr.c > @@ -492,7 +492,7 @@ static void kvmppc_set_msr_pr(struct kvm_vcpu *vcpu, u64 msr) > > if (msr & MSR_POW) { > if (!vcpu->arch.pending_exceptions) { > - kvm_vcpu_block(vcpu); > + kvm_vcpu_halt(vcpu); > kvm_clear_request(KVM_REQ_UNHALT, vcpu); > vcpu->stat.generic.halt_wakeup++; > > diff --git a/arch/powerpc/kvm/book3s_pr_papr.c b/arch/powerpc/kvm/book3s_pr_papr.c > index ac14239f3424..1f10e7dfcdd0 100644 > --- a/arch/powerpc/kvm/book3s_pr_papr.c > +++ b/arch/powerpc/kvm/book3s_pr_papr.c > @@ -376,7 +376,7 @@ int kvmppc_h_pr(struct kvm_vcpu *vcpu, unsigned long cmd) > return kvmppc_h_pr_stuff_tce(vcpu); > case H_CEDE: > kvmppc_set_msr_fast(vcpu, kvmppc_get_msr(vcpu) | MSR_EE); > - kvm_vcpu_block(vcpu); > + kvm_vcpu_halt(vcpu); > kvm_clear_request(KVM_REQ_UNHALT, vcpu); > vcpu->stat.generic.halt_wakeup++; > return EMULATE_DONE; > diff --git a/arch/powerpc/kvm/booke.c b/arch/powerpc/kvm/booke.c > index 977801c83aff..12abffa40cd9 100644 > --- a/arch/powerpc/kvm/booke.c > +++ b/arch/powerpc/kvm/booke.c > @@ -718,7 +718,7 @@ int kvmppc_core_prepare_to_enter(struct kvm_vcpu *vcpu) > > if (vcpu->arch.shared->msr & MSR_WE) { > local_irq_enable(); > - kvm_vcpu_block(vcpu); > + kvm_vcpu_halt(vcpu); > kvm_clear_request(KVM_REQ_UNHALT, vcpu); > hard_irq_disable(); > > diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c > index be22da157569..6a94545b99fc 100644 > --- a/arch/powerpc/kvm/powerpc.c > +++ b/arch/powerpc/kvm/powerpc.c > @@ -236,7 +236,7 @@ int kvmppc_kvm_pv(struct kvm_vcpu *vcpu) > break; > case EV_HCALL_TOKEN(EV_IDLE): > r = EV_SUCCESS; > - kvm_vcpu_block(vcpu); > + kvm_vcpu_halt(vcpu); > kvm_clear_request(KVM_REQ_UNHALT, vcpu); > break; > default: > diff --git a/arch/riscv/kvm/vcpu_exit.c b/arch/riscv/kvm/vcpu_exit.c > index 13bbc3f73713..949bb9828aa5 100644 > --- a/arch/riscv/kvm/vcpu_exit.c > +++ b/arch/riscv/kvm/vcpu_exit.c > @@ -146,7 +146,7 @@ static int system_opcode_insn(struct kvm_vcpu *vcpu, > vcpu->stat.wfi_exit_stat++; > if (!kvm_arch_vcpu_runnable(vcpu)) { > srcu_read_unlock(&vcpu->kvm->srcu, vcpu->arch.srcu_idx); > - kvm_vcpu_block(vcpu); > + kvm_vcpu_halt(vcpu); > vcpu->arch.srcu_idx = srcu_read_lock(&vcpu->kvm->srcu); > kvm_clear_request(KVM_REQ_UNHALT, vcpu); > } > diff --git a/arch/s390/kvm/interrupt.c b/arch/s390/kvm/interrupt.c > index 520450a7956f..10bd648170b7 100644 > --- a/arch/s390/kvm/interrupt.c > +++ b/arch/s390/kvm/interrupt.c > @@ -1335,7 +1335,7 @@ int kvm_s390_handle_wait(struct kvm_vcpu *vcpu) > VCPU_EVENT(vcpu, 4, "enabled wait: %llu ns", sltime); > no_timer: > srcu_read_unlock(&vcpu->kvm->srcu, vcpu->srcu_idx); > - kvm_vcpu_block(vcpu); > + kvm_vcpu_halt(vcpu); > vcpu->valid_wakeup = false; > __unset_cpu_idle(vcpu); > vcpu->srcu_idx = 
srcu_read_lock(&vcpu->kvm->srcu); > diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c > index 9c23ae1d483d..e6c17bbed25c 100644 > --- a/arch/x86/kvm/x86.c > +++ b/arch/x86/kvm/x86.c > @@ -8651,6 +8651,13 @@ void kvm_arch_exit(void) > > static int __kvm_emulate_halt(struct kvm_vcpu *vcpu, int state, int reason) > { > + /* > + * The vCPU has halted, e.g. executed HLT. Update the run state if the > + * local APIC is in-kernel, the run loop will detect the non-runnable > + * state and halt the vCPU. Exit to userspace if the local APIC is > + * managed by userspace, in which case userspace is responsible for > + * handling wake events. > + */ > ++vcpu->stat.halt_exits; > if (lapic_in_kernel(vcpu)) { > vcpu->arch.mp_state = state; > @@ -9892,7 +9899,7 @@ static inline int vcpu_block(struct kvm *kvm, struct kvm_vcpu *vcpu) > if (!kvm_arch_vcpu_runnable(vcpu) && > (!kvm_x86_ops.pre_block || static_call(kvm_x86_pre_block)(vcpu) == 0)) { > srcu_read_unlock(&kvm->srcu, vcpu->srcu_idx); > - kvm_vcpu_block(vcpu); > + kvm_vcpu_halt(vcpu); > vcpu->srcu_idx = srcu_read_lock(&kvm->srcu); > > if (kvm_x86_ops.post_block) > @@ -10126,7 +10133,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu) > r = -EINTR; > goto out; > } > - kvm_vcpu_block(vcpu); > + kvm_vcpu_halt(vcpu); > if (kvm_apic_accept_events(vcpu) < 0) { > r = 0; > goto out; > diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h > index 1ced2914d9ca..c2ea4004553a 100644 > --- a/include/linux/kvm_host.h > +++ b/include/linux/kvm_host.h > @@ -967,7 +967,7 @@ void kvm_vcpu_mark_page_dirty(struct kvm_vcpu *vcpu, gfn_t gfn); > void kvm_sigset_activate(struct kvm_vcpu *vcpu); > void kvm_sigset_deactivate(struct kvm_vcpu *vcpu); > > -void kvm_vcpu_block(struct kvm_vcpu *vcpu); > +void kvm_vcpu_halt(struct kvm_vcpu *vcpu); > void kvm_arch_vcpu_blocking(struct kvm_vcpu *vcpu); > void kvm_arch_vcpu_unblocking(struct kvm_vcpu *vcpu); > bool kvm_vcpu_wake_up(struct kvm_vcpu *vcpu); > diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c > index 227f6bbe0716..c13bf3367fda 100644 > --- a/virt/kvm/kvm_main.c > +++ b/virt/kvm/kvm_main.c > @@ -3223,17 +3223,14 @@ static inline void update_halt_poll_stats(struct kvm_vcpu *vcpu, ktime_t start, > } > } > > -/* > - * The vCPU has executed a HLT instruction with in-kernel mode enabled. > - */ > -void kvm_vcpu_block(struct kvm_vcpu *vcpu) > +void kvm_vcpu_halt(struct kvm_vcpu *vcpu) > { > struct rcuwait *wait = kvm_arch_vcpu_get_wait(vcpu); > bool halt_poll_allowed = !kvm_arch_no_poll(vcpu); > bool do_halt_poll = halt_poll_allowed && vcpu->halt_poll_ns; > ktime_t start, cur, poll_end; > bool waited = false; > - u64 block_ns; > + u64 halt_ns; > > start = cur = poll_end = ktime_get(); > if (do_halt_poll) { > @@ -3275,7 +3272,8 @@ void kvm_vcpu_block(struct kvm_vcpu *vcpu) > ktime_to_ns(cur) - ktime_to_ns(poll_end)); > } > out: > - block_ns = ktime_to_ns(cur) - ktime_to_ns(start); > + /* The total time the vCPU was "halted", including polling time. 
*/ > + halt_ns = ktime_to_ns(cur) - ktime_to_ns(start); > > /* > * Note, halt-polling is considered successful so long as the vCPU was > @@ -3289,24 +3287,24 @@ void kvm_vcpu_block(struct kvm_vcpu *vcpu) > if (!vcpu_valid_wakeup(vcpu)) { > shrink_halt_poll_ns(vcpu); > } else if (vcpu->kvm->max_halt_poll_ns) { > - if (block_ns <= vcpu->halt_poll_ns) > + if (halt_ns <= vcpu->halt_poll_ns) > ; > /* we had a long block, shrink polling */ > else if (vcpu->halt_poll_ns && > - block_ns > vcpu->kvm->max_halt_poll_ns) > + halt_ns > vcpu->kvm->max_halt_poll_ns) > shrink_halt_poll_ns(vcpu); > /* we had a short halt and our poll time is too small */ > else if (vcpu->halt_poll_ns < vcpu->kvm->max_halt_poll_ns && > - block_ns < vcpu->kvm->max_halt_poll_ns) > + halt_ns < vcpu->kvm->max_halt_poll_ns) > grow_halt_poll_ns(vcpu); > } else { > vcpu->halt_poll_ns = 0; > } > } > > - trace_kvm_vcpu_wakeup(block_ns, waited, vcpu_valid_wakeup(vcpu)); > + trace_kvm_vcpu_wakeup(halt_ns, waited, vcpu_valid_wakeup(vcpu)); > } > -EXPORT_SYMBOL_GPL(kvm_vcpu_block); > +EXPORT_SYMBOL_GPL(kvm_vcpu_halt); > > bool kvm_vcpu_wake_up(struct kvm_vcpu *vcpu) > { > -- > 2.33.0.882.g93a45727a2-goog > > _______________________________________________ > kvmarm mailing list > kvmarm@lists.cs.columbia.edu > https://lists.cs.columbia.edu/mailman/listinfo/kvmarm _______________________________________________ linux-riscv mailing list linux-riscv@lists.infradead.org http://lists.infradead.org/mailman/listinfo/linux-riscv From mboxrd@z Thu Jan 1 00:00:00 1970 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 4EEB9C433EF for ; Fri, 22 Oct 2021 15:05:47 +0000 (UTC) Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 196D16112F for ; Fri, 22 Oct 2021 15:05:47 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.4.1 mail.kernel.org 196D16112F Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=brainfault.org Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=lists.infradead.org DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:Cc:To:Subject:Message-ID:Date:From: In-Reply-To:References:MIME-Version:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=XTct3+x/nYl2UpWSwO70huJKcNq8LZf5NamUO2MlHms=; b=YKouA2tHipQ1Hi NxDkWN33HVs0gYd0O/XpZtga6MoWLJtd4WfYYZDVqH3fP1m3pVS0c18J+423/rsSyYTrP+RqyMwfJ Xhx001u0/JveOYzcGgusBjUcMz46X/I9Z5IYoz9ACLNWTRx8guBEcjjsdwpcC+fbj4xGX7MOZT53A FtcfWcxEwRZ/fXpIV2IiTzsCI3CmoqsuBlXmm/konwl+tcu2dRnm2bB/+Z3MeYVE2mirBO3YzMdt+ t40BbJLUgRiUEq9quYzW+aV7fqW+SHFOqRFNJbYAXXANF8RQVOkHQY1E8aNPOygA6gFJZZdiokBqV G6JnS8eIiruFJktfj5sQ==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1mdw5i-00BLBp-8A; Fri, 22 Oct 2021 15:04:18 +0000 Received: from mail-wm1-x332.google.com ([2a00:1450:4864:20::332]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1mdw5d-00BL9u-Ny for 
linux-arm-kernel@lists.infradead.org; Fri, 22 Oct 2021 15:04:16 +0000 Received: by mail-wm1-x332.google.com with SMTP id a20-20020a1c7f14000000b003231d13ee3cso3710242wmd.3 for ; Fri, 22 Oct 2021 08:04:13 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=brainfault-org.20210112.gappssmtp.com; s=20210112; h=mime-version:references:in-reply-to:from:date:message-id:subject:to :cc; bh=T30FlJ5UKlkIRPt0AISR9G+L/3Q8enwrJccctT6p1g8=; b=MBtmLDJBwdIuSEGHsWnq13enuOMwfZw4/18uAeEnchCMIre5XYu/Vt8p/tQCYzsMop COAHxIk3w5JxNWzKs0XdOypZWDIIC5sFJ7ESkL1QmlSMivIp/SjRWF4do7Qy5ZEEpQot 2zKwDwHd4DXKkSbzrAcJDcQ0n8+XwO6A2RydmoMn16Ya4tX1ZQ4jT+vP9y1HntYmUdWb uOg+NVoX7R+xMb3qTatR3G99+5P77NeNif+FnN9FZgOaIzhfFWIzrR1Gp+ZYjJLlI3Ow tPdMCc3fKgsHCPGC2/0di1W4P7SRM2YCIKidlaJVgJ5bQTcDh2TawkTyUJQS93SrKM7S +t/w== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:mime-version:references:in-reply-to:from:date :message-id:subject:to:cc; bh=T30FlJ5UKlkIRPt0AISR9G+L/3Q8enwrJccctT6p1g8=; b=DIqW4pT2mTAoO0tUpmXao3T7kpQSdjqLnHUHF/MfUYpu8jx3h/+/T+w1NNRreyliiy 2Wjr16dg9Jcsve1dKhz3CNd2VvMYgJh8XOHWGsR03TpwdKiOqw+bpuL1BWgs4FumW0pb lQSZ7caF7atB1hgC2U85IheDYwI4DX71pXMAZuqH+gLkt1nm/IcVwwt5p2z3LlPfhLMi T6x4tl5oTZeczxijg89Lo+bLiND97ikq3bvMQ9P7Gq09bwsSBMMxjHld/tarKyHHILAr SDDcbaspZ/9jefn9piSPTIHF9DGk949vrS+atbSF523JdSDoIp0h7kRwZAqSsMr02Ef6 jQQA== X-Gm-Message-State: AOAM530Cy9UieHEucBlI4fJoXxPDWrIVRJl0PKSowJ90HugXmucwOCEe VOUVvDaV7RRDmuGe0ZMtrzrWfyzBQa+zHM34jXGXnQ== X-Google-Smtp-Source: ABdhPJztGBg8weOQ78E8KVd7ncPE7BUNSaOVfTMsmKjCcFD2swST5/UMCgrVa+WC8Aj7pEdSctUqBqmByiVW+amI3bY= X-Received: by 2002:a05:600c:354c:: with SMTP id i12mr277022wmq.59.1634915051483; Fri, 22 Oct 2021 08:04:11 -0700 (PDT) MIME-Version: 1.0 References: <20211009021236.4122790-1-seanjc@google.com> <20211009021236.4122790-14-seanjc@google.com> In-Reply-To: <20211009021236.4122790-14-seanjc@google.com> From: Anup Patel Date: Fri, 22 Oct 2021 20:34:00 +0530 Message-ID: Subject: Re: [PATCH v2 13/43] KVM: Rename kvm_vcpu_block() => kvm_vcpu_halt() To: Sean Christopherson Cc: Marc Zyngier , Huacai Chen , Aleksandar Markovic , Paul Mackerras , Anup Patel , Paul Walmsley , Palmer Dabbelt , Albert Ou , Christian Borntraeger , Janosch Frank , Paolo Bonzini , Wanpeng Li , KVM General , David Hildenbrand , "linux-kernel@vger.kernel.org List" , Atish Patra , linux-riscv , Claudio Imbrenda , kvmarm@lists.cs.columbia.edu, Joerg Roedel , kvm-ppc@vger.kernel.org, David Matlack , linux-arm-kernel , Jim Mattson , Cornelia Huck , linux-mips@vger.kernel.org, kvm-riscv@lists.infradead.org, Vitaly Kuznetsov X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20211022_080413_797372_8C9D1141 X-CRM114-Status: GOOD ( 33.63 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Content-Type: text/plain; charset="us-ascii" Content-Transfer-Encoding: 7bit Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org On Sat, Oct 9, 2021 at 7:43 AM Sean Christopherson wrote: > > Rename kvm_vcpu_block() to kvm_vcpu_halt() in preparation for splitting > the actual "block" sequences into a separate helper (to be named > kvm_vcpu_block()). x86 will use the standalone block-only path to handle > non-halt cases where the vCPU is not runnable. > > Rename block_ns to halt_ns to match the new function name. 
>
> No functional change intended.
>
> Reviewed-by: David Matlack
> Reviewed-by: Christian Borntraeger
> Signed-off-by: Sean Christopherson
> ---

For KVM RISC-V:

Reviewed-by: Anup Patel

Regards,
Anup

>  arch/arm64/kvm/arch_timer.c       |  2 +-
>  arch/arm64/kvm/arm.c              |  2 +-
>  arch/arm64/kvm/handle_exit.c      |  2 +-
>  arch/arm64/kvm/psci.c             |  2 +-
>  arch/mips/kvm/emulate.c           |  2 +-
>  arch/powerpc/kvm/book3s_pr.c      |  2 +-
>  arch/powerpc/kvm/book3s_pr_papr.c |  2 +-
>  arch/powerpc/kvm/booke.c          |  2 +-
>  arch/powerpc/kvm/powerpc.c        |  2 +-
>  arch/riscv/kvm/vcpu_exit.c        |  2 +-
>  arch/s390/kvm/interrupt.c         |  2 +-
>  arch/x86/kvm/x86.c                | 11 +++++++++--
>  include/linux/kvm_host.h          |  2 +-
>  virt/kvm/kvm_main.c               | 20 +++++++++-----------
>  14 files changed, 30 insertions(+), 25 deletions(-)
>
> diff --git a/arch/arm64/kvm/arch_timer.c b/arch/arm64/kvm/arch_timer.c
> index 3df67c127489..7e8396f74010 100644
> --- a/arch/arm64/kvm/arch_timer.c
> +++ b/arch/arm64/kvm/arch_timer.c
> @@ -467,7 +467,7 @@ static void timer_save_state(struct arch_timer_context *ctx)
>  }
>
>  /*
> - * Schedule the background timer before calling kvm_vcpu_block, so that this
> + * Schedule the background timer before calling kvm_vcpu_halt, so that this
>   * thread is removed from its waitqueue and made runnable when there's a timer
>   * interrupt to handle.
>   */
> diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
> index 1346f81b34df..268b1e7bf700 100644
> --- a/arch/arm64/kvm/arm.c
> +++ b/arch/arm64/kvm/arm.c
> @@ -672,7 +672,7 @@ void kvm_vcpu_wfi(struct kvm_vcpu *vcpu)
>  	vgic_v4_put(vcpu, true);
>  	preempt_enable();
>
> -	kvm_vcpu_block(vcpu);
> +	kvm_vcpu_halt(vcpu);
>  	kvm_clear_request(KVM_REQ_UNHALT, vcpu);
>
>  	preempt_disable();
> diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
> index 4794563a506b..6d0baf71aa67 100644
> --- a/arch/arm64/kvm/handle_exit.c
> +++ b/arch/arm64/kvm/handle_exit.c
> @@ -82,7 +82,7 @@ static int handle_no_fpsimd(struct kvm_vcpu *vcpu)
>   *
>   * WFE: Yield the CPU and come back to this vcpu when the scheduler
>   * decides to.
> - * WFI: Simply call kvm_vcpu_block(), which will halt execution of
> + * WFI: Simply call kvm_vcpu_halt(), which will halt execution of
>   * world-switches and schedule other host processes until there is an
>   * incoming IRQ or FIQ to the VM.
>   */
> diff --git a/arch/arm64/kvm/psci.c b/arch/arm64/kvm/psci.c
> index 74c47d420253..e275b2ca08b9 100644
> --- a/arch/arm64/kvm/psci.c
> +++ b/arch/arm64/kvm/psci.c
> @@ -46,7 +46,7 @@ static unsigned long kvm_psci_vcpu_suspend(struct kvm_vcpu *vcpu)
>   * specification (ARM DEN 0022A). This means all suspend states
>   * for KVM will preserve the register state.
>   */
> -	kvm_vcpu_block(vcpu);
> +	kvm_vcpu_halt(vcpu);
>  	kvm_clear_request(KVM_REQ_UNHALT, vcpu);
>
>  	return PSCI_RET_SUCCESS;
> diff --git a/arch/mips/kvm/emulate.c b/arch/mips/kvm/emulate.c
> index 22e745e49b0a..b494d8d39290 100644
> --- a/arch/mips/kvm/emulate.c
> +++ b/arch/mips/kvm/emulate.c
> @@ -952,7 +952,7 @@ enum emulation_result kvm_mips_emul_wait(struct kvm_vcpu *vcpu)
>  	if (!vcpu->arch.pending_exceptions) {
>  		kvm_vz_lose_htimer(vcpu);
>  		vcpu->arch.wait = 1;
> -		kvm_vcpu_block(vcpu);
> +		kvm_vcpu_halt(vcpu);
>
>  		/*
>  		 * We we are runnable, then definitely go off to user space to
> diff --git a/arch/powerpc/kvm/book3s_pr.c b/arch/powerpc/kvm/book3s_pr.c
> index 6bc9425acb32..0ced1b16f0e5 100644
> --- a/arch/powerpc/kvm/book3s_pr.c
> +++ b/arch/powerpc/kvm/book3s_pr.c
> @@ -492,7 +492,7 @@ static void kvmppc_set_msr_pr(struct kvm_vcpu *vcpu, u64 msr)
>
>  	if (msr & MSR_POW) {
>  		if (!vcpu->arch.pending_exceptions) {
> -			kvm_vcpu_block(vcpu);
> +			kvm_vcpu_halt(vcpu);
>  			kvm_clear_request(KVM_REQ_UNHALT, vcpu);
>  			vcpu->stat.generic.halt_wakeup++;
>
> diff --git a/arch/powerpc/kvm/book3s_pr_papr.c b/arch/powerpc/kvm/book3s_pr_papr.c
> index ac14239f3424..1f10e7dfcdd0 100644
> --- a/arch/powerpc/kvm/book3s_pr_papr.c
> +++ b/arch/powerpc/kvm/book3s_pr_papr.c
> @@ -376,7 +376,7 @@ int kvmppc_h_pr(struct kvm_vcpu *vcpu, unsigned long cmd)
>  		return kvmppc_h_pr_stuff_tce(vcpu);
>  	case H_CEDE:
>  		kvmppc_set_msr_fast(vcpu, kvmppc_get_msr(vcpu) | MSR_EE);
> -		kvm_vcpu_block(vcpu);
> +		kvm_vcpu_halt(vcpu);
>  		kvm_clear_request(KVM_REQ_UNHALT, vcpu);
>  		vcpu->stat.generic.halt_wakeup++;
>  		return EMULATE_DONE;
> diff --git a/arch/powerpc/kvm/booke.c b/arch/powerpc/kvm/booke.c
> index 977801c83aff..12abffa40cd9 100644
> --- a/arch/powerpc/kvm/booke.c
> +++ b/arch/powerpc/kvm/booke.c
> @@ -718,7 +718,7 @@ int kvmppc_core_prepare_to_enter(struct kvm_vcpu *vcpu)
>
>  	if (vcpu->arch.shared->msr & MSR_WE) {
>  		local_irq_enable();
> -		kvm_vcpu_block(vcpu);
> +		kvm_vcpu_halt(vcpu);
>  		kvm_clear_request(KVM_REQ_UNHALT, vcpu);
>  		hard_irq_disable();
>
> diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
> index be22da157569..6a94545b99fc 100644
> --- a/arch/powerpc/kvm/powerpc.c
> +++ b/arch/powerpc/kvm/powerpc.c
> @@ -236,7 +236,7 @@ int kvmppc_kvm_pv(struct kvm_vcpu *vcpu)
>  		break;
>  	case EV_HCALL_TOKEN(EV_IDLE):
>  		r = EV_SUCCESS;
> -		kvm_vcpu_block(vcpu);
> +		kvm_vcpu_halt(vcpu);
>  		kvm_clear_request(KVM_REQ_UNHALT, vcpu);
>  		break;
>  	default:
> diff --git a/arch/riscv/kvm/vcpu_exit.c b/arch/riscv/kvm/vcpu_exit.c
> index 13bbc3f73713..949bb9828aa5 100644
> --- a/arch/riscv/kvm/vcpu_exit.c
> +++ b/arch/riscv/kvm/vcpu_exit.c
> @@ -146,7 +146,7 @@ static int system_opcode_insn(struct kvm_vcpu *vcpu,
>  		vcpu->stat.wfi_exit_stat++;
>  		if (!kvm_arch_vcpu_runnable(vcpu)) {
>  			srcu_read_unlock(&vcpu->kvm->srcu, vcpu->arch.srcu_idx);
> -			kvm_vcpu_block(vcpu);
> +			kvm_vcpu_halt(vcpu);
>  			vcpu->arch.srcu_idx = srcu_read_lock(&vcpu->kvm->srcu);
>  			kvm_clear_request(KVM_REQ_UNHALT, vcpu);
>  		}
> diff --git a/arch/s390/kvm/interrupt.c b/arch/s390/kvm/interrupt.c
> index 520450a7956f..10bd648170b7 100644
> --- a/arch/s390/kvm/interrupt.c
> +++ b/arch/s390/kvm/interrupt.c
> @@ -1335,7 +1335,7 @@ int kvm_s390_handle_wait(struct kvm_vcpu *vcpu)
>  	VCPU_EVENT(vcpu, 4, "enabled wait: %llu ns", sltime);
>  no_timer:
>  	srcu_read_unlock(&vcpu->kvm->srcu, vcpu->srcu_idx);
> -	kvm_vcpu_block(vcpu);
> +	kvm_vcpu_halt(vcpu);
>  	vcpu->valid_wakeup = false;
>  	__unset_cpu_idle(vcpu);
>  	vcpu->srcu_idx = srcu_read_lock(&vcpu->kvm->srcu);
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 9c23ae1d483d..e6c17bbed25c 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -8651,6 +8651,13 @@ void kvm_arch_exit(void)
>
>  static int __kvm_emulate_halt(struct kvm_vcpu *vcpu, int state, int reason)
>  {
> +	/*
> +	 * The vCPU has halted, e.g. executed HLT. Update the run state if the
> +	 * local APIC is in-kernel, the run loop will detect the non-runnable
> +	 * state and halt the vCPU. Exit to userspace if the local APIC is
> +	 * managed by userspace, in which case userspace is responsible for
> +	 * handling wake events.
> +	 */
>  	++vcpu->stat.halt_exits;
>  	if (lapic_in_kernel(vcpu)) {
>  		vcpu->arch.mp_state = state;
> @@ -9892,7 +9899,7 @@ static inline int vcpu_block(struct kvm *kvm, struct kvm_vcpu *vcpu)
>  	if (!kvm_arch_vcpu_runnable(vcpu) &&
>  	    (!kvm_x86_ops.pre_block || static_call(kvm_x86_pre_block)(vcpu) == 0)) {
>  		srcu_read_unlock(&kvm->srcu, vcpu->srcu_idx);
> -		kvm_vcpu_block(vcpu);
> +		kvm_vcpu_halt(vcpu);
>  		vcpu->srcu_idx = srcu_read_lock(&kvm->srcu);
>
>  		if (kvm_x86_ops.post_block)
> @@ -10126,7 +10133,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
>  			r = -EINTR;
>  			goto out;
>  		}
> -		kvm_vcpu_block(vcpu);
> +		kvm_vcpu_halt(vcpu);
>  		if (kvm_apic_accept_events(vcpu) < 0) {
>  			r = 0;
>  			goto out;
> diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> index 1ced2914d9ca..c2ea4004553a 100644
> --- a/include/linux/kvm_host.h
> +++ b/include/linux/kvm_host.h
> @@ -967,7 +967,7 @@ void kvm_vcpu_mark_page_dirty(struct kvm_vcpu *vcpu, gfn_t gfn);
>  void kvm_sigset_activate(struct kvm_vcpu *vcpu);
>  void kvm_sigset_deactivate(struct kvm_vcpu *vcpu);
>
> -void kvm_vcpu_block(struct kvm_vcpu *vcpu);
> +void kvm_vcpu_halt(struct kvm_vcpu *vcpu);
>  void kvm_arch_vcpu_blocking(struct kvm_vcpu *vcpu);
>  void kvm_arch_vcpu_unblocking(struct kvm_vcpu *vcpu);
>  bool kvm_vcpu_wake_up(struct kvm_vcpu *vcpu);
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index 227f6bbe0716..c13bf3367fda 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -3223,17 +3223,14 @@ static inline void update_halt_poll_stats(struct kvm_vcpu *vcpu, ktime_t start,
>  	}
>  }
>
> -/*
> - * The vCPU has executed a HLT instruction with in-kernel mode enabled.
> - */
> -void kvm_vcpu_block(struct kvm_vcpu *vcpu)
> +void kvm_vcpu_halt(struct kvm_vcpu *vcpu)
>  {
>  	struct rcuwait *wait = kvm_arch_vcpu_get_wait(vcpu);
>  	bool halt_poll_allowed = !kvm_arch_no_poll(vcpu);
>  	bool do_halt_poll = halt_poll_allowed && vcpu->halt_poll_ns;
>  	ktime_t start, cur, poll_end;
>  	bool waited = false;
> -	u64 block_ns;
> +	u64 halt_ns;
>
>  	start = cur = poll_end = ktime_get();
>  	if (do_halt_poll) {
> @@ -3275,7 +3272,8 @@ void kvm_vcpu_block(struct kvm_vcpu *vcpu)
>  				      ktime_to_ns(cur) - ktime_to_ns(poll_end));
>  	}
>  out:
> -	block_ns = ktime_to_ns(cur) - ktime_to_ns(start);
> +	/* The total time the vCPU was "halted", including polling time. */
> +	halt_ns = ktime_to_ns(cur) - ktime_to_ns(start);
>
>  	/*
>  	 * Note, halt-polling is considered successful so long as the vCPU was
> @@ -3289,24 +3287,24 @@ void kvm_vcpu_block(struct kvm_vcpu *vcpu)
>  		if (!vcpu_valid_wakeup(vcpu)) {
>  			shrink_halt_poll_ns(vcpu);
>  		} else if (vcpu->kvm->max_halt_poll_ns) {
> -			if (block_ns <= vcpu->halt_poll_ns)
> +			if (halt_ns <= vcpu->halt_poll_ns)
>  				;
>  			/* we had a long block, shrink polling */
>  			else if (vcpu->halt_poll_ns &&
> -				 block_ns > vcpu->kvm->max_halt_poll_ns)
> +				 halt_ns > vcpu->kvm->max_halt_poll_ns)
>  				shrink_halt_poll_ns(vcpu);
>  			/* we had a short halt and our poll time is too small */
>  			else if (vcpu->halt_poll_ns < vcpu->kvm->max_halt_poll_ns &&
> -				 block_ns < vcpu->kvm->max_halt_poll_ns)
> +				 halt_ns < vcpu->kvm->max_halt_poll_ns)
>  				grow_halt_poll_ns(vcpu);
>  		} else {
>  			vcpu->halt_poll_ns = 0;
>  		}
>  	}
>
> -	trace_kvm_vcpu_wakeup(block_ns, waited, vcpu_valid_wakeup(vcpu));
> +	trace_kvm_vcpu_wakeup(halt_ns, waited, vcpu_valid_wakeup(vcpu));
>  }
> -EXPORT_SYMBOL_GPL(kvm_vcpu_block);
> +EXPORT_SYMBOL_GPL(kvm_vcpu_halt);
>
>  bool kvm_vcpu_wake_up(struct kvm_vcpu *vcpu)
>  {
> --
> 2.33.0.882.g93a45727a2-goog
>
> _______________________________________________
> kvmarm mailing list
> kvmarm@lists.cs.columbia.edu
> https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel
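[Reading aid, not part of the mail above: a condensed, standalone C sketch of the polling-window adjustment the renamed kvm_vcpu_halt() performs at the end of the quoted virt/kvm/kvm_main.c hunk. The names halt_poll_state, update_poll_window, shrink() and grow() are illustrative stand-ins for the in-kernel vcpu->halt_poll_ns / kvm->max_halt_poll_ns bookkeeping and the shrink_halt_poll_ns()/grow_halt_poll_ns() helpers, and the scaling factors are placeholders, not the kernel's tunables.]

#include <stdint.h>
#include <stdio.h>

struct halt_poll_state {
	uint64_t halt_poll_ns;     /* per-vCPU polling window, like vcpu->halt_poll_ns */
	uint64_t max_halt_poll_ns; /* per-VM cap, like kvm->max_halt_poll_ns; 0 = polling off */
};

/* Placeholder grow/shrink helpers; the real ones apply tunable scaling knobs. */
static void shrink(struct halt_poll_state *s) { s->halt_poll_ns /= 2; }
static void grow(struct halt_poll_state *s)
{
	s->halt_poll_ns = s->halt_poll_ns ? s->halt_poll_ns * 2 : 10000;
}

/* Mirror of the if/else chain in the quoted hunk, keyed off halt_ns,
 * the total time the vCPU was "halted" including polling time. */
static void update_poll_window(struct halt_poll_state *s, uint64_t halt_ns,
			       int valid_wakeup)
{
	if (!valid_wakeup) {
		shrink(s);                 /* spurious wakeup: polling longer will not help */
	} else if (s->max_halt_poll_ns) {
		if (halt_ns <= s->halt_poll_ns)
			;                  /* wakeup arrived within the window, keep it */
		else if (s->halt_poll_ns && halt_ns > s->max_halt_poll_ns)
			shrink(s);         /* long block: polling was wasted effort */
		else if (s->halt_poll_ns < s->max_halt_poll_ns &&
			 halt_ns < s->max_halt_poll_ns)
			grow(s);           /* short halt: a bigger window may avoid blocking */
	} else {
		s->halt_poll_ns = 0;       /* polling disabled for this VM */
	}
}

int main(void)
{
	struct halt_poll_state s = { .halt_poll_ns = 20000, .max_halt_poll_ns = 200000 };

	update_poll_window(&s, 500000, 1); /* overshoots the cap -> window shrinks */
	printf("after long halt:  %llu ns\n", (unsigned long long)s.halt_poll_ns);
	update_poll_window(&s, 50000, 1);  /* short valid wakeup -> window grows */
	printf("after short halt: %llu ns\n", (unsigned long long)s.halt_poll_ns);
	return 0;
}

The main() merely exercises the two interesting branches; the thresholds themselves come straight from the quoted hunk.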