From: Vitaly Kuznetsov
To: Zhenzhong Duan, linux-kernel@vger.kernel.org
Cc: tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, x86@kernel.org,
    pbonzini@redhat.com, rkrcmar@redhat.com, sean.j.christopherson@intel.com,
    wanpengli@tencent.com, jmattson@google.com, joro@8bytes.org,
    boris.ostrovsky@oracle.com, jgross@suse.com, peterz@infradead.org,
    will@kernel.org, linux-hyperv@vger.kernel.org, kvm@vger.kernel.org,
    mikelley@microsoft.com, kys@microsoft.com, haiyangz@microsoft.com,
    sthemmin@microsoft.com, sashal@kernel.org, Zhenzhong Duan,
    Jonathan Corbet, "H. Peter Anvin"
Subject: Re: [PATCH v7 3/5] x86/kvm: Add "nopvspin" parameter to disable PV spinlocks
In-Reply-To: <1571649076-2421-4-git-send-email-zhenzhong.duan@oracle.com>
References: <1571649076-2421-1-git-send-email-zhenzhong.duan@oracle.com>
 <1571649076-2421-4-git-send-email-zhenzhong.duan@oracle.com>
Date: Tue, 22 Oct 2019 13:36:34 +0200
Message-ID: <8736fl1071.fsf@vitty.brq.redhat.com>

Zhenzhong Duan writes:

> There are cases where a guest tries to switch spinlocks to bare metal
> behavior (e.g. by setting "xen_nopvspin" on XEN platform and
> "hv_nopvspin" on HYPER_V).
>
> That feature is missed on KVM, add a new parameter "nopvspin" to disable
> PV spinlocks for KVM guest.
>
> The new 'nopvspin' parameter will also replace Xen and Hyper-V specific
> parameters in future patches.
>
> Define variable nopvsin as global because it will be used in future
> patches as above.
>
> Signed-off-by: Zhenzhong Duan
> Reviewed-by: Vitaly Kuznetsov
> Cc: Jonathan Corbet
> Cc: Thomas Gleixner
> Cc: Ingo Molnar
> Cc: Borislav Petkov
> Cc: "H. Peter Anvin"
> Cc: Paolo Bonzini
> Cc: Radim Krcmar
> Cc: Sean Christopherson
> Cc: Vitaly Kuznetsov
> Cc: Wanpeng Li
> Cc: Jim Mattson
> Cc: Joerg Roedel
> Cc: Peter Zijlstra
> Cc: Will Deacon
> ---
>  Documentation/admin-guide/kernel-parameters.txt |  5 ++++
>  arch/x86/include/asm/qspinlock.h                |  1 +
>  arch/x86/kernel/kvm.c                           | 34 ++++++++++++++++++++-----
>  kernel/locking/qspinlock.c                      |  7 +++++
>  4 files changed, 40 insertions(+), 7 deletions(-)
>
> diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
> index a84a83f..bd49ed2 100644
> --- a/Documentation/admin-guide/kernel-parameters.txt
> +++ b/Documentation/admin-guide/kernel-parameters.txt
> @@ -5334,6 +5334,11 @@
>  			as generic guest with no PV drivers.
>  			Currently support
>  			XEN HVM, KVM, HYPER_V and VMWARE guest.
>
> +	nopvspin	[X86,KVM]
> +			Disables the qspinlock slow path using PV optimizations
> +			which allow the hypervisor to 'idle' the guest on lock
> +			contention.
> +
>  	xirc2ps_cs=	[NET,PCMCIA]
>  			Format:
>  			,,,,,[,[,[,]]]
> diff --git a/arch/x86/include/asm/qspinlock.h b/arch/x86/include/asm/qspinlock.h
> index 444d6fd..d86ab94 100644
> --- a/arch/x86/include/asm/qspinlock.h
> +++ b/arch/x86/include/asm/qspinlock.h
> @@ -32,6 +32,7 @@ static __always_inline u32 queued_fetch_set_pending_acquire(struct qspinlock *lo
>  extern void __pv_init_lock_hash(void);
>  extern void __pv_queued_spin_lock_slowpath(struct qspinlock *lock, u32 val);
>  extern void __raw_callee_save___pv_queued_spin_unlock(struct qspinlock *lock);
> +extern bool nopvspin;
>
>  #define queued_spin_unlock queued_spin_unlock
>  /**
> diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
> index 249f14a..3945aa5 100644
> --- a/arch/x86/kernel/kvm.c
> +++ b/arch/x86/kernel/kvm.c
> @@ -825,18 +825,36 @@ __visible bool __kvm_vcpu_is_preempted(long cpu)
>   */
>  void __init kvm_spinlock_init(void)
>  {
> -	/* Does host kernel support KVM_FEATURE_PV_UNHALT? */
> -	if (!kvm_para_has_feature(KVM_FEATURE_PV_UNHALT))
> +	/*
> +	 * In case host doesn't support KVM_FEATURE_PV_UNHALT there is still an
> +	 * advantage of keeping virt_spin_lock_key enabled: virt_spin_lock() is
> +	 * preferred over native qspinlock when vCPU is preempted.
> +	 */
> +	if (!kvm_para_has_feature(KVM_FEATURE_PV_UNHALT)) {
> +		pr_info("PV spinlocks disabled, no host support.\n");
>  		return;
> +	}
>
> +	/*
> +	 * Disable PV qspinlock and use native qspinlock when dedicated pCPUs
> +	 * are available.
> +	 */
>  	if (kvm_para_has_hint(KVM_HINTS_REALTIME)) {
> -		static_branch_disable(&virt_spin_lock_key);
> -		return;
> +		pr_info("PV spinlocks disabled with KVM_HINTS_REALTIME hints.\n");
> +		goto out;
>  	}
>
> -	/* Don't use the pvqspinlock code if there is only 1 vCPU. */
> -	if (num_possible_cpus() == 1)
> -		return;
> +	if (num_possible_cpus() == 1) {
> +		pr_info("PV spinlocks disabled, single CPU.\n");
> +		goto out;
> +	}
> +
> +	if (nopvspin) {
> +		pr_info("PV spinlocks disabled, forced by \"nopvspin\" parameter.\n");
> +		goto out;
> +	}
> +
> +	pr_info("PV spinlocks enabled\n");
>
>  	__pv_init_lock_hash();
>  	pv_ops.lock.queued_spin_lock_slowpath = __pv_queued_spin_lock_slowpath;
> @@ -849,6 +867,8 @@ void __init kvm_spinlock_init(void)
>  		pv_ops.lock.vcpu_is_preempted =
>  			PV_CALLEE_SAVE(__kvm_vcpu_is_preempted);
>  	}
> +out:
> +	static_branch_disable(&virt_spin_lock_key);

You probably need to add 'return' before 'out:', as it seems you're
disabling virt_spin_lock_key in all cases now.

>  }
>
>  #endif	/* CONFIG_PARAVIRT_SPINLOCKS */
> diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
> index 2473f10..75193d6 100644
> --- a/kernel/locking/qspinlock.c
> +++ b/kernel/locking/qspinlock.c
> @@ -580,4 +580,11 @@ void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
>  #include "qspinlock_paravirt.h"
>  #include "qspinlock.c"
>
> +bool nopvspin __initdata;
> +static __init int parse_nopvspin(char *arg)
> +{
> +	nopvspin = true;
> +	return 0;
> +}
> +early_param("nopvspin", parse_nopvspin);
>  #endif

-- 
Vitaly