Subject: Re: [PATCH] Revert "locking/pvqspinlock: Don't wait if vCPU is preempted"
From: Paolo Bonzini
To: Waiman Long, Wanpeng Li, linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Radim Krčmář, Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li,
    Jim Mattson, Joerg Roedel, Peter Zijlstra, Thomas Gleixner, Ingo Molnar,
    loobinliu@tencent.com, stable@vger.kernel.org
Date: Mon, 9 Sep 2019 13:06:01 +0200
In-Reply-To: <29d04ee4-60e7-4df9-0c4f-fc29f2b0c6a8@redhat.com>
References: <1567993228-23668-1-git-send-email-wanpengli@tencent.com>
 <29d04ee4-60e7-4df9-0c4f-fc29f2b0c6a8@redhat.com>

On 09/09/19 12:56, Waiman Long wrote:
> On 9/9/19 2:40 AM, Wanpeng Li wrote:
>> From: Wanpeng Li
>>
>> This reverts commit 75437bb304b20 ("locking/pvqspinlock: Don't wait if
>> vCPU is preempted"), which caused a severe regression in our testing.
>>
>> On a Xeon Skylake box (2 sockets, 40 cores, 80 threads) running three
>> VMs with 80 vCPUs each, the ebizzy -M score drops from 13000-14000
>> records/s to 1700-1800 records/s with that commit:
>>
>>   Host                           Guest               score
>>
>>   vanilla w/o kvm optimizations  vanilla             1700-1800 records/s
>>   vanilla w/o kvm optimizations  vanilla + revert    13000-14000 records/s
>>   vanilla w/  kvm optimizations  vanilla             4500-5000 records/s
>>   vanilla w/  kvm optimizations  vanilla + revert    14000-15500 records/s
>>
>> The aggressive wait-early exit can result in premature yielding and
>> incur extra scheduling latency in over-subscribed scenarios.
>>
>> kvm optimizations:
>> [1] commit d73eb57b80b ("KVM: Boost vCPUs that are delivering interrupts")
>> [2] commit 266e85a5ec9 ("KVM: X86: Boost queue head vCPU to mitigate lock waiter preemption")
>>
>> Tested-by: loobinliu@tencent.com
>> Cc: Peter Zijlstra
>> Cc: Thomas Gleixner
>> Cc: Ingo Molnar
>> Cc: Waiman Long
>> Cc: Paolo Bonzini
>> Cc: Radim Krčmář
>> Cc: loobinliu@tencent.com
>> Cc: stable@vger.kernel.org
>> Fixes: 75437bb304b20 ("locking/pvqspinlock: Don't wait if vCPU is preempted")
>> Signed-off-by: Wanpeng Li
>> ---
>>  kernel/locking/qspinlock_paravirt.h | 2 +-
>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/kernel/locking/qspinlock_paravirt.h b/kernel/locking/qspinlock_paravirt.h
>> index 89bab07..e84d21a 100644
>> --- a/kernel/locking/qspinlock_paravirt.h
>> +++ b/kernel/locking/qspinlock_paravirt.h
>> @@ -269,7 +269,7 @@ pv_wait_early(struct pv_node *prev, int loop)
>>  	if ((loop & PV_PREV_CHECK_MASK) != 0)
>>  		return false;
>>
>> -	return READ_ONCE(prev->state) != vcpu_running || vcpu_is_preempted(prev->cpu);
>> +	return READ_ONCE(prev->state) != vcpu_running;
>>  }
>>
>>  /*
>
> There are several possibilities for this performance regression:
>
> 1) Multiple vcpus calling vcpu_is_preempted() repeatedly may cause some
> cacheline contention issue depending on how that callback is implemented.

Unlikely, it is a single percpu read.
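
(For reference, the x86 KVM guest side of that callback is roughly the
following single read of the target vCPU's per-cpu steal time area; a
sketch of the C variant in arch/x86/kernel/kvm.c, the 64-bit kernel uses
an equivalent hand-written asm thunk:)

__visible bool __kvm_vcpu_is_preempted(long cpu)
{
	struct kvm_steal_time *src = &per_cpu(steal_time, cpu);

	/* one load of the other vCPU's steal_time.preempted flag, no writes */
	return !!(src->preempted & KVM_VCPU_PREEMPTED);
}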
> 2) KVM may set the preempt flag for a short period whenever a vmexit
> happens, even if a vmenter is executed shortly after. In this case, we
> may want to use a more durable vcpu suspend flag that indicates the vcpu
> won't get a real vcpu back for a longer period of time.

It sets it for exits to userspace, but they shouldn't really happen on a
properly-configured system.  However, it's easy to test this theory:

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 2e302e977dac..feb6c75a7a88 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -3368,26 +3368,28 @@ void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
 {
 	int idx;
 
-	if (vcpu->preempted)
+	if (vcpu->preempted) {
 		vcpu->arch.preempted_in_kernel = !kvm_x86_ops->get_cpl(vcpu);
 
-	/*
-	 * Disable page faults because we're in atomic context here.
-	 * kvm_write_guest_offset_cached() would call might_fault()
-	 * that relies on pagefault_disable() to tell if there's a
-	 * bug. NOTE: the write to guest memory may not go through if
-	 * during postcopy live migration or if there's heavy guest
-	 * paging.
-	 */
-	pagefault_disable();
-	/*
-	 * kvm_memslots() will be called by
-	 * kvm_write_guest_offset_cached() so take the srcu lock.
-	 */
-	idx = srcu_read_lock(&vcpu->kvm->srcu);
-	kvm_steal_time_set_preempted(vcpu);
-	srcu_read_unlock(&vcpu->kvm->srcu, idx);
-	pagefault_enable();
+		/*
+		 * Disable page faults because we're in atomic context here.
+		 * kvm_write_guest_offset_cached() would call might_fault()
+		 * that relies on pagefault_disable() to tell if there's a
+		 * bug. NOTE: the write to guest memory may not go through if
+		 * during postcopy live migration or if there's heavy guest
+		 * paging.
+		 */
+		pagefault_disable();
+		/*
+		 * kvm_memslots() will be called by
+		 * kvm_write_guest_offset_cached() so take the srcu lock.
+		 */
+		idx = srcu_read_lock(&vcpu->kvm->srcu);
+		kvm_steal_time_set_preempted(vcpu);
+		srcu_read_unlock(&vcpu->kvm->srcu, idx);
+		pagefault_enable();
+	}
+
 	kvm_x86_ops->vcpu_put(vcpu);
 	vcpu->arch.last_host_tsc = rdtsc();
 
 	/*

Wanpeng, can you try?

Paolo

> Perhaps you can add a lock event counter to count the number of
> wait_early events caused by vcpu_is_preempted() being true, to see if it
> really causes a lot more wait_early than without the vcpu_is_preempted()
> call.
>
> I have no objection to this, I just want to find out the root cause of it.
>
> Cheers,
> Longman
>
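
For concreteness, the counter Longman suggests could look roughly like the
sketch below (hypothetical event name, assuming the existing lock event
machinery in kernel/locking/lock_events_list.h, and the unreverted
pv_wait_early() shown in the patch above):

/* kernel/locking/lock_events_list.h: new event (hypothetical name) */
LOCK_EVENT(pv_wait_early_preempt)	/* wait_early due to vcpu_is_preempted() */

/* kernel/locking/qspinlock_paravirt.h */
static inline bool pv_wait_early(struct pv_node *prev, int loop)
{
	if ((loop & PV_PREV_CHECK_MASK) != 0)
		return false;

	if (READ_ONCE(prev->state) != vcpu_running)
		return true;

	if (vcpu_is_preempted(prev->cpu)) {
		/* count wait_early decisions made purely by the preemption check */
		lockevent_inc(pv_wait_early_preempt);
		return true;
	}

	return false;
}

The new count would then show up in the lockevent debugfs directory next to
the existing pv_wait_early event, so the two can be compared directly.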