Date: Mon, 19 Jul 2021 18:49:39 +0000
In-Reply-To: <20210719184949.1385910-1-oupton@google.com>
Message-Id: <20210719184949.1385910-3-oupton@google.com>
Mime-Version: 1.0
References: <20210719184949.1385910-1-oupton@google.com>
X-Mailer: git-send-email 2.32.0.402.g57bb445576-goog
Subject: [PATCH v3 02/12] KVM: x86: Refactor tsc synchronization code
From: Oliver Upton <oupton@google.com>
To: kvm@vger.kernel.org, kvmarm@lists.cs.columbia.edu
Cc: Paolo Bonzini, Sean Christopherson, Marc Zyngier, Peter Shier,
	Jim Mattson, David Matlack, Ricardo Koller, Jing Zhang,
	Raghavendra Rao Anata, James Morse, Alexandru Elisei,
	Suzuki K Poulose, linux-arm-kernel@lists.infradead.org, Oliver Upton
Content-Type: text/plain; charset="UTF-8"

Refactor kvm_synchronize_tsc to make a new function that allows callers
to specify TSC parameters (offset, value, nanoseconds, etc.) explicitly
for the sake of participating in TSC synchronization.

This changes the locking semantics around TSC writes. Writes to the TSC
will now take the pvclock gtod lock while holding the tsc write lock,
whereas before these locks were disjoint.

Reviewed-by: David Matlack
Signed-off-by: Oliver Upton <oupton@google.com>
---
 Documentation/virt/kvm/locking.rst |  11 +++
 arch/x86/kvm/x86.c                 | 106 +++++++++++++++++------------
 2 files changed, 74 insertions(+), 43 deletions(-)

diff --git a/Documentation/virt/kvm/locking.rst b/Documentation/virt/kvm/locking.rst
index 35eca377543d..ac62e1c76694 100644
--- a/Documentation/virt/kvm/locking.rst
+++ b/Documentation/virt/kvm/locking.rst
@@ -30,6 +30,9 @@ On x86:
   holding kvm->arch.mmu_lock (typically with ``read_lock``, otherwise
   there's no need to take kvm->arch.tdp_mmu_pages_lock at all).

+- kvm->arch.tsc_write_lock is taken outside
+  kvm->arch.pvclock_gtod_sync_lock
+
 Everything else is a leaf: no other lock is taken inside the critical
 sections.

@@ -216,6 +219,14 @@ time it will be set using the Dirty tracking mechanism described above.
 :Comment:	'raw' because hardware enabling/disabling must be atomic /wrt
 		migration.

+:Name:		kvm_arch::pvclock_gtod_sync_lock
+:Type:		raw_spinlock_t
+:Arch:		x86
+:Protects:	kvm_arch::{cur_tsc_generation,cur_tsc_nsec,cur_tsc_write,
+			cur_tsc_offset,nr_vcpus_matched_tsc}
+:Comment:	'raw' because updating the kvm master clock must not be
+		preempted.
+
 :Name:		kvm_arch::tsc_write_lock
 :Type:		raw_spinlock
 :Arch:		x86
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index bff78168d4a2..580ba0e86687 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -2441,13 +2441,73 @@ static inline bool kvm_check_tsc_unstable(void)
 	return check_tsc_unstable();
 }

+/*
+ * Infers attempts to synchronize the guest's tsc from host writes. Sets the
+ * offset for the vcpu and tracks the TSC matching generation that the vcpu
+ * participates in.
+ *
+ * Must hold kvm->arch.tsc_write_lock to call this function.
+ */
+static void __kvm_synchronize_tsc(struct kvm_vcpu *vcpu, u64 offset, u64 tsc,
+				  u64 ns, bool matched)
+{
+	struct kvm *kvm = vcpu->kvm;
+	bool already_matched;
+	unsigned long flags;
+
+	lockdep_assert_held(&kvm->arch.tsc_write_lock);
+
+	already_matched =
+	       (vcpu->arch.this_tsc_generation == kvm->arch.cur_tsc_generation);
+
+	/*
+	 * We track the most recent recorded KHZ, write and time to
+	 * allow the matching interval to be extended at each write.
+	 */
+	kvm->arch.last_tsc_nsec = ns;
+	kvm->arch.last_tsc_write = tsc;
+	kvm->arch.last_tsc_khz = vcpu->arch.virtual_tsc_khz;
+
+	vcpu->arch.last_guest_tsc = tsc;
+
+	/* Keep track of which generation this VCPU has synchronized to */
+	vcpu->arch.this_tsc_generation = kvm->arch.cur_tsc_generation;
+	vcpu->arch.this_tsc_nsec = kvm->arch.cur_tsc_nsec;
+	vcpu->arch.this_tsc_write = kvm->arch.cur_tsc_write;
+
+	kvm_vcpu_write_tsc_offset(vcpu, offset);
+
+	spin_lock_irqsave(&kvm->arch.pvclock_gtod_sync_lock, flags);
+	if (!matched) {
+		/*
+		 * We split periods of matched TSC writes into generations.
+		 * For each generation, we track the original measured
+		 * nanosecond time, offset, and write, so if TSCs are in
+		 * sync, we can match exact offset, and if not, we can match
+		 * exact software computation in compute_guest_tsc()
+		 *
+		 * These values are tracked in kvm->arch.cur_xxx variables.
+		 */
+		kvm->arch.nr_vcpus_matched_tsc = 0;
+		kvm->arch.cur_tsc_generation++;
+		kvm->arch.cur_tsc_nsec = ns;
+		kvm->arch.cur_tsc_write = tsc;
+		kvm->arch.cur_tsc_offset = offset;
+		matched = false;
+	} else if (!already_matched) {
+		kvm->arch.nr_vcpus_matched_tsc++;
+	}
+
+	kvm_track_tsc_matching(vcpu);
+	spin_unlock_irqrestore(&kvm->arch.pvclock_gtod_sync_lock, flags);
+}
+
 static void kvm_synchronize_tsc(struct kvm_vcpu *vcpu, u64 data)
 {
 	struct kvm *kvm = vcpu->kvm;
 	u64 offset, ns, elapsed;
 	unsigned long flags;
-	bool matched;
-	bool already_matched;
+	bool matched = false;
 	bool synchronizing = false;

 	raw_spin_lock_irqsave(&kvm->arch.tsc_write_lock, flags);
@@ -2493,51 +2553,11 @@ static void kvm_synchronize_tsc(struct kvm_vcpu *vcpu, u64 data)
 			offset = kvm_compute_l1_tsc_offset(vcpu, data);
 		}
 		matched = true;
-		already_matched = (vcpu->arch.this_tsc_generation == kvm->arch.cur_tsc_generation);
-	} else {
-		/*
-		 * We split periods of matched TSC writes into generations.
-		 * For each generation, we track the original measured
-		 * nanosecond time, offset, and write, so if TSCs are in
-		 * sync, we can match exact offset, and if not, we can match
-		 * exact software computation in compute_guest_tsc()
-		 *
-		 * These values are tracked in kvm->arch.cur_xxx variables.
-		 */
-		kvm->arch.cur_tsc_generation++;
-		kvm->arch.cur_tsc_nsec = ns;
-		kvm->arch.cur_tsc_write = data;
-		kvm->arch.cur_tsc_offset = offset;
-		matched = false;
 	}

-	/*
-	 * We also track th most recent recorded KHZ, write and time to
-	 * allow the matching interval to be extended at each write.
-	 */
-	kvm->arch.last_tsc_nsec = ns;
-	kvm->arch.last_tsc_write = data;
-	kvm->arch.last_tsc_khz = vcpu->arch.virtual_tsc_khz;
-
-	vcpu->arch.last_guest_tsc = data;
+	__kvm_synchronize_tsc(vcpu, offset, data, ns, matched);

-	/* Keep track of which generation this VCPU has synchronized to */
-	vcpu->arch.this_tsc_generation = kvm->arch.cur_tsc_generation;
-	vcpu->arch.this_tsc_nsec = kvm->arch.cur_tsc_nsec;
-	vcpu->arch.this_tsc_write = kvm->arch.cur_tsc_write;
-
-	kvm_vcpu_write_tsc_offset(vcpu, offset);
 	raw_spin_unlock_irqrestore(&kvm->arch.tsc_write_lock, flags);
-
-	spin_lock_irqsave(&kvm->arch.pvclock_gtod_sync_lock, flags);
-	if (!matched) {
-		kvm->arch.nr_vcpus_matched_tsc = 0;
-	} else if (!already_matched) {
-		kvm->arch.nr_vcpus_matched_tsc++;
-	}
-
-	kvm_track_tsc_matching(vcpu);
-	spin_unlock_irqrestore(&kvm->arch.pvclock_gtod_sync_lock, flags);
 }

 static inline void adjust_tsc_offset_guest(struct kvm_vcpu *vcpu,
-- 
2.32.0.402.g57bb445576-goog
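
For illustration, a minimal sketch (not part of this patch) of how a new
caller would participate in TSC synchronization through
__kvm_synchronize_tsc(). The lock and helper names come from the diff
above; get_host_ns() is a hypothetical stand-in for however the caller
obtains the host nanosecond timestamp:

/*
 * Sketch only: a caller that writes the guest TSC and participates in
 * the synchronization machinery. get_host_ns() is hypothetical.
 */
static void example_write_guest_tsc(struct kvm_vcpu *vcpu, u64 tsc)
{
	struct kvm *kvm = vcpu->kvm;
	unsigned long flags;
	u64 offset, ns;

	/* Lock order: tsc_write_lock is taken outside pvclock_gtod_sync_lock. */
	raw_spin_lock_irqsave(&kvm->arch.tsc_write_lock, flags);

	offset = kvm_compute_l1_tsc_offset(vcpu, tsc);
	ns = get_host_ns();

	/*
	 * Passing matched=false starts a new TSC generation; a real caller
	 * would compare against kvm->arch.last_tsc_* (as kvm_synchronize_tsc
	 * does) to decide whether this write extends the current generation.
	 */
	__kvm_synchronize_tsc(vcpu, offset, tsc, ns, false);

	raw_spin_unlock_irqrestore(&kvm->arch.tsc_write_lock, flags);
}

The invariant to preserve is the lock order documented in locking.rst
above: tsc_write_lock outside pvclock_gtod_sync_lock, the latter being
taken inside __kvm_synchronize_tsc() itself.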