From: Oliver Upton <oupton@google.com>
Date: Thu, 29 Jul 2021 17:32:49 +0000
Subject: [PATCH v5 02/13] KVM: x86: Refactor tsc synchronization code
Message-Id: <20210729173300.181775-3-oupton@google.com>
In-Reply-To: <20210729173300.181775-1-oupton@google.com>
References: <20210729173300.181775-1-oupton@google.com>
To: kvm@vger.kernel.org, kvmarm@lists.cs.columbia.edu
Cc: Paolo Bonzini, Sean Christopherson, Marc Zyngier, Peter Shier,
    Jim Mattson, David Matlack, Ricardo Koller, Jing Zhang,
    Raghavendra Rao Anata, James Morse, Alexandru Elisei,
    Suzuki K Poulose, linux-arm-kernel@lists.infradead.org,
    Andrew Jones, Oliver Upton

Refactor kvm_synchronize_tsc to extract a new function that allows
callers to explicitly specify the TSC parameters (offset, value,
nanoseconds, etc.) used to participate in TSC synchronization.

This changes the locking semantics around TSC writes: writes to the TSC
now take the pvclock gtod lock while holding the tsc write lock, whereas
before these locks were disjoint.

Reviewed-by: David Matlack
Signed-off-by: Oliver Upton
---
 Documentation/virt/kvm/locking.rst |  11 +++
 arch/x86/kvm/x86.c                 | 106 +++++++++++++++++------------
 2 files changed, 74 insertions(+), 43 deletions(-)

diff --git a/Documentation/virt/kvm/locking.rst b/Documentation/virt/kvm/locking.rst
index 8138201efb09..0bf346adac2a 100644
--- a/Documentation/virt/kvm/locking.rst
+++ b/Documentation/virt/kvm/locking.rst
@@ -36,6 +36,9 @@ On x86:
   holding kvm->arch.mmu_lock (typically with ``read_lock``, otherwise
   there's no need to take kvm->arch.tdp_mmu_pages_lock at all).
 
+- kvm->arch.tsc_write_lock is taken outside
+  kvm->arch.pvclock_gtod_sync_lock
+
 Everything else is a leaf: no other lock is taken inside the critical
 sections.
 
@@ -222,6 +225,14 @@ time it will be set using the Dirty tracking mechanism described above.
 :Comment:	'raw' because hardware enabling/disabling must be atomic /wrt
 		migration.
 
+:Name:		kvm_arch::pvclock_gtod_sync_lock
+:Type:		raw_spinlock_t
+:Arch:		x86
+:Protects:	kvm_arch::{cur_tsc_generation,cur_tsc_nsec,cur_tsc_write,
+			cur_tsc_offset,nr_vcpus_matched_tsc}
+:Comment:	'raw' because updating the kvm master clock must not be
+		preempted.
+
 :Name:		kvm_arch::tsc_write_lock
 :Type:		raw_spinlock
 :Arch:		x86
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index e052c7afaac4..27435a07fb46 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -2443,13 +2443,73 @@ static inline bool kvm_check_tsc_unstable(void)
 	return check_tsc_unstable();
 }
 
+/*
+ * Infers attempts to synchronize the guest's tsc from host writes. Sets the
+ * offset for the vcpu and tracks the TSC matching generation that the vcpu
+ * participates in.
+ *
+ * Must hold kvm->arch.tsc_write_lock to call this function.
+ */
+static void __kvm_synchronize_tsc(struct kvm_vcpu *vcpu, u64 offset, u64 tsc,
+				  u64 ns, bool matched)
+{
+	struct kvm *kvm = vcpu->kvm;
+	bool already_matched;
+	unsigned long flags;
+
+	lockdep_assert_held(&kvm->arch.tsc_write_lock);
+
+	already_matched =
+	       (vcpu->arch.this_tsc_generation == kvm->arch.cur_tsc_generation);
+
+	/*
+	 * We track the most recent recorded KHZ, write and time to
+	 * allow the matching interval to be extended at each write.
+	 */
+	kvm->arch.last_tsc_nsec = ns;
+	kvm->arch.last_tsc_write = tsc;
+	kvm->arch.last_tsc_khz = vcpu->arch.virtual_tsc_khz;
+
+	vcpu->arch.last_guest_tsc = tsc;
+
+	/* Keep track of which generation this VCPU has synchronized to */
+	vcpu->arch.this_tsc_generation = kvm->arch.cur_tsc_generation;
+	vcpu->arch.this_tsc_nsec = kvm->arch.cur_tsc_nsec;
+	vcpu->arch.this_tsc_write = kvm->arch.cur_tsc_write;
+
+	kvm_vcpu_write_tsc_offset(vcpu, offset);
+
+	spin_lock_irqsave(&kvm->arch.pvclock_gtod_sync_lock, flags);
+	if (!matched) {
+		/*
+		 * We split periods of matched TSC writes into generations.
+		 * For each generation, we track the original measured
+		 * nanosecond time, offset, and write, so if TSCs are in
+		 * sync, we can match exact offset, and if not, we can match
+		 * exact software computation in compute_guest_tsc()
+		 *
+		 * These values are tracked in kvm->arch.cur_xxx variables.
+		 */
+		kvm->arch.nr_vcpus_matched_tsc = 0;
+		kvm->arch.cur_tsc_generation++;
+		kvm->arch.cur_tsc_nsec = ns;
+		kvm->arch.cur_tsc_write = tsc;
+		kvm->arch.cur_tsc_offset = offset;
+		matched = false;
+	} else if (!already_matched) {
+		kvm->arch.nr_vcpus_matched_tsc++;
+	}
+
+	kvm_track_tsc_matching(vcpu);
+	spin_unlock_irqrestore(&kvm->arch.pvclock_gtod_sync_lock, flags);
+}
+
 static void kvm_synchronize_tsc(struct kvm_vcpu *vcpu, u64 data)
 {
 	struct kvm *kvm = vcpu->kvm;
 	u64 offset, ns, elapsed;
 	unsigned long flags;
-	bool matched;
-	bool already_matched;
+	bool matched = false;
 	bool synchronizing = false;
 
 	raw_spin_lock_irqsave(&kvm->arch.tsc_write_lock, flags);
@@ -2495,51 +2555,11 @@ static void kvm_synchronize_tsc(struct kvm_vcpu *vcpu, u64 data)
 			offset = kvm_compute_l1_tsc_offset(vcpu, data);
 		}
 		matched = true;
-		already_matched = (vcpu->arch.this_tsc_generation == kvm->arch.cur_tsc_generation);
-	} else {
-		/*
-		 * We split periods of matched TSC writes into generations.
-		 * For each generation, we track the original measured
-		 * nanosecond time, offset, and write, so if TSCs are in
-		 * sync, we can match exact offset, and if not, we can match
-		 * exact software computation in compute_guest_tsc()
-		 *
-		 * These values are tracked in kvm->arch.cur_xxx variables.
-		 */
-		kvm->arch.cur_tsc_generation++;
-		kvm->arch.cur_tsc_nsec = ns;
-		kvm->arch.cur_tsc_write = data;
-		kvm->arch.cur_tsc_offset = offset;
-		matched = false;
 	}
 
-	/*
-	 * We also track th most recent recorded KHZ, write and time to
-	 * allow the matching interval to be extended at each write.
-	 */
-	kvm->arch.last_tsc_nsec = ns;
-	kvm->arch.last_tsc_write = data;
-	kvm->arch.last_tsc_khz = vcpu->arch.virtual_tsc_khz;
-
-	vcpu->arch.last_guest_tsc = data;
+	__kvm_synchronize_tsc(vcpu, offset, data, ns, matched);
 
-	/* Keep track of which generation this VCPU has synchronized to */
-	vcpu->arch.this_tsc_generation = kvm->arch.cur_tsc_generation;
-	vcpu->arch.this_tsc_nsec = kvm->arch.cur_tsc_nsec;
-	vcpu->arch.this_tsc_write = kvm->arch.cur_tsc_write;
-
-	kvm_vcpu_write_tsc_offset(vcpu, offset);
 	raw_spin_unlock_irqrestore(&kvm->arch.tsc_write_lock, flags);
-
-	spin_lock_irqsave(&kvm->arch.pvclock_gtod_sync_lock, flags);
-	if (!matched) {
-		kvm->arch.nr_vcpus_matched_tsc = 0;
-	} else if (!already_matched) {
-		kvm->arch.nr_vcpus_matched_tsc++;
-	}
-
-	kvm_track_tsc_matching(vcpu);
-	spin_unlock_irqrestore(&kvm->arch.pvclock_gtod_sync_lock, flags);
 }
 
 static inline void adjust_tsc_offset_guest(struct kvm_vcpu *vcpu,
-- 
2.32.0.432.gabb21c7263-goog
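
The lock ordering this patch adds to locking.rst can be distilled into a
minimal userspace model. The sketch below uses pthread mutexes to stand in
for the kernel's raw spinlocks; the function and lock names mirror the patch,
but everything here is a hypothetical illustration, not kernel code:

	/*
	 * Model of the nesting rule: tsc_write_lock is the outer lock,
	 * and pvclock_gtod_sync_lock is only taken while it is held.
	 */
	#include <pthread.h>
	#include <stdio.h>

	static pthread_mutex_t tsc_write_lock = PTHREAD_MUTEX_INITIALIZER;
	static pthread_mutex_t pvclock_gtod_sync_lock = PTHREAD_MUTEX_INITIALIZER;

	/* Models __kvm_synchronize_tsc(): caller must hold tsc_write_lock. */
	static void __synchronize_tsc(void)
	{
		pthread_mutex_lock(&pvclock_gtod_sync_lock);	/* inner lock */
		printf("update cur_tsc_* generation state\n");
		pthread_mutex_unlock(&pvclock_gtod_sync_lock);
	}

	/* Models kvm_synchronize_tsc(): takes the outer lock, then calls in. */
	static void synchronize_tsc(void)
	{
		pthread_mutex_lock(&tsc_write_lock);		/* outer lock */
		/* ...compute offset, ns, matched here... */
		__synchronize_tsc();
		pthread_mutex_unlock(&tsc_write_lock);
	}

	int main(void)
	{
		synchronize_tsc();
		return 0;
	}

Before the patch, the gtod lock was taken only after the tsc write lock had
been dropped; the new nesting is what makes the lockdep assertion in
__kvm_synchronize_tsc meaningful.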
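The generation bookkeeping that moves into __kvm_synchronize_tsc() follows
one rule: a non-matching write opens a new generation and resets the match
count, while a matching write from a vCPU not yet counted in the current
generation increments it. A simplified standalone model follows (hypothetical
struct and function names; the ordering of the vCPU generation update is
simplified relative to the patch):

	#include <stdbool.h>
	#include <stdio.h>

	struct kvm_model {
		unsigned long cur_tsc_generation;
		unsigned int nr_vcpus_matched_tsc;
	};

	struct vcpu_model {
		unsigned long this_tsc_generation;
	};

	static void track_generation(struct kvm_model *kvm,
				     struct vcpu_model *vcpu, bool matched)
	{
		/* Computed before any state is updated, as in the patch. */
		bool already_matched =
			(vcpu->this_tsc_generation == kvm->cur_tsc_generation);

		if (!matched) {
			kvm->cur_tsc_generation++;	/* new generation... */
			kvm->nr_vcpus_matched_tsc = 0;	/* ...no vCPU matched yet */
		} else if (!already_matched) {
			kvm->nr_vcpus_matched_tsc++;	/* first match for this vCPU */
		}
		vcpu->this_tsc_generation = kvm->cur_tsc_generation;
	}

	int main(void)
	{
		struct kvm_model kvm = { .cur_tsc_generation = 1 };
		struct vcpu_model vcpu0 = { 0 }, vcpu1 = { 0 };

		track_generation(&kvm, &vcpu0, false);	/* opens generation 2 */
		track_generation(&kvm, &vcpu1, true);	/* vcpu1 matches it */
		printf("generation=%lu matched=%u\n",
		       kvm.cur_tsc_generation, kvm.nr_vcpus_matched_tsc);
		return 0;	/* prints: generation=2 matched=1 */
	}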