From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Wed, 4 Aug 2021 08:58:02 +0000
In-Reply-To: <20210804085819.846610-1-oupton@google.com>
Message-Id: <20210804085819.846610-5-oupton@google.com>
Mime-Version: 1.0
References: <20210804085819.846610-1-oupton@google.com>
X-Mailer: git-send-email 2.32.0.605.g8dce9f2422-goog
Subject: [PATCH v6 04/21] KVM: x86: Refactor tsc synchronization code
From: Oliver Upton
To: kvm@vger.kernel.org, kvmarm@lists.cs.columbia.edu
Cc: Paolo Bonzini, Sean Christopherson, Marc Zyngier, Peter Shier,
    Jim Mattson, David Matlack, Ricardo Koller, Jing Zhang,
    Raghavendra Rao Anata, James Morse, Alexandru Elisei,
    Suzuki K Poulose, linux-arm-kernel@lists.infradead.org, Andrew Jones,
    Will Deacon, Catalin Marinas, Oliver Upton
Content-Type: text/plain; charset="UTF-8"
Precedence: bulk
X-Mailing-List: kvm@vger.kernel.org

Refactor kvm_synchronize_tsc to make a new function that allows callers
to specify TSC parameters (offset, value, nanoseconds, etc.) explicitly
for the sake of participating in TSC synchronization.

Signed-off-by: Oliver Upton
---
 arch/x86/kvm/x86.c | 105 ++++++++++++++++++++++++++-------------------
 1 file changed, 61 insertions(+), 44 deletions(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 93b449761fbe..91aea751d621 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -2443,13 +2443,71 @@ static inline bool kvm_check_tsc_unstable(void)
         return check_tsc_unstable();
 }
 
+/*
+ * Infers attempts to synchronize the guest's tsc from host writes. Sets the
+ * offset for the vcpu and tracks the TSC matching generation that the vcpu
+ * participates in.
+ */
+static void __kvm_synchronize_tsc(struct kvm_vcpu *vcpu, u64 offset, u64 tsc,
+                                  u64 ns, bool matched)
+{
+        struct kvm *kvm = vcpu->kvm;
+        bool already_matched;
+
+        lockdep_assert_held(&kvm->arch.tsc_write_lock);
+
+        already_matched =
+               (vcpu->arch.this_tsc_generation == kvm->arch.cur_tsc_generation);
+
+        /*
+         * We track the most recent recorded KHZ, write and time to
+         * allow the matching interval to be extended at each write.
+         */
+        kvm->arch.last_tsc_nsec = ns;
+        kvm->arch.last_tsc_write = tsc;
+        kvm->arch.last_tsc_khz = vcpu->arch.virtual_tsc_khz;
+
+        vcpu->arch.last_guest_tsc = tsc;
+
+        /* Keep track of which generation this VCPU has synchronized to */
+        vcpu->arch.this_tsc_generation = kvm->arch.cur_tsc_generation;
+        vcpu->arch.this_tsc_nsec = kvm->arch.cur_tsc_nsec;
+        vcpu->arch.this_tsc_write = kvm->arch.cur_tsc_write;
+
+        kvm_vcpu_write_tsc_offset(vcpu, offset);
+
+        if (!matched) {
+                /*
+                 * We split periods of matched TSC writes into generations.
+                 * For each generation, we track the original measured
+                 * nanosecond time, offset, and write, so if TSCs are in
+                 * sync, we can match exact offset, and if not, we can match
+                 * exact software computation in compute_guest_tsc()
+                 *
+                 * These values are tracked in kvm->arch.cur_xxx variables.
+                 */
+                kvm->arch.cur_tsc_generation++;
+                kvm->arch.cur_tsc_nsec = ns;
+                kvm->arch.cur_tsc_write = tsc;
+                kvm->arch.cur_tsc_offset = offset;
+
+                spin_lock(&kvm->arch.pvclock_gtod_sync_lock);
+                kvm->arch.nr_vcpus_matched_tsc = 0;
+        } else if (!already_matched) {
+                spin_lock(&kvm->arch.pvclock_gtod_sync_lock);
+                kvm->arch.nr_vcpus_matched_tsc++;
+        }
+
+        kvm_track_tsc_matching(vcpu);
+        spin_unlock(&kvm->arch.pvclock_gtod_sync_lock);
+}
+
 static void kvm_synchronize_tsc(struct kvm_vcpu *vcpu, u64 data)
 {
         struct kvm *kvm = vcpu->kvm;
         u64 offset, ns, elapsed;
         unsigned long flags;
-        bool matched;
-        bool already_matched;
+        bool matched = false;
         bool synchronizing = false;
 
         raw_spin_lock_irqsave(&kvm->arch.tsc_write_lock, flags);
@@ -2495,50 +2553,9 @@ static void kvm_synchronize_tsc(struct kvm_vcpu *vcpu, u64 data)
                         offset = kvm_compute_l1_tsc_offset(vcpu, data);
                 }
                 matched = true;
-                already_matched = (vcpu->arch.this_tsc_generation == kvm->arch.cur_tsc_generation);
-        } else {
-                /*
-                 * We split periods of matched TSC writes into generations.
-                 * For each generation, we track the original measured
-                 * nanosecond time, offset, and write, so if TSCs are in
-                 * sync, we can match exact offset, and if not, we can match
-                 * exact software computation in compute_guest_tsc()
-                 *
-                 * These values are tracked in kvm->arch.cur_xxx variables.
-                 */
-                kvm->arch.cur_tsc_generation++;
-                kvm->arch.cur_tsc_nsec = ns;
-                kvm->arch.cur_tsc_write = data;
-                kvm->arch.cur_tsc_offset = offset;
-                matched = false;
         }
 
-        /*
-         * We also track th most recent recorded KHZ, write and time to
-         * allow the matching interval to be extended at each write.
-         */
-        kvm->arch.last_tsc_nsec = ns;
-        kvm->arch.last_tsc_write = data;
-        kvm->arch.last_tsc_khz = vcpu->arch.virtual_tsc_khz;
-
-        vcpu->arch.last_guest_tsc = data;
-
-        /* Keep track of which generation this VCPU has synchronized to */
-        vcpu->arch.this_tsc_generation = kvm->arch.cur_tsc_generation;
-        vcpu->arch.this_tsc_nsec = kvm->arch.cur_tsc_nsec;
-        vcpu->arch.this_tsc_write = kvm->arch.cur_tsc_write;
-
-        kvm_vcpu_write_tsc_offset(vcpu, offset);
-
-        spin_lock_irqsave(&kvm->arch.pvclock_gtod_sync_lock, flags);
-        if (!matched) {
-                kvm->arch.nr_vcpus_matched_tsc = 0;
-        } else if (!already_matched) {
-                kvm->arch.nr_vcpus_matched_tsc++;
-        }
-
-        kvm_track_tsc_matching(vcpu);
-        spin_unlock_irqrestore(&kvm->arch.pvclock_gtod_sync_lock, flags);
+        __kvm_synchronize_tsc(vcpu, offset, data, ns, matched);
 
         raw_spin_unlock_irqrestore(&kvm->arch.tsc_write_lock, flags);
 }
-- 
2.32.0.605.g8dce9f2422-goog
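
Editor's note: the patch moves all of the generation bookkeeping behind
__kvm_synchronize_tsc, which takes the TSC write value, the host timestamp,
and the computed offset as explicit arguments, so later callers can drive
synchronization from paths other than a guest MSR write. The fragment below
is an illustrative sketch only, not part of the patch: it shows how a
hypothetical caller could participate in the same generation tracking.
host_ns() is a made-up stand-in for whatever clock source such a caller
would sample; the remaining identifiers are the ones visible in the diff
above.

/* Illustrative sketch, not part of the patch. */
static void example_restore_guest_tsc(struct kvm_vcpu *vcpu, u64 guest_tsc)
{
        struct kvm *kvm = vcpu->kvm;
        unsigned long flags;
        u64 offset, ns;

        raw_spin_lock_irqsave(&kvm->arch.tsc_write_lock, flags);

        ns = host_ns();         /* hypothetical host timestamp helper */
        offset = kvm_compute_l1_tsc_offset(vcpu, guest_tsc);

        /*
         * Passing matched=false opens a new TSC generation instead of
         * joining an existing one; __kvm_synchronize_tsc does the rest of
         * the bookkeeping under tsc_write_lock.
         */
        __kvm_synchronize_tsc(vcpu, offset, guest_tsc, ns, false);

        raw_spin_unlock_irqrestore(&kvm->arch.tsc_write_lock, flags);
}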