From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Thu, 16 Sep 2021 18:15:36 +0000
In-Reply-To: <20210916181538.968978-1-oupton@google.com>
Message-Id: <20210916181538.968978-6-oupton@google.com>
Mime-Version: 1.0
References: <20210916181538.968978-1-oupton@google.com>
X-Mailer: git-send-email 2.33.0.464.g1972c5931b-goog
Subject: [PATCH v8 5/7] kvm: x86: protect masterclock with a seqcount
From: Oliver Upton
To: kvm@vger.kernel.org, kvmarm@lists.cs.columbia.edu
Cc: Catalin Marinas, Will Deacon, Marc Zyngier, Peter Shier,
 Sean Christopherson, David Matlack, Paolo Bonzini,
 linux-arm-kernel@lists.infradead.org, Jim Mattson

From: Paolo Bonzini

Protect the reference point for kvmclock with a seqcount, so that
kvmclock updates for all vCPUs can proceed in parallel. Xen runstate
updates will also run in parallel and not bounce the kvmclock cacheline.

nr_vcpus_matched_tsc is updated outside pvclock_update_vm_gtod_copy
though, so a spinlock must be kept for that one.

Signed-off-by: Paolo Bonzini
[Oliver - drop unused locals, don't double acquire tsc_write_lock]
Signed-off-by: Oliver Upton
---
 arch/x86/include/asm/kvm_host.h |  7 ++-
 arch/x86/kvm/x86.c              | 83 +++++++++++++++++----------
 2 files changed, 49 insertions(+), 41 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 9c34b5b63e39..5accfe7246ce 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1087,6 +1087,11 @@ struct kvm_arch {
 
 	unsigned long irq_sources_bitmap;
 	s64 kvmclock_offset;
+
+	/*
+	 * This also protects nr_vcpus_matched_tsc which is read from a
+	 * preemption-disabled region, so it must be a raw spinlock.
+	 */
 	raw_spinlock_t tsc_write_lock;
 	u64 last_tsc_nsec;
 	u64 last_tsc_write;
@@ -1097,7 +1102,7 @@ struct kvm_arch {
 	u64 cur_tsc_generation;
 	int nr_vcpus_matched_tsc;
 
-	spinlock_t pvclock_gtod_sync_lock;
+	seqcount_raw_spinlock_t pvclock_sc;
 	bool use_master_clock;
 	u64 master_kernel_ns;
 	u64 master_cycle_now;
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index cb5d5cad5124..29156c49cd11 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -2533,9 +2533,7 @@ static void kvm_synchronize_tsc(struct kvm_vcpu *vcpu, u64 data)
 	vcpu->arch.this_tsc_write = kvm->arch.cur_tsc_write;
 
 	kvm_vcpu_write_tsc_offset(vcpu, offset);
-	raw_spin_unlock_irqrestore(&kvm->arch.tsc_write_lock, flags);
 
-	spin_lock_irqsave(&kvm->arch.pvclock_gtod_sync_lock, flags);
 	if (!matched) {
 		kvm->arch.nr_vcpus_matched_tsc = 0;
 	} else if (!already_matched) {
@@ -2543,7 +2541,7 @@
 	}
 
 	kvm_track_tsc_matching(vcpu);
-	spin_unlock_irqrestore(&kvm->arch.pvclock_gtod_sync_lock, flags);
+	raw_spin_unlock_irqrestore(&kvm->arch.tsc_write_lock, flags);
 }
 
 static inline void adjust_tsc_offset_guest(struct kvm_vcpu *vcpu,
@@ -2731,9 +2729,6 @@ static void pvclock_update_vm_gtod_copy(struct kvm *kvm)
 	int vclock_mode;
 	bool host_tsc_clocksource, vcpus_matched;
 
-	vcpus_matched = (ka->nr_vcpus_matched_tsc + 1 ==
-			 atomic_read(&kvm->online_vcpus));
-
 	/*
 	 * If the host uses TSC clock, then passthrough TSC as stable
 	 * to the guest.
@@ -2742,6 +2737,10 @@ static void pvclock_update_vm_gtod_copy(struct kvm *kvm)
 					&ka->master_kernel_ns,
 					&ka->master_cycle_now);
 
+	lockdep_assert_held(&kvm->arch.tsc_write_lock);
+	vcpus_matched = (ka->nr_vcpus_matched_tsc + 1 ==
+			 atomic_read(&kvm->online_vcpus));
+
 	ka->use_master_clock = host_tsc_clocksource && vcpus_matched
 				&& !ka->backwards_tsc_observed
 				&& !ka->boot_vcpu_runs_old_kvmclock;
@@ -2760,14 +2759,18 @@ static void kvm_make_mclock_inprogress_request(struct kvm *kvm)
 	kvm_make_all_cpus_request(kvm, KVM_REQ_MCLOCK_INPROGRESS);
 }
 
-static void kvm_start_pvclock_update(struct kvm *kvm)
+static void __kvm_start_pvclock_update(struct kvm *kvm)
 {
-	struct kvm_arch *ka = &kvm->arch;
+	raw_spin_lock_irq(&kvm->arch.tsc_write_lock);
+	write_seqcount_begin(&kvm->arch.pvclock_sc);
+}
 
+static void kvm_start_pvclock_update(struct kvm *kvm)
+{
 	kvm_make_mclock_inprogress_request(kvm);
 
 	/* no guest entries from this point */
-	spin_lock_irq(&ka->pvclock_gtod_sync_lock);
+	__kvm_start_pvclock_update(kvm);
 }
 
 static void kvm_end_pvclock_update(struct kvm *kvm)
@@ -2776,7 +2779,8 @@ static void kvm_end_pvclock_update(struct kvm *kvm)
 	struct kvm_vcpu *vcpu;
 	int i;
 
-	spin_unlock_irq(&ka->pvclock_gtod_sync_lock);
+	write_seqcount_end(&ka->pvclock_sc);
+	raw_spin_unlock_irq(&ka->tsc_write_lock);
 
 	kvm_for_each_vcpu(i, vcpu, kvm)
 		kvm_make_request(KVM_REQ_CLOCK_UPDATE, vcpu);
@@ -2797,20 +2801,12 @@ static void get_kvmclock(struct kvm *kvm, struct kvm_clock_data *data)
 {
 	struct kvm_arch *ka = &kvm->arch;
 	struct pvclock_vcpu_time_info hv_clock;
-	unsigned long flags;
 
-	spin_lock_irqsave(&ka->pvclock_gtod_sync_lock, flags);
 	if (!ka->use_master_clock) {
-		spin_unlock_irqrestore(&ka->pvclock_gtod_sync_lock, flags);
 		data->clock = get_kvmclock_base_ns() + ka->kvmclock_offset;
 		return;
 	}
 
-	data->flags |= KVM_CLOCK_TSC_STABLE;
-	hv_clock.tsc_timestamp = ka->master_cycle_now;
-	hv_clock.system_time = ka->master_kernel_ns + ka->kvmclock_offset;
-	spin_unlock_irqrestore(&ka->pvclock_gtod_sync_lock, flags);
-
 	/* both __this_cpu_read() and rdtsc() should be on the same cpu */
 	get_cpu();
 
@@ -2825,6 +2821,9 @@ static void get_kvmclock(struct kvm *kvm, struct kvm_clock_data *data)
 #endif
 		data->host_tsc = rdtsc();
 
+		data->flags |= KVM_CLOCK_TSC_STABLE;
+		hv_clock.tsc_timestamp = ka->master_cycle_now;
+		hv_clock.system_time = ka->master_kernel_ns + ka->kvmclock_offset;
 		kvm_get_time_scale(NSEC_PER_SEC, __this_cpu_read(cpu_tsc_khz) * 1000LL,
 				   &hv_clock.tsc_shift,
 				   &hv_clock.tsc_to_system_mul);
@@ -2839,14 +2838,14 @@ static void get_kvmclock(struct kvm *kvm, struct kvm_clock_data *data)
 u64 get_kvmclock_ns(struct kvm *kvm)
 {
 	struct kvm_clock_data data;
+	struct kvm_arch *ka = &kvm->arch;
+	unsigned seq;
 
-	/*
-	 * Zero flags as it's accessed RMW, leave everything else uninitialized
-	 * as clock is always written and no other fields are consumed.
-	 */
-	data.flags = 0;
-
-	get_kvmclock(kvm, &data);
+	do {
+		seq = read_seqcount_begin(&ka->pvclock_sc);
+		data.flags = 0;
+		get_kvmclock(kvm, &data);
+	} while (read_seqcount_retry(&ka->pvclock_sc, seq));
 
 	return data.clock;
 }
@@ -2912,6 +2911,7 @@ static void kvm_setup_pvclock_page(struct kvm_vcpu *v,
 static int kvm_guest_time_update(struct kvm_vcpu *v)
 {
 	unsigned long flags, tgt_tsc_khz;
+	unsigned seq;
 	struct kvm_vcpu_arch *vcpu = &v->arch;
 	struct kvm_arch *ka = &v->kvm->arch;
 	s64 kernel_ns;
@@ -2926,13 +2926,14 @@ static int kvm_guest_time_update(struct kvm_vcpu *v)
 	 * If the host uses TSC clock, then passthrough TSC as stable
 	 * to the guest.
 	 */
-	spin_lock_irqsave(&ka->pvclock_gtod_sync_lock, flags);
-	use_master_clock = ka->use_master_clock;
-	if (use_master_clock) {
-		host_tsc = ka->master_cycle_now;
-		kernel_ns = ka->master_kernel_ns;
-	}
-	spin_unlock_irqrestore(&ka->pvclock_gtod_sync_lock, flags);
+	seq = read_seqcount_begin(&ka->pvclock_sc);
+	do {
+		use_master_clock = ka->use_master_clock;
+		if (use_master_clock) {
+			host_tsc = ka->master_cycle_now;
+			kernel_ns = ka->master_kernel_ns;
+		}
+	} while (read_seqcount_retry(&ka->pvclock_sc, seq));
 
 	/* Keep irq disabled to prevent changes to the clock */
 	local_irq_save(flags);
@@ -5855,10 +5856,15 @@ int kvm_arch_pm_notifier(struct kvm *kvm, unsigned long state)
 
 static int kvm_vm_ioctl_get_clock(struct kvm *kvm, void __user *argp)
 {
-	struct kvm_clock_data data;
+	struct kvm_clock_data data = { 0 };
+	unsigned seq;
+
+	do {
+		seq = read_seqcount_begin(&kvm->arch.pvclock_sc);
+		data.flags = 0;
+		get_kvmclock(kvm, &data);
+	} while (read_seqcount_retry(&kvm->arch.pvclock_sc, seq));
 
-	memset(&data, 0, sizeof(data));
-	get_kvmclock(kvm, &data);
 	if (copy_to_user(argp, &data, sizeof(data)))
 		return -EFAULT;
 
@@ -8159,9 +8165,7 @@ static void kvm_hyperv_tsc_notifier(void)
 	kvm_max_guest_tsc_khz = tsc_khz;
 
 	list_for_each_entry(kvm, &vm_list, vm_list) {
-		struct kvm_arch *ka = &kvm->arch;
-
-		spin_lock_irq(&ka->pvclock_gtod_sync_lock);
+		__kvm_start_pvclock_update(kvm);
 		pvclock_update_vm_gtod_copy(kvm);
 		kvm_end_pvclock_update(kvm);
 	}
@@ -11188,8 +11192,7 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
 
 	raw_spin_lock_init(&kvm->arch.tsc_write_lock);
 	mutex_init(&kvm->arch.apic_map_lock);
-	spin_lock_init(&kvm->arch.pvclock_gtod_sync_lock);
-
+	seqcount_raw_spinlock_init(&kvm->arch.pvclock_sc, &kvm->arch.tsc_write_lock);
 	kvm->arch.kvmclock_offset = -get_kvmclock_base_ns();
 	pvclock_update_vm_gtod_copy(kvm);
 
-- 
2.33.0.309.g3052b89438-goog
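
For readers new to the primitive this patch adopts, here is a minimal,
self-contained sketch of the seqcount_raw_spinlock_t writer/reader pattern,
using the same <linux/seqlock.h> API the patch calls. The demo_clock struct
and demo_* functions are hypothetical names chosen for illustration; they are
not KVM code. Writers serialize on the associated raw spinlock and bracket
their update with write_seqcount_begin()/write_seqcount_end(); readers take
no lock at all and simply retry if a writer raced with their snapshot, which
is what lets every vCPU read the masterclock fields in parallel.

#include <linux/seqlock.h>
#include <linux/spinlock.h>
#include <linux/types.h>

/* Hypothetical example, not KVM code: two fields published together. */
struct demo_clock {
	raw_spinlock_t lock;			/* writer-side serialization */
	seqcount_raw_spinlock_t seq;		/* associated with 'lock' */
	u64 base_ns;				/* fields guarded by 'seq' */
	u64 base_cycles;
};

static void demo_clock_init(struct demo_clock *c)
{
	raw_spin_lock_init(&c->lock);
	/* Tie the seqcount to the raw spinlock so lockdep can check writers. */
	seqcount_raw_spinlock_init(&c->seq, &c->lock);
}

/* Writer: holds the raw spinlock across the whole update. */
static void demo_clock_update(struct demo_clock *c, u64 ns, u64 cycles)
{
	raw_spin_lock_irq(&c->lock);
	write_seqcount_begin(&c->seq);
	c->base_ns = ns;
	c->base_cycles = cycles;
	write_seqcount_end(&c->seq);
	raw_spin_unlock_irq(&c->lock);
}

/* Reader: lockless; retries if the sequence changed under the snapshot. */
static u64 demo_clock_read_ns(struct demo_clock *c)
{
	unsigned int seq;
	u64 ns;

	do {
		seq = read_seqcount_begin(&c->seq);
		ns = c->base_ns;
	} while (read_seqcount_retry(&c->seq, seq));

	return ns;
}

The choice of a raw spinlock as the write-side lock follows from the comment
the patch adds to kvm_host.h: tsc_write_lock also protects
nr_vcpus_matched_tsc, which is read from a preemption-disabled region, and
binding the seqcount to that same lock means a single acquisition covers both
the TSC-matching state and the masterclock update.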