From: David Woodhouse <dwmw2@infradead.org>
To: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	x86@kernel.org, "H. Peter Anvin" <hpa@zytor.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	"Paul E. McKenney" <paulmck@kernel.org>,
	linux-kernel@vger.kernel.org, kvm@vger.kernel.org,
	rcu@vger.kernel.org, mimoja@mimoja.de, hewenliang4@huawei.com,
	hushiyuan@huawei.com, luolongjun@huawei.com,
	hejingxian@huawei.com
Subject: [PATCH 08/11] x86/tsc: Avoid synchronizing TSCs with multiple CPUs in parallel
Date: Thu,  9 Dec 2021 15:09:35 +0000
Message-ID: <20211209150938.3518-9-dwmw2@infradead.org>
In-Reply-To: <20211209150938.3518-1-dwmw2@infradead.org>

From: David Woodhouse <dwmw@amazon.co.uk>

The TSC sync algorithm is only designed to do a 1:1 sync between the
source and target CPUs.

In order to enable parallel CPU bringup, serialize the TSC sync itself by
using an atomic_t containing the number of the target CPU whose turn it is.
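
The sketch below is a minimal, self-contained userspace illustration of
the same turn-token pattern, using C11 atomics and pthreads rather than
the kernel's atomic_t; NR_TARGETS and do_sync_with() are made-up stand-ins
for the real source/target handshake and TSC measurement, not part of
this patch.

/*
 * Targets spin on a shared atomic until it holds their own id; the
 * "source" hands out turns one at a time and waits for each target
 * to finish before granting the next.
 */
#include <pthread.h>
#include <sched.h>
#include <stdatomic.h>
#include <stdio.h>

#define NR_TARGETS 4

static atomic_int sync_turn = -1;	/* id of the target whose turn it is */
static atomic_int done_count;		/* number of targets finished */

static void do_sync_with(int id)
{
	/* Placeholder for the real 1:1 measurement against the source. */
	printf("target %d: syncing\n", id);
}

static void *target_thread(void *arg)
{
	int id = (int)(long)arg;

	/* Wait for this target's turn, like the tsc_sync_cpu spin. */
	while (atomic_load(&sync_turn) != id)
		sched_yield();		/* stands in for cpu_relax() */

	do_sync_with(id);
	atomic_fetch_add(&done_count, 1);
	return NULL;
}

int main(void)
{
	pthread_t threads[NR_TARGETS];
	long i;

	for (i = 0; i < NR_TARGETS; i++)
		pthread_create(&threads[i], NULL, target_thread, (void *)i);

	/* Source side: grant one turn at a time, wait for completion. */
	for (i = 0; i < NR_TARGETS; i++) {
		int prev = atomic_load(&done_count);

		atomic_store(&sync_turn, (int)i);
		while (atomic_load(&done_count) == prev)
			sched_yield();
	}

	for (i = 0; i < NR_TARGETS; i++)
		pthread_join(threads[i], NULL);
	return 0;
}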

In the future we should look at inventing a 1:many TSC synchronization
algorithm, perhaps falling back to 1:1 if a warp is observed but running
all the targets in parallel for the common case where no adjustment is
needed. Alternatively, the sync could be skipped completely for cases like
kexec, where we trust that the TSCs were already in sync.

This is perfectly sufficient for the short term though, until we get
those further optimisations.

Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
---
 arch/x86/kernel/tsc_sync.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/arch/x86/kernel/tsc_sync.c b/arch/x86/kernel/tsc_sync.c
index 50a4515fe0ad..4ee247d89a49 100644
--- a/arch/x86/kernel/tsc_sync.c
+++ b/arch/x86/kernel/tsc_sync.c
@@ -202,6 +202,7 @@ bool tsc_store_and_check_tsc_adjust(bool bootcpu)
  * Entry/exit counters that make sure that both CPUs
  * run the measurement code at once:
  */
+static atomic_t tsc_sync_cpu = ATOMIC_INIT(-1);
 static atomic_t start_count;
 static atomic_t stop_count;
 static atomic_t skip_test;
@@ -326,6 +327,8 @@ void check_tsc_sync_source(int cpu)
 		atomic_set(&test_runs, 1);
 	else
 		atomic_set(&test_runs, 3);
+
+	atomic_set(&tsc_sync_cpu, cpu);
 retry:
 	/*
 	 * Wait for the target to start or to skip the test:
@@ -407,6 +410,10 @@ void check_tsc_sync_target(void)
 	if (unsynchronized_tsc())
 		return;
 
+	/* Wait for this CPU's turn */
+	while (atomic_read(&tsc_sync_cpu) != cpu)
+		cpu_relax();
+
 	/*
 	 * Store, verify and sanitize the TSC adjust register. If
 	 * successful skip the test.
-- 
2.31.1


Thread overview: 26+ messages
2021-12-09 15:09 [PATCH 00/11] Parallel CPU bringup for x86_64 David Woodhouse
2021-12-09 15:09 ` [PATCH 01/11] x86/apic/x2apic: Fix parallel handling of cluster_mask David Woodhouse
2021-12-09 15:09 ` [PATCH 02/11] rcu: Kill rnp->ofl_seq and use only rcu_state.ofl_lock for exclusion David Woodhouse
2021-12-09 17:18   ` Paul E. McKenney
2021-12-09 18:52     ` David Woodhouse
2021-12-09 18:31   ` Neeraj Upadhyay
2021-12-09 18:43     ` David Woodhouse
2021-12-09 19:21     ` [PATCH v1.1 " David Woodhouse
2021-12-10  4:26       ` Neeraj Upadhyay
2021-12-13  8:57         ` David Woodhouse
2021-12-13  9:11           ` Neeraj Upadhyay
2021-12-09 15:09 ` [PATCH 03/11] rcu: Add mutex for rcu boost kthread spawning and affinity setting David Woodhouse
2021-12-09 17:20   ` Paul E. McKenney
2021-12-09 15:09 ` [PATCH 04/11] cpu/hotplug: Add dynamic parallel bringup states before CPUHP_BRINGUP_CPU David Woodhouse
2021-12-09 15:09 ` [PATCH 05/11] x86/smpboot: Reference count on smpboot_setup_warm_reset_vector() David Woodhouse
2021-12-09 15:09 ` [PATCH 06/11] x86/smpboot: Split up native_cpu_up into separate phases David Woodhouse
2021-12-09 15:09 ` [PATCH 07/11] cpu/hotplug: Move idle_thread_get() to <linux/smpboot.h> David Woodhouse
2021-12-09 15:09 ` David Woodhouse [this message]
2021-12-09 15:43   ` [PATCH 08/11] x86/tsc: Avoid synchronizing TSCs with multiple CPUs in parallel Peter Zijlstra
2021-12-09 15:50     ` David Woodhouse
2021-12-09 15:09 ` [PATCH 09/11] x86/boot: Support parallel startup of secondary CPUs David Woodhouse
2021-12-09 15:50   ` Peter Zijlstra
2021-12-14 11:33     ` David Woodhouse
2021-12-09 15:09 ` [PATCH 10/11] x86/smp: Bring up secondary CPUs in parallel David Woodhouse
2021-12-09 15:09 ` [PATCH 11/11] x86/kvm: Silence per-cpu pr_info noise about KVM clocks and steal time David Woodhouse
2021-12-09 17:39   ` Paolo Bonzini
