* [PATCH 1/2] acpi_pm: Fix bootup softlockup due to PMTMR counter read contention
@ 2019-03-14  8:42 Zhenzhong Duan
  2019-03-14  8:42 ` [PATCH 2/2] Revert "x86/hpet: Reduce HPET counter read contention" Zhenzhong Duan
From: Zhenzhong Duan @ 2019-03-14  8:42 UTC (permalink / raw)
  To: linux-kernel
  Cc: Zhenzhong Duan, John Stultz, Thomas Gleixner, Stephen Boyd,
	Waiman Long, Srinivas Eeda

During the bootup stage of a large system with many CPUs booted with
nohpet, PMTMR is temporarily selected as the clocksource, which can lead
to a softlockup for the following reasons:
 1) There is a single PMTMR counter shared by all the CPUs.
 2) Reading the PMTMR counter is a very slow operation (a sketch of the
    read path follows below).
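
For reference, reading the PM timer boils down to a single uncached port
I/O access to one shared device register. A minimal sketch of the read
path (modeled on drivers/clocksource/acpi_pm.c; simplified, not quoted
verbatim):

	static inline u32 read_pmtmr(void)
	{
		/* The PM timer counter is only 24 bits wide */
		return inl(pmtmr_ioport) & ACPI_PM_MASK;
	}

	static u64 acpi_pm_read(struct clocksource *cs)
	{
		/* Every CPU doing timekeeping funnels into this one
		 * slow port read, which is the contention point. */
		return (u64)read_pmtmr();
	}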

At the bootup stage, the tick device is first initialized in periodic
mode and then switched to one-shot mode once a high resolution
clocksource is registered. Between clocksource initialization and the
switch to one-shot mode, there is a small window where timer interrupts
trigger.

Due to PMTMR read contention, the 1ms (HZ=1000) interval isn't enough
for all the CPUs to process the timer interrupt in periodic mode. The
CPUs then end up busy processing interrupts back to back without a
break, tick_clock_notify() never gets a chance to run, and we never
switch to one-shot mode. Finally the system may crash because of an NMI
watchdog soft lockup; logs:

[   20.181521] clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff,
max_idle_ns: 2085701024 ns
[   44.273786] BUG: soft lockup - CPU#48 stuck for 23s! [swapper/48:0]
[   44.279992] BUG: soft lockup - CPU#49 stuck for 23s! [migration/49:307]
[   44.285169] BUG: soft lockup - CPU#50 stuck for 23s! [migration/50:313]

In one-shot mode the contention is still there, but the next event is
always set to a future value. We may miss some ticks, but the timer code
is smart enough to pick them up.
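
To see why a late tick is harmless here, consider a minimal sketch of
the forwarding logic (modeled on the semantics of hrtimer_forward(); a
simplified illustration, not the actual kernel implementation):

	static u64 forward_expiry(u64 expiry, u64 now, u64 period)
	{
		u64 missed;

		/* Expiry still in the future: nothing was missed */
		if (now < expiry)
			return expiry;

		/*
		 * Skip over however many whole periods were missed, so
		 * the next event is always programmed to a time strictly
		 * after 'now' instead of re-triggering immediately.
		 */
		missed = (now - expiry) / period + 1;
		return expiry + missed * period;
	}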

By moving tick_clock_notify() into the stop_machine() callback, the
kernel switches to one-shot mode early, before the contention
accumulates and locks up the system.

This patch also addresses the same issue as commit f99fd22e4d4b ("x86/hpet:
Reduce HPET counter read contention") in a simpler way, so that commit
can be reverted.

Signed-off-by: Zhenzhong Duan <zhenzhong.duan@oracle.com>
Tested-by: Kin Cho <kin.cho@oracle.com>
Cc: John Stultz <john.stultz@linaro.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Stephen Boyd <sboyd@kernel.org>
Cc: Waiman Long <longman@redhat.com>
Cc: Srinivas Eeda <srinivas.eeda@oracle.com>
---
 kernel/time/timekeeping.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/time/timekeeping.c b/kernel/time/timekeeping.c
index f986e19..815c92d 100644
--- a/kernel/time/timekeeping.c
+++ b/kernel/time/timekeeping.c
@@ -1378,6 +1378,7 @@ static int change_clocksource(void *data)
 
 	write_seqcount_end(&tk_core.seq);
 	raw_spin_unlock_irqrestore(&timekeeper_lock, flags);
+	tick_clock_notify();
 
 	return 0;
 }
@@ -1396,7 +1397,6 @@ int timekeeping_notify(struct clocksource *clock)
 	if (tk->tkr_mono.clock == clock)
 		return 0;
 	stop_machine(change_clocksource, clock, NULL);
-	tick_clock_notify();
 	return tk->tkr_mono.clock == clock ? 0 : -1;
 }
 
-- 
1.8.3.1



* [PATCH 2/2] Revert "x86/hpet: Reduce HPET counter read contention"
  2019-03-14  8:42 [PATCH 1/2] acpi_pm: Fix bootup softlockup due to PMTMR counter read contention Zhenzhong Duan
@ 2019-03-14  8:42 ` Zhenzhong Duan
  2019-03-15  9:25   ` Peter Zijlstra
From: Zhenzhong Duan @ 2019-03-14  8:42 UTC (permalink / raw)
  To: linux-kernel
  Cc: Zhenzhong Duan, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	H. Peter Anvin, Waiman Long, Srinivas Eeda, x86

This reverts commit f99fd22e4d4bc84880a8a3117311bbf0e3a6a9dc.

It's unnecessary after commit "acpi_pm: Fix bootup softlockup due to PMTMR
counter read contention"; the simple HPET access code can be restored.

On a typical system with a good TSC, TSC is the final default clocksource,
so the potential performance loss is limited to the bootup stage before
TSC replaces HPET; we didn't observe any obvious bootup delay.

Signed-off-by: Zhenzhong Duan <zhenzhong.duan@oracle.com>
Tested-by: Kin Cho <kin.cho@oracle.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Waiman Long <longman@redhat.com>
Cc: Srinivas Eeda <srinivas.eeda@oracle.com>
Cc: x86@kernel.org
---
 arch/x86/kernel/hpet.c | 94 --------------------------------------------------
 1 file changed, 94 deletions(-)

diff --git a/arch/x86/kernel/hpet.c b/arch/x86/kernel/hpet.c
index dfd3aca..b4fdee6a 100644
--- a/arch/x86/kernel/hpet.c
+++ b/arch/x86/kernel/hpet.c
@@ -749,104 +749,10 @@ static void hpet_reserve_msi_timers(struct hpet_data *hd)
 /*
  * Clock source related code
  */
-#if defined(CONFIG_SMP) && defined(CONFIG_64BIT)
-/*
- * Reading the HPET counter is a very slow operation. If a large number of
- * CPUs are trying to access the HPET counter simultaneously, it can cause
- * massive delay and slow down system performance dramatically. This may
- * happen when HPET is the default clock source instead of TSC. For a
- * really large system with hundreds of CPUs, the slowdown may be so
- * severe that it may actually crash the system because of a NMI watchdog
- * soft lockup, for example.
- *
- * If multiple CPUs are trying to access the HPET counter at the same time,
- * we don't actually need to read the counter multiple times. Instead, the
- * other CPUs can use the counter value read by the first CPU in the group.
- *
- * This special feature is only enabled on x86-64 systems. It is unlikely
- * that 32-bit x86 systems will have enough CPUs to require this feature
- * with its associated locking overhead. And we also need 64-bit atomic
- * read.
- *
- * The lock and the hpet value are stored together and can be read in a
- * single atomic 64-bit read. It is explicitly assumed that arch_spinlock_t
- * is 32 bits in size.
- */
-union hpet_lock {
-	struct {
-		arch_spinlock_t lock;
-		u32 value;
-	};
-	u64 lockval;
-};
-
-static union hpet_lock hpet __cacheline_aligned = {
-	{ .lock = __ARCH_SPIN_LOCK_UNLOCKED, },
-};
-
-static u64 read_hpet(struct clocksource *cs)
-{
-	unsigned long flags;
-	union hpet_lock old, new;
-
-	BUILD_BUG_ON(sizeof(union hpet_lock) != 8);
-
-	/*
-	 * Read HPET directly if in NMI.
-	 */
-	if (in_nmi())
-		return (u64)hpet_readl(HPET_COUNTER);
-
-	/*
-	 * Read the current state of the lock and HPET value atomically.
-	 */
-	old.lockval = READ_ONCE(hpet.lockval);
-
-	if (arch_spin_is_locked(&old.lock))
-		goto contended;
-
-	local_irq_save(flags);
-	if (arch_spin_trylock(&hpet.lock)) {
-		new.value = hpet_readl(HPET_COUNTER);
-		/*
-		 * Use WRITE_ONCE() to prevent store tearing.
-		 */
-		WRITE_ONCE(hpet.value, new.value);
-		arch_spin_unlock(&hpet.lock);
-		local_irq_restore(flags);
-		return (u64)new.value;
-	}
-	local_irq_restore(flags);
-
-contended:
-	/*
-	 * Contended case
-	 * --------------
-	 * Wait until the HPET value change or the lock is free to indicate
-	 * its value is up-to-date.
-	 *
-	 * It is possible that old.value has already contained the latest
-	 * HPET value while the lock holder was in the process of releasing
-	 * the lock. Checking for lock state change will enable us to return
-	 * the value immediately instead of waiting for the next HPET reader
-	 * to come along.
-	 */
-	do {
-		cpu_relax();
-		new.lockval = READ_ONCE(hpet.lockval);
-	} while ((new.value == old.value) && arch_spin_is_locked(&new.lock));
-
-	return (u64)new.value;
-}
-#else
-/*
- * For UP or 32-bit.
- */
 static u64 read_hpet(struct clocksource *cs)
 {
 	return (u64)hpet_readl(HPET_COUNTER);
 }
-#endif
 
 static struct clocksource clocksource_hpet = {
 	.name		= "hpet",
-- 
1.8.3.1



* Re: [PATCH 2/2] Revert "x86/hpet: Reduce HPET counter read contention"
  2019-03-14  8:42 ` [PATCH 2/2] Revert "x86/hpet: Reduce HPET counter read contention" Zhenzhong Duan
@ 2019-03-15  9:25   ` Peter Zijlstra
  2019-03-15  9:29     ` Peter Zijlstra
  2019-03-15 14:17     ` Waiman Long
From: Peter Zijlstra @ 2019-03-15  9:25 UTC (permalink / raw)
  To: Zhenzhong Duan
  Cc: linux-kernel, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	H. Peter Anvin, Waiman Long, Srinivas Eeda, x86

On Thu, Mar 14, 2019 at 04:42:12PM +0800, Zhenzhong Duan wrote:
> This reverts commit f99fd22e4d4bc84880a8a3117311bbf0e3a6a9dc.
> 
> It's unnecessary after commit "acpi_pm: Fix bootup softlockup due to PMTMR
> counter read contention"; the simple HPET access code can be restored.
> 
> On a typical system with a good TSC, TSC is the final default clocksource,
> so the potential performance loss is limited to the bootup stage before
> TSC replaces HPET; we didn't observe any obvious bootup delay.

The timeline here is:

 - Len took out SKX from native_calibrate_tsc
   b51120309348 ("x86/tsc: Fix erroneous TSC rate on Skylake Xeon")

   This causes the TSC to run through the calibration code, which
   completes _after_ SMP bringup.

 - This then caused HPET to be used during SMP bringup, which resulted
   in Waiman doing the patch you now propose removing.

   Because large (multi-socket) SKX machines would barely boot.

   f99fd22e4d4b ("x86/hpet: Reduce HPET counter read contention")

 - Now, I figured that was all crazy to begin with, and introduced
   clocksource_tsc_early, such that we can run at the guesstimated TSC
   frequency until we've completed calibration and then swap to the real
   TSC clocksource.

   aa83c45762a2 ("x86/tsc: Introduce early tsc clocksource")
   (and assorted fixes)

This means that we now only use HPET for a very short time in early
boot, _IFF_ TSC is stable.

Now, given the amount of wreckage we still see with TSC, I'm very
reluctant to revert this patch. Because the moment TSC goes out the
window, we're back on HPET, and this patch does make a huge difference.

Yes, it's sad, gross and nasty... but the same is true for TSC still being
a trainwreck.

So NAK.


* Re: [PATCH 2/2] Revert "x86/hpet: Reduce HPET counter read contention"
  2019-03-15  9:25   ` Peter Zijlstra
@ 2019-03-15  9:29     ` Peter Zijlstra
  2019-03-15 14:17     ` Waiman Long
From: Peter Zijlstra @ 2019-03-15  9:29 UTC (permalink / raw)
  To: Zhenzhong Duan
  Cc: linux-kernel, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	H. Peter Anvin, Waiman Long, Srinivas Eeda, x86

On Fri, Mar 15, 2019 at 10:25:29AM +0100, Peter Zijlstra wrote:
> On Thu, Mar 14, 2019 at 04:42:12PM +0800, Zhenzhong Duan wrote:
> > This reverts commit f99fd22e4d4bc84880a8a3117311bbf0e3a6a9dc.
> > 
> > It's unnecessary after commit "acpi_pm: Fix bootup softlockup due to PMTMR
> > counter read contention"; the simple HPET access code can be restored.
> > 
> > On a typical system with a good TSC, TSC is the final default clocksource,
> > so the potential performance loss is limited to the bootup stage before
> > TSC replaces HPET; we didn't observe any obvious bootup delay.
> 
> The timeline here is:
> 
>  - Len took out SKX from native_calibrate_tsc
>    b51120309348 ("x86/tsc: Fix erroneous TSC rate on Skylake Xeon")
> 
>    This causes the TSC to run through the calibration code, which
>    completes _after_ SMP bringup.
> 
>  - This then caused HPET to be used during SMP bringup, which resulted
>    in Waiman doing the patch you now propose removing.
> 
>    Because large (multi-socket) SKX machines would barely boot.
> 
>    f99fd22e4d4b ("x86/hpet: Reduce HPET counter read contention")

Damn, my memory tricked me... I just checked the dates on those patches
and I got it in reverse.

Anyway, the point still stands: when TSC is wrecked we still need HPET.
And you don't see a difference with the revert because of
clocksource_tsc_early, which didn't exist at the time.




* Re: [PATCH 2/2] Revert "x86/hpet: Reduce HPET counter read contention"
  2019-03-15  9:25   ` Peter Zijlstra
  2019-03-15  9:29     ` Peter Zijlstra
@ 2019-03-15 14:17     ` Waiman Long
  2019-03-18  8:44       ` Zhenzhong Duan
From: Waiman Long @ 2019-03-15 14:17 UTC (permalink / raw)
  To: Peter Zijlstra, Zhenzhong Duan
  Cc: linux-kernel, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	H. Peter Anvin, Srinivas Eeda, x86

On 03/15/2019 05:25 AM, Peter Zijlstra wrote:
> On Thu, Mar 14, 2019 at 04:42:12PM +0800, Zhenzhong Duan wrote:
>> This reverts commit f99fd22e4d4bc84880a8a3117311bbf0e3a6a9dc.
>>
>> It's unnecessary after commit "acpi_pm: Fix bootup softlockup due to PMTMR
>> counter read contention"; the simple HPET access code can be restored.
>>
>> On a typical system with a good TSC, TSC is the final default clocksource,
>> so the potential performance loss is limited to the bootup stage before
>> TSC replaces HPET; we didn't observe any obvious bootup delay.
> The timeline here is:
>
>  - Len took out SKX from native_calibrate_tsc
>    b51120309348 ("x86/tsc: Fix erroneous TSC rate on Skylake Xeon")
>
>    This causes the TSC to run through the calibration code, which
>    completes _after_ SMP bringup.
>
>  - This then caused HPET to be used during SMP bringup, which resulted
>    in Waiman doing the patch you now propose removing.
>
>    Because large (multi-socket) SKX machines would barely boot.
>
>    f99fd22e4d4b ("x86/hpet: Reduce HPET counter read contention")
>
>  - Now, I figured that was all crazy to begin with, and introduced
>    clocksource_tsc_early, such that we can run at the guesstimated TSC
>    frequency until we've completed calibration and then swap to the real
>    TSC clocksource.
>
>    aa83c45762a2 ("x86/tsc: Introduce early tsc clocksource")
>    (and assorted fixes)
>
> This means that we now only use HPET for a very short time in early
> boot, _IFF_ TSC is stable.
>
> Now, given the amount of wreckage we still see with TSC, I'm very
> reluctant to revert this patch. Because the moment TSC goes out the
> window, we're back on HPET, and this patch does make a huge difference.
>
> Yes, it's sad, gross and nasty... but the same is true for TSC still being
> a trainwreck.
>
> So NAK.

I concur. In the uncontended case, the overhead is mostly just the
additional cmpxchg instruction for acquiring the spinlock. Even then, it
isn't significant when compared with the time needed to actually read
from the HPET. Without that code, any fallback to HPET for whatever
reason will likely see a performance degradation, especially on systems
with a large number of CPUs.

Cheers,
Longman



* Re: [PATCH 2/2] Revert "x86/hpet: Reduce HPET counter read contention"
  2019-03-15 14:17     ` Waiman Long
@ 2019-03-18  8:44       ` Zhenzhong Duan
  2019-03-18 15:35         ` Waiman Long
From: Zhenzhong Duan @ 2019-03-18  8:44 UTC (permalink / raw)
  To: Waiman Long, Peter Zijlstra
  Cc: linux-kernel, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	H. Peter Anvin, Srinivas Eeda, x86


On 2019/3/15 22:17, Waiman Long wrote:
> On 03/15/2019 05:25 AM, Peter Zijlstra wrote:
>> On Thu, Mar 14, 2019 at 04:42:12PM +0800, Zhenzhong Duan wrote:
>>> This reverts commit f99fd22e4d4bc84880a8a3117311bbf0e3a6a9dc.
>>>
>>> It's unnecessary after commit "acpi_pm: Fix bootup softlockup due to PMTMR
>>> counter read contention"; the simple HPET access code can be restored.
>>>
>>> On a typical system with a good TSC, TSC is the final default clocksource,
>>> so the potential performance loss is limited to the bootup stage before
>>> TSC replaces HPET; we didn't observe any obvious bootup delay.
>> The timeline here is:
>>
>>   - Len took out SKX from native_calibrate_tsc
>>     b51120309348 ("x86/tsc: Fix erroneous TSC rate on Skylake Xeon")
>>
>>     This causes the TSC to run through the calibration code, which
>>     completes _after_ SMP bringup.
>>
>>   - This then caused HPET to be used during SMP bringup, which resulted
>>     in Waiman doing the patch you now propose removing.
>>
>>     Because large (multi-socket) SKX machines would barely boot.
>>
>>     f99fd22e4d4b ("x86/hpet: Reduce HPET counter read contention")
>>
>>   - Now, I figured that was all crazy to begin with, and introduced
>>     clocksource_tsc_early, such that we can run at the guesstimated TSC
>>     frequency until we've completed calibration and then swap to the real
>>     TSC clocksource.
>>
>>     aa83c45762a2 ("x86/tsc: Introduce early tsc clocksource")
>>     (and assorted fixes)
>>
>> This means that we now only use HPET for a very short time in early
>> boot, _IFF_ TSC is stable.
>>
>> Now, given the amount of wreckage we still see with TSC, I'm very
>> reluctant to revert this patch. Because the moment TSC goes out the
>> window, we're back on HPET, and this patch does make a huge difference.
>>
>> Yes, it's sad, gross and nasty... but the same is true for TSC still being
>> a trainwreck.
>>
>> So NAK.
> I concur. In the uncontended case, the overhead is mostly just the
> additional cmpxchg instruction for acquiring the spinlock. Even then, it
> isn't significant when compared with the time needed to actually read
> from the HPET. Without that code, any fallback to HPET for whatever
> reason will likely see a performance degradation, especially on systems
> with a large number of CPUs.

Thanks Peter and Waiman for the replies.

I see, we still care about the performance on a system with a wrecked TSC.

So now we come back to the old question: do we care about the softlockup
and the performance when pmtmr is chosen for whatever reason?

For those I had provided two different fixes:

https://lkml.org/lkml/2019/1/22/1172

and

https://lkml.org/lkml/2019/3/15/101

-- 
Thanks
Zhenzhong



* Re: [PATCH 2/2] Revert "x86/hpet: Reduce HPET counter read contention"
  2019-03-18  8:44       ` Zhenzhong Duan
@ 2019-03-18 15:35         ` Waiman Long
  2019-03-20 10:24           ` Thomas Gleixner
From: Waiman Long @ 2019-03-18 15:35 UTC (permalink / raw)
  To: Zhenzhong Duan, Peter Zijlstra
  Cc: linux-kernel, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	H. Peter Anvin, Srinivas Eeda, x86

On 03/18/2019 04:44 AM, Zhenzhong Duan wrote:
>
> On 2019/3/15 22:17, Waiman Long wrote:
>> On 03/15/2019 05:25 AM, Peter Zijlstra wrote:
>>> On Thu, Mar 14, 2019 at 04:42:12PM +0800, Zhenzhong Duan wrote:
>>>> This reverts commit f99fd22e4d4bc84880a8a3117311bbf0e3a6a9dc.
>>>>
>>>> It's unnecessary after commit "acpi_pm: Fix bootup softlockup due
>>>> to PMTMR counter read contention"; the simple HPET access code can
>>>> be restored.
>>>>
>>>> On a typical system with a good TSC, TSC is the final default
>>>> clocksource, so the potential performance loss is limited to the
>>>> bootup stage before TSC replaces HPET; we didn't observe any obvious
>>>> bootup delay.
>>> The timeline here is:
>>>
>>>   - Len took out SKX from native_calibrate_tsc
>>>     b51120309348 ("x86/tsc: Fix erroneous TSC rate on Skylake Xeon")
>>>
>>>     This causes the TSC to run through the calibration code, which
>>>     completes _after_ SMP bringup.
>>>
>>>   - This then caused HPET to be used during SMP bringup, which resulted
>>>     in Waiman doing the patch you now propose removing.
>>>
>>>     Because large (multi-socket) SKX machines would barely boot.
>>>
>>>     f99fd22e4d4b ("x86/hpet: Reduce HPET counter read contention")
>>>
>>>   - Now, I figured that was all crazy to begin with, and introduced
>>>     clocksource_tsc_early, such that we can run at the guesstimated TSC
>>>     frequency until we've completed calibration and then swap to the
>>> real
>>>     TSC clocksource.
>>>
>>>     aa83c45762a2 ("x86/tsc: Introduce early tsc clocksource")
>>>     (and assorted fixes)
>>>
>>> This means that we now only use HPET for a very short time in early
>>> boot, _IFF_ TSC is stable.
>>>
>>> Now, given the amount of wreckage we still see with TSC, I'm very
>>> reluctant to revert this patch. Because the moment TSC goes out the
>>> window, we're back on HPET, and this patch does make a huge difference.
>>>
>>> Yes, it's sad, gross and nasty... but the same is true for TSC still
>>> being
>>> a trainwreck.
>>>
>>> So NAK.
>> I concur. In the uncontended case, the overhead is mostly just the
>> additional cmpxchg instruction for acquiring the spinlock. Even then, it
>> isn't significant when compared with the time needed to actually read
>> from the HPET. Without that code, any fallback to HPET for whatever
>> reason will likely see a performance degradation, especially on systems
>> with a large number of CPUs.
>
> Thanks Peter and Waiman for the replies.
>
> I see, we still care about the performance on a system with a wrecked TSC.
>
> So now we come back to the old question: do we care about the softlockup
> and the performance when pmtmr is chosen for whatever reason?
>
> For those I had provided two different fixes:
>
> https://lkml.org/lkml/2019/1/22/1172
>
> and
>
> https://lkml.org/lkml/2019/3/15/101
>
I think what Thomas was asking for is a REALISTIC use case where TSC is
wrecked and HPET is somehow not used, so that we have to fall back to
the PM timer. If such a use case exists, I am sure Thomas will be happy
to take it.

Cheers,
Longman




* Re: [PATCH 2/2] Revert "x86/hpet: Reduce HPET counter read contention"
  2019-03-18 15:35         ` Waiman Long
@ 2019-03-20 10:24           ` Thomas Gleixner
From: Thomas Gleixner @ 2019-03-20 10:24 UTC (permalink / raw)
  To: Waiman Long
  Cc: Zhenzhong Duan, Peter Zijlstra, linux-kernel, Ingo Molnar,
	Borislav Petkov, H. Peter Anvin, Srinivas Eeda, x86

On Mon, 18 Mar 2019, Waiman Long wrote:
> On 03/18/2019 04:44 AM, Zhenzhong Duan wrote:
> > https://lkml.org/lkml/2019/3/15/101
> >
> I think what Thomas was asking is to provide a REALISTIC use case where
> TSC is wrecked and HPET is somehow not used and we have to fall back to
> use PM timer. If such use case exists, I am sure Thomas will be happy to
> take it.

Exactly.

Thanks,

	tglx



