* Excessive delays from GHES polling on dual-socket AMD EPYC
From: Alexander Monakov @ 2021-11-27 21:40 UTC
  To: linux-edac
  Cc: Yazen Ghannam, Borislav Petkov, Mauro Carvalho Chehab,
	Linus Torvalds, linux-kernel

Hello world,

when lightly testing a dual-socket server with 64-core AMD processors I
noticed that workloads running on cpu #0 can exhibit significantly worse
latencies compared to cpu #1 ... cpu #255. Checking SSD response time,
on cpu #0 I got:

taskset -c 0 ioping -R /dev/sdf

--- /dev/sdf (block device 1.75 TiB) ioping statistics ---
70.7 k requests completed in 2.97 s, 276.3 MiB read, 23.8 k iops, 93.1 MiB/s
generated 70.7 k requests in 3.00 s, 276.4 MiB, 23.6 k iops, 92.1 MiB/s
min/avg/max/mdev = 33.1 us / 41.9 us / 87.9 ms / 452.6 us

Notice the 87.9 millisecond maximum response time, and compare with its
hyperthread sibling:

taskset -c 128 ioping -R /dev/sdf

--- /dev/sdf (block device 1.75 TiB) ioping statistics ---
80.5 k requests completed in 2.96 s, 314.5 MiB read, 27.2 k iops, 106.2 MiB/s
generated 80.5 k requests in 3.00 s, 314.5 MiB, 26.8 k iops, 104.8 MiB/s
min/avg/max/mdev = 33.2 us / 36.8 us / 89.2 us / 2.00 us

Of course the maximum times themselves vary from run to run, but the
general picture holds: on cpu #0 I get about three orders of magnitude
longer worst-case latencies. I think this is outside of "latency-sensitive
workloads might care" territory and closer to a "hurts everyone" kind of
issue, hence I'm reporting it.


On this machine there's an AMD HEST ACPI table that registers 14342 polled
"generic hardware error sources" (GHES) with a poll interval of 5 seconds.
(This seems misdesigned: it will cause cross-socket polling unless the
OS takes special care to divine which GHES to poll from where.)
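
(To double-check the entry count on another box, here is a minimal
userspace sketch; it assumes the standard ACPI layout of a 36-byte table
header followed by a 4-byte little-endian ErrorSourceCount, and reading
the sysfs file needs root:)

/* hest-count.c - print the HEST error source count.
 * Build: cc -o hest-count hest-count.c; run as root.
 */
#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void)
{
	uint8_t hdr[40];
	uint32_t count;
	FILE *f = fopen("/sys/firmware/acpi/tables/HEST", "rb");

	if (!f || fread(hdr, 1, sizeof hdr, f) != sizeof hdr) {
		perror("HEST");
		return 1;
	}
	/* ErrorSourceCount sits right after the 36-byte ACPI header */
	memcpy(&count, hdr + 36, sizeof count);
	printf("HEST error source entries: %u\n", count);
	fclose(f);
	return 0;
}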

Linux sets up a timer for each of those individually, so when the machine
is idle there are approximately 2800 timer callbacks per second invoked on
cpu #0 (14342 sources / 5 s). Plus, there's a secondary issue with timer
migration: get_nohz_timer_target will attempt to select a non-idle CPU out
of 256 (visiting some CPUs repeatedly if they appear in nested scheduling
domains), and fail. If I help it along by running
'taskset -c 1 yes > /dev/null' or by disabling kernel.timer_migration
entirely, the maximum latency in the above ioping test drops into the
1..10 ms range (down to two orders of magnitude from three).
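
(A simplified paraphrase of the relevant code in drivers/acpi/apei/ghes.c,
not the verbatim upstream source, to show where the timers come from:
each polled source gets its own self-rearming kernel timer, and every
re-arm goes through the normal timer enqueue path, hence through
get_nohz_timer_target when kernel.timer_migration is on.)

#include <linux/timer.h>
#include <linux/jiffies.h>
#include <linux/acpi.h>

struct ghes {				/* trimmed to the fields used here */
	struct acpi_hest_generic *generic;
	struct timer_list timer;
	unsigned long flags;
};
#define GHES_EXITING 0x0002		/* as in drivers/acpi/apei/ghes.h */

static void ghes_proc(struct ghes *ghes)
{
	/* read the error status block and report anything found (elided) */
}

static void ghes_add_timer(struct ghes *ghes)
{
	u32 interval = ghes->generic->notify.poll_interval;	/* in ms */

	ghes->timer.expires = jiffies + msecs_to_jiffies(interval);
	add_timer(&ghes->timer);	/* enqueue may migrate the timer */
}

static void ghes_poll_func(struct timer_list *t)
{
	struct ghes *ghes = from_timer(ghes, t, timer);

	ghes_proc(ghes);
	if (!(ghes->flags & GHES_EXITING))
		ghes_add_timer(ghes);	/* re-arm: 14342 sources at 5 s
					 * means ~2870 firings per second */
}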

I guess the short answer is that if I don't like it I can boot that
server with 'ghes.disable=1', but is a proper solution possible? Like
requiring explicit opt-in before honoring polled GHES entries?

Thank you.
Alexander


* Re: Excessive delays from GHES polling on dual-socket AMD EPYC
From: Yazen Ghannam @ 2021-12-02 16:01 UTC
  To: Alexander Monakov
  Cc: linux-edac, Borislav Petkov, Mauro Carvalho Chehab,
	Linus Torvalds, linux-kernel

On Sun, Nov 28, 2021 at 12:40:48AM +0300, Alexander Monakov wrote:
> Hello world,
> 
> when lightly testing a dual-socket server with 64-core AMD processors I
> noticed that workloads running on cpu #0 can exhibit significantly worse
> latencies compared to cpu #1 ... cpu #255. Checking SSD response time,
> on cpu #0 I got:
> 
> taskset -c 0 ioping -R /dev/sdf
> 
> --- /dev/sdf (block device 1.75 TiB) ioping statistics ---
> 70.7 k requests completed in 2.97 s, 276.3 MiB read, 23.8 k iops, 93.1 MiB/s
> generated 70.7 k requests in 3.00 s, 276.4 MiB, 23.6 k iops, 92.1 MiB/s
> min/avg/max/mdev = 33.1 us / 41.9 us / 87.9 ms / 452.6 us
> 
> Notice the 87.9 millisecond maximum response time, and compare with its
> hyperthread sibling:
> 
> taskset -c 128 ioping -R /dev/sdf
> 
> --- /dev/sdf (block device 1.75 TiB) ioping statistics ---
> 80.5 k requests completed in 2.96 s, 314.5 MiB read, 27.2 k iops, 106.2 MiB/s
> generated 80.5 k requests in 3.00 s, 314.5 MiB, 26.8 k iops, 104.8 MiB/s
> min/avg/max/mdev = 33.2 us / 36.8 us / 89.2 us / 2.00 us
> 
> Of course the maximum times themselves vary from run to run, but the
> general picture holds: on cpu #0 I get about three orders of magnitude
> longer worst-case latencies. I think this is outside of "latency-sensitive
> workloads might care" territory and closer to a "hurts everyone" kind of
> issue, hence I'm reporting it.
> 
> 
> On this machine there's an AMD HEST ACPI table that registers 14342 polled
> "generic hardware error sources" (GHES) with a poll interval of 5 seconds.
> (This seems misdesigned: it will cause cross-socket polling unless the
> OS takes special care to divine which GHES to poll from where.)
> 
> Linux sets up a timer for each of those individually, so when the machine
> is idle there are approximately 2800 timer callbacks per second invoked on
> cpu #0 (14342 sources / 5 s). Plus, there's a secondary issue with timer
> migration: get_nohz_timer_target will attempt to select a non-idle CPU out
> of 256 (visiting some CPUs repeatedly if they appear in nested scheduling
> domains), and fail. If I help it along by running
> 'taskset -c 1 yes > /dev/null' or by disabling kernel.timer_migration
> entirely, the maximum latency in the above ioping test drops into the
> 1..10 ms range (down to two orders of magnitude from three).
> 
> I guess the short answer is that if I don't like it I can boot that
> server with 'ghes.disable=1', but is a proper solution possible? Like
> requiring explicit opt-in before honoring polled GHES entries?
>

Hi Alexander,

I believe the large number of GHES structures you have is intended to be
used for the ACPI "GHES_ASSIST" feature. The GHES structures in this case
are not meant to be used as independent sources. However, this feature is
not yet implemented in Linux, so the kernel does set up these GHES
structures as independent error sources.

One way to avoid the issue is for the firmware to give a large polling
interval in the GHES structures. The kernel will still set up timers for
each structure, but there should be less interference from them. The ACPI
spec seems to allow a polling interval of up to 0xFFFFFFFF ms (roughly
49.7 days).

Ultimately, I think we'd want the kernel to ignore the GHES structures used
for GHES_ASSIST, and then GHES_ASSIST support can be implemented and used
where appropriate.
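
Roughly, I'd expect the filter to look something like the sketch below.
(This is my guess at the shape, not an actual patch; field names follow
struct acpi_hest_generic in include/acpi/actbl1.h, and per my reading of
the spec a Related Source Id of 0xffff means the entry is not related to
another error source.)

#include <linux/acpi.h>

/* Sketch: a GHES entry with a valid Related Source Id exists to assist
 * another error source (GHES_ASSIST) rather than to be polled on its
 * own, so HEST parsing could skip registering it as independent.
 */
static bool is_ghes_assist_source(struct acpi_hest_header *hdr)
{
	struct acpi_hest_generic *generic;

	if (hdr->type != ACPI_HEST_TYPE_GENERIC_ERROR &&
	    hdr->type != ACPI_HEST_TYPE_GENERIC_ERROR_V2)
		return false;

	generic = (struct acpi_hest_generic *)hdr;
	return generic->related_source_id != 0xffff;
}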

I can send a patchset for ignoring these structures. That would be
preparation for another set that can fully implement the GHES_ASSIST
feature. Would you be willing to test the first set to see if it resolves
the issue?

Thanks,
Yazen


* Re: Excessive delays from GHES polling on dual-socket AMD EPYC
From: Alexander Monakov @ 2021-12-02 18:46 UTC
  To: Yazen Ghannam
  Cc: linux-edac, Borislav Petkov, Mauro Carvalho Chehab,
	Linus Torvalds, linux-kernel

On Thu, 2 Dec 2021, Yazen Ghannam wrote:

> I believe the large number of GHES structures you have is intended to be
> used for the ACPI "GHES_ASSIST" feature. The GHES structures in this case
> are not meant to be used as independent sources. However, this feature is
> not yet implemented in Linux, so the kernel does set up these GHES
> structures as independent error sources.

Yes, our HEST has "GHES Assist: 1". But it is disappointing that those
sources have the "Polled" type: ACPI allocates eight bits for the
notification type, and only 12 types are registered so far, so it's not as
if they were running out of space to designate a separate type for this
kind of source.
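
(For reference, the notification types defined so far, as mirrored in
include/acpi/actbl1.h; the field is a single byte, so there is plenty of
room left for a dedicated "assist-only" type:)

enum acpi_hest_notify_types {
	ACPI_HEST_NOTIFY_POLLED		= 0,
	ACPI_HEST_NOTIFY_EXTERNAL	= 1,
	ACPI_HEST_NOTIFY_LOCAL		= 2,
	ACPI_HEST_NOTIFY_SCI		= 3,
	ACPI_HEST_NOTIFY_NMI		= 4,
	ACPI_HEST_NOTIFY_CMCI		= 5,
	ACPI_HEST_NOTIFY_MCE		= 6,
	ACPI_HEST_NOTIFY_GPIO		= 7,
	ACPI_HEST_NOTIFY_SEA		= 8,
	ACPI_HEST_NOTIFY_SEI		= 9,
	ACPI_HEST_NOTIFY_GSIV		= 10,
	ACPI_HEST_NOTIFY_SOFTWARE_DELEGATED = 11,
};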

[snip increasing polling interval]

> Ultimately, I think we'd want the kernel to ignore the GHES structures used
> for GHES_ASSIST, and then GHES_ASSIST support can be implemented and used
> where appropriate.
> 
> I can send a patchset for ignoring these structures. That would be
> preparation for another set that can fully implement the GHES_ASSIST
> feature. Would you be willing to test the first set to see if it resolves
> the issue?

Sure, please Cc me on the patches.

Alexander

