* Network latency on 5.4 and 5.10
From: Christoph Mathys @ 2021-05-20 7:36 UTC (permalink / raw)
To: linux-rt-users
Hi all!
I'm trying to upgrade from 4.9-rt to 5.4-rt (I also tried 5.10). I'm
having a hard time getting networking to perform on the same level as
4.9, latency-wise. We drive an EtherCAT bus at 4kHz using the igb driver.
On 4.9, I just had to disable coalescing on the NIC and increase the
priority of the irq threads, and it performed as desired (more or less
according to [1] for 3.6+, without task pinning).
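For reference, the tuning described above amounts to something like the
following. This is a sketch only: the interface name (ecat0), the irq
thread naming pattern, and priority 80 are assumptions based on the setup
described in this thread.

```shell
# Disable interrupt coalescing on the NIC (igb driver)
ethtool -C ecat0 rx-usecs 0 tx-usecs 0

# Raise the NIC irq threads to SCHED_FIFO priority 80
for pid in $(pgrep -f 'irq/.*ecat0'); do
    chrt -f -p 80 "$pid"
done
```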
Doing the same on 5.4/5.10 does not seem to suffice any more. I get a
lot of warnings from our code that packets have not been sent or
received in time for the next control loop iteration.
Any suggestions on what I could try to improve the nic response?
Thanks!
Christoph
[1]
https://www.osadl.org/Real-time-Ethernet-UDP-worst-case-roun.qa-farm-rt-ethernet-udp-monitor+M514ebdffa90.0.html
* Re: Network latency on 5.4 and 5.10
From: Christoph Mathys @ 2021-05-20 8:08 UTC (permalink / raw)
To: linux-rt-users
While investigating the network latency on 5.4, I noticed
that the packet latencies get much worse when I run
"stress --hdd 2".
I tried running cyclictest at the same priority (=80) as the
network interrupt threads and observed very high cyclictest
latencies (this does not happen on 4.9).
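The reproduction looks roughly like this; the exact cyclictest options
(interval, thread count, affinity) are assumptions, only the priority of
80 and the "stress --hdd 2" load are taken from the description above.

```shell
# Background disk/writeback load on two workers
stress --hdd 2 &

# Measure wakeup latency at the same SCHED_FIFO priority as the NIC
# irq threads (-m locks memory, -i sets the interval in microseconds)
cyclictest -m -p 80 -i 250 -t 2 -a 0-1
```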
The control loop runs at a 250us interval, sending and
receiving an EtherCAT packet in every cycle. Below is
a trace of one "normal" iteration, followed by the last one,
which triggered a 400us+ latency on cyclictest.
- irq/132-ecat0-t: nic tx irq thread @ rt prio 80
- irq/131-ecat0-r: nic rx irq thread @ rt prio 80
- EtherlabDaemon-5200: control loop @ rt prio 79
- EtherCATControl-4181: control loop @ rt prio 79
kworker/u4:4-91 [000] d...211 169.985712: sched_switch: prev_comm=kworker/u4:4 prev_pid=91 prev_prio=120 prev_state=R+ ==> next_comm=irq/140-nvme0q1 next_pid=157 next_prio=49
irq/140-nvme0q1-157 [000] .....12 169.985713: nvme_sq: nvme0: disk=nvme0n1, qid=1, head=767, tail=767
irq/140-nvme0q1-157 [000] .....13 169.985714: nvme_complete_rq: nvme0: disk=nvme0n1, qid=1, cmdid=670, res=0x0, retries=0, flags=0x0, status=0x0
irq/140-nvme0q1-157 [000] d..h214 169.985716: sched_waking: comm=EtherlabDaemon pid=5200 prio=20 target_cpu=000
irq/140-nvme0q1-157 [000] d..h214 169.985716: sched_migrate_task: comm=EtherlabDaemon pid=5200 prio=20 orig_cpu=0 dest_cpu=1
irq/140-nvme0q1-157 [000] d..h314 169.985716: sched_wakeup: comm=EtherlabDaemon pid=5200 prio=20 target_cpu=001
Wakeup control loop 5716
stress-6348 [001] dN..211 169.985717: sched_stat_runtime: comm=stress pid=6348 runtime=32699 [ns] vruntime=12653242325 [ns]
irq/140-nvme0q1-157 [000] d...314 169.985717: sched_waking: comm=stress pid=6348 prio=120 target_cpu=001
stress-6348 [001] d...211 169.985717: sched_switch: prev_comm=stress prev_pid=6348 prev_prio=120 prev_state=R+ ==> next_comm=EtherlabDaemon next_pid=5200 next_prio=20
irq/140-nvme0q1-157 [000] d...414 169.985720: sched_wakeup: comm=stress pid=6348 prio=120 target_cpu=001
EtherlabDaemon-5200 [001] d...2.. 169.985735: sched_waking: comm=EtherCATControl pid=4181 prio=20 target_cpu=001
EtherlabDaemon-5200 [001] d...2.. 169.985736: sched_migrate_task: comm=EtherCATControl pid=4181 prio=20 orig_cpu=1 dest_cpu=0
EtherlabDaemon-5200 [001] d...3.. 169.985737: sched_wakeup: comm=EtherCATControl pid=4181 prio=20 target_cpu=000
irq/140-nvme0q1-157 [000] d...214 169.985737: sched_switch: prev_comm=irq/140-nvme0q1 prev_pid=157 prev_prio=49 prev_state=R+ ==> next_comm=EtherCATControl next_pid=4181 next_prio=20
EtherlabDaemon-5200 [001] d...2.. 169.985740: sched_switch: prev_comm=EtherlabDaemon prev_pid=5200 prev_prio=20 prev_state=S ==> next_comm=stress next_pid=6348 next_prio=120
stress-6348 [001] d...211 169.985743: sched_stat_runtime: comm=stress pid=6348 runtime=4002 [ns] vruntime=12653246327 [ns]
EtherCATControl-4181 [000] d..h2.. 169.985743: sched_waking: comm=irq/132-ecat0-t pid=3892 prio=19 target_cpu=000
stress-6348 [001] d...211 169.985744: sched_stat_runtime: comm=stress pid=6348 runtime=809 [ns] vruntime=12653247136 [ns]
EtherCATControl-4181 [000] dN.h3.. 169.985744: sched_wakeup: comm=irq/132-ecat0-t pid=3892 prio=19 target_cpu=000
Wakeup net-send 5744
stress-6348 [001] d...211 169.985745: sched_switch: prev_comm=stress prev_pid=6348 prev_prio=120 prev_state=D ==> next_comm=swapper/1 next_pid=0 next_prio=120
EtherCATControl-4181 [000] d...2.. 169.985745: sched_switch: prev_comm=EtherCATControl prev_pid=4181 prev_prio=20 prev_state=R ==> next_comm=irq/132-ecat0-t next_pid=3892 next_prio=19
irq/132-ecat0-t-3892 [000] d...3.. 169.985746: sched_migrate_task: comm=EtherCATControl pid=4181 prio=20 orig_cpu=0 dest_cpu=1
irq/132-ecat0-t-3892 [000] d...3.. 169.985746: sched_wake_idle_without_ipi: cpu=1
<idle>-0 [001] d...2.. 169.985747: sched_switch: prev_comm=swapper/1 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=EtherCATControl next_pid=4181 next_prio=20
irq/132-ecat0-t-3892 [000] d...312 169.985747: sched_pi_setprio: comm=irq/140-nvme0q1 pid=157 oldprio=49 newprio=19
irq/132-ecat0-t-3892 [000] d...212 169.985748: sched_switch: prev_comm=irq/132-ecat0-t prev_pid=3892 prev_prio=19 prev_state=D ==> next_comm=irq/140-nvme0q1 next_pid=157 next_prio=19
net-send done 5748
irq/140-nvme0q1-157 [000] d...314 169.985749: sched_waking: comm=stress pid=6348 prio=120 target_cpu=001
irq/140-nvme0q1-157 [000] d...414 169.985750: sched_wakeup: comm=stress pid=6348 prio=120 target_cpu=001
EtherCATControl-4181 [001] d...2.. 169.985782: sched_waking: comm=EtherlabDaemon pid=5200 prio=20 target_cpu=001
EtherCATControl-4181 [001] d...3.. 169.985783: sched_wakeup: comm=EtherlabDaemon pid=5200 prio=20 target_cpu=001
EtherCATControl-4181 [001] d...2.. 169.985786: sched_switch: prev_comm=EtherCATControl prev_pid=4181 prev_prio=20 prev_state=S ==> next_comm=EtherlabDaemon next_pid=5200 next_prio=20
EtherlabDaemon-5200 [001] d...2.. 169.985792: sched_switch: prev_comm=EtherlabDaemon prev_pid=5200 prev_prio=20 prev_state=S ==> next_comm=stress next_pid=6348 next_prio=120
irq/140-nvme0q1-157 [000] d..h514 169.985793: sched_waking: comm=cyclictest pid=5753 prio=19 target_cpu=000
irq/140-nvme0q1-157 [000] d..h614 169.985794: sched_wakeup: comm=cyclictest pid=5753 prio=19 target_cpu=000
irq/140-nvme0q1-157 [000] d...314 169.985796: sched_pi_setprio: comm=stress pid=6348 oldprio=120 newprio=19
irq/140-nvme0q1-157 [000] d...314 169.985796: sched_stat_runtime: comm=stress pid=6348 runtime=4476 [ns] vruntime=12653251612 [ns]
stress-6348 [001] d...311 169.985798: sched_pi_setprio: comm=stress pid=6348 oldprio=19 newprio=120
stress-6348 [001] dN..311 169.985800: sched_waking: comm=irq/140-nvme0q1 pid=157 prio=19 target_cpu=000
stress-6348 [001] dN..411 169.985800: sched_wakeup: comm=irq/140-nvme0q1 pid=157 prio=19 target_cpu=000
stress-6348 [001] dN..211 169.985801: sched_stat_runtime: comm=stress pid=6348 runtime=2795 [ns] vruntime=12653254407 [ns]
irq/140-nvme0q1-157 [000] d...314 169.985804: sched_pi_setprio: comm=stress pid=6348 oldprio=120 newprio=19
irq/140-nvme0q1-157 [000] d...314 169.985804: sched_stat_runtime: comm=stress pid=6348 runtime=3889 [ns] vruntime=12653258296 [ns]
stress-6348 [001] d...311 169.985806: sched_pi_setprio: comm=stress pid=6348 oldprio=19 newprio=120
stress-6348 [001] dN..311 169.985807: sched_waking: comm=irq/140-nvme0q1 pid=157 prio=19 target_cpu=000
stress-6348 [001] dN..411 169.985807: sched_wakeup: comm=irq/140-nvme0q1 pid=157 prio=19 target_cpu=000
stress-6348 [001] dN..211 169.985807: sched_stat_runtime: comm=stress pid=6348 runtime=1701 [ns] vruntime=12653259997 [ns]
irq/140-nvme0q1-157 [000] d...314 169.985815: sched_waking: comm=stress pid=6348 prio=120 target_cpu=001
irq/140-nvme0q1-157 [000] d...414 169.985815: sched_wakeup: comm=stress pid=6348 prio=120 target_cpu=001
irq/140-nvme0q1-157 [000] d...314 169.985816: sched_pi_setprio: comm=stress pid=6348 oldprio=120 newprio=19
irq/140-nvme0q1-157 [000] d...314 169.985816: sched_stat_runtime: comm=stress pid=6348 runtime=8482 [ns] vruntime=12653268479 [ns]
stress-6348 [001] d...311 169.985817: sched_pi_setprio: comm=stress pid=6348 oldprio=19 newprio=120
stress-6348 [001] dN..311 169.985818: sched_waking: comm=irq/140-nvme0q1 pid=157 prio=19 target_cpu=000
stress-6348 [001] dN..411 169.985818: sched_wakeup: comm=irq/140-nvme0q1 pid=157 prio=19 target_cpu=000
stress-6348 [001] dN..211 169.985819: sched_stat_runtime: comm=stress pid=6348 runtime=1561 [ns] vruntime=12653270040 [ns]
irq/140-nvme0q1-157 [000] d...314 169.985820: sched_waking: comm=stress pid=6348 prio=120 target_cpu=001
irq/140-nvme0q1-157 [000] d...414 169.985821: sched_wakeup: comm=stress pid=6348 prio=120 target_cpu=001
irq/140-nvme0q1-157 [000] d...314 169.985823: sched_pi_setprio: comm=stress pid=6348 oldprio=120 newprio=19
irq/140-nvme0q1-157 [000] d...314 169.985823: sched_stat_runtime: comm=stress pid=6348 runtime=4379 [ns] vruntime=12653274419 [ns]
stress-6348 [001] d...311 169.985824: sched_pi_setprio: comm=stress pid=6348 oldprio=19 newprio=120
stress-6348 [001] dN..311 169.985825: sched_waking: comm=irq/140-nvme0q1 pid=157 prio=19 target_cpu=000
stress-6348 [001] dN..411 169.985825: sched_wakeup: comm=irq/140-nvme0q1 pid=157 prio=19 target_cpu=000
stress-6348 [001] dN..211 169.985826: sched_stat_runtime: comm=stress pid=6348 runtime=1510 [ns] vruntime=12653275929 [ns]
irq/140-nvme0q1-157 [000] d...314 169.985828: sched_pi_setprio: comm=stress pid=6348 oldprio=120 newprio=19
irq/140-nvme0q1-157 [000] d...314 169.985829: sched_stat_runtime: comm=stress pid=6348 runtime=2966 [ns] vruntime=12653278895 [ns]
stress-6348 [001] d...311 169.985830: sched_pi_setprio: comm=stress pid=6348 oldprio=19 newprio=120
stress-6348 [001] dN..311 169.985831: sched_waking: comm=irq/140-nvme0q1 pid=157 prio=19 target_cpu=000
stress-6348 [001] dN..411 169.985831: sched_wakeup: comm=irq/140-nvme0q1 pid=157 prio=19 target_cpu=000
stress-6348 [001] dN..211 169.985831: sched_stat_runtime: comm=stress pid=6348 runtime=1578 [ns] vruntime=12653280473 [ns]
irq/140-nvme0q1-157 [000] d...314 169.985839: sched_pi_setprio: comm=stress pid=6348 oldprio=120 newprio=19
irq/140-nvme0q1-157 [000] d...314 169.985839: sched_stat_runtime: comm=stress pid=6348 runtime=7921 [ns] vruntime=12653288394 [ns]
stress-6348 [001] d...311 169.985840: sched_pi_setprio: comm=stress pid=6348 oldprio=19 newprio=120
stress-6348 [001] dN..311 169.985841: sched_waking: comm=irq/140-nvme0q1 pid=157 prio=19 target_cpu=000
stress-6348 [001] dN..411 169.985841: sched_wakeup: comm=irq/140-nvme0q1 pid=157 prio=19 target_cpu=000
stress-6348 [001] dN..211 169.985842: sched_stat_runtime: comm=stress pid=6348 runtime=1508 [ns] vruntime=12653289902 [ns]
irq/140-nvme0q1-157 [000] d...314 169.985845: sched_pi_setprio: comm=stress pid=6348 oldprio=120 newprio=19
irq/140-nvme0q1-157 [000] d...314 169.985845: sched_stat_runtime: comm=stress pid=6348 runtime=3004 [ns] vruntime=12653292906 [ns]
stress-6348 [001] d...211 169.985845: sched_waking: comm=ksoftirqd/1 pid=21 prio=120 target_cpu=001
stress-6348 [001] d...311 169.985846: sched_wakeup: comm=ksoftirqd/1 pid=21 prio=120 target_cpu=001
stress-6348 [001] d...311 169.985847: sched_pi_setprio: comm=stress pid=6348 oldprio=19 newprio=120
stress-6348 [001] dN..311 169.985848: sched_waking: comm=irq/140-nvme0q1 pid=157 prio=19 target_cpu=000
stress-6348 [001] dN..411 169.985848: sched_wakeup: comm=irq/140-nvme0q1 pid=157 prio=19 target_cpu=000
stress-6348 [001] dN..211 169.985848: sched_stat_runtime: comm=stress pid=6348 runtime=1401 [ns] vruntime=12653294307 [ns]
stress-6348 [001] d...211 169.985849: sched_switch: prev_comm=stress prev_pid=6348 prev_prio=120 prev_state=R+ ==> next_comm=ksoftirqd/1 next_pid=21 next_prio=120
ksoftirqd/1-21 [001] d...213 169.985850: sched_waking: comm=EtherlabDaemon pid=3998 prio=120 target_cpu=001
ksoftirqd/1-21 [001] d...313 169.985850: sched_stat_runtime: comm=ksoftirqd/1 pid=21 runtime=1929 [ns] vruntime=62640055548 [ns]
ksoftirqd/1-21 [001] d...313 169.985851: sched_wakeup: comm=EtherlabDaemon pid=3998 prio=120 target_cpu=001
ksoftirqd/1-21 [001] d..h413 169.985852: sched_waking: comm=irq/131-ecat0-r pid=3891 prio=19 target_cpu=001
ksoftirqd/1-21 [001] dN.h513 169.985852: sched_wakeup: comm=irq/131-ecat0-r pid=3891 prio=19 target_cpu=001
wakeup net receive 5852
ksoftirqd/1-21 [001] dN..213 169.985852: sched_stat_runtime: comm=ksoftirqd/1 pid=21 runtime=1513 [ns] vruntime=62640057061 [ns]
ksoftirqd/1-21 [001] d...213 169.985853: sched_switch: prev_comm=ksoftirqd/1 prev_pid=21 prev_prio=120 prev_state=R+ ==> next_comm=irq/131-ecat0-r next_pid=3891 next_prio=19
irq/131-ecat0-r-3891 [001] d...312 169.985853: sched_pi_setprio: comm=ksoftirqd/1 pid=21 oldprio=120 newprio=19
irq/131-ecat0-r-3891 [001] d...212 169.985855: sched_switch: prev_comm=irq/131-ecat0-r prev_pid=3891 prev_prio=19 prev_state=D ==> next_comm=ksoftirqd/1 next_pid=21 next_prio=19
ksoftirqd/1-21 [001] d...312 169.985855: sched_pi_setprio: comm=ksoftirqd/1 pid=21 oldprio=19 newprio=120
ksoftirqd/1-21 [001] dN..312 169.985856: sched_waking: comm=irq/131-ecat0-r pid=3891 prio=19 target_cpu=001
ksoftirqd/1-21 [001] dN..412 169.985856: sched_wakeup: comm=irq/131-ecat0-r pid=3891 prio=19 target_cpu=001
ksoftirqd/1-21 [001] dN..212 169.985856: sched_stat_runtime: comm=ksoftirqd/1 pid=21 runtime=814 [ns] vruntime=62640057875 [ns]
ksoftirqd/1-21 [001] d...212 169.985857: sched_switch: prev_comm=ksoftirqd/1 prev_pid=21 prev_prio=120 prev_state=R+ ==> next_comm=irq/131-ecat0-r next_pid=3891 next_prio=19
irq/131-ecat0-r-3891 [001] d...2.. 169.985858: sched_switch: prev_comm=irq/131-ecat0-r prev_pid=3891 prev_prio=19 prev_state=S ==> next_comm=EtherlabDaemon next_pid=3998 next_prio=120
net receive done 5858
EtherlabDaemon-3998 [001] d...2.. 169.985865: sched_stat_runtime: comm=EtherlabDaemon pid=3998 runtime=6760 [ns] vruntime=20976020850 [ns]
EtherlabDaemon-3998 [001] d...2.. 169.985866: sched_switch: prev_comm=EtherlabDaemon prev_pid=3998 prev_prio=120 prev_state=S ==> next_comm=ksoftirqd/1 next_pid=21 next_prio=120
ksoftirqd/1-21 [001] d...2.. 169.985868: sched_stat_runtime: comm=ksoftirqd/1 pid=21 runtime=3909 [ns] vruntime=62640061784 [ns]
ksoftirqd/1-21 [001] d...2.. 169.985869: sched_switch: prev_comm=ksoftirqd/1 prev_pid=21 prev_prio=120 prev_state=S ==> next_comm=stress next_pid=6348 next_prio=120
irq/140-nvme0q1-157 [000] d...314 169.985874: sched_pi_setprio: comm=stress pid=6348 oldprio=120 newprio=19
irq/140-nvme0q1-157 [000] d...314 169.985874: sched_stat_runtime: comm=stress pid=6348 runtime=5573 [ns] vruntime=12653299880 [ns]
stress-6348 [001] d...311 169.985875: sched_pi_setprio: comm=stress pid=6348 oldprio=19 newprio=120
stress-6348 [001] dN..311 169.985877: sched_waking: comm=irq/140-nvme0q1 pid=157 prio=19 target_cpu=000
stress-6348 [001] dN..411 169.985877: sched_wakeup: comm=irq/140-nvme0q1 pid=157 prio=19 target_cpu=000
stress-6348 [001] dN..211 169.985877: sched_stat_runtime: comm=stress pid=6348 runtime=2275 [ns] vruntime=12653302155 [ns]
irq/140-nvme0q1-157 [000] d...314 169.985881: sched_pi_setprio: comm=stress pid=6348 oldprio=120 newprio=19
irq/140-nvme0q1-157 [000] d...314 169.985881: sched_stat_runtime: comm=stress pid=6348 runtime=3173 [ns] vruntime=12653305328 [ns]
stress-6348 [001] d...311 169.985882: sched_pi_setprio: comm=stress pid=6348 oldprio=19 newprio=120
stress-6348 [001] dN..311 169.985883: sched_waking: comm=irq/140-nvme0q1 pid=157 prio=19 target_cpu=000
stress-6348 [001] dN..411 169.985883: sched_wakeup: comm=irq/140-nvme0q1 pid=157 prio=19 target_cpu=000
stress-6348 [001] dN..211 169.985883: sched_stat_runtime: comm=stress pid=6348 runtime=1533 [ns] vruntime=12653306861 [ns]
irq/140-nvme0q1-157 [000] d...314 169.985889: sched_waking: comm=stress pid=6348 prio=120 target_cpu=001
irq/140-nvme0q1-157 [000] d...414 169.985889: sched_wakeup: comm=stress pid=6348 prio=120 target_cpu=001
irq/140-nvme0q1-157 [000] d...314 169.985890: sched_pi_setprio: comm=stress pid=6348 oldprio=120 newprio=19
irq/140-nvme0q1-157 [000] d...314 169.985890: sched_stat_runtime: comm=stress pid=6348 runtime=6763 [ns] vruntime=12653313624 [ns]
stress-6348 [001] d...311 169.985891: sched_pi_setprio: comm=stress pid=6348 oldprio=19 newprio=120
stress-6348 [001] dN..311 169.985892: sched_waking: comm=irq/140-nvme0q1 pid=157 prio=19 target_cpu=000
stress-6348 [001] dN..411 169.985892: sched_wakeup: comm=irq/140-nvme0q1 pid=157 prio=19 target_cpu=000
stress-6348 [001] dN..211 169.985892: sched_stat_runtime: comm=stress pid=6348 runtime=1528 [ns] vruntime=12653315152 [ns]
irq/140-nvme0q1-157 [000] d...314 169.985894: sched_pi_setprio: comm=stress pid=6348 oldprio=120 newprio=19
irq/140-nvme0q1-157 [000] d...314 169.985894: sched_stat_runtime: comm=stress pid=6348 runtime=1780 [ns] vruntime=12653316932 [ns]
stress-6348 [001] d...311 169.985895: sched_pi_setprio: comm=stress pid=6348 oldprio=19 newprio=120
irq/140-nvme0q1-157 [000] d..h214 169.985896: sched_waking: comm=irq/140-nvme0q1 pid=157 prio=19 target_cpu=000
irq/140-nvme0q1-157 [000] d..h314 169.985896: sched_wakeup: comm=irq/140-nvme0q1 pid=157 prio=19 target_cpu=000
stress-6348 [001] dN..211 169.985897: sched_stat_runtime: comm=stress pid=6348 runtime=1471 [ns] vruntime=12653318403 [ns]
irq/140-nvme0q1-157 [000] d...314 169.985900: sched_waking: comm=stress pid=6348 prio=120 target_cpu=001
irq/140-nvme0q1-157 [000] d...414 169.985901: sched_wakeup: comm=stress pid=6348 prio=120 target_cpu=001
irq/140-nvme0q1-157 [000] d...314 169.985901: sched_pi_setprio: comm=stress pid=6348 oldprio=120 newprio=19
irq/140-nvme0q1-157 [000] d...314 169.985902: sched_stat_runtime: comm=stress pid=6348 runtime=4502 [ns] vruntime=12653322905 [ns]
stress-6348 [001] d...311 169.985902: sched_pi_setprio: comm=stress pid=6348 oldprio=19 newprio=120
stress-6348 [001] dN..311 169.985903: sched_waking: comm=irq/140-nvme0q1 pid=157 prio=19 target_cpu=000
stress-6348 [001] dN..411 169.985904: sched_wakeup: comm=irq/140-nvme0q1 pid=157 prio=19 target_cpu=000
stress-6348 [001] dN..211 169.985904: sched_stat_runtime: comm=stress pid=6348 runtime=1466 [ns] vruntime=12653324371 [ns]
irq/140-nvme0q1-157 [000] d...314 169.985906: sched_waking: comm=stress pid=6348 prio=120 target_cpu=001
irq/140-nvme0q1-157 [000] d...414 169.985906: sched_wakeup: comm=stress pid=6348 prio=120 target_cpu=001
irq/140-nvme0q1-157 [000] d...314 169.985907: sched_pi_setprio: comm=stress pid=6348 oldprio=120 newprio=19
irq/140-nvme0q1-157 [000] d...314 169.985907: sched_stat_runtime: comm=stress pid=6348 runtime=2695 [ns] vruntime=12653327066 [ns]
stress-6348 [001] d...311 169.985908: sched_pi_setprio: comm=stress pid=6348 oldprio=19 newprio=120
stress-6348 [001] dN..311 169.985909: sched_waking: comm=irq/140-nvme0q1 pid=157 prio=19 target_cpu=000
stress-6348 [001] dN..411 169.985909: sched_wakeup: comm=irq/140-nvme0q1 pid=157 prio=19 target_cpu=000
stress-6348 [001] dN..211 169.985909: sched_stat_runtime: comm=stress pid=6348 runtime=1419 [ns] vruntime=12653328485 [ns]
irq/140-nvme0q1-157 [000] d...314 169.985911: sched_pi_setprio: comm=stress pid=6348 oldprio=120 newprio=19
irq/140-nvme0q1-157 [000] d...314 169.985911: sched_stat_runtime: comm=stress pid=6348 runtime=1859 [ns] vruntime=12653330344 [ns]
stress-6348 [001] d...311 169.985912: sched_pi_setprio: comm=stress pid=6348 oldprio=19 newprio=120
stress-6348 [001] dN.h411 169.985914: sched_waking: comm=cyclictest pid=5754 prio=19 target_cpu=001
stress-6348 [001] dN.h511 169.985914: sched_wakeup: comm=cyclictest pid=5754 prio=19 target_cpu=001
stress-6348 [001] dN.h411 169.985914: sched_waking: comm=SafetyLogicShad pid=3990 prio=30 target_cpu=001
stress-6348 [001] dN.h511 169.985915: sched_wakeup: comm=SafetyLogicShad pid=3990 prio=30 target_cpu=001
stress-6348 [001] dN..211 169.985916: sched_stat_runtime: comm=stress pid=6348 runtime=2695 [ns] vruntime=12653333039 [ns]
stress-6348 [001] d...211 169.985916: sched_switch: prev_comm=stress prev_pid=6348 prev_prio=120 prev_state=R+ ==> next_comm=cyclictest next_pid=5754 next_prio=19
cyclictest-5754 [001] d...2.. 169.985920: sched_switch: prev_comm=cyclictest prev_pid=5754 prev_prio=19 prev_state=S ==> next_comm=SafetyLogicShad next_pid=3990 next_prio=30
SafetyLogicShad-3990 [001] d...2.. 169.985958: sched_switch: prev_comm=SafetyLogicShad prev_pid=3990 prev_prio=30 prev_state=S ==> next_comm=stress next_pid=6348 next_prio=120
irq/140-nvme0q1-157 [000] d...314 169.985964: sched_pi_setprio: comm=stress pid=6348 oldprio=120 newprio=19
irq/140-nvme0q1-157 [000] d...314 169.985964: sched_stat_runtime: comm=stress pid=6348 runtime=7141 [ns] vruntime=12653340180 [ns]
stress-6348 [001] d...311 169.985965: sched_pi_setprio: comm=stress pid=6348 oldprio=19 newprio=120
stress-6348 [001] dN.h411 169.985967: sched_waking: comm=EtherlabDaemon pid=5200 prio=20 target_cpu=001
stress-6348 [001] dN.h511 169.985968: sched_wakeup: comm=EtherlabDaemon pid=5200 prio=20 target_cpu=001
Wakeup next control loop cycle 5967
stress-6348 [001] dN..211 169.985969: sched_stat_runtime: comm=stress pid=6348 runtime=2493 [ns] vruntime=12653342673 [ns]
stress-6348 [001] d...211 169.985969: sched_switch: prev_comm=stress prev_pid=6348 prev_prio=120 prev_state=R+ ==> next_comm=EtherlabDaemon next_pid=5200 next_prio=20
irq/140-nvme0q1-157 [000] d..h214 169.985976: sched_waking: comm=SafetyLogic pid=3989 prio=30 target_cpu=000
irq/140-nvme0q1-157 [000] d..h314 169.985977: sched_wakeup: comm=SafetyLogic pid=3989 prio=30 target_cpu=000
EtherlabDaemon-5200 [001] d...2.. 169.985987: sched_waking: comm=EtherCATControl pid=4181 prio=20 target_cpu=001
EtherlabDaemon-5200 [001] d...3.. 169.985988: sched_wakeup: comm=EtherCATControl pid=4181 prio=20 target_cpu=001
EtherlabDaemon-5200 [001] d...2.. 169.985991: sched_switch: prev_comm=EtherlabDaemon prev_pid=5200 prev_prio=20 prev_state=S ==> next_comm=EtherCATControl next_pid=4181 next_prio=20
EtherCATControl-4181 [001] d...2.. 169.986023: sched_waking: comm=EtherlabDaemon pid=5200 prio=20 target_cpu=001
EtherCATControl-4181 [001] d...3.. 169.986024: sched_wakeup: comm=EtherlabDaemon pid=5200 prio=20 target_cpu=001
EtherCATControl-4181 [001] d...2.. 169.986027: sched_switch: prev_comm=EtherCATControl prev_pid=4181 prev_prio=20 prev_state=S ==> next_comm=EtherlabDaemon next_pid=5200 next_prio=20
EtherlabDaemon-5200 [001] d...2.. 169.986033: sched_switch: prev_comm=EtherlabDaemon prev_pid=5200 prev_prio=20 prev_state=S ==> next_comm=stress next_pid=6348 next_prio=120
irq/140-nvme0q1-157 [000] d...314 169.986039: sched_pi_setprio: comm=stress pid=6348 oldprio=120 newprio=19
irq/140-nvme0q1-157 [000] d...314 169.986039: sched_stat_runtime: comm=stress pid=6348 runtime=7186 [ns] vruntime=12653349859 [ns]
stress-6348 [001] d...311 169.986040: sched_pi_setprio: comm=stress pid=6348 oldprio=19 newprio=120
stress-6348 [001] dN..311 169.986042: sched_waking: comm=irq/140-nvme0q1 pid=157 prio=19 target_cpu=000
stress-6348 [001] dN..411 169.986042: sched_wakeup: comm=irq/140-nvme0q1 pid=157 prio=19 target_cpu=000
stress-6348 [001] dN..211 169.986043: sched_stat_runtime: comm=stress pid=6348 runtime=2461 [ns] vruntime=12653352320 [ns]
irq/140-nvme0q1-157 [000] d...314 169.986045: sched_pi_setprio: comm=stress pid=6348 oldprio=120 newprio=19
irq/140-nvme0q1-157 [000] d...314 169.986046: sched_stat_runtime: comm=stress pid=6348 runtime=2848 [ns] vruntime=12653355168 [ns]
stress-6348 [001] d...311 169.986047: sched_pi_setprio: comm=stress pid=6348 oldprio=19 newprio=120
stress-6348 [001] dN..311 169.986048: sched_waking: comm=irq/140-nvme0q1 pid=157 prio=19 target_cpu=000
stress-6348 [001] dN..411 169.986048: sched_wakeup: comm=irq/140-nvme0q1 pid=157 prio=19 target_cpu=000
stress-6348 [001] dN..211 169.986048: sched_stat_runtime: comm=stress pid=6348 runtime=1553 [ns] vruntime=12653356721 [ns]
irq/140-nvme0q1-157 [000] d...314 169.986051: sched_pi_setprio: comm=stress pid=6348 oldprio=120 newprio=19
irq/140-nvme0q1-157 [000] d...314 169.986052: sched_stat_runtime: comm=stress pid=6348 runtime=3251 [ns] vruntime=12653359972 [ns]
stress-6348 [001] d...311 169.986053: sched_pi_setprio: comm=stress pid=6348 oldprio=19 newprio=120
stress-6348 [001] dN..311 169.986054: sched_waking: comm=irq/140-nvme0q1 pid=157 prio=19 target_cpu=000
stress-6348 [001] dN..411 169.986054: sched_wakeup: comm=irq/140-nvme0q1 pid=157 prio=19 target_cpu=000
stress-6348 [001] dN..211 169.986054: sched_stat_runtime: comm=stress pid=6348 runtime=1729 [ns] vruntime=12653361701 [ns]
irq/140-nvme0q1-157 [000] d...314 169.986057: sched_waking: comm=stress pid=6348 prio=120 target_cpu=001
irq/140-nvme0q1-157 [000] d...414 169.986058: sched_wakeup: comm=stress pid=6348 prio=120 target_cpu=001
irq/140-nvme0q1-157 [000] d...314 169.986058: sched_pi_setprio: comm=stress pid=6348 oldprio=120 newprio=19
irq/140-nvme0q1-157 [000] d...314 169.986059: sched_stat_runtime: comm=stress pid=6348 runtime=4213 [ns] vruntime=12653365914 [ns]
stress-6348 [001] d...311 169.986059: sched_pi_setprio: comm=stress pid=6348 oldprio=19 newprio=120
stress-6348 [001] dN..311 169.986060: sched_waking: comm=irq/140-nvme0q1 pid=157 prio=19 target_cpu=000
stress-6348 [001] dN..411 169.986061: sched_wakeup: comm=irq/140-nvme0q1 pid=157 prio=19 target_cpu=000
stress-6348 [001] dN..211 169.986061: sched_stat_runtime: comm=stress pid=6348 runtime=1535 [ns] vruntime=12653367449 [ns]
irq/140-nvme0q1-157 [000] d...314 169.986067: sched_pi_setprio: comm=stress pid=6348 oldprio=120 newprio=19
irq/140-nvme0q1-157 [000] d...314 169.986067: sched_stat_runtime: comm=stress pid=6348 runtime=5639 [ns] vruntime=12653373088 [ns]
stress-6348 [001] d...311 169.986068: sched_pi_setprio: comm=stress pid=6348 oldprio=19 newprio=120
stress-6348 [001] dN..311 169.986069: sched_waking: comm=irq/140-nvme0q1 pid=157 prio=19 target_cpu=000
stress-6348 [001] dN..411 169.986069: sched_wakeup: comm=irq/140-nvme0q1 pid=157 prio=19 target_cpu=000
stress-6348 [001] dN..211 169.986069: sched_stat_runtime: comm=stress pid=6348 runtime=1561 [ns] vruntime=12653374649 [ns]
irq/140-nvme0q1-157 [000] d...314 169.986071: sched_pi_setprio: comm=stress pid=6348 oldprio=120 newprio=19
irq/140-nvme0q1-157 [000] d...314 169.986071: sched_stat_runtime: comm=stress pid=6348 runtime=1607 [ns] vruntime=12653376256 [ns]
stress-6348 [001] d...311 169.986072: sched_pi_setprio: comm=stress pid=6348 oldprio=19 newprio=120
stress-6348 [001] dN..311 169.986073: sched_waking: comm=irq/140-nvme0q1 pid=157 prio=19 target_cpu=000
stress-6348 [001] dN..411 169.986073: sched_wakeup: comm=irq/140-nvme0q1 pid=157 prio=19 target_cpu=000
stress-6348 [001] dN..211 169.986073: sched_stat_runtime: comm=stress pid=6348 runtime=1382 [ns] vruntime=12653377638 [ns]
irq/140-nvme0q1-157 [000] d...314 169.986075: sched_waking: comm=stress pid=6348 prio=120 target_cpu=001
irq/140-nvme0q1-157 [000] d...414 169.986076: sched_wakeup: comm=stress pid=6348 prio=120 target_cpu=001
irq/140-nvme0q1-157 [000] d...314 169.986076: sched_pi_setprio: comm=stress pid=6348 oldprio=120 newprio=19
irq/140-nvme0q1-157 [000] d...314 169.986077: sched_stat_runtime: comm=stress pid=6348 runtime=3120 [ns] vruntime=12653380758 [ns]
stress-6348 [001] d...311 169.986078: sched_pi_setprio: comm=stress pid=6348 oldprio=19 newprio=120
stress-6348 [001] dN..311 169.986079: sched_waking: comm=irq/140-nvme0q1 pid=157 prio=19 target_cpu=000
stress-6348 [001] dN..411 169.986079: sched_wakeup: comm=irq/140-nvme0q1 pid=157 prio=19 target_cpu=000
stress-6348 [001] dN..211 169.986079: sched_stat_runtime: comm=stress pid=6348 runtime=1522 [ns] vruntime=12653382280 [ns]
irq/140-nvme0q1-157 [000] d...314 169.986081: sched_waking: comm=stress pid=6348 prio=120 target_cpu=001
irq/140-nvme0q1-157 [000] d...414 169.986081: sched_wakeup: comm=stress pid=6348 prio=120 target_cpu=001
irq/140-nvme0q1-157 [000] d...314 169.986086: sched_pi_setprio: comm=stress pid=6348 oldprio=120 newprio=19
irq/140-nvme0q1-157 [000] d...314 169.986086: sched_stat_runtime: comm=stress pid=6348 runtime=6862 [ns] vruntime=12653389142 [ns]
stress-6348 [001] d...311 169.986087: sched_pi_setprio: comm=stress pid=6348 oldprio=19 newprio=120
stress-6348 [001] dN..311 169.986088: sched_waking: comm=irq/140-nvme0q1 pid=157 prio=19 target_cpu=000
stress-6348 [001] dN..411 169.986088: sched_wakeup: comm=irq/140-nvme0q1 pid=157 prio=19 target_cpu=000
stress-6348 [001] dN..211 169.986089: sched_stat_runtime: comm=stress pid=6348 runtime=1484 [ns] vruntime=12653390626 [ns]
irq/140-nvme0q1-157 [000] d...314 169.986092: sched_pi_setprio: comm=stress pid=6348 oldprio=120 newprio=19
irq/140-nvme0q1-157 [000] d...314 169.986092: sched_stat_runtime: comm=stress pid=6348 runtime=3101 [ns] vruntime=12653393727 [ns]
stress-6348 [001] d...311 169.986093: sched_pi_setprio: comm=stress pid=6348 oldprio=19 newprio=120
stress-6348 [001] dN..311 169.986094: sched_waking: comm=irq/140-nvme0q1 pid=157 prio=19 target_cpu=000
stress-6348 [001] dN..411 169.986094: sched_wakeup: comm=irq/140-nvme0q1 pid=157 prio=19 target_cpu=000
stress-6348 [001] dN..211 169.986094: sched_stat_runtime: comm=stress pid=6348 runtime=1466 [ns] vruntime=12653395193 [ns]
irq/140-nvme0q1-157 [000] d...314 169.986098: sched_waking: comm=stress pid=6348 prio=120 target_cpu=001
irq/140-nvme0q1-157 [000] d...414 169.986098: sched_wakeup: comm=stress pid=6348 prio=120 target_cpu=001
irq/140-nvme0q1-157 [000] d...314 169.986099: sched_pi_setprio: comm=stress pid=6348 oldprio=120 newprio=19
irq/140-nvme0q1-157 [000] d...314 169.986099: sched_stat_runtime: comm=stress pid=6348 runtime=4177 [ns] vruntime=12653399370 [ns]
stress-6348 [001] d...311 169.986100: sched_pi_setprio: comm=stress pid=6348 oldprio=19 newprio=120
stress-6348 [001] dN.h411 169.986101: sched_waking: comm=irq/131-ecat0-r pid=3891 prio=19 target_cpu=001
stress-6348 [001] dN.h511 169.986102: sched_wakeup: comm=irq/131-ecat0-r pid=3891 prio=19 target_cpu=001
stress-6348 [001] dN..211 169.986102: sched_stat_runtime: comm=stress pid=6348 runtime=1821 [ns] vruntime=12653401191 [ns]
stress-6348 [001] d...211 169.986103: sched_switch: prev_comm=stress prev_pid=6348 prev_prio=120 prev_state=R+ ==> next_comm=irq/131-ecat0-r next_pid=3891 next_prio=19
irq/131-ecat0-r-3891 [001] d...2.. 169.986106: sched_switch: prev_comm=irq/131-ecat0-r prev_pid=3891 prev_prio=19 prev_state=S ==> next_comm=stress next_pid=6348 next_prio=120
irq/140-nvme0q1-157 [000] d...314 169.986110: sched_pi_setprio: comm=stress pid=6348 oldprio=120 newprio=19
irq/140-nvme0q1-157 [000] d...314 169.986110: sched_stat_runtime: comm=stress pid=6348 runtime=4399 [ns] vruntime=12653405590 [ns]
stress-6348 [001] d...311 169.986111: sched_pi_setprio: comm=stress pid=6348 oldprio=19 newprio=120
stress-6348 [001] dN..311 169.986112: sched_waking: comm=irq/140-nvme0q1 pid=157 prio=19 target_cpu=000
stress-6348 [001] dN..411 169.986112: sched_wakeup: comm=irq/140-nvme0q1 pid=157 prio=19 target_cpu=000
stress-6348 [001] dN..211 169.986112: sched_stat_runtime: comm=stress pid=6348 runtime=1619 [ns] vruntime=12653407209 [ns]
irq/140-nvme0q1-157 [000] d...314 169.986115: sched_pi_setprio: comm=stress pid=6348 oldprio=120 newprio=19
irq/140-nvme0q1-157 [000] d...314 169.986116: sched_stat_runtime: comm=stress pid=6348 runtime=3025 [ns] vruntime=12653410234 [ns]
stress-6348 [001] d...311 169.986117: sched_pi_setprio: comm=stress pid=6348 oldprio=19 newprio=120
stress-6348 [001] dN..311 169.986118: sched_waking: comm=irq/140-nvme0q1 pid=157 prio=19 target_cpu=000
stress-6348 [001] dN..411 169.986118: sched_wakeup: comm=irq/140-nvme0q1 pid=157 prio=19 target_cpu=000
stress-6348 [001] dN..211 169.986118: sched_stat_runtime: comm=stress pid=6348 runtime=1766 [ns] vruntime=12653412000 [ns]
irq/140-nvme0q1-157 [000] d...314 169.986127: sched_pi_setprio: comm=stress pid=6348 oldprio=120 newprio=19
irq/140-nvme0q1-157 [000] d...314 169.986127: sched_stat_runtime: comm=stress pid=6348 runtime=8700 [ns] vruntime=12653420700 [ns]
stress-6348 [001] d...311 169.986128: sched_pi_setprio: comm=stress pid=6348 oldprio=19 newprio=120
stress-6348 [001] dN..311 169.986129: sched_waking: comm=irq/140-nvme0q1 pid=157 prio=19 target_cpu=000
stress-6348 [001] dN..411 169.986129: sched_wakeup: comm=irq/140-nvme0q1 pid=157 prio=19 target_cpu=000
stress-6348 [001] dN..211 169.986130: sched_stat_runtime: comm=stress pid=6348 runtime=1524 [ns] vruntime=12653422224 [ns]
irq/140-nvme0q1-157 [000] d...314 169.986131: sched_waking: comm=stress pid=6348 prio=120 target_cpu=001
irq/140-nvme0q1-157 [000] d...414 169.986132: sched_wakeup: comm=stress pid=6348 prio=120 target_cpu=001
irq/140-nvme0q1-157 [000] d...314 169.986132: sched_pi_setprio: comm=stress pid=6348 oldprio=120 newprio=19
irq/140-nvme0q1-157 [000] d...314 169.986133: sched_stat_runtime: comm=stress pid=6348 runtime=2694 [ns] vruntime=12653424918 [ns]
stress-6348 [001] d...311 169.986133: sched_pi_setprio: comm=stress pid=6348 oldprio=19 newprio=120
stress-6348 [001] dN..311 169.986134: sched_waking: comm=irq/140-nvme0q1 pid=157 prio=19 target_cpu=000
stress-6348 [001] dN..411 169.986135: sched_wakeup: comm=irq/140-nvme0q1 pid=157 prio=19 target_cpu=000
stress-6348 [001] dN..211 169.986135: sched_stat_runtime: comm=stress pid=6348 runtime=1481 [ns] vruntime=12653426399 [ns]
irq/140-nvme0q1-157 [000] d...314 169.986137: sched_pi_setprio: comm=stress pid=6348 oldprio=120 newprio=19
irq/140-nvme0q1-157 [000] d...314 169.986137: sched_stat_runtime: comm=stress pid=6348 runtime=1882 [ns] vruntime=12653428281 [ns]
stress-6348 [001] d...311 169.986138: sched_pi_setprio: comm=stress pid=6348 oldprio=19 newprio=120
stress-6348 [001] dN..311 169.986139: sched_waking: comm=irq/140-nvme0q1 pid=157 prio=19 target_cpu=000
stress-6348 [001] dN..411 169.986139: sched_wakeup: comm=irq/140-nvme0q1 pid=157 prio=19 target_cpu=000
stress-6348 [001] dN..211 169.986139: sched_stat_runtime: comm=stress pid=6348 runtime=1456 [ns] vruntime=12653429737 [ns]
irq/140-nvme0q1-157 [000] d...314 169.986144: sched_waking: comm=stress pid=6348 prio=120 target_cpu=001
irq/140-nvme0q1-157 [000] d...414 169.986144: sched_wakeup: comm=stress pid=6348 prio=120 target_cpu=001
irq/140-nvme0q1-157 [000] d...314 169.986145: sched_pi_setprio: comm=stress pid=6348 oldprio=120 newprio=19
irq/140-nvme0q1-157 [000] d...314 169.986145: sched_stat_runtime: comm=stress pid=6348 runtime=5167 [ns] vruntime=12653434904 [ns]
stress-6348 [001] d...311 169.986146: sched_pi_setprio: comm=stress pid=6348 oldprio=19 newprio=120
stress-6348 [001] dN..311 169.986147: sched_waking: comm=irq/140-nvme0q1 pid=157 prio=19 target_cpu=000
stress-6348 [001] dN..411 169.986147: sched_wakeup: comm=irq/140-nvme0q1 pid=157 prio=19 target_cpu=000
stress-6348 [001] dN..211 169.986147: sched_stat_runtime: comm=stress pid=6348 runtime=1512 [ns] vruntime=12653436416 [ns]
irq/140-nvme0q1-157 [000] d...314 169.986150: sched_waking: comm=stress pid=6348 prio=120 target_cpu=001
irq/140-nvme0q1-157 [000] d...414 169.986150: sched_wakeup: comm=stress pid=6348 prio=120 target_cpu=001
irq/140-nvme0q1-157 [000] d...314 169.986151: sched_pi_setprio: comm=stress pid=6348 oldprio=120 newprio=19
irq/140-nvme0q1-157 [000] d...314 169.986151: sched_stat_runtime: comm=stress pid=6348 runtime=4009 [ns] vruntime=12653440425 [ns]
stress-6348 [001] d...311 169.986152: sched_pi_setprio: comm=stress pid=6348 oldprio=19 newprio=120
stress-6348 [001] dN..311 169.986153: sched_waking: comm=irq/140-nvme0q1 pid=157 prio=19 target_cpu=000
stress-6348 [001] dN..411 169.986153: sched_wakeup: comm=irq/140-nvme0q1 pid=157 prio=19 target_cpu=000
stress-6348 [001] dN..211 169.986154: sched_stat_runtime: comm=stress pid=6348 runtime=1467 [ns] vruntime=12653441892 [ns]
irq/140-nvme0q1-157 [000] d...314 169.986156: sched_pi_setprio: comm=stress pid=6348 oldprio=120 newprio=19
irq/140-nvme0q1-157 [000] d...314 169.986157: sched_stat_runtime: comm=stress pid=6348 runtime=2929 [ns] vruntime=12653444821 [ns]
stress-6348 [001] d...311 169.986158: sched_pi_setprio: comm=stress pid=6348 oldprio=19 newprio=120
stress-6348 [001] dN..311 169.986159: sched_waking: comm=irq/140-nvme0q1 pid=157 prio=19 target_cpu=000
stress-6348 [001] dN..411 169.986159: sched_wakeup: comm=irq/140-nvme0q1 pid=157 prio=19 target_cpu=000
stress-6348 [001] dN..211 169.986159: sched_stat_runtime: comm=stress pid=6348 runtime=1452 [ns] vruntime=12653446273 [ns]
irq/140-nvme0q1-157 [000] d...314 169.986161: sched_waking: comm=stress pid=6348 prio=120 target_cpu=001
irq/140-nvme0q1-157 [000] d...414 169.986161: sched_wakeup: comm=stress pid=6348 prio=120 target_cpu=001
irq/140-nvme0q1-157 [000] d...314 169.986163: sched_waking: comm=stress pid=6348 prio=120 target_cpu=001
irq/140-nvme0q1-157 [000] d...414 169.986163: sched_wakeup: comm=stress pid=6348 prio=120 target_cpu=001
irq/140-nvme0q1-157 [000] d...314 169.986164: sched_pi_setprio: comm=stress pid=6348 oldprio=120 newprio=19
irq/140-nvme0q1-157 [000] d...314 169.986164: sched_stat_runtime: comm=stress pid=6348 runtime=5179 [ns] vruntime=12653451452 [ns]
stress-6348 [001] d...311 169.986165: sched_pi_setprio: comm=stress pid=6348 oldprio=19 newprio=120
stress-6348 [001] dN..411 169.986167: sched_waking: comm=ksoftirqd/1 pid=21 prio=120 target_cpu=001
stress-6348 [001] dN..511 169.986167: sched_wakeup: comm=ksoftirqd/1 pid=21 prio=120 target_cpu=001
stress-6348 [001] dN..211 169.986168: sched_stat_runtime: comm=stress pid=6348 runtime=1907 [ns] vruntime=12653453359 [ns]
stress-6348 [001] d...211 169.986168: sched_switch: prev_comm=stress prev_pid=6348 prev_prio=120 prev_state=R+ ==> next_comm=ksoftirqd/1 next_pid=21 next_prio=120
ksoftirqd/1-21 [001] d...213 169.986169: sched_waking: comm=EtherlabDaemon pid=3998 prio=120 target_cpu=001
ksoftirqd/1-21 [001] d...313 169.986170: sched_stat_runtime: comm=ksoftirqd/1 pid=21 runtime=2392 [ns] vruntime=62640255383 [ns]
ksoftirqd/1-21 [001] d...313 169.986170: sched_wakeup: comm=EtherlabDaemon pid=3998 prio=120 target_cpu=001
ksoftirqd/1-21 [001] d...2.. 169.986171: sched_stat_runtime: comm=ksoftirqd/1 pid=21 runtime=1320 [ns] vruntime=62640256703 [ns]
ksoftirqd/1-21 [001] d...2.. 169.986171: sched_switch: prev_comm=ksoftirqd/1 prev_pid=21 prev_prio=120 prev_state=S ==> next_comm=EtherlabDaemon next_pid=3998 next_prio=120
EtherlabDaemon-3998 [001] d...2.. 169.986178: sched_stat_runtime: comm=EtherlabDaemon pid=3998 runtime=7323 [ns] vruntime=20976028173 [ns]
EtherlabDaemon-3998 [001] d...2.. 169.986180: sched_switch: prev_comm=EtherlabDaemon prev_pid=3998 prev_prio=120 prev_state=S ==> next_comm=stress next_pid=6348 next_prio=120
irq/140-nvme0q1-157 [000] d...314 169.986185: sched_pi_setprio: comm=stress pid=6348 oldprio=120 newprio=19
irq/140-nvme0q1-157 [000] d...314 169.986185: sched_stat_runtime: comm=stress pid=6348 runtime=6749 [ns] vruntime=12653460108 [ns]
stress-6348 [001] d...311 169.986186: sched_pi_setprio: comm=stress pid=6348 oldprio=19 newprio=120
stress-6348 [001] dN..311 169.986188: sched_waking: comm=irq/140-nvme0q1 pid=157 prio=19 target_cpu=000
stress-6348 [001] dN..411 169.986188: sched_wakeup: comm=irq/140-nvme0q1 pid=157 prio=19 target_cpu=000
stress-6348 [001] dN..211 169.986189: sched_stat_runtime: comm=stress pid=6348 runtime=2245 [ns] vruntime=12653462353 [ns]
irq/140-nvme0q1-157 [000] d...314 169.986191: sched_pi_setprio: comm=stress pid=6348 oldprio=120 newprio=19
irq/140-nvme0q1-157 [000] d...314 169.986192: sched_stat_runtime: comm=stress pid=6348 runtime=3197 [ns] vruntime=12653465550 [ns]
stress-6348 [001] d...311 169.986193: sched_pi_setprio: comm=stress pid=6348 oldprio=19 newprio=120
stress-6348 [001] dN..311 169.986194: sched_waking: comm=irq/140-nvme0q1 pid=157 prio=19 target_cpu=000
stress-6348 [001] dN..411 169.986194: sched_wakeup: comm=irq/140-nvme0q1 pid=157 prio=19 target_cpu=000
stress-6348 [001] dN..211 169.986194: sched_stat_runtime: comm=stress pid=6348 runtime=1508 [ns] vruntime=12653467058 [ns]
irq/140-nvme0q1-157 [000] d...314 169.986196: sched_pi_setprio: comm=stress pid=6348 oldprio=120 newprio=19
irq/140-nvme0q1-157 [000] d...314 169.986196: sched_stat_runtime: comm=stress pid=6348 runtime=1661 [ns] vruntime=12653468719 [ns]
stress-6348 [001] d...311 169.986197: sched_pi_setprio: comm=stress pid=6348 oldprio=19 newprio=120
stress-6348 [001] dN..311 169.986198: sched_waking: comm=irq/140-nvme0q1 pid=157 prio=19 target_cpu=000
stress-6348 [001] dN..411 169.986198: sched_wakeup: comm=irq/140-nvme0q1 pid=157 prio=19 target_cpu=000
stress-6348 [001] dN..211 169.986198: sched_stat_runtime: comm=stress pid=6348 runtime=1528 [ns] vruntime=12653470247 [ns]
irq/140-nvme0q1-157 [000] d...314 169.986200: sched_pi_setprio: comm=stress pid=6348 oldprio=120 newprio=19
irq/140-nvme0q1-157 [000] d...314 169.986200: sched_stat_runtime: comm=stress pid=6348 runtime=1777 [ns] vruntime=12653472024 [ns]
stress-6348 [001] d...311 169.986201: sched_pi_setprio: comm=stress pid=6348 oldprio=19 newprio=120
stress-6348 [001] dN..311 169.986202: sched_waking: comm=irq/140-nvme0q1 pid=157 prio=19 target_cpu=000
stress-6348 [001] dN..411 169.986202: sched_wakeup: comm=irq/140-nvme0q1 pid=157 prio=19 target_cpu=000
stress-6348 [001] dN..211 169.986203: sched_stat_runtime: comm=stress pid=6348 runtime=1487 [ns] vruntime=12653473511 [ns]
irq/140-nvme0q1-157 [000] d...314 169.986208: sched_pi_setprio: comm=stress pid=6348 oldprio=120 newprio=19
irq/140-nvme0q1-157 [000] d...314 169.986209: sched_stat_runtime: comm=stress pid=6348 runtime=5585 [ns] vruntime=12653479096 [ns]
stress-6348 [001] d...311 169.986209: sched_pi_setprio: comm=stress pid=6348 oldprio=19 newprio=120
stress-6348 [001] dN..311 169.986210: sched_waking: comm=irq/140-nvme0q1 pid=157 prio=19 target_cpu=000
stress-6348 [001] dN..411 169.986211: sched_wakeup: comm=irq/140-nvme0q1 pid=157 prio=19 target_cpu=000
stress-6348 [001] dN..211 169.986211: sched_stat_runtime: comm=stress pid=6348 runtime=1509 [ns] vruntime=12653480605 [ns]
irq/140-nvme0q1-157 [000] d...314 169.986214: sched_pi_setprio: comm=stress pid=6348 oldprio=120 newprio=19
irq/140-nvme0q1-157 [000] d...314 169.986214: sched_stat_runtime: comm=stress pid=6348 runtime=3062 [ns] vruntime=12653483667 [ns]
stress-6348 [001] d...311 169.986215: sched_pi_setprio: comm=stress pid=6348 oldprio=19 newprio=120
stress-6348 [001] dN.h411 169.986217: sched_waking: comm=EtherlabDaemon pid=5200 prio=20 target_cpu=001
stress-6348 [001] dN.h511 169.986217: sched_wakeup: comm=EtherlabDaemon pid=5200 prio=20 target_cpu=001
stress-6348 [001] dN..211 169.986218: sched_stat_runtime: comm=stress pid=6348 runtime=1942 [ns] vruntime=12653485609 [ns]
stress-6348 [001] d...211 169.986219: sched_switch: prev_comm=stress prev_pid=6348 prev_prio=120 prev_state=R+ ==> next_comm=EtherlabDaemon next_pid=5200 next_prio=20
EtherlabDaemon-5200 [001] d...2.. 169.986234: sched_waking: comm=EtherCATControl pid=4181 prio=20 target_cpu=001
EtherlabDaemon-5200 [001] d...3.. 169.986235: sched_wakeup: comm=EtherCATControl pid=4181 prio=20 target_cpu=001
EtherlabDaemon-5200 [001] d...2.. 169.986238: sched_switch: prev_comm=EtherlabDaemon prev_pid=5200 prev_prio=20 prev_state=S ==> next_comm=EtherCATControl next_pid=4181 next_prio=20
irq/140-nvme0q1-157 [000] d...213 169.986242: sched_waking: comm=RTController pid=5320 prio=120 target_cpu=000
irq/140-nvme0q1-157 [000] d...313 169.986244: sched_wakeup: comm=RTController pid=5320 prio=120 target_cpu=000
irq/140-nvme0q1-157 [000] d...213 169.986244: sched_waking: comm=EtherCATControl pid=4185 prio=120 target_cpu=000
irq/140-nvme0q1-157 [000] d...313 169.986245: sched_wakeup: comm=EtherCATControl pid=4185 prio=120 target_cpu=000
irq/140-nvme0q1-157 [000] d...312 169.986246: sched_pi_setprio: comm=irq/140-nvme0q1 pid=157 oldprio=19 newprio=49
irq/140-nvme0q1-157 [000] dN..312 169.986246: sched_waking: comm=irq/132-ecat0-t pid=3892 prio=19 target_cpu=000
irq/140-nvme0q1-157 [000] dN..412 169.986247: sched_wakeup: comm=irq/132-ecat0-t pid=3892 prio=19 target_cpu=000
irq/140-nvme0q1-157 [000] d...212 169.986247: sched_switch: prev_comm=irq/140-nvme0q1 prev_pid=157 prev_prio=49 prev_state=R+ ==> next_comm=cyclictest next_pid=5753 next_prio=19
cyclictest-5753 [000] ....... 169.986261: tracing_mark_write: hit latency threshold (458 > 150)
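For anyone trying to reproduce this: the setup described above (cyclictest at the same FIFO priority as the nic irq threads, alongside a disk stressor) can be sketched roughly as follows. The exact cyclictest options were not posted in the thread, so the values below are assumptions based on the description (prio 80, 150us break threshold).

```shell
# Sketch of the reproduction described above -- option values are
# assumptions, not the exact command used in this thread.

# Two hdd stressors in the background:
stress --hdd 2 &

# cyclictest at FIFO prio 80 (same as the nic irq threads),
# writing a trace mark and stopping when latency exceeds 150us:
cyclictest -m -p80 --tracemark -b150

# Clean up the stressor afterwards:
kill %1
```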

* Re: Network latency on 5.4 and 5.10
2021-05-20 7:36 Network latency on 5.4 and 5.10 Christoph Mathys
2021-05-20 8:08 ` Christoph Mathys
@ 2021-05-20 13:35 ` Peter C. Wallace
2021-05-25 14:30 ` Christoph Mathys
2021-05-31 15:16 ` Christoph Mathys
3 siblings, 0 replies; 8+ messages in thread
From: Peter C. Wallace @ 2021-05-20 13:35 UTC (permalink / raw)
To: Christoph Mathys; +Cc: linux-rt-users
On Thu, 20 May 2021, Christoph Mathys wrote:
> Date: Thu, 20 May 2021 09:36:46 +0200
> From: Christoph Mathys <eraserix@gmail.com>
> To: linux-rt-users@vger.kernel.org
> Subject: Network latency on 5.4 and 5.10
>
> Hi all!
>
> I'm trying to upgrade from 4.9-rt to 5.4-rt (also tried 5.10). I'm having a
> hard time getting networking to perform on the same level as 4.9 (latency
> wise). We drive an EtherCAT bus at 4kHz using the igb driver.
>
> On 4.9, I just had to disable coalescing on the nic and increase the priority
> of the irq threads, and it performed as desired (more or less according to [1]
> for 3.6+, without task pinning).
>
> Doing the same on 5.4/5.10 does not seem to suffice any more. I get a lot of
> warnings from our code that packets have not been sent or received in time
> for the next control loop iteration.
>
> Any suggestions on what I could try to improve the nic response?
>
> Thanks!
> Christoph
>
> [1]
> https://www.osadl.org/Real-time-Ethernet-UDP-worst-case-roun.qa-farm-rt-ethernet-udp-monitor+M514ebdffa90.0.html
>
No help here, but I have seen the same thing with our real-time
UDP-connected Ethernet devices. 4.x seems to have much better
latencies than 5.x. I've tried up to 5.11.
Peter Wallace
Mesa Electronics
* Re: Network latency on 5.4 and 5.10
2021-05-20 7:36 Network latency on 5.4 and 5.10 Christoph Mathys
2021-05-20 8:08 ` Christoph Mathys
2021-05-20 13:35 ` Peter C. Wallace
@ 2021-05-25 14:30 ` Christoph Mathys
2021-05-26 3:23 ` chensong_2000
2021-05-31 15:16 ` Christoph Mathys
3 siblings, 1 reply; 8+ messages in thread
From: Christoph Mathys @ 2021-05-25 14:30 UTC (permalink / raw)
To: linux-rt-users
I did some more testing on 5.10, and it seems my
network irq thread runs into some kind of priority
inheritance (PI) with the rcuc thread and thus
cannot run for ~200us.
Some questions:
- Why does this PI boosting happen?
- Can I avoid it? How?
- Can I keep RCU callback processing from running
  for such a long time at once?
I'm playing around with NOHZ_FULL and RCU callback
offloading at the moment to see if this helps somehow.
But with only two cores, my options for juggling
threads across CPUs are somewhat limited.
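For reference, the kind of configuration being experimented with here can be sketched as below. The CPU number, priorities, and kthread name patterns are illustrative assumptions for a two-core box with the EtherCAT irq threads on CPU 1 — a sketch, not a verified fix.

```shell
# Illustrative sketch, not a verified fix.
#
# Boot parameters: offload RCU callbacks from CPU 1 and run it
# tickless (requires CONFIG_RCU_NOCB_CPU and CONFIG_NO_HZ_FULL):
#   rcu_nocbs=1 nohz_full=1

# At runtime, keep the RCU kthreads (rcuc/rcuo) below the network
# irq threads (FIFO 80 here), so a long callback batch is less
# likely to occupy the CPU at the irq threads' priority:
for pid in $(pgrep '^rcu[co]'); do
    chrt -f -p 10 "$pid"
done
for pid in $(pgrep '^irq/.*ecat0'); do
    chrt -f -p 80 "$pid"
done
```

Note that lowering the rcuc base priority does not by itself prevent PI boosting; it only shortens the window in which the boosted thread competes with the irq threads.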
rcuc/1-20 [001] .....12 16017.277265: kmem_cache_free: call_site=rcu_do_batch+0x194/0x480 ptr=000000000c04e43c
rcuc/1-20 [001] .....12 16017.277265: kmem_cache_free: call_site=rcu_do_batch+0x194/0x480 ptr=00000000c6b95ab2
rcuc/1-20 [001] .....12 16017.277266: kmem_cache_free: call_site=rcu_do_batch+0x194/0x480 ptr=00000000f9863725
rcuc/1-20 [001] d..h212 16017.277266: irq_handler_entry: irq=130 name=ecat0-rx-0
rcuc/1-20 [001] d..h212 16017.277266: irq_handler_exit: irq=130 ret=handled
rcuc/1-20 [001] d..h412 16017.277266: sched_waking: comm=irq/130-ecat0-r pid=25665 prio=19 target_cpu=001
rcuc/1-20 [001] dN.h512 16017.277267: sched_wakeup: comm=irq/130-ecat0-r pid=25665 prio=19 target_cpu=001
rcuc/1-20 [001] dN..112 16017.277267: rcu_utilization: Start context switch
rcuc/1-20 [001] dN..112 16017.277267: rcu_utilization: End context switch
rcuc/1-20 [001] d...212 16017.277268: sched_switch: prev_comm=rcuc/1 prev_pid=20 prev_prio=98 prev_state=R+ ==> next_comm=irq/130-ecat0-r next_pid=25665 next_prio=19
irq/130-ecat0-r-25665 [001] d...311 16017.277269: sched_pi_setprio: comm=rcuc/1 pid=20 oldprio=98 newprio=19
irq/130-ecat0-r-25665 [001] d...111 16017.277270: rcu_utilization: Start context switch
irq/130-ecat0-r-25665 [001] d...111 16017.277270: rcu_utilization: End context switch
irq/130-ecat0-r-25665 [001] d...211 16017.277270: sched_switch: prev_comm=irq/130-ecat0-r prev_pid=25665 prev_prio=19 prev_state=R+ ==> next_comm=rcuc/1 next_pid=20 next_prio=19
rcuc/1-20 [001] .....12 16017.277271: kmem_cache_free: call_site=rcu_do_batch+0x194/0x480 ptr=000000008900a49c
<lots of similar lines>
rcuc/1-20 [001] d...112 16017.277288: workqueue_queue_work: work struct=0000000093763c71 function=css_killed_work_fn workqueue=0000000040d70e90 req_cpu=16 cpu=1
rcuc/1-20 [001] d...112 16017.277289: workqueue_activate_work: work struct 0000000093763c71
rcuc/1-20 [001] d...312 16017.277289: sched_waking: comm=kworker/1:2 pid=13240 prio=120 target_cpu=001
rcuc/1-20 [001] d...412 16017.277290: sched_wakeup: comm=kworker/1:2 pid=13240 prio=120 target_cpu=001
rcuc/1-20 [001] .....13 16017.277290: percpu_free_percpu: base_addr=0000000026e5557e off=111288 ptr=00000000cb224c7a
rcuc/1-20 [001] .....13 16017.277290: percpu_free_percpu: base_addr=0000000026e5557e off=111280 ptr=000000001ea01103
rcuc/1-20 [001] .....12 16017.277290: kmem_cache_free: call_site=rcu_do_batch+0x194/0x480 ptr=00000000f7fe2da7
<lots of similar lines>
rcuc/1-20 [001] .....12 16017.277299: kmem_cache_free: call_site=rcu_do_batch+0x194/0x480 ptr=000000004363b5fd
rcuc/1-20 [001] d..h212 16017.277300: local_timer_entry: vector=236
rcuc/1-20 [001] d..h312 16017.277300: softirq_raise: vec=8 [action=HRTIMER]
rcuc/1-20 [001] d..h212 16017.277300: write_msr: 6e0, value 304f993b507a
rcuc/1-20 [001] d..h212 16017.277300: local_timer_exit: vector=236
rcuc/1-20 [001] .....12 16017.277301: kmem_cache_free: call_site=rcu_do_batch+0x194/0x480 ptr=00000000407b3531
<lots of similar lines>
rcuc/1-20 [001] .....12 16017.277305: kmem_cache_free: call_site=rcu_do_batch+0x194/0x480 ptr=00000000af2cd46f
rcuc/1-20 [001] d..h.12 16017.277305: local_timer_entry: vector=236
rcuc/1-20 [001] d..h112 16017.277306: hrtimer_cancel: hrtimer=0000000082cde4bc
rcuc/1-20 [001] d..h.12 16017.277306: hrtimer_expire_entry: hrtimer=0000000082cde4bc function=hrtimer_wakeup now=16017273215561
rcuc/1-20 [001] d..h212 16017.277306: sched_waking: comm=cyclictest pid=20680 prio=19 target_cpu=001
rcuc/1-20 [001] d..h312 16017.277306: sched_wakeup: comm=cyclictest pid=20680 prio=19 target_cpu=001
rcuc/1-20 [001] d..h.12 16017.277307: hrtimer_expire_exit: hrtimer=0000000082cde4bc
rcuc/1-20 [001] d..h.12 16017.277307: write_msr: 6e0, value 304f996a73c6
rcuc/1-20 [001] d..h.12 16017.277307: local_timer_exit: vector=236
rcuc/1-20 [001] .....12 16017.277307: kmem_cache_free: call_site=rcu_do_batch+0x194/0x480 ptr=000000008da0302a
<lots of similar lines>
rcuc/1-20 [001] .....12 16017.277418: kmem_cache_free: call_site=rcu_do_batch+0x194/0x480 ptr=00000000994b1e7d
rcuc/1-20 [001] .....12 16017.277418: mm_page_free: page=00000000a7e4a633 pfn=104560 order=2
rcuc/1-20 [001] .....12 16017.277418: kmem_cache_free: call_site=rcu_do_batch+0x194/0x480 ptr=0000000054b769bf
rcuc/1-20 [001] .....12 16017.277419: sched_process_free: comm=kill pid=14926 prio=120
rcuc/1-20 [001] d...113 16017.277421: workqueue_queue_work: work struct=00000000d2b86e8a function=css_release_work_fn workqueue=0000000040d70e90 req_cpu=16 cpu=1
rcuc/1-20 [001] .....13 16017.277421: kfree: call_site=put_css_set_locked+0x155/0x2b0 ptr=000000000b628ae9
rcuc/1-20 [001] d...113 16017.277422: workqueue_queue_work: work struct=00000000d6255bb4 function=css_release_work_fn workqueue=0000000040d70e90 req_cpu=16 cpu=1
rcuc/1-20 [001] .....13 16017.277422: kfree: call_site=put_css_set_locked+0x155/0x2b0 ptr=00000000d844ad24
rcuc/1-20 [001] .....13 16017.277422: kfree: call_site=put_css_set_locked+0x155/0x2b0 ptr=00000000a49f8139
rcuc/1-20 [001] .....13 16017.277422: kfree: call_site=put_css_set_locked+0x155/0x2b0 ptr=00000000f2b3224d
rcuc/1-20 [001] .....13 16017.277422: kfree: call_site=put_css_set_locked+0x155/0x2b0 ptr=00000000d2397b3c
rcuc/1-20 [001] .....13 16017.277423: kfree: call_site=put_css_set_locked+0x155/0x2b0 ptr=00000000a367555e
rcuc/1-20 [001] .....13 16017.277423: kfree: call_site=put_css_set_locked+0x155/0x2b0 ptr=000000002aaa5cac
rcuc/1-20 [001] .....13 16017.277423: kfree: call_site=put_css_set_locked+0x155/0x2b0 ptr=00000000430e91ae
rcuc/1-20 [001] .....13 16017.277423: kfree: call_site=put_css_set_locked+0x155/0x2b0 ptr=00000000f8512c96
rcuc/1-20 [001] .....13 16017.277423: kfree: call_site=put_css_set_locked+0x155/0x2b0 ptr=000000005e5ac681
rcuc/1-20 [001] d...213 16017.277424: timer_start: timer=0000000098e46346 function=delayed_work_timer_fn expires=4310684537 [timeout=20] cpu=1 idx=58 flags=I
rcuc/1-20 [001] .....12 16017.277425: kfree: call_site=security_task_free+0x36/0x50 ptr=00000000e928d73e
rcuc/1-20 [001] .....12 16017.277425: kmem_cache_free: call_site=__put_task_struct+0x88/0x190 ptr=00000000d32d025d
<lots of similar lines>
rcuc/1-20 [001] .....12 16017.277452: kmem_cache_free: call_site=rcu_do_batch+0x194/0x480 ptr=00000000097691fb
rcuc/1-20 [001] .....12 16017.277453: sched_process_free: comm=(sd-pam) pid=14847 prio=120
rcuc/1-20 [001] d...112 16017.277454: workqueue_queue_work: work struct=0000000077c15f97 function=free_work workqueue=00000000f605602a req_cpu=16 cpu=1
rcuc/1-20 [001] d...112 16017.277454: workqueue_activate_work: work struct 0000000077c15f97
rcuc/1-20 [001] .....12 16017.277454: kfree: call_site=security_task_free+0x36/0x50 ptr=000000003016d2cc
rcuc/1-20 [001] .....12 16017.277454: kmem_cache_free: call_site=__put_task_struct+0x88/0x190 ptr=000000008d44c92c
rcuc/1-20 [001] .....12 16017.277455: kmem_cache_free: call_site=__put_task_struct+0x101/0x190 ptr=00000000bf1f1ae9
rcuc/1-20 [001] .....12 16017.277455: kfree: call_site=free_task+0x1b/0x60 ptr=0000000000000000
rcuc/1-20 [001] .....12 16017.277455: kmem_cache_free: call_site=rcu_do_batch+0x194/0x480 ptr=00000000eb792cfc
rcuc/1-20 [001] .....12 16017.277456: sched_process_free: comm=systemd pid=14846 prio=120
rcuc/1-20 [001] d...113 16017.277457: workqueue_queue_work: work struct=00000000a65dcda5 function=css_release_work_fn workqueue=0000000040d70e90 req_cpu=16 cpu=1
rcuc/1-20 [001] .....13 16017.277457: kfree: call_site=put_css_set_locked+0x155/0x2b0 ptr=00000000a29d18ed
rcuc/1-20 [001] d...113 16017.277457: workqueue_queue_work: work struct=0000000000e1ccca function=css_release_work_fn workqueue=0000000040d70e90 req_cpu=16 cpu=1
rcuc/1-20 [001] .....13 16017.277457: kfree: call_site=put_css_set_locked+0x155/0x2b0 ptr=00000000cfeca6c9
<lots of similar lines>
rcuc/1-20 [001] .....12 16017.277461: kmem_cache_free: call_site=rcu_do_batch+0x194/0x480 ptr=000000009e31b81e
rcuc/1-20 [001] .....12 16017.277461: rcu_utilization: End RCU core
rcuc/1-20 [001] ...s.12 16017.277462: softirq_entry: vec=8 [action=HRTIMER]
rcuc/1-20 [001] d..s113 16017.277462: hrtimer_cancel: hrtimer=000000009862add4
rcuc/1-20 [001] ...s.13 16017.277462: hrtimer_expire_entry: hrtimer=000000009862add4 function=hrtimer_wakeup now=16017273371650
rcuc/1-20 [001] d..s213 16017.277462: sched_waking: comm=EtherlabDaemon pid=25770 prio=120 target_cpu=001
rcuc/1-20 [001] d..s313 16017.277464: sched_wakeup: comm=EtherlabDaemon pid=25770 prio=120 target_cpu=001
rcuc/1-20 [001] ...s.13 16017.277464: hrtimer_expire_exit: hrtimer=000000009862add4
rcuc/1-20 [001] d..s113 16017.277464: write_msr: 6e0, value 304f994bc448
rcuc/1-20 [001] ...s.12 16017.277464: softirq_exit: vec=8 [action=HRTIMER]
rcuc/1-20 [001] dN..311 16017.277465: sched_pi_setprio: comm=rcuc/1 pid=20 oldprio=19 newprio=98
rcuc/1-20 [001] dN..311 16017.277466: sched_waking: comm=irq/130-ecat0-r pid=25665 prio=19 target_cpu=001
rcuc/1-20 [001] dN..411 16017.277466: sched_wakeup: comm=irq/130-ecat0-r pid=25665 prio=19 target_cpu=001
rcuc/1-20 [001] .N..... 16017.277467: rcu_utilization: End CPU kthread@rcu_wait
rcuc/1-20 [001] dN..1.. 16017.277467: rcu_utilization: Start context switch
rcuc/1-20 [001] dN..1.. 16017.277467: rcu_utilization: End context switch
rcuc/1-20 [001] d...2.. 16017.277468: sched_switch: prev_comm=rcuc/1 prev_pid=20 prev_prio=98 prev_state=R+ ==> next_comm=cyclictest next_pid=20680 next_prio=19
rcuc/1-20 [001] d...2.. 16017.277468: tlb_flush: pages:0 reason:flush on task switch (0)
cyclictest-20680 [001] ....1.. 16017.277469: sys_clock_nanosleep -> 0x0
cyclictest-20680 [001] ....... 16017.277470: sys_exit: NR 230 = 0
cyclictest-20680 [001] d...... 16017.277470: x86_fpu_regs_activated: x86/fpu: 000000001e825867 load: 1 xfeatures: 2 xcomp_bv: 8000000000000003
cyclictest-20680 [001] d..h... 16017.277474: call_function_single_entry: vector=251
cyclictest-20680 [001] d..h... 16017.277475: call_function_single_exit: vector=251
cyclictest-20680 [001] d..h... 16017.277476: irq_work_entry: vector=246
cyclictest-20680 [001] d..h2.. 16017.277479: sched_migrate_task: comm=EtherCATControl pid=25916 prio=20 orig_cpu=1 dest_cpu=0
cyclictest-20680 [001] d..h... 16017.277480: irq_work_exit: vector=246
cyclictest-20680 [001] ....1.. 16017.277485: sys_write(fd: 4, buf: 7f98e50e0300, count: 21)
cyclictest-20680 [001] ....... 16017.277485: sys_enter: NR 1 (4, 7f98e50e0300, 21, 0, 0, 0)
cyclictest-20680 [001] ....... 16017.277486: tracing_mark_write: hit latency threshold (165 > 150)
cyclictest-20680 [001] ....1.. 16017.277487: sys_write -> 0x21
cyclictest-20680 [001] ....... 16017.277487: sys_exit: NR 1 = 33
cyclictest-20680 [001] ....1.. 16017.277487: sys_write(fd: 5, buf: 409a8a, count: 2)
cyclictest-20680 [001] ....... 16017.277487: sys_enter: NR 1 (5, 409a8a, 2, 0, 0, 0)
* Re: Network latency on 5.4 and 5.10
2021-05-25 14:30 ` Christoph Mathys
@ 2021-05-26 3:23 ` chensong_2000
2021-05-26 6:10 ` Christoph Mathys
0 siblings, 1 reply; 8+ messages in thread
From: chensong_2000 @ 2021-05-26 3:23 UTC (permalink / raw)
To: Christoph Mathys, linux-rt-users
hi Christoph,
what's your command to get this log? It seems to have better
information than mine.
Is it "trace-cmd start -e all -p function -l "kmem""? Would you
mind sharing yours? Many thanks.
BR
Song
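The exact command was never posted in this thread, but a trace-cmd invocation along the following lines can produce a log with the scheduler, irq, timer, and kmem events seen above. The event selection is an assumption based on what appears in the trace, not the command actually used.

```shell
# Assumed invocation -- the exact command was not posted in the
# thread. Enable the event classes visible in the log above:
trace-cmd start -e sched -e irq -e timer -e kmem -e workqueue

# Run the workload under test, then dump and stop the trace:
trace-cmd show > trace.txt
trace-cmd stop
```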
On 2021/5/25 22:30, Christoph Mathys wrote:
> I did some more testing on 5.10 and it seems my
> network irq thread runs into some kind of PI with
> the RCU thread and thus cannot run for ~200us.
>
> Some questions:
> - Why does this PI happen?
> - Can I avoid it? How?
> - Can I avoid that the RCU runs for such a long
> time at once?
>
> I'm playing around with NOHZ_FULL and RCU callback
> offloading at the moment to see if this helps somehow.
> But with only two cores my options on juggling
> threads onto CPUs are somewhat limited.
>
> [line-wrapped copy of the trace from the quoted message snipped]
> call_site=__put_task_struct+0x88/0x190 ptr=000000008d44c92c
> rcuc/1-20 [001] .....12 16017.277455: kmem_cache_free:
> call_site=__put_task_struct+0x101/0x190 ptr=00000000bf1f1ae9
> rcuc/1-20 [001] .....12 16017.277455: kfree:
> call_site=free_task+0x1b/0x60 ptr=0000000000000000
> rcuc/1-20 [001] .....12 16017.277455: kmem_cache_free:
> call_site=rcu_do_batch+0x194/0x480 ptr=00000000eb792cfc
> rcuc/1-20 [001] .....12 16017.277456:
> sched_process_free: comm=systemd pid=14846 prio=120
> rcuc/1-20 [001] d...113 16017.277457:
> workqueue_queue_work: work struct=00000000a65dcda5
> function=css_release_work_fn workqueue=0000000040d70e90 req_cpu=16 cpu=1
> rcuc/1-20 [001] .....13 16017.277457: kfree:
> call_site=put_css_set_locked+0x155/0x2b0 ptr=00000000a29d18ed
> rcuc/1-20 [001] d...113 16017.277457:
> workqueue_queue_work: work struct=0000000000e1ccca
> function=css_release_work_fn workqueue=0000000040d70e90 req_cpu=16 cpu=1
> rcuc/1-20 [001] .....13 16017.277457: kfree:
> call_site=put_css_set_locked+0x155/0x2b0 ptr=00000000cfeca6c9
> <lots of similar lines>
> rcuc/1-20 [001] .....12 16017.277461: kmem_cache_free:
> call_site=rcu_do_batch+0x194/0x480 ptr=000000009e31b81e
> rcuc/1-20 [001] .....12 16017.277461: rcu_utilization:
> End RCU core
> rcuc/1-20 [001] ...s.12 16017.277462: softirq_entry:
> vec=8 [action=HRTIMER]
> rcuc/1-20 [001] d..s113 16017.277462: hrtimer_cancel:
> hrtimer=000000009862add4
> rcuc/1-20 [001] ...s.13 16017.277462:
> hrtimer_expire_entry: hrtimer=000000009862add4 function=hrtimer_wakeup
> now=16017273371650
> rcuc/1-20 [001] d..s213 16017.277462: sched_waking:
> comm=EtherlabDaemon pid=25770 prio=120 target_cpu=001
> rcuc/1-20 [001] d..s313 16017.277464: sched_wakeup:
> comm=EtherlabDaemon pid=25770 prio=120 target_cpu=001
> rcuc/1-20 [001] ...s.13 16017.277464:
> hrtimer_expire_exit: hrtimer=000000009862add4
> rcuc/1-20 [001] d..s113 16017.277464: write_msr: 6e0,
> value 304f994bc448
> rcuc/1-20 [001] ...s.12 16017.277464: softirq_exit:
> vec=8 [action=HRTIMER]
> rcuc/1-20 [001] dN..311 16017.277465: sched_pi_setprio:
> comm=rcuc/1 pid=20 oldprio=19 newprio=98
> rcuc/1-20 [001] dN..311 16017.277466: sched_waking:
> comm=irq/130-ecat0-r pid=25665 prio=19 target_cpu=001
> rcuc/1-20 [001] dN..411 16017.277466: sched_wakeup:
> comm=irq/130-ecat0-r pid=25665 prio=19 target_cpu=001
> rcuc/1-20 [001] .N..... 16017.277467: rcu_utilization:
> End CPU kthread@rcu_wait
> rcuc/1-20 [001] dN..1.. 16017.277467: rcu_utilization:
> Start context switch
> rcuc/1-20 [001] dN..1.. 16017.277467: rcu_utilization:
> End context switch
> rcuc/1-20 [001] d...2.. 16017.277468: sched_switch:
> prev_comm=rcuc/1 prev_pid=20 prev_prio=98 prev_state=R+ ==>
> next_comm=cyclictest next_pid=20680 next_prio=19
> rcuc/1-20 [001] d...2.. 16017.277468: tlb_flush: pages:0
> reason:flush on task switch (0)
> cyclictest-20680 [001] ....1.. 16017.277469:
> sys_clock_nanosleep -> 0x0
> cyclictest-20680 [001] ....... 16017.277470: sys_exit: NR 230 = 0
> cyclictest-20680 [001] d...... 16017.277470:
> x86_fpu_regs_activated: x86/fpu: 000000001e825867 load: 1 xfeatures: 2
> xcomp_bv: 8000000000000003
> cyclictest-20680 [001] d..h... 16017.277474:
> call_function_single_entry: vector=251
> cyclictest-20680 [001] d..h... 16017.277475:
> call_function_single_exit: vector=251
> cyclictest-20680 [001] d..h... 16017.277476: irq_work_entry:
> vector=246
> cyclictest-20680 [001] d..h2.. 16017.277479:
> sched_migrate_task: comm=EtherCATControl pid=25916 prio=20 orig_cpu=1
> dest_cpu=0
> cyclictest-20680 [001] d..h... 16017.277480: irq_work_exit:
> vector=246
> cyclictest-20680 [001] ....1.. 16017.277485: sys_write(fd: 4,
> buf: 7f98e50e0300, count: 21)
> cyclictest-20680 [001] ....... 16017.277485: sys_enter: NR 1
> (4, 7f98e50e0300, 21, 0, 0, 0)
> cyclictest-20680 [001] ....... 16017.277486:
> tracing_mark_write: hit latency threshold (165 > 150)
> cyclictest-20680 [001] ....1.. 16017.277487: sys_write -> 0x21
> cyclictest-20680 [001] ....... 16017.277487: sys_exit: NR 1 = 33
> cyclictest-20680 [001] ....1.. 16017.277487: sys_write(fd: 5,
> buf: 409a8a, count: 2)
> cyclictest-20680 [001] ....... 16017.277487: sys_enter: NR 1
> (5, 409a8a, 2, 0, 0, 0)
>
^ permalink raw reply [flat|nested] 8+ messages in thread
* Re: Network latency on 5.4 and 5.10
2021-05-26 3:23 ` chensong_2000
@ 2021-05-26 6:10 ` Christoph Mathys
2021-05-26 7:15 ` chensong_2000
0 siblings, 1 reply; 8+ messages in thread
From: Christoph Mathys @ 2021-05-26 6:10 UTC (permalink / raw)
To: chensong_2000; +Cc: Linux RT Users
Hi Song,
> what's your command to get this log? it seems to have better information
> than mine.
I just do old-school tracing:
# enable all available events
echo 1 > /sys/kernel/debug/tracing/events/enable
# start tracing
echo 1 > /sys/kernel/debug/tracing/tracing_on
Then I run cyclictest with -b and --tracemark to stop tracing when the
latency threshold is exceeded and to write a marker into the trace.
The trace can then be obtained by:
cat /sys/kernel/debug/tracing/trace
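Put together, the whole capture could be scripted roughly like this (a sketch: it assumes debugfs is mounted at the usual location and must be run as root; the cyclictest priority and interval values are illustrative, only -b and --tracemark are the flags referred to above):

```shell
#!/bin/sh
# Sketch of the capture sequence described above (run as root).
# Assumes debugfs is mounted at /sys/kernel/debug.
T=/sys/kernel/debug/tracing
echo 1 > "$T/events/enable"        # enable all available trace events
echo 1 > "$T/tracing_on"           # start tracing
# Stop tracing once a wakeup latency exceeds 150 us (-b) and write a
# marker into the trace buffer at that point (--tracemark).
cyclictest -p 80 -i 250 -b 150 --tracemark
cat "$T/trace" > latency-trace.txt # save the captured trace for analysis
```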
HTH
Christoph
* Re: Network latency on 5.4 and 5.10
2021-05-26 6:10 ` Christoph Mathys
@ 2021-05-26 7:15 ` chensong_2000
0 siblings, 0 replies; 8+ messages in thread
From: chensong_2000 @ 2021-05-26 7:15 UTC (permalink / raw)
To: Christoph Mathys; +Cc: Linux RT Users
OK, it looks like enabling all events is enough. Many thanks.
BR
Song
On 2021/5/26 2:10 PM, Christoph Mathys wrote:
> Hi Song,
>
>> what's your command to get this log? it seems to have better information
>> than mine.
>
> I just do old-school tracing:
>
> # enable all available events
> echo 1 > /sys/kernel/debug/tracing/events/enable
> # start tracing
> echo 1 > /sys/kernel/debug/tracing/tracing_on
>
> Then I run cyclictest with -b and --tracemark to stop tracing when the
> latency threshold is exceeded and to write a marker into the trace.
> The trace can then be obtained by:
> cat /sys/kernel/debug/tracing/trace
>
> HTH
> Christoph
>
* Re: Network latency on 5.4 and 5.10
2021-05-20 7:36 Network latency on 5.4 and 5.10 Christoph Mathys
` (2 preceding siblings ...)
2021-05-25 14:30 ` Christoph Mathys
@ 2021-05-31 15:16 ` Christoph Mathys
3 siblings, 0 replies; 8+ messages in thread
From: Christoph Mathys @ 2021-05-31 15:16 UTC (permalink / raw)
To: linux-rt-users
In the trace below (5.10), the NET_RX softirq is executed very late by ksoftirqd
instead of directly by the irq thread.
Can I somehow force the irq thread to always handle the softirq? I fear that
raising the priority of ksoftirqd would cause all kinds of undesired latencies,
because it's a rather general-purpose thread.
While tracing, I also see occasional priority inheritance (PI) here and there,
introducing varying amounts of latency. Is there a good way to find the mutex
responsible for the PI?
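One way to chase the PI source (a sketch, not something discussed in this thread: it assumes CONFIG_FUNCTION_TRACER is enabled and debugfs is mounted; rt_mutex_slowlock is the in-kernel contention slowpath where priority inheritance is applied) is to record a kernel stack each time a task blocks on an rt_mutex:

```shell
# Sketch: stack-trace every entry into the rt_mutex slowpath, i.e. every
# time a task actually blocks on an rt_mutex and PI may kick in.
# Run as root; assumes debugfs at /sys/kernel/debug.
T=/sys/kernel/debug/tracing
echo rt_mutex_slowlock > "$T/set_ftrace_filter"  # trace only the slowpath
echo function > "$T/current_tracer"
echo 1 > "$T/options/func_stack_trace"           # one stack dump per hit
sleep 5                                          # let the workload run
echo nop > "$T/current_tracer"                   # stop tracing
echo 0 > "$T/options/func_stack_trace"
cat "$T/trace"   # the stacks show which code path took the contended mutex
```

The recorded stacks identify the caller that entered the slowpath, which usually pins down the offending lock.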
RTController-20346 [001] .....12 3001.164215: netif_rx: dev=lo skbaddr=00000000f2ce0335 len=60
RTController-20346 [001] d...113 3001.164215: softirq_raise: vec=3 [action=NET_RX]
RTController-20346 [001] .....12 3001.164215: netif_rx_exit: ret=0
RTController-20346 [001] .....12 3001.164216: net_dev_xmit: dev=lo skbaddr=00000000f2ce0335 len=74 rc=0
RTController-20346 [001] ...s.12 3001.164216: softirq_entry: vec=3 [action=NET_RX]
RTController-20346 [001] ...s.12 3001.164216: netif_receive_skb: dev=lo skbaddr=00000000f2ce0335 len=60
<IRQ from returning EtherCAT packet>
RTController-20346 [001] d..H212 3001.164217: irq_handler_entry: irq=131 name=ecat0-rx-0
RTController-20346 [001] d..H212 3001.164217: irq_handler_exit: irq=131 ret=handled
RTController-20346 [001] d..H412 3001.164217: sched_waking: comm=irq/131-ecat0-r pid=19970 prio=19 target_cpu=001
RTController-20346 [001] dN.H512 3001.164218: sched_wakeup: comm=irq/131-ecat0-r pid=19970 prio=19 target_cpu=001
RTController-20346 [001] dN.s112 3001.164218: rcu_utilization: Start context switch
RTController-20346 [001] dN.s112 3001.164219: rcu_utilization: End context switch
RTController-20346 [001] dN.s212 3001.164219: sched_stat_runtime: comm=RTController pid=20346 runtime=37773 [ns] vruntime=50306493110 [ns]
RTController-20346 [001] d..s212 3001.164221: sched_switch: prev_comm=RTController prev_pid=20346 prev_prio=120 prev_state=R+ ==> next_comm=irq/131-ecat0-r next_pid=19970 next_prio=19
RTController-20346 [001] d..s212 3001.164221: x86_fpu_regs_deactivated: x86/fpu: 00000000a8ca490d load: 0 xfeatures: 3 xcomp_bv: 8000000000000003
<IRQ thread starts working>
irq/131-ecat0-r-19970 [001] d...2.. 3001.164222: write_msr: 48, value 0
irq/131-ecat0-r-19970 [001] d..h1.. 3001.164222: local_timer_entry: vector=236
irq/131-ecat0-r-19970 [001] d..h2.. 3001.164222: hrtimer_cancel: hrtimer=00000000cbde1eb6
irq/131-ecat0-r-19970 [001] d..h1.. 3001.164222: hrtimer_expire_entry: hrtimer=00000000cbde1eb6 function=hrtimer_wakeup now=3001159523681
irq/131-ecat0-r-19970 [001] d..h3.. 3001.164223: sched_waking: comm=SafetyLogicShad pid=20142 prio=30 target_cpu=001
irq/131-ecat0-r-19970 [001] d..h4.. 3001.164223: sched_wakeup: comm=SafetyLogicShad pid=20142 prio=30 target_cpu=001
irq/131-ecat0-r-19970 [001] d..h1.. 3001.164223: hrtimer_expire_exit: hrtimer=00000000cbde1eb6
<idle>-0 [000] ....1.. 3001.164223: cpu_idle: state=4294967295 cpu_id=0
<idle>-0 [000] d...1.. 3001.164224: cpu_idle: state=0 cpu_id=0
irq/131-ecat0-r-19970 [001] d..h1.. 3001.164224: write_msr: 6e0, value 91a6aa5a118
irq/131-ecat0-r-19970 [001] d..h1.. 3001.164224: local_timer_exit: vector=236
<PI? WTF?>
irq/131-ecat0-r-19970 [001] d...311 3001.164225: sched_pi_setprio: comm=RTController pid=20346 oldprio=120 newprio=19
irq/131-ecat0-r-19970 [001] d...111 3001.164226: rcu_utilization: Start context switch
irq/131-ecat0-r-19970 [001] d...111 3001.164226: rcu_utilization: End context switch
irq/131-ecat0-r-19970 [001] d...211 3001.164227: sched_switch: prev_comm=irq/131-ecat0-r prev_pid=19970 prev_prio=19 prev_state=R+ ==> next_comm=RTController next_pid=20346 next_prio=19
RTController-20346 [001] d..s212 3001.164227: write_msr: 48, value 4
RTController-20346 [001] ...s.12 3001.164230: fib_table_lookup: table 254 oif 0 iif 1 proto 6 127.0.0.1/9000 -> 127.0.0.1/57200 tos 0 scope 0 flags 0 ==> dev lo gw 114.121.0.0/1901:102:4b4f::ffff:4574:6865 err 0
RTController-20346 [001] ...s.12 3001.164231: kmem_cache_alloc_node: call_site=__alloc_skb+0x4f/0x200 ptr=000000000d68a52d bytes_req=224 bytes_alloc=256 gfp_flags=GFP_ATOMIC node=-1
RTController-20346 [001] ...s.12 3001.164231: kmalloc_node: call_site=__ip_append_data+0xc08/0xdf0 ptr=000000007e7f4744 bytes_req=448 bytes_alloc=512 gfp_flags=GFP_ATOMIC|__GFP_NOWARN|__GFP_NOMEMALLOC node=-1
RTController-20346 [001] ...s.12 3001.164232: kfree: call_site=__ip_make_skb+0x2f2/0x420 ptr=0000000000000000
<idle>-0 [000] ....1.. 3001.164233: cpu_idle: state=4294967295 cpu_id=0
<idle>-0 [000] d...1.. 3001.164233: cpu_idle: state=0 cpu_id=0
RTController-20346 [001] ...s.12 3001.164234: net_dev_queue: dev=lo skbaddr=000000000d68a52d len=54
RTController-20346 [001] ...s.12 3001.164234: net_dev_start_xmit: dev=lo queue_mapping=0 skbaddr=000000000d68a52d vlan_tagged=0 vlan_proto=0x0000 vlan_tci=0x0000 protocol=0x0800 ip_summed=0 len=54 data_len=0 network_offset=14 transport_offset_valid=1 transport_offset=34 tx_flags=0 gso_size=0 gso_segs=0 gso_type=0x0
RTController-20346 [001] ...s.12 3001.164234: netif_rx_entry: dev=lo napi_id=0x0 queue_mapping=0 skbaddr=000000000d68a52d vlan_tagged=0 vlan_proto=0x0000 vlan_tci=0x0000 protocol=0x0800 ip_summed=0 hash=0x00000000 l4_hash=0 len=40 data_len=0 truesize=768 mac_header_valid=1 mac_header=-14 nr_frags=0 gso_size=0 gso_type=0x0
RTController-20346 [001] ...s.12 3001.164234: netif_rx: dev=lo skbaddr=000000000d68a52d len=40
RTController-20346 [001] ...s.12 3001.164234: netif_rx_exit: ret=0
RTController-20346 [001] ...s.12 3001.164234: net_dev_xmit: dev=lo skbaddr=000000000d68a52d len=54 rc=0
RTController-20346 [001] ...s.12 3001.164235: kfree_skb: skbaddr=00000000f2ce0335 protocol=2048 location=000000005f4796b3
RTController-20346 [001] ...s.12 3001.164235: netif_receive_skb: dev=lo skbaddr=000000000d68a52d len=40
RTController-20346 [001] ...s.12 3001.164237: kfree: call_site=nf_conntrack_free+0x1f/0x40 [nf_conntrack] ptr=00000000d1a8317d
RTController-20346 [001] ...s.12 3001.164237: kmem_cache_free: call_site=nf_conntrack_free+0x2e/0x40 [nf_conntrack] ptr=000000006bfc0f40
RTController-20346 [001] ...s.12 3001.164238: napi_poll: napi poll on napi struct 0000000081e7f255 for device (no_device) work 2 budget 64
RTController-20346 [001] ...s.12 3001.164238: softirq_exit: vec=3 [action=NET_RX]
<Uff, only a short one. But my packet that just arrived on ecat0 has not been handled...>
RTController-20346 [001] d...311 3001.164238: sched_pi_setprio: comm=RTController pid=20346 oldprio=19 newprio=120
RTController-20346 [001] dN..311 3001.164240: sched_waking: comm=irq/131-ecat0-r pid=19970 prio=19 target_cpu=001
RTController-20346 [001] dN..411 3001.164240: sched_wakeup: comm=irq/131-ecat0-r pid=19970 prio=19 target_cpu=001
RTController-20346 [001] dN..311 3001.164241: sched_stat_runtime: comm=RTController pid=20346 runtime=2053 [ns] vruntime=50306496047 [ns]
RTController-20346 [001] dN.h... 3001.164242: local_timer_entry: vector=236
RTController-20346 [001] dN.h1.. 3001.164242: softirq_raise: vec=8 [action=HRTIMER]
<idle>-0 [000] ....1.. 3001.164242: cpu_idle: state=4294967295 cpu_id=0
<idle>-0 [000] d...1.. 3001.164242: cpu_idle: state=0 cpu_id=0
RTController-20346 [001] dN.h... 3001.164242: write_msr: 6e0, value 91a6aace636
RTController-20346 [001] dN.h... 3001.164242: local_timer_exit: vector=236
RTController-20346 [001] dN..2.. 3001.164242: sched_waking: comm=ksoftirqd/1 pid=21 prio=120 target_cpu=001
RTController-20346 [001] dN..3.. 3001.164243: sched_wakeup: comm=ksoftirqd/1 pid=21 prio=120 target_cpu=001
RTController-20346 [001] dN..1.. 3001.164243: rcu_utilization: Start context switch
RTController-20346 [001] dN..1.. 3001.164243: rcu_utilization: End context switch
RTController-20346 [001] dN..2.. 3001.164243: sched_stat_runtime: comm=RTController pid=20346 runtime=2364 [ns] vruntime=50306498411 [ns]
RTController-20346 [001] d...2.. 3001.164244: sched_switch: prev_comm=RTController prev_pid=20346 prev_prio=120 prev_state=R+ ==> next_comm=irq/131-ecat0-r next_pid=19970 next_prio=19
< because it has not yet signaled the softirq? >
irq/131-ecat0-r-19970 [001] d...211 3001.164245: write_msr: 48, value 0
irq/131-ecat0-r-19970 [001] d....12 3001.164245: softirq_raise: vec=3 [action=NET_RX]
irq/131-ecat0-r-19970 [001] d...1.. 3001.164246: rcu_utilization: Start context switch
irq/131-ecat0-r-19970 [001] d...1.. 3001.164246: rcu_utilization: End context switch
irq/131-ecat0-r-19970 [001] d...2.. 3001.164247: sched_switch: prev_comm=irq/131-ecat0-r prev_pid=19970 prev_prio=19 prev_state=S ==> next_comm=SafetyLogicShad next_pid=20142 next_prio=30
irq/131-ecat0-r-19970 [001] d...2.. 3001.164249: tlb_flush: pages:0 reason:flush on task switch (0)
< hey, why is my packet not handled by the softirq right away? >
SafetyLogicShad-20142 [001] d...2.. 3001.164250: write_msr: 48, value 4
SafetyLogicShad-20142 [001] ....1.. 3001.164250: sys_nanosleep -> 0x0
SafetyLogicShad-20142 [001] ....... 3001.164250: sys_exit: NR 35 = 0
SafetyLogicShad-20142 [001] d...... 3001.164251: x86_fpu_regs_activated: x86/fpu: 00000000e034cb86 load: 1 xfeatures: 3 xcomp_bv: 8000000000000003
<idle>-0 [000] ....1.. 3001.164251: cpu_idle: state=4294967295 cpu_id=0
<idle>-0 [000] d...1.. 3001.164252: cpu_idle: state=0 cpu_id=0
<idle>-0 [000] ....1.. 3001.164261: cpu_idle: state=4294967295 cpu_id=0
<idle>-0 [000] d...1.. 3001.164261: cpu_idle: state=0 cpu_id=0
<idle>-0 [000] ....1.. 3001.164270: cpu_idle: state=4294967295 cpu_id=0
<idle>-0 [000] d...1.. 3001.164270: cpu_idle: state=0 cpu_id=0
<idle>-0 [000] ....1.. 3001.164279: cpu_idle: state=4294967295 cpu_id=0
<idle>-0 [000] d...1.. 3001.164279: cpu_idle: state=0 cpu_id=0
<idle>-0 [000] ....1.. 3001.164289: cpu_idle: state=4294967295 cpu_id=0
<idle>-0 [000] d...1.. 3001.164289: cpu_idle: state=0 cpu_id=0
<idle>-0 [000] ....1.. 3001.164298: cpu_idle: state=4294967295 cpu_id=0
<idle>-0 [000] d...1.. 3001.164298: cpu_idle: state=0 cpu_id=0
<idle>-0 [000] ....1.. 3001.164307: cpu_idle: state=4294967295 cpu_id=0
<idle>-0 [000] d...1.. 3001.164308: cpu_idle: state=0 cpu_id=0
<idle>-0 [000] ....1.. 3001.164317: cpu_idle: state=4294967295 cpu_id=0
<idle>-0 [000] d...1.. 3001.164317: cpu_idle: state=0 cpu_id=0
<idle>-0 [000] d..h1.. 3001.164324: local_timer_entry: vector=236
<idle>-0 [000] d..h2.. 3001.164324: hrtimer_cancel: hrtimer=00000000eb8a586a
<idle>-0 [000] d..h1.. 3001.164324: hrtimer_expire_entry: hrtimer=00000000eb8a586a function=hrtimer_wakeup now=3001159625341
<idle>-0 [000] d..h3.. 3001.164324: sched_waking: comm=EtherlabDaemon pid=20787 prio=20 target_cpu=000
<idle>-0 [000] dN.h4.. 3001.164325: sched_wakeup: comm=EtherlabDaemon pid=20787 prio=20 target_cpu=000
<idle>-0 [000] dN.h1.. 3001.164325: hrtimer_expire_exit: hrtimer=00000000eb8a586a
<idle>-0 [000] dN.h1.. 3001.164325: write_msr: 6e0, value 91a6aac112e
<idle>-0 [000] dN.h1.. 3001.164325: local_timer_exit: vector=236
<idle>-0 [000] .N..1.. 3001.164326: cpu_idle: state=4294967295 cpu_id=0
<idle>-0 [000] dN..1.. 3001.164326: rcu_utilization: Start context switch
<idle>-0 [000] dN..1.. 3001.164326: rcu_utilization: End context switch
<idle>-0 [000] d...2.. 3001.164326: sched_switch: prev_comm=swapper/0 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=EtherlabDaemon next_pid=20787 next_prio=20
EtherlabDaemon-20787 [000] d...2.. 3001.164327: write_msr: 48, value 4
EtherlabDaemon-20787 [000] ....1.. 3001.164327: sys_nanosleep -> 0x0
EtherlabDaemon-20787 [000] ....... 3001.164327: sys_exit: NR 35 = 0
EtherlabDaemon-20787 [000] d...... 3001.164327: x86_fpu_regs_activated: x86/fpu: 000000007cc67df0 load: 1 xfeatures: 3 xcomp_bv: 8000000000000003
< well, this sucks, no packet is ready for me... >
EtherlabDaemon-20787 [000] ....1.. 3001.164330: sys_recvfrom(fd: a, ubuf: a90c10, size: 640, flags: 40, addr: 0, addr_len: 0)
EtherlabDaemon-20787 [000] ....... 3001.164330: sys_enter: NR 45 (a, a90c10, 640, 40, 0, 0)
EtherlabDaemon-20787 [000] ....1.. 3001.164331: sys_recvfrom -> 0xfffffffffffffff5
EtherlabDaemon-20787 [000] ....... 3001.164331: sys_exit: NR 45 = -11
SafetyLogicShad-20142 [001] ....1.. 3001.164337: sys_nanosleep(rqtp: 7ffef5e5f960, rmtp: 0)
SafetyLogicShad-20142 [001] ....... 3001.164337: sys_enter: NR 35 (7ffef5e5f960, 0, 1, 32f4, 3e8, 4)
EtherlabDaemon-20787 [000] ....1.. 3001.164337: sys_sendto(fd: a, buff: a8cee0, len: 36a, flags: 0, addr: 0, addr_len: 0)
SafetyLogicShad-20142 [001] ....... 3001.164337: hrtimer_init: hrtimer=00000000cbde1eb6 clockid=CLOCK_MONOTONIC mode=REL
EtherlabDaemon-20787 [000] ....... 3001.164337: sys_enter: NR 44 (a, a8cee0, 36a, 0, 0, 0)
SafetyLogicShad-20142 [001] d...1.. 3001.164338: hrtimer_start: hrtimer=00000000cbde1eb6 function=hrtimer_wakeup expires=3001160639092 softexpires=3001160639092 mode=0x9
SafetyLogicShad-20142 [001] d...1.. 3001.164338: rcu_utilization: Start context switch
EtherlabDaemon-20787 [000] ....... 3001.164338: kmem_cache_alloc_node: call_site=__alloc_skb+0x4f/0x200 ptr=000000005b815b30 bytes_req=224 bytes_alloc=256 gfp_flags=GFP_KERNEL node=-1
SafetyLogicShad-20142 [001] d...1.. 3001.164338: rcu_utilization: End context switch
EtherlabDaemon-20787 [000] ....... 3001.164338: kmalloc_node: call_site=alloc_skb_with_frags+0x4d/0x1b0 ptr=0000000091eb5053 bytes_req=1216 bytes_alloc=2048 gfp_flags=GFP_KERNEL|__GFP_NOWARN|__GFP_NOMEMALLOC node=-1
EtherlabDaemon-20787 [000] .....12 3001.164339: net_dev_queue: dev=ecat0 skbaddr=000000005b815b30 len=874
EtherlabDaemon-20787 [000] .....15 3001.164339: net_dev_start_xmit: dev=ecat0 queue_mapping=0 skbaddr=000000005b815b30 vlan_tagged=0 vlan_proto=0x0000 vlan_tci=0x0000 protocol=0x88a4 ip_summed=0 len=874 data_len=0 network_offset=14 transport_offset_valid=0 transport_offset=65533 tx_flags=0 gso_size=0 gso_segs=0 gso_type=0x0
SafetyLogicShad-20142 [001] d...2.. 3001.164339: sched_switch: prev_comm=SafetyLogicShad prev_pid=20142 prev_prio=30 prev_state=S ==> next_comm=ksoftirqd/1 next_pid=21 next_prio=120
EtherlabDaemon-20787 [000] .....15 3001.164340: net_dev_xmit: dev=ecat0 skbaddr=000000005b815b30 len=874 rc=0
SafetyLogicShad-20142 [001] d...2.. 3001.164340: x86_fpu_regs_deactivated: x86/fpu: 00000000e034cb86 load: 0 xfeatures: 3 xcomp_bv: 8000000000000003
EtherlabDaemon-20787 [000] .....14 3001.164340: qdisc_dequeue: dequeue ifindex=2 qdisc handle=0x0 parent=0x1 txq_state=0x0 packets=0 skbaddr=0000000000000000
EtherlabDaemon-20787 [000] ....1.. 3001.164340: sys_sendto -> 0x36a
EtherlabDaemon-20787 [000] ....... 3001.164340: sys_exit: NR 44 = 874
< only now the packet is being handled >
ksoftirqd/1-21 [001] d...2.. 3001.164340: write_msr: 48, value 0
ksoftirqd/1-21 [001] ...s.12 3001.164341: softirq_entry: vec=3 [action=NET_RX]
ksoftirqd/1-21 [001] ...s.12 3001.164342: kmem_cache_alloc: call_site=__build_skb+0x1f/0xe0 ptr=000000006c4a30be bytes_req=224 bytes_alloc=256 gfp_flags=GFP_ATOMIC
ksoftirqd/1-21 [001] ...s.12 3001.164342: napi_gro_receive_entry: dev=ecat0 napi_id=0x0 queue_mapping=1 skbaddr=000000006c4a30be vlan_tagged=0 vlan_proto=0x0000 vlan_tci=0x0000 protocol=0x88a4 ip_summed=0 hash=0x00000000 l4_hash=0 len=874 data_len=0 truesize=2304 mac_header_valid=1 mac_header=-14 nr_frags=0 gso_size=0 gso_type=0x0
ksoftirqd/1-21 [001] ...s.12 3001.164342: napi_gro_receive_exit: ret=3
ksoftirqd/1-21 [001] ...s.12 3001.164342: netif_receive_skb: dev=ecat0 skbaddr=000000006c4a30be len=874
ksoftirqd/1-21 [001] ...s.12 3001.164343: napi_poll: napi poll on napi struct 0000000028309abc for device ecat0 work 1 budget 64
EtherlabDaemon-20787 [000] ....1.. 3001.164343: sys_futex(uaddr: 7f8d9fca7000, op: 1, val: 1, utime: 0, uaddr2: 5b9663, val3: bb9)
ksoftirqd/1-21 [001] ...s.12 3001.164343: softirq_exit: vec=3 [action=NET_RX]
EtherlabDaemon-20787 [000] ....... 3001.164343: sys_enter: NR 202 (7f8d9fca7000, 1, 1, 0, 5b9663, bb9)
ksoftirqd/1-21 [001] ...s.12 3001.164343: softirq_entry: vec=8 [action=HRTIMER]
Thread overview: 8+ messages
2021-05-20 7:36 Network latency on 5.4 and 5.10 Christoph Mathys
2021-05-20 8:08 ` Christoph Mathys
2021-05-20 13:35 ` Peter C. Wallace
2021-05-25 14:30 ` Christoph Mathys
2021-05-26 3:23 ` chensong_2000
2021-05-26 6:10 ` Christoph Mathys
2021-05-26 7:15 ` chensong_2000
2021-05-31 15:16 ` Christoph Mathys