* [PATCH 0/2] Don't show PF_IO_WORKER in /proc/<pid>/task/
@ 2021-03-25 16:43 Jens Axboe
  2021-03-25 16:43 ` [PATCH 1/2] kernel: don't include PF_IO_WORKERs as part of same_thread_group() Jens Axboe
  ` (2 more replies)
  0 siblings, 3 replies; 31+ messages in thread
From: Jens Axboe @ 2021-03-25 16:43 UTC (permalink / raw)
  To: io-uring; +Cc: torvalds, ebiederm, linux-kernel, oleg, metze

Hi,

Stefan reports that attaching to a task with io_uring leaves gdb very
confused: it repeatedly attempts to attach to the IO threads, even
though it receives -EPERM every time.

This patchset proposes to skip PF_IO_WORKER threads in
same_thread_group(), except for accounting purposes, where we still
want them included. We also skip listing the IO threads in
/proc/<pid>/task/ so that gdb doesn't think it should stop and attach
to them. This makes us consistent with earlier kernels, where these
async threads were not related to the ring owning task, and hence gdb
(and others) ignored them anyway.

This seems to me to be the right approach, but I'm open to comments if
others see it differently.

Oleg, I did see your messages on SIGSTOP as well, and as was discussed
with Eric, that is something we can certainly revisit. I do think the
visibility of these threads is a separate issue, though: even with
SIGSTOP implemented (which I did try as well), we're never going to
allow ptrace attach, and hence gdb would still be broken. Hence I'd
rather treat them as separate issues to attack.

-- 
Jens Axboe

^ permalink raw reply	[flat|nested] 31+ messages in thread
* [PATCH 1/2] kernel: don't include PF_IO_WORKERs as part of same_thread_group()
  2021-03-25 16:43 [PATCH 0/2] Don't show PF_IO_WORKER in /proc/<pid>/task/ Jens Axboe
@ 2021-03-25 16:43 ` Jens Axboe
  2021-03-25 16:43 ` [PATCH 2/2] proc: don't show PF_IO_WORKER threads as threads in /proc/<pid>/task/ Jens Axboe
  2021-03-25 19:33 ` [PATCH 0/2] Don't show PF_IO_WORKER in /proc/<pid>/task/ Eric W. Biederman
  2 siblings, 0 replies; 31+ messages in thread
From: Jens Axboe @ 2021-03-25 16:43 UTC (permalink / raw)
  To: io-uring; +Cc: torvalds, ebiederm, linux-kernel, oleg, metze, Jens Axboe

Don't pretend that the IO threads are in the same thread group; the
only case where that seems to be desired is for accounting purposes.
Add a special accounting function for that and make the scheduler side
use it. For signals and ptrace, we don't allow them to be treated as
threads anyway.

Reported-by: Stefan Metzmacher <metze@samba.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 include/linux/sched/signal.h | 9 ++++++++-
 kernel/sched/cputime.c       | 2 +-
 2 files changed, 9 insertions(+), 2 deletions(-)

diff --git a/include/linux/sched/signal.h b/include/linux/sched/signal.h
index 3f6a0fcaa10c..4f621e386abf 100644
--- a/include/linux/sched/signal.h
+++ b/include/linux/sched/signal.h
@@ -668,11 +668,18 @@ static inline bool thread_group_leader(struct task_struct *p)
 }
 
 static inline
-bool same_thread_group(struct task_struct *p1, struct task_struct *p2)
+bool same_thread_group_account(struct task_struct *p1, struct task_struct *p2)
 {
 	return p1->signal == p2->signal;
 }
 
+static inline
+bool same_thread_group(struct task_struct *p1, struct task_struct *p2)
+{
+	return same_thread_group_account(p1, p2) &&
+		!((p1->flags | p2->flags) & PF_IO_WORKER);
+}
+
 static inline struct task_struct *next_thread(const struct task_struct *p)
 {
 	return list_entry_rcu(p->thread_group.next,
diff --git a/kernel/sched/cputime.c b/kernel/sched/cputime.c
index 5f611658eeab..625110cacc2a 100644
--- a/kernel/sched/cputime.c
+++ b/kernel/sched/cputime.c
@@ -307,7 +307,7 @@ void thread_group_cputime(struct task_struct *tsk, struct task_cputime *times)
 	 * those pending times and rely only on values updated on tick or
 	 * other scheduler action.
 	 */
-	if (same_thread_group(current, tsk))
+	if (same_thread_group_account(current, tsk))
 		(void) task_sched_runtime(current);
 
 	rcu_read_lock();
-- 
2.31.0

^ permalink raw reply related	[flat|nested] 31+ messages in thread
* [PATCH 2/2] proc: don't show PF_IO_WORKER threads as threads in /proc/<pid>/task/
  2021-03-25 16:43 [PATCH 0/2] Don't show PF_IO_WORKER in /proc/<pid>/task/ Jens Axboe
  2021-03-25 16:43 ` [PATCH 1/2] kernel: don't include PF_IO_WORKERs as part of same_thread_group() Jens Axboe
@ 2021-03-25 16:43 ` Jens Axboe
  2021-03-29  1:57 ` [proc] 43b2a76b1a: will-it-scale.per_process_ops -11.3% regression kernel test robot
  2021-03-25 19:33 ` [PATCH 0/2] Don't show PF_IO_WORKER in /proc/<pid>/task/ Eric W. Biederman
  2 siblings, 1 reply; 31+ messages in thread
From: Jens Axboe @ 2021-03-25 16:43 UTC (permalink / raw)
  To: io-uring; +Cc: torvalds, ebiederm, linux-kernel, oleg, metze, Jens Axboe

We don't allow SIGSTOP and ptrace attach to these threads, and that
confuses applications like gdb that assume they can attach to any
thread listed in /proc/<pid>/task/. gdb then enters an infinite loop
of retrying the attach, even though it fails with the same error
(-EPERM) every time.

Skip over PF_IO_WORKER threads in the proc task setup. We can't just
terminate the iteration when we find a PF_IO_WORKER thread, as there's
no real ordering here: it's perfectly feasible for the first thread to
be an IO worker, with a real thread after it. Hence just implement the
skip.

Reported-by: Stefan Metzmacher <metze@samba.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 fs/proc/base.c | 22 ++++++++++++++--------
 1 file changed, 14 insertions(+), 8 deletions(-)

diff --git a/fs/proc/base.c b/fs/proc/base.c
index 3851bfcdba56..abff2fe10bfa 100644
--- a/fs/proc/base.c
+++ b/fs/proc/base.c
@@ -3723,7 +3723,7 @@ static struct task_struct *first_tid(struct pid *pid, int tid, loff_t f_pos,
 	 */
 	pos = task = task->group_leader;
 	do {
-		if (!nr--)
+		if (same_thread_group(task, pos) && !nr--)
 			goto found;
 	} while_each_thread(task, pos);
 fail:
@@ -3744,16 +3744,22 @@ static struct task_struct *first_tid(struct pid *pid, int tid, loff_t f_pos,
  */
 static struct task_struct *next_tid(struct task_struct *start)
 {
-	struct task_struct *pos = NULL;
+	struct task_struct *tmp, *pos = NULL;
+
 	rcu_read_lock();
-	if (pid_alive(start)) {
-		pos = next_thread(start);
-		if (thread_group_leader(pos))
-			pos = NULL;
-		else
-			get_task_struct(pos);
+	if (!pid_alive(start))
+		goto no_thread;
+	list_for_each_entry_rcu(tmp, &start->thread_group, thread_group) {
+		if (!thread_group_leader(tmp) && same_thread_group(start, tmp)) {
+			get_task_struct(tmp);
+			pos = tmp;
+			break;
+		}
 	}
+no_thread:
 	rcu_read_unlock();
+	if (!pos)
+		return NULL;
 	put_task_struct(start);
 	return pos;
 }
-- 
2.31.0

^ permalink raw reply related	[flat|nested] 31+ messages in thread
* [proc] 43b2a76b1a: will-it-scale.per_process_ops -11.3% regression
  2021-03-25 16:43 ` [PATCH 2/2] proc: don't show PF_IO_WORKER threads as threads in /proc/<pid>/task/ Jens Axboe
@ 2021-03-29  1:57 ` kernel test robot
  0 siblings, 0 replies; 31+ messages in thread
From: kernel test robot @ 2021-03-29  1:57 UTC (permalink / raw)
  To: Jens Axboe
  Cc: 0day robot, Stefan Metzmacher, LKML, lkp, ying.huang, feng.tang,
	zhengjun.xing, io-uring, torvalds, ebiederm, oleg, Jens Axboe

[-- Attachment #1: Type: text/plain, Size: 1645450 bytes --]

Greeting,

FYI, we noticed a -11.3% regression of will-it-scale.per_process_ops due to commit:

commit: 43b2a76b1a5abcc9833b463bef137d35cbb85cdd ("[PATCH 2/2] proc: don't show PF_IO_WORKER threads as threads in /proc/<pid>/task/")
url: https://github.com/0day-ci/linux/commits/Jens-Axboe/Don-t-show-PF_IO_WORKER-in-proc-pid-task/20210326-004554
base: https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git a74e6a014c9d4d4161061f770c9b4f98372ac778

in testcase: will-it-scale
on test machine: 192 threads Intel(R) Xeon(R) Platinum 9242 CPU @ 2.30GHz with 192G memory
with following parameters:

	nr_task: 16
	mode: process
	test: eventfd1
	cpufreq_governor: performance
	ucode: 0x5003006

test-description: Will It Scale takes a testcase and runs it from 1 through to n parallel copies to see if the testcase will scale. It builds both a process and threads based test in order to see any differences between the two.
test-url: https://github.com/antonblanchard/will-it-scale

In addition to that, the commit also has significant impact on the following tests:

+------------------+---------------------------------------------------------------------------+
| testcase: change | will-it-scale: will-it-scale.per_process_ops -11.1% regression |
| test machine     | 104 threads Skylake with 192G memory |
| test parameters  | cpufreq_governor=performance |
|                  | mode=process |
|                  | nr_task=16 |
|                  | test=malloc1 |
|                  | ucode=0x2006a0a |
+------------------+---------------------------------------------------------------------------+
| testcase: change | will-it-scale: will-it-scale.per_process_ops -6.3% regression |
| test machine     | 104 threads Skylake with 192G memory |
| test parameters  | cpufreq_governor=performance |
|                  | mode=process |
|                  | nr_task=100% |
|                  | test=eventfd1 |
|                  | ucode=0x2006a0a |
+------------------+---------------------------------------------------------------------------+
| testcase: change | will-it-scale: will-it-scale.per_thread_ops -2.2% regression |
| test machine     | 48 threads Intel(R) Xeon(R) CPU E5-2697 v2 @ 2.70GHz with 112G memory |
| test parameters  | cpufreq_governor=performance |
|                  | mode=thread |
|                  | nr_task=100% |
|                  | test=poll1 |
|                  | ucode=0x42e |
+------------------+---------------------------------------------------------------------------+
| testcase: change | will-it-scale: will-it-scale.per_process_ops -2.6% regression |
| test machine     | 48 threads Intel(R) Xeon(R) CPU E5-2697 v2 @ 2.70GHz with 112G memory |
| test parameters  | cpufreq_governor=performance |
|                  | mode=process |
|                  | nr_task=50% |
|                  | test=pread1 |
|                  | ucode=0x42e |
+------------------+---------------------------------------------------------------------------+
| testcase: change | will-it-scale: will-it-scale.per_process_ops -15.3% regression |
| test machine     | 104 threads Skylake with 192G memory |
| test parameters  | cpufreq_governor=performance |
|                  | mode=process |
|                  | nr_task=50% |
|                  | test=brk1 |
|                  | ucode=0x2006a0a |
+------------------+---------------------------------------------------------------------------+
| testcase: change | will-it-scale: will-it-scale.per_thread_ops -7.1% regression |
| test machine     | 144 threads Intel(R) Xeon(R) CPU E7-8890 v3 @ 2.50GHz with 512G memory |
| test parameters  | cpufreq_governor=performance |
|                  | mode=thread |
|                  | nr_task=16 |
|                  | test=pipe1 |
|                  | ucode=0x16 |
+------------------+---------------------------------------------------------------------------+
| testcase: change | will-it-scale: will-it-scale.per_process_ops -21.4% regression |
| test machine     | 192 threads Intel(R) Xeon(R) Platinum 9242 CPU @ 2.30GHz with 192G memory |
| test parameters  | cpufreq_governor=performance |
|                  | mode=process |
|                  | nr_task=100% |
|                  | test=eventfd1 |
|                  | ucode=0x5003006 |
+------------------+---------------------------------------------------------------------------+
| testcase: change | unixbench: unixbench.score -4.4% regression |
| test machine     | 96 threads Intel(R) Xeon(R) CPU @ 2.30GHz with 128G memory |
| test parameters  | cpufreq_governor=performance |
|                  | nr_task=1 |
|                  | runtime=300s |
|                  | test=pipe |
|                  | ucode=0x4003006 |
+------------------+---------------------------------------------------------------------------+
| testcase: change | will-it-scale: will-it-scale.per_process_ops -20.3% regression |
| test machine     | 104 threads Skylake with 192G memory |
| test parameters  | cpufreq_governor=performance |
|                  | mode=process |
|                  | nr_task=16 |
|                  | test=open1 |
|                  | ucode=0x2006a0a |
+------------------+---------------------------------------------------------------------------+
| testcase: change | will-it-scale: will-it-scale.per_thread_ops -17.7% regression |
| test machine     | 144 threads Intel(R) Xeon(R) Gold 5318H CPU @ 2.50GHz with 128G memory |
| test parameters  | cpufreq_governor=performance |
|                  | mode=thread |
|                  | nr_task=50% |
|                  | test=readseek1 |
|                  | ucode=0x700001e |
+------------------+---------------------------------------------------------------------------+
| testcase: change | will-it-scale: will-it-scale.per_thread_ops -11.2% regression |
| test machine     | 192 threads Intel(R) Xeon(R) Platinum 9242 CPU @ 2.30GHz with 192G memory |
| test parameters  | cpufreq_governor=performance |
|                  | mode=thread |
|                  | nr_task=16 |
|                  | test=eventfd1 |
|                  | ucode=0x5003006 |
+------------------+---------------------------------------------------------------------------+
| testcase: change | will-it-scale: will-it-scale.per_thread_ops -1.1% regression |
| test machine     | 48 threads Intel(R) Xeon(R) CPU E5-2697 v2 @ 2.70GHz with 112G memory |
| test parameters  | cpufreq_governor=performance |
|                  | mode=thread |
|                  | nr_task=50% |
|                  | test=pthread_mutex2 |
|                  | ucode=0x42e |
+------------------+---------------------------------------------------------------------------+
| testcase: change | will-it-scale: will-it-scale.per_thread_ops -6.7% regression |
| test machine     | 104 threads Skylake with 192G memory |
| test parameters  | cpufreq_governor=performance |
|                  | mode=thread |
|                  | nr_task=100% |
|                  | test=eventfd1 |
|                  | ucode=0x2006a0a |
+------------------+---------------------------------------------------------------------------+
| testcase: change | will-it-scale: will-it-scale.per_process_ops -11.2% regression |
| test machine     | 104 threads Skylake with 192G memory |
| test parameters  | cpufreq_governor=performance |
|                  | mode=process |
|                  | nr_task=50% |
|                  | test=lock1 |
|                  | ucode=0x2006a0a |
+------------------+---------------------------------------------------------------------------+
| testcase: change | will-it-scale: will-it-scale.per_thread_ops 3.3% improvement |
| test machine     | 144 threads Intel(R) Xeon(R) CPU E7-8890 v3 @ 2.50GHz with 512G memory |
| test parameters  | cpufreq_governor=performance |
|                  | mode=thread |
|                  | nr_task=50% |
|                  | test=sched_yield |
|                  | ucode=0x16 |
+------------------+---------------------------------------------------------------------------+
| testcase: change | fxmark: fxmark.hdd_ext4_MWRL_9_bufferedio.works/sec -11.4% regression |
| test machine     | 288 threads Intel(R) Xeon Phi(TM) CPU 7295 @ 1.50GHz with 80G memory |
| test parameters  | cpufreq_governor=performance |
|                  | directio=bufferedio |
|                  | disk=1HDD |
|                  | fstype=ext4 |
|                  | media=hdd |
|                  | test=MWRL |
|                  | ucode=0x11 |
+------------------+---------------------------------------------------------------------------+
| testcase: change | fxmark: fxmark.hdd_btrfs_MWRL_9_bufferedio.works/sec -31.7% regression |
| test machine     | 288 threads Intel(R) Xeon Phi(TM) CPU 7295 @ 1.50GHz with 80G memory |
| test parameters  | cpufreq_governor=performance |
|                  | directio=bufferedio |
|                  | disk=1HDD |
|                  | fstype=btrfs |
|                  | media=hdd |
|                  | test=MWRL |
|                  | ucode=0x11 |
+------------------+---------------------------------------------------------------------------+
| testcase: change | stress-ng: stress-ng.netdev.ops_per_sec -16.5% regression |
| test machine     | 96 threads Intel(R) Xeon(R) Gold 6252 CPU @ 2.10GHz with 192G memory |
| test parameters  | class=network |
|                  | cpufreq_governor=performance |
|                  | disk=1HDD |
|                  | nr_threads=100% |
|                  | test=netdev |
|                  | testtime=60s |
|                  | ucode=0x5003006 |
+------------------+---------------------------------------------------------------------------+
| testcase: change | fxmark: fxmark.hdd_xfs_DRBH_2_directio.works/sec -14.3% regression |
| test machine     | 288 threads Intel(R) Xeon Phi(TM) CPU 7295 @ 1.50GHz with 80G memory |
| test parameters  | cpufreq_governor=performance |
|                  | directio=directio |
|                  | disk=1HDD |
|                  | fstype=xfs |
|                  | media=hdd |
|                  | test=DRBH |
|                  | ucode=0x11 |
+------------------+---------------------------------------------------------------------------+
| testcase: change | stress-ng: stress-ng.fanotify.ops_per_sec 192.8% improvement |
| test machine     | 48 threads Intel(R) Xeon(R) CPU E5-2697 v2 @ 2.70GHz with 112G memory |
| test parameters  | class=filesystem |
|                  | cpufreq_governor=performance |
|                  | disk=1HDD |
|                  | fs=btrfs |
|                  | nr_threads=10% |
|                  | test=fanotify |
|                  | testtime=60s |
|                  | ucode=0x42e |
+------------------+---------------------------------------------------------------------------+
| testcase: change | fxmark: fxmark.hdd_xfs_MWCM_9_bufferedio.works/sec -9.0% regression |
| test machine     | 288 threads Intel(R) Xeon Phi(TM) CPU 7295 @ 1.50GHz with 80G memory |
| test parameters  | cpufreq_governor=performance |
|                  | directio=bufferedio |
|                  | disk=1HDD |
|                  | fstype=xfs |
|                  | media=hdd |
|                  | test=MWCM |
|                  | ucode=0x11 |
+------------------+---------------------------------------------------------------------------+

If you fix the issue, kindly add following tag
Reported-by: kernel test robot <oliver.sang@intel.com>

Details are as below:
-------------------------------------------------------------------------------------------------->

To reproduce:

        git clone https://github.com/intel/lkp-tests.git
        cd lkp-tests
        bin/lkp install job.yaml  # job file is attached in this email
        bin/lkp split-job --compatible job.yaml
        bin/lkp run compatible-job.yaml

=========================================================================================
compiler/cpufreq_governor/kconfig/mode/nr_task/rootfs/tbox_group/test/testcase/ucode:
  gcc-9/performance/x86_64-rhel-8.3/process/16/debian-10.4-x86_64-20200603.cgz/lkp-csl-2ap2/eventfd1/will-it-scale/0x5003006

commit:
  dde38d5b80 ("kernel: don't include PF_IO_WORKERs as part of same_thread_group()")
  43b2a76b1a ("proc: don't show PF_IO_WORKER threads as threads in /proc/<pid>/task/")

dde38d5b805358c7  43b2a76b1a5abcc9833b463bef1
----------------  ---------------------------
       %stddev      %change         %stddev
           \           |                \
  47173522            -11.3%   41822683        will-it-scale.16.processes
   2948344            -11.3%    2613917        will-it-scale.per_process_ops
  47173522            -11.3%   41822683        will-it-scale.workload
    334392 ± 62%     +222.6%    1078700 ± 43%  numa-numastat.node3.local_node
    359226 ± 51%     +217.8%
   1141654 ± 38%  numa-numastat.node3.numa_hit
      1.63 ± 20%  -0.6        1.01 ±  8%  mpstat.cpu.all.irq%
      6.70        +0.8        7.53        mpstat.cpu.all.sys%
      1.41        -0.1        1.27        mpstat.cpu.all.usr%
   1387381      +194.5%    4086263 ±  5%  vmstat.memory.cache
     15.17 ±  2%  +14.3%      17.33 ±  2%  vmstat.procs.r
      2193 ±  3%   +9.6%       2402        vmstat.system.cs
   1429848 ± 16%  +26.2%    1804776 ± 12%  sched_debug.cfs_rq:/.spread0.max
    566.65        +74.3%     987.56        sched_debug.cfs_rq:/.util_est_enqueued.max
    331.80 ±  3%  +11.2%     368.82        sched_debug.cpu.curr->pid.avg
      0.09 ±  4%  +11.1%       0.10        sched_debug.cpu.nr_running.avg
     54610 ±  3%  +4930.8%   2747384 ±  7%  meminfo.Active
     54610 ±  3%  +4930.8%   2747384 ±  7%  meminfo.Active(anon)
    173070        +13.0%     195601        meminfo.AnonHugePages
    265559        +15.2%     305891        meminfo.AnonPages
   1256498       +214.2%    3948319 ±  5%  meminfo.Cached
    509316       +545.4%    3287375 ±  6%  meminfo.Committed_AS
    285355        +13.8%     324656        meminfo.Inactive
    285355        +13.8%     324656        meminfo.Inactive(anon)
   3296220       +104.7%    6746177 ±  3%  meminfo.Memused
      4791 ±  3%  +16.9%       5602 ±  2%  meminfo.PageTables
     75032 ±  2%  +3587.6%   2766852 ±  7%  meminfo.Shmem
   4299183 ±  3%  +57.3%    6764473 ±  3%  meminfo.max_used_kB
    179.17 ±158%  +4225.3%      7749 ±160%  numa-vmstat.node0.nr_active_anon
    179.17 ±158%  +4225.3%      7749 ±160%  numa-vmstat.node0.nr_zone_active_anon
    196.17 ±129%  +95954.0%    188425 ± 98%  numa-vmstat.node1.nr_active_anon
     73277 ±  2%  +258.3%     262578 ± 70%  numa-vmstat.node1.nr_file_pages
      2075 ± 82%  +9064.7%    190228 ± 97%  numa-vmstat.node1.nr_shmem
    196.17 ±129%  +95954.0%    188425 ± 98%  numa-vmstat.node1.nr_zone_active_anon
    254.67 ±134%  +67712.8%    172696 ± 70%  numa-vmstat.node2.nr_active_anon
     76891 ±  4%  +221.5%     247214 ± 49%  numa-vmstat.node2.nr_file_pages
    455.33 ±108%  +38042.8%    173677 ± 70%  numa-vmstat.node2.nr_shmem
    254.67 ±134%  +67712.8%    172696 ± 70%  numa-vmstat.node2.nr_zone_active_anon
     12988 ±  5%  +2347.1%    317842 ± 58%  numa-vmstat.node3.nr_active_anon
     88291 ±  3%  +344.8%     392756 ± 47%  numa-vmstat.node3.nr_file_pages
     14154 ±  8%  +2154.8%    319144 ± 58%  numa-vmstat.node3.nr_shmem
     12988 ±  5%  +2347.1%    317842 ± 58%  numa-vmstat.node3.nr_zone_active_anon
    707483 ± 24%  +53.0%    1082663 ± 30%  numa-vmstat.node3.numa_hit
    717.67 ±158%  +4214.9%     30966 ±160%  numa-meminfo.node0.Active
    717.67 ±158%  +4214.9%     30966 ±160%  numa-meminfo.node0.Active(anon)
    898016 ±  7%  +18.0%    1059629 ±  8%  numa-meminfo.node0.MemUsed
    785.33 ±129%  +95815.7%    753258 ± 98%  numa-meminfo.node1.Active
    785.33 ±129%  +95815.7%    753258 ± 98%  numa-meminfo.node1.Active(anon)
    293110 ±  2%  +258.2%    1049866 ± 70%  numa-meminfo.node1.FilePages
    737028 ±  9%  +128.1%    1681518 ± 47%  numa-meminfo.node1.MemUsed
      8303 ± 82%  +9058.8%    760466 ± 97%  numa-meminfo.node1.Shmem
      1019 ±134%  +67579.3%    690216 ± 70%  numa-meminfo.node2.Active
      1019 ±134%  +67579.3%    690216 ± 70%  numa-meminfo.node2.Active(anon)
     94946 ± 62%  +232.2%     315437 ± 45%  numa-meminfo.node2.AnonPages.max
    307567 ±  4%  +221.3%     988285 ± 49%  numa-meminfo.node2.FilePages
    808971 ±  8%  +111.6%    1712135 ± 29%  numa-meminfo.node2.MemUsed
      1822 ±108%  +37980.2%    694137 ± 70%  numa-meminfo.node2.Shmem
     51989 ±  5%  +2342.9%   1270067 ± 58%  numa-meminfo.node3.Active
     51989 ±  5%  +2342.9%   1270067 ± 58%  numa-meminfo.node3.Active(anon)
    353216 ±  3%  +344.4%    1569725 ± 47%  numa-meminfo.node3.FilePages
    853866 ± 11%  +168.2%    2290171 ± 35%  numa-meminfo.node3.MemUsed
     56665 ±  8%  +2150.6%   1275276 ± 58%  numa-meminfo.node3.Shmem
     13627 ±  3%  +4930.0%    685442 ±  7%  proc-vmstat.nr_active_anon
     66425        +15.2%      76490        proc-vmstat.nr_anon_pages
   4835055         -1.8%    4749110        proc-vmstat.nr_dirty_background_threshold
   9681934         -1.8%    9509833        proc-vmstat.nr_dirty_threshold
    314100       +213.8%     985675 ±  5%  proc-vmstat.nr_file_pages
  48604271         -1.8%   47743555        proc-vmstat.nr_free_pages
     71374        +13.7%      81180        proc-vmstat.nr_inactive_anon
      9741         +1.6%       9892        proc-vmstat.nr_mapped
      1203 ±  2%  +16.7%       1403 ±  2%  proc-vmstat.nr_page_table_pages
     18733 ±  2%  +3584.9%    690308 ±  7%  proc-vmstat.nr_shmem
     72469         +7.5%      77891 ±  2%  proc-vmstat.nr_slab_unreclaimable
     13627 ±  3%  +4930.0%    685442 ±  7%  proc-vmstat.nr_zone_active_anon
     71374        +13.7%      81180        proc-vmstat.nr_zone_inactive_anon
   1243541       +141.3%    3000844 ±  3%  proc-vmstat.numa_hit
    983775       +178.6%    2741036 ±  4%  proc-vmstat.numa_local
     43874 ± 21%  +65.0%      72400 ± 30%  proc-vmstat.numa_pte_updates
     18545 ±  3%  +398.9%      92528 ±  6%  proc-vmstat.pgactivate
   1320163       +224.0%    4277306 ±  5%  proc-vmstat.pgalloc_normal
   1176045         -2.5%    1146897        proc-vmstat.pgfault
   1325260 ±  5%  +101.6%    2671978 ±  6%  proc-vmstat.pgfree
      2999 ±  5%  -10.0%       2698 ±  7%  slabinfo.PING.active_objs
      2999 ±  5%  -10.0%       2698 ±  7%  slabinfo.PING.num_objs
     73711 ±  6%  +27.9%      94307 ±  4%  slabinfo.filp.active_objs
      1160 ±  6%  +28.1%       1486 ±  4%  slabinfo.filp.active_slabs
     74267 ±  6%  +28.1%      95154 ±  4%  slabinfo.filp.num_objs
      1160 ±  6%  +28.1%       1486 ±  4%  slabinfo.filp.num_slabs
     28063 ±  2%  +17.3%      32913 ±  5%  slabinfo.kmalloc-512.active_objs
     28448 ±  2%  +16.8%      33214 ±  4%  slabinfo.kmalloc-512.num_objs
     21843 ±  2%  -27.7%      15791 ±  3%  slabinfo.proc_inode_cache.active_objs
    458.33 ±  3%  -23.3%     351.67 ±  4%  slabinfo.proc_inode_cache.active_slabs
     22024 ±  3%  -23.2%      16912 ±  4%  slabinfo.proc_inode_cache.num_objs
    458.33 ±  3%  -23.3%     351.67 ±  4%  slabinfo.proc_inode_cache.num_slabs
     27129        +38.3%      37516 ±  2%  slabinfo.radix_tree_node.active_objs
    484.00        +38.3%     669.50 ±  2%  slabinfo.radix_tree_node.active_slabs
     27129        +38.3%      37516 ±  2%  slabinfo.radix_tree_node.num_objs
    484.00        +38.3%     669.50 ±  2%  slabinfo.radix_tree_node.num_slabs
      5470 ±  5%   +9.7%       6000 ±  5%  slabinfo.signal_cache.active_objs
      5506 ±  6%   +9.0%       6000 ±  5%  slabinfo.signal_cache.num_objs
      2158        +20.5%       2601 ±  7%  slabinfo.task_struct.active_objs
      2172        +20.4%       2616 ±  7%  slabinfo.task_struct.active_slabs
      2172        +20.4%       2616 ±  7%  slabinfo.task_struct.num_objs
      2172        +20.4%       2616 ±  7%  slabinfo.task_struct.num_slabs
     16909 ± 10%  -19.8%      13563 ±  6%  softirqs.CPU10.RCU
     14967 ±  7%  -15.2%      12694 ±  2%  softirqs.CPU104.RCU
     17878 ±  6%  -17.9%      14671 ±  8%  softirqs.CPU12.RCU
     16947 ± 14%  -18.3%      13844 ±  5%  softirqs.CPU13.RCU
     17862 ± 14%  -18.7%      14521 ±  3%  softirqs.CPU14.RCU
     13535 ± 18%  +114.7%      29055 ± 34%  softirqs.CPU144.RCU
     12200 ± 17%  -16.2%      10227 ±  8%  softirqs.CPU163.RCU
     12305 ± 13%  +140.2%      29562 ± 18%  softirqs.CPU168.RCU
     12133 ± 12%  -18.8%       9856 ±  8%  softirqs.CPU171.RCU
     12087 ± 12%  -19.3%       9753 ±  8%  softirqs.CPU174.RCU
     14277 ± 15%  -24.5%      10781 ± 17%  softirqs.CPU185.RCU
     13292 ± 16%  -19.2%      10746 ± 16%  softirqs.CPU186.RCU
     13156 ± 16%  -20.7%      10435 ± 16%  softirqs.CPU187.RCU
     17562 ± 11%  -22.5%      13605 ±  8%  softirqs.CPU2.RCU
     16919 ± 11%  -19.1%      13690 ±  8%  softirqs.CPU3.RCU
     17426 ± 10%  -19.4%      14052 ±  8%  softirqs.CPU4.RCU
     13629 ± 19%  +171.3%      36969 ± 32%  softirqs.CPU48.RCU
     40218 ±  4%  -12.1%      35333 ±  8%  softirqs.CPU48.SCHED
     17180 ± 10%  -19.8%      13783 ±  6%  softirqs.CPU5.RCU
     17300 ± 12%  -18.4%      14110 ±  8%  softirqs.CPU6.RCU
     12435 ± 16%  -17.2%      10295 ±  9%  softirqs.CPU67.RCU
     16439 ± 13%  -20.2%      13110 ±  8%  softirqs.CPU7.RCU
     12481 ± 14%  +229.3%      41098 ± 16%  softirqs.CPU72.RCU
     39776 ±  3%  -19.1%      32174 ±  8%  softirqs.CPU72.SCHED
     16186 ± 10%  -20.8%      12820 ±  7%  softirqs.CPU9.RCU
     18637 ±  4%  +96.3%      36588 ± 17%  softirqs.CPU95.SCHED
      1.17 ±  5%  -0.1        1.03 ±  2%  perf-stat.i.branch-miss-rate%
  1.92e+08 ±  5%  -12.3%  1.685e+08 ±  2%  perf-stat.i.branch-misses
     13.96 ± 72%  +16.8       30.78 ± 22%  perf-stat.i.cache-miss-rate%
   4525225 ± 10%  +331.5%   19527153 ± 19%  perf-stat.i.cache-misses
      2145 ±  3%   +9.8%       2356        perf-stat.i.context-switches
      0.69        +15.5%       0.80        perf-stat.i.cpi
   5.64e+10        +14.7%  6.472e+10        perf-stat.i.cpu-cycles
     12806 ± 12%  -57.3%       5465 ± 15%  perf-stat.i.cycles-between-cache-misses
  2.373e+10         -3.9%   2.28e+10        perf-stat.i.dTLB-loads
  1.588e+10         -4.5%  1.516e+10        perf-stat.i.dTLB-stores
     98.13        -0.8       97.32        perf-stat.i.iTLB-load-miss-rate%
  1.763e+08         -9.3%    1.6e+08        perf-stat.i.iTLB-load-misses
    463.51         +9.5%     507.53        perf-stat.i.instructions-per-iTLB-miss
      1.44        -13.4%       1.25        perf-stat.i.ipc
      1.39 ±  5%  -84.1%       0.22 ± 84%  perf-stat.i.major-faults
      0.29        +14.7%       0.34        perf-stat.i.metric.GHz
    292.33         -3.1%     283.30        perf-stat.i.metric.M/sec
      3778         -2.6%       3681        perf-stat.i.minor-faults
     91.63        +4.1       95.73        perf-stat.i.node-load-miss-rate%
    437707 ± 55%  +1002.0%    4823604 ± 27%  perf-stat.i.node-load-misses
     39983 ± 39%  +313.0%     165142 ± 20%  perf-stat.i.node-loads
     94366 ± 21%  +1359.2%    1376997 ± 17%  perf-stat.i.node-store-misses
      3780         -2.6%       3681        perf-stat.i.page-faults
      1.17 ±  5%  -0.1        1.03 ±  2%  perf-stat.overall.branch-miss-rate%
      0.69        +15.6%       0.80        perf-stat.overall.cpi
     12633 ± 12%  -72.6%       3457 ± 21%  perf-stat.overall.cycles-between-cache-misses
     98.18        -0.8       97.37        perf-stat.overall.iTLB-load-miss-rate%
    462.23         +9.4%     505.87        perf-stat.overall.instructions-per-iTLB-miss
      1.45        -13.5%       1.25        perf-stat.overall.ipc
     90.92        +5.4       96.31        perf-stat.overall.node-load-miss-rate%
     83.08 ± 13%  +13.9       96.99        perf-stat.overall.node-store-miss-rate%
    520246        +12.0%     582888        perf-stat.overall.path-length
  1.914e+08 ±  5%  -12.3%  1.679e+08 ±  2%  perf-stat.ps.branch-misses
   4510225 ± 10%  +331.6%   19466927 ± 19%  perf-stat.ps.cache-misses
      2138 ±  3%   +9.8%       2348        perf-stat.ps.context-switches
  5.621e+10        +14.7%   6.45e+10        perf-stat.ps.cpu-cycles
    197.85         -1.4%     195.09        perf-stat.ps.cpu-migrations
  2.365e+10         -3.9%  2.272e+10        perf-stat.ps.dTLB-loads
  1.583e+10         -4.5%  1.511e+10        perf-stat.ps.dTLB-stores
  1.757e+08         -9.3%  1.594e+08        perf-stat.ps.iTLB-load-misses
      1.38 ±  5%  -84.1%       0.22 ± 84%  perf-stat.ps.major-faults
      3765         -2.6%       3668        perf-stat.ps.minor-faults
    436378 ± 55%  +1002.0%    4808882 ± 27%  perf-stat.ps.node-load-misses
     39914 ± 39%  +312.5%     164641 ± 20%  perf-stat.ps.node-loads
     94069 ± 21%  +1359.2%    1372706 ± 17%  perf-stat.ps.node-store-misses
      3766         -2.6%       3668        perf-stat.ps.page-faults
     37.68 ± 15%  -37.7        0.00        perf-profile.calltrace.cycles-pp.secondary_startup_64_no_verify
     36.34 ± 14%  -36.3        0.00        perf-profile.calltrace.cycles-pp.start_secondary.secondary_startup_64_no_verify
     36.34 ± 14%  -36.3        0.00        perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
     36.34 ± 14%  -36.3        0.00        perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
     35.40 ±  9%  -35.4        0.00        perf-profile.calltrace.cycles-pp.read
     35.16 ± 15%  -35.2        0.00        perf-profile.calltrace.cycles-pp.cpuidle_enter.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
     34.65 ± 14%  -34.7        0.00        perf-profile.calltrace.cycles-pp.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry.start_secondary
     33.00 ± 17%  -33.0        0.00        perf-profile.calltrace.cycles-pp.intel_idle.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry
     30.91 ±  9%  -30.9        0.00        perf-profile.calltrace.cycles-pp.write
     23.94 ±  9%  -23.9        0.00        perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.read
     21.78 ±  9%  -21.8        0.00        perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.read
     20.85 ±  9%  -20.9        0.00        perf-profile.calltrace.cycles-pp.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe.read
     19.55 ±  9%  -19.6        0.00        perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.write
     18.44 ±  9%  -18.4        0.00        perf-profile.calltrace.cycles-pp.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe.read
     17.39 ±  9%  -17.4        0.00        perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
     16.48 ±  9%  -16.5        0.00        perf-profile.calltrace.cycles-pp.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
     14.08 ±  9%  -14.1        0.00        perf-profile.calltrace.cycles-pp.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
     10.62 ±  9%  -10.6        0.00        perf-profile.calltrace.cycles-pp.new_sync_read.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
      8.92 ±  9%  -8.9        0.00        perf-profile.calltrace.cycles-pp.eventfd_read.new_sync_read.vfs_read.ksys_read.do_syscall_64
      7.90 ±  9%  -7.9        0.00        perf-profile.calltrace.cycles-pp.eventfd_write.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
      6.58 ±  9%  -6.6        0.00        perf-profile.calltrace.cycles-pp.__entry_text_start.write
      6.55 ±  8%  -6.6        0.00        perf-profile.calltrace.cycles-pp.__entry_text_start.read
      5.20 ±  9%  -5.2        0.00        perf-profile.calltrace.cycles-pp._copy_to_iter.eventfd_read.new_sync_read.vfs_read.ksys_read
     43.75 ±  9%  -43.8        0.00        perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
     39.43 ±  9%  -39.4        0.00        perf-profile.children.cycles-pp.do_syscall_64
     37.68 ± 15%  -37.7        0.00        perf-profile.children.cycles-pp.secondary_startup_64_no_verify
     37.68 ± 15%  -37.7        0.00        perf-profile.children.cycles-pp.cpu_startup_entry
     37.68 ± 15%  -37.7        0.00        perf-profile.children.cycles-pp.do_idle
     36.51 ± 15%  -36.5        0.00        perf-profile.children.cycles-pp.cpuidle_enter
     36.51 ± 15%  -36.5        0.00        perf-profile.children.cycles-pp.cpuidle_enter_state
     36.34 ± 14%  -36.3        0.00        perf-profile.children.cycles-pp.start_secondary
     35.41 ±  9%  -35.4        0.00        perf-profile.children.cycles-pp.read
     33.03 ± 17%  -33.0        0.00        perf-profile.children.cycles-pp.intel_idle
     30.90 ±  9%  -30.9        0.00        perf-profile.children.cycles-pp.write
     20.96 ±  9%  -21.0        0.00        perf-profile.children.cycles-pp.ksys_read
     18.59 ±  9%  -18.6        0.00        perf-profile.children.cycles-pp.vfs_read
     16.59 ±  9%  -16.6        0.00        perf-profile.children.cycles-pp.ksys_write
     14.22 ±  9%  -14.2        0.00        perf-profile.children.cycles-pp.vfs_write
     10.75 ±  9%  -10.7        0.00        perf-profile.children.cycles-pp.new_sync_read
      9.12 ±  9%  -9.1        0.00        perf-profile.children.cycles-pp.eventfd_read
      8.41 ±  8%  -8.4        0.00        perf-profile.children.cycles-pp.__entry_text_start
      8.20 ±  9%  -8.2        0.00        perf-profile.children.cycles-pp.security_file_permission
      8.03 ±  9%  -8.0        0.00        perf-profile.children.cycles-pp.eventfd_write
      6.46 ±  9%  -6.5        0.00        perf-profile.children.cycles-pp.syscall_return_via_sysret
      5.46 ±  8%  -5.5        0.00        perf-profile.children.cycles-pp.common_file_perm
      5.24 ±  9%  -5.2        0.00        perf-profile.children.cycles-pp._copy_to_iter
     33.03 ± 17%  -33.0        0.00        perf-profile.self.cycles-pp.intel_idle
      6.46 ±  9%  -6.5        0.00        perf-profile.self.cycles-pp.syscall_return_via_sysret
      0.01 ± 47%  -100.0%       0.00        perf-sched.sch_delay.avg.ms.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown]
      0.02 ± 90%  -100.0%       0.00        perf-sched.sch_delay.avg.ms.devkmsg_read.vfs_read.ksys_read.do_syscall_64
      0.01 ± 35%  -100.0%       0.00        perf-sched.sch_delay.avg.ms.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep.do_syscall_64
      0.01 ± 92%  -100.0%       0.00        perf-sched.sch_delay.avg.ms.do_syslog.part.0.kmsg_read.vfs_read
      0.56 ±221%  -100.0%       0.00        perf-sched.sch_delay.avg.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64
      0.01 ± 45%  -100.0%       0.00        perf-sched.sch_delay.avg.ms.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64
      0.01 ± 34%  -100.0%       0.00        perf-sched.sch_delay.avg.ms.futex_wait_queue_me.futex_wait.do_futex.__x64_sys_futex
      0.00 ± 57%  -100.0%       0.00        perf-sched.sch_delay.avg.ms.pipe_read.new_sync_read.vfs_read.ksys_read
      0.03 ±115%  -100.0%       0.00        perf-sched.sch_delay.avg.ms.rcu_gp_kthread.kthread.ret_from_fork
      0.01 ± 91%  -100.0%       0.00        perf-sched.sch_delay.avg.ms.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait
      0.01 ± 64%  -100.0%       0.00        perf-sched.sch_delay.avg.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_select
      0.01 ± 38%  -100.0%       0.00        perf-sched.sch_delay.avg.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_sys_poll
      0.02 ± 56%  -100.0%       0.00        perf-sched.sch_delay.avg.ms.schedule_timeout.kcompactd.kthread.ret_from_fork
      0.01 ± 88%  -100.0%       0.00        perf-sched.sch_delay.avg.ms.schedule_timeout.rcu_gp_kthread.kthread.ret_from_fork
      0.01 ± 50%  -100.0%       0.00        perf-sched.sch_delay.avg.ms.schedule_timeout.wait_for_completion.__flush_work.lru_add_drain_all
      0.01 ± 28%  -100.0%       0.00        perf-sched.sch_delay.avg.ms.smpboot_thread_fn.kthread.ret_from_fork
      0.00 ± 19%  -100.0%       0.00        perf-sched.sch_delay.avg.ms.wait_for_partner.fifo_open.do_dentry_open.do_open.isra
      0.24 ±206%  -100.0%       0.00        perf-sched.sch_delay.avg.ms.worker_thread.kthread.ret_from_fork
      0.02 ± 50%  -100.0%       0.00        perf-sched.sch_delay.max.ms.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown]
      0.04 ± 97%  -100.0%       0.00        perf-sched.sch_delay.max.ms.devkmsg_read.vfs_read.ksys_read.do_syscall_64
      0.03 ± 52%  -100.0%       0.00        perf-sched.sch_delay.max.ms.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep.do_syscall_64
      0.02 ±117%  -100.0%       0.00        perf-sched.sch_delay.max.ms.do_syslog.part.0.kmsg_read.vfs_read
    166.75 ±223%  -100.0%       0.00        perf-sched.sch_delay.max.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64
      0.07 ± 79%  -100.0%       0.00        perf-sched.sch_delay.max.ms.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64
      0.01 ± 31%  -100.0%       0.00        perf-sched.sch_delay.max.ms.exit_to_user_mode_prepare.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe.[unknown]
      0.01 ± 35%  -100.0%       0.00        perf-sched.sch_delay.max.ms.futex_wait_queue_me.futex_wait.do_futex.__x64_sys_futex
      0.32 ±176%  -100.0%       0.00        perf-sched.sch_delay.max.ms.pipe_read.new_sync_read.vfs_read.ksys_read
      0.06 ± 13%  -100.0%       0.00        perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.stop_one_cpu.affine_move_task.__set_cpus_allowed_ptr
      0.05 ±110%  -100.0%       0.00        perf-sched.sch_delay.max.ms.rcu_gp_kthread.kthread.ret_from_fork
      0.04 ±120%  -100.0%       0.00        perf-sched.sch_delay.max.ms.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait
      0.01 ± 95%  -100.0%       0.00        perf-sched.sch_delay.max.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_select
      0.02 ± 45%  -100.0%       0.00        perf-sched.sch_delay.max.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_sys_poll
      0.19 ±192%  -100.0%       0.00        perf-sched.sch_delay.max.ms.schedule_timeout.kcompactd.kthread.ret_from_fork
      0.88 ±209%  -100.0%       0.00        perf-sched.sch_delay.max.ms.schedule_timeout.rcu_gp_kthread.kthread.ret_from_fork
      0.01 ± 44%  -100.0%       0.00        perf-sched.sch_delay.max.ms.schedule_timeout.wait_for_completion.__flush_work.lru_add_drain_all
      1.46 ±211%  -100.0%       0.00        perf-sched.sch_delay.max.ms.smpboot_thread_fn.kthread.ret_from_fork
      0.01 ± 43%  -100.0%       0.00        perf-sched.sch_delay.max.ms.wait_for_partner.fifo_open.do_dentry_open.do_open.isra
    141.01 ±218%  -100.0%       0.00        perf-sched.sch_delay.max.ms.worker_thread.kthread.ret_from_fork
      0.03 ±115%  -100.0%       0.00        perf-sched.total_sch_delay.average.ms
    308.12 ±140%  -100.0%       0.00        perf-sched.total_sch_delay.max.ms
    218.18 ±  4%  -100.0%       0.00        perf-sched.total_wait_and_delay.average.ms
     11274 ±  3%  -100.0%       0.00        perf-sched.total_wait_and_delay.count.ms
      8874 ±  3%  -100.0%       0.00        perf-sched.total_wait_and_delay.max.ms
    218.15 ±  4%  -100.0%       0.00        perf-sched.total_wait_time.average.ms
      8874 ±  3%  -100.0%       0.00        perf-sched.total_wait_time.max.ms
    899.50        -100.0%       0.00        perf-sched.wait_and_delay.avg.ms.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown]
      1631 ± 33%  -100.0%       0.00        perf-sched.wait_and_delay.avg.ms.devkmsg_read.vfs_read.ksys_read.do_syscall_64
    797.01 ±  4%  -100.0%       0.00        perf-sched.wait_and_delay.avg.ms.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep.do_syscall_64
      1631 ± 33%  -100.0%       0.00        perf-sched.wait_and_delay.avg.ms.do_syslog.part.0.kmsg_read.vfs_read
    218.64 ±  3%  -100.0%       0.00        perf-sched.wait_and_delay.avg.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64
      0.74 ±  2%  -100.0%       0.00        perf-sched.wait_and_delay.avg.ms.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64
      0.03 ±  6%  -100.0%       0.00        perf-sched.wait_and_delay.avg.ms.exit_to_user_mode_prepare.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe.[unknown]
      1013 ± 22%  -100.0%       0.00        perf-sched.wait_and_delay.avg.ms.futex_wait_queue_me.futex_wait.do_futex.__x64_sys_futex
     54.40 ±  3%  -100.0%       0.00        perf-sched.wait_and_delay.avg.ms.pipe_read.new_sync_read.vfs_read.ksys_read
      4.57 ± 11%  -100.0%       0.00        perf-sched.wait_and_delay.avg.ms.preempt_schedule_common.__cond_resched.stop_one_cpu.affine_move_task.__set_cpus_allowed_ptr
      2.76 ±  8%  -100.0%       0.00        perf-sched.wait_and_delay.avg.ms.rcu_gp_kthread.kthread.ret_from_fork
    518.43 ± 19%  -100.0%       0.00        perf-sched.wait_and_delay.avg.ms.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait
    599.96 ± 13%  -100.0%       0.00        perf-sched.wait_and_delay.avg.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_sys_poll
    478.26        -100.0%       0.00
perf-sched.wait_and_delay.avg.ms.schedule_timeout.kcompactd.kthread.ret_from_fork 6.30 ± 23% -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.schedule_timeout.rcu_gp_kthread.kthread.ret_from_fork 691.51 -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.smpboot_thread_fn.kthread.ret_from_fork 0.00 ± 19% -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.wait_for_partner.fifo_open.do_dentry_open.do_open.isra 440.34 ± 6% -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.worker_thread.kthread.ret_from_fork 10.00 -100.0% 0.00 perf-sched.wait_and_delay.count.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown] 5.50 ± 27% -100.0% 0.00 perf-sched.wait_and_delay.count.devkmsg_read.vfs_read.ksys_read.do_syscall_64 22.67 ± 4% -100.0% 0.00 perf-sched.wait_and_delay.count.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep.do_syscall_64 5.50 ± 27% -100.0% 0.00 perf-sched.wait_and_delay.count.do_syslog.part.0.kmsg_read.vfs_read 302.50 -100.0% 0.00 perf-sched.wait_and_delay.count.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64 294.67 -100.0% 0.00 perf-sched.wait_and_delay.count.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64 215.83 ± 12% -100.0% 0.00 perf-sched.wait_and_delay.count.exit_to_user_mode_prepare.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe.[unknown] 3.50 ± 42% -100.0% 0.00 perf-sched.wait_and_delay.count.futex_wait_queue_me.futex_wait.do_futex.__x64_sys_futex 2947 ± 2% -100.0% 0.00 perf-sched.wait_and_delay.count.pipe_read.new_sync_read.vfs_read.ksys_read 2208 ± 4% -100.0% 0.00 perf-sched.wait_and_delay.count.preempt_schedule_common.__cond_resched.stop_one_cpu.affine_move_task.__set_cpus_allowed_ptr 100.50 ± 24% -100.0% 0.00 perf-sched.wait_and_delay.count.rcu_gp_kthread.kthread.ret_from_fork 22.83 ± 35% -100.0% 0.00 perf-sched.wait_and_delay.count.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait 55.17 ± 9% -100.0% 0.00 
perf-sched.wait_and_delay.count.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_sys_poll 78.33 ± 2% -100.0% 0.00 perf-sched.wait_and_delay.count.schedule_timeout.kcompactd.kthread.ret_from_fork 1642 ± 27% -100.0% 0.00 perf-sched.wait_and_delay.count.schedule_timeout.rcu_gp_kthread.kthread.ret_from_fork 2593 ± 3% -100.0% 0.00 perf-sched.wait_and_delay.count.smpboot_thread_fn.kthread.ret_from_fork 71.33 -100.0% 0.00 perf-sched.wait_and_delay.count.wait_for_partner.fifo_open.do_dentry_open.do_open.isra 651.50 ± 4% -100.0% 0.00 perf-sched.wait_and_delay.count.worker_thread.kthread.ret_from_fork 999.62 -100.0% 0.00 perf-sched.wait_and_delay.max.ms.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown] 4739 ± 36% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.devkmsg_read.vfs_read.ksys_read.do_syscall_64 1000 -100.0% 0.00 perf-sched.wait_and_delay.max.ms.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep.do_syscall_64 4739 ± 36% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.do_syslog.part.0.kmsg_read.vfs_read 3519 ± 49% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64 16.30 -100.0% 0.00 perf-sched.wait_and_delay.max.ms.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64 0.12 ± 30% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.exit_to_user_mode_prepare.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe.[unknown] 2009 ± 22% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.futex_wait_queue_me.futex_wait.do_futex.__x64_sys_futex 4741 ± 36% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.pipe_read.new_sync_read.vfs_read.ksys_read 1768 ± 43% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.preempt_schedule_common.__cond_resched.stop_one_cpu.affine_move_task.__set_cpus_allowed_ptr 4.86 ± 6% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.rcu_gp_kthread.kthread.ret_from_fork 4385 ± 40% -100.0% 0.00 
perf-sched.wait_and_delay.max.ms.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait 6937 ± 12% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_sys_poll 504.87 -100.0% 0.00 perf-sched.wait_and_delay.max.ms.schedule_timeout.kcompactd.kthread.ret_from_fork 210.35 ± 49% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.schedule_timeout.rcu_gp_kthread.kthread.ret_from_fork 1226 ± 35% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.smpboot_thread_fn.kthread.ret_from_fork 0.01 ± 43% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.wait_for_partner.fifo_open.do_dentry_open.do_open.isra 8874 ± 3% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.worker_thread.kthread.ret_from_fork 899.48 -100.0% 0.00 perf-sched.wait_time.avg.ms.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown] 1631 ± 33% -100.0% 0.00 perf-sched.wait_time.avg.ms.devkmsg_read.vfs_read.ksys_read.do_syscall_64 797.00 ± 4% -100.0% 0.00 perf-sched.wait_time.avg.ms.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep.do_syscall_64 1631 ± 33% -100.0% 0.00 perf-sched.wait_time.avg.ms.do_syslog.part.0.kmsg_read.vfs_read 218.08 ± 3% -100.0% 0.00 perf-sched.wait_time.avg.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64 0.74 -100.0% 0.00 perf-sched.wait_time.avg.ms.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64 0.02 ± 18% -100.0% 0.00 perf-sched.wait_time.avg.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_apic_timer_interrupt.[unknown] 0.03 ± 6% -100.0% 0.00 perf-sched.wait_time.avg.ms.exit_to_user_mode_prepare.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe.[unknown] 1013 ± 22% -100.0% 0.00 perf-sched.wait_time.avg.ms.futex_wait_queue_me.futex_wait.do_futex.__x64_sys_futex 54.39 ± 3% -100.0% 0.00 perf-sched.wait_time.avg.ms.pipe_read.new_sync_read.vfs_read.ksys_read 4.57 ± 11% -100.0% 0.00 
perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.stop_one_cpu.affine_move_task.__set_cpus_allowed_ptr 2.74 ± 9% -100.0% 0.00 perf-sched.wait_time.avg.ms.rcu_gp_kthread.kthread.ret_from_fork 518.41 ± 19% -100.0% 0.00 perf-sched.wait_time.avg.ms.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait 599.95 ± 13% -100.0% 0.00 perf-sched.wait_time.avg.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_sys_poll 478.24 -100.0% 0.00 perf-sched.wait_time.avg.ms.schedule_timeout.kcompactd.kthread.ret_from_fork 0.01 ± 50% -100.0% 0.00 perf-sched.wait_time.avg.ms.schedule_timeout.khugepaged.kthread.ret_from_fork 6.29 ± 23% -100.0% 0.00 perf-sched.wait_time.avg.ms.schedule_timeout.rcu_gp_kthread.kthread.ret_from_fork 691.51 -100.0% 0.00 perf-sched.wait_time.avg.ms.smpboot_thread_fn.kthread.ret_from_fork 440.10 ± 6% -100.0% 0.00 perf-sched.wait_time.avg.ms.worker_thread.kthread.ret_from_fork 999.61 -100.0% 0.00 perf-sched.wait_time.max.ms.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown] 4739 ± 36% -100.0% 0.00 perf-sched.wait_time.max.ms.devkmsg_read.vfs_read.ksys_read.do_syscall_64 1000 -100.0% 0.00 perf-sched.wait_time.max.ms.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep.do_syscall_64 4739 ± 36% -100.0% 0.00 perf-sched.wait_time.max.ms.do_syslog.part.0.kmsg_read.vfs_read 3519 ± 49% -100.0% 0.00 perf-sched.wait_time.max.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64 16.30 -100.0% 0.00 perf-sched.wait_time.max.ms.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64 0.07 ± 58% -100.0% 0.00 perf-sched.wait_time.max.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_apic_timer_interrupt.[unknown] 0.12 ± 30% -100.0% 0.00 perf-sched.wait_time.max.ms.exit_to_user_mode_prepare.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe.[unknown] 2009 ± 22% -100.0% 0.00 perf-sched.wait_time.max.ms.futex_wait_queue_me.futex_wait.do_futex.__x64_sys_futex 4741 ± 36% 
-100.0% 0.00 perf-sched.wait_time.max.ms.pipe_read.new_sync_read.vfs_read.ksys_read 1768 ± 43% -100.0% 0.00 perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.stop_one_cpu.affine_move_task.__set_cpus_allowed_ptr 4.83 ± 7% -100.0% 0.00 perf-sched.wait_time.max.ms.rcu_gp_kthread.kthread.ret_from_fork 4384 ± 40% -100.0% 0.00 perf-sched.wait_time.max.ms.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait 6937 ± 12% -100.0% 0.00 perf-sched.wait_time.max.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_sys_poll 504.85 -100.0% 0.00 perf-sched.wait_time.max.ms.schedule_timeout.kcompactd.kthread.ret_from_fork 0.01 ± 50% -100.0% 0.00 perf-sched.wait_time.max.ms.schedule_timeout.khugepaged.kthread.ret_from_fork 210.32 ± 49% -100.0% 0.00 perf-sched.wait_time.max.ms.schedule_timeout.rcu_gp_kthread.kthread.ret_from_fork 1226 ± 35% -100.0% 0.00 perf-sched.wait_time.max.ms.smpboot_thread_fn.kthread.ret_from_fork 8874 ± 3% -100.0% 0.00 perf-sched.wait_time.max.ms.worker_thread.kthread.ret_from_fork 4703 ± 53% -100.0% 0.00 interrupts.CPU0.NMI:Non-maskable_interrupts 4703 ± 53% -100.0% 0.00 interrupts.CPU0.PMI:Performance_monitoring_interrupts 5649 ± 32% -100.0% 0.00 interrupts.CPU1.NMI:Non-maskable_interrupts 5649 ± 32% -100.0% 0.00 interrupts.CPU1.PMI:Performance_monitoring_interrupts 6583 ± 37% -100.0% 0.00 interrupts.CPU10.NMI:Non-maskable_interrupts 6583 ± 37% -100.0% 0.00 interrupts.CPU10.PMI:Performance_monitoring_interrupts 5033 ± 40% -100.0% 0.00 interrupts.CPU100.NMI:Non-maskable_interrupts 5033 ± 40% -100.0% 0.00 interrupts.CPU100.PMI:Performance_monitoring_interrupts 7687 ± 23% -100.0% 1.00 ±223% interrupts.CPU101.NMI:Non-maskable_interrupts 7687 ± 23% -100.0% 1.00 ±223% interrupts.CPU101.PMI:Performance_monitoring_interrupts 5400 ± 39% -100.0% 0.00 interrupts.CPU102.NMI:Non-maskable_interrupts 5400 ± 39% -100.0% 0.00 interrupts.CPU102.PMI:Performance_monitoring_interrupts 4291 ± 23% -100.0% 0.00 
interrupts.CPU103.NMI:Non-maskable_interrupts 4291 ± 23% -100.0% 0.00 interrupts.CPU103.PMI:Performance_monitoring_interrupts 4648 ± 36% -100.0% 0.00 interrupts.CPU104.NMI:Non-maskable_interrupts 4648 ± 36% -100.0% 0.00 interrupts.CPU104.PMI:Performance_monitoring_interrupts 3899 ± 31% -100.0% 0.00 interrupts.CPU105.NMI:Non-maskable_interrupts 3899 ± 31% -100.0% 0.00 interrupts.CPU105.PMI:Performance_monitoring_interrupts 5477 ± 47% -100.0% 0.00 interrupts.CPU106.NMI:Non-maskable_interrupts 5477 ± 47% -100.0% 0.00 interrupts.CPU106.PMI:Performance_monitoring_interrupts 6150 ± 40% -100.0% 0.00 interrupts.CPU107.NMI:Non-maskable_interrupts 6150 ± 40% -100.0% 0.00 interrupts.CPU107.PMI:Performance_monitoring_interrupts 4603 ± 53% -100.0% 0.00 interrupts.CPU108.NMI:Non-maskable_interrupts 4603 ± 53% -100.0% 0.00 interrupts.CPU108.PMI:Performance_monitoring_interrupts 6039 ± 40% -100.0% 0.00 interrupts.CPU109.NMI:Non-maskable_interrupts 6039 ± 40% -100.0% 0.00 interrupts.CPU109.PMI:Performance_monitoring_interrupts 5284 ± 46% -100.0% 0.00 interrupts.CPU11.NMI:Non-maskable_interrupts 5284 ± 46% -100.0% 0.00 interrupts.CPU11.PMI:Performance_monitoring_interrupts 6376 ± 35% -100.0% 0.00 interrupts.CPU110.NMI:Non-maskable_interrupts 6376 ± 35% -100.0% 0.00 interrupts.CPU110.PMI:Performance_monitoring_interrupts 4144 ± 38% -100.0% 0.00 interrupts.CPU111.NMI:Non-maskable_interrupts 4144 ± 38% -100.0% 0.00 interrupts.CPU111.PMI:Performance_monitoring_interrupts 248.67 ± 79% -100.0% 0.00 interrupts.CPU112.NMI:Non-maskable_interrupts 248.67 ± 79% -100.0% 0.00 interrupts.CPU112.PMI:Performance_monitoring_interrupts 109.50 ± 9% -100.0% 0.00 interrupts.CPU113.NMI:Non-maskable_interrupts 109.50 ± 9% -100.0% 0.00 interrupts.CPU113.PMI:Performance_monitoring_interrupts 228.67 ±120% -100.0% 0.00 interrupts.CPU114.NMI:Non-maskable_interrupts 228.67 ±120% -100.0% 0.00 interrupts.CPU114.PMI:Performance_monitoring_interrupts 109.50 ± 33% -100.0% 0.00 
interrupts.CPU115.NMI:Non-maskable_interrupts 109.50 ± 33% -100.0% 0.00 interrupts.CPU115.PMI:Performance_monitoring_interrupts 106.50 ± 27% -100.0% 0.00 interrupts.CPU116.NMI:Non-maskable_interrupts 106.50 ± 27% -100.0% 0.00 interrupts.CPU116.PMI:Performance_monitoring_interrupts 107.00 ± 10% -100.0% 0.00 interrupts.CPU117.NMI:Non-maskable_interrupts 107.00 ± 10% -100.0% 0.00 interrupts.CPU117.PMI:Performance_monitoring_interrupts 89.50 ± 29% -100.0% 0.00 interrupts.CPU118.NMI:Non-maskable_interrupts 89.50 ± 29% -100.0% 0.00 interrupts.CPU118.PMI:Performance_monitoring_interrupts 97.33 ± 18% -100.0% 0.00 interrupts.CPU119.NMI:Non-maskable_interrupts 97.33 ± 18% -100.0% 0.00 interrupts.CPU119.PMI:Performance_monitoring_interrupts 7484 ± 36% -100.0% 0.00 interrupts.CPU12.NMI:Non-maskable_interrupts 7484 ± 36% -100.0% 0.00 interrupts.CPU12.PMI:Performance_monitoring_interrupts 114.33 ± 32% -100.0% 0.00 interrupts.CPU120.NMI:Non-maskable_interrupts 114.33 ± 32% -100.0% 0.00 interrupts.CPU120.PMI:Performance_monitoring_interrupts 105.50 ± 40% -100.0% 0.00 interrupts.CPU121.NMI:Non-maskable_interrupts 105.50 ± 40% -100.0% 0.00 interrupts.CPU121.PMI:Performance_monitoring_interrupts 108.00 ± 45% -100.0% 0.00 interrupts.CPU122.NMI:Non-maskable_interrupts 108.00 ± 45% -100.0% 0.00 interrupts.CPU122.PMI:Performance_monitoring_interrupts 139.17 ± 78% -100.0% 0.00 interrupts.CPU123.NMI:Non-maskable_interrupts 139.17 ± 78% -100.0% 0.00 interrupts.CPU123.PMI:Performance_monitoring_interrupts 113.83 ± 44% -100.0% 0.00 interrupts.CPU124.NMI:Non-maskable_interrupts 113.83 ± 44% -100.0% 0.00 interrupts.CPU124.PMI:Performance_monitoring_interrupts 124.83 ± 47% -100.0% 0.00 interrupts.CPU125.NMI:Non-maskable_interrupts 124.83 ± 47% -100.0% 0.00 interrupts.CPU125.PMI:Performance_monitoring_interrupts 139.83 ± 60% -100.0% 0.00 interrupts.CPU126.NMI:Non-maskable_interrupts 139.83 ± 60% -100.0% 0.00 interrupts.CPU126.PMI:Performance_monitoring_interrupts 110.67 ± 47% -99.1% 1.00 ±223% 
interrupts.CPU127.NMI:Non-maskable_interrupts 110.67 ± 47% -99.1% 1.00 ±223% interrupts.CPU127.PMI:Performance_monitoring_interrupts 105.00 ± 42% -100.0% 0.00 interrupts.CPU128.NMI:Non-maskable_interrupts 105.00 ± 42% -100.0% 0.00 interrupts.CPU128.PMI:Performance_monitoring_interrupts 102.00 ± 40% -100.0% 0.00 interrupts.CPU129.NMI:Non-maskable_interrupts 102.00 ± 40% -100.0% 0.00 interrupts.CPU129.PMI:Performance_monitoring_interrupts 6284 ± 40% -100.0% 0.00 interrupts.CPU13.NMI:Non-maskable_interrupts 6284 ± 40% -100.0% 0.00 interrupts.CPU13.PMI:Performance_monitoring_interrupts 112.83 ± 32% -100.0% 0.00 interrupts.CPU130.NMI:Non-maskable_interrupts 112.83 ± 32% -100.0% 0.00 interrupts.CPU130.PMI:Performance_monitoring_interrupts 118.50 ± 37% -100.0% 0.00 interrupts.CPU131.NMI:Non-maskable_interrupts 118.50 ± 37% -100.0% 0.00 interrupts.CPU131.PMI:Performance_monitoring_interrupts 115.17 ± 33% -100.0% 0.00 interrupts.CPU132.NMI:Non-maskable_interrupts 115.17 ± 33% -100.0% 0.00 interrupts.CPU132.PMI:Performance_monitoring_interrupts 112.50 ± 31% -100.0% 0.00 interrupts.CPU133.NMI:Non-maskable_interrupts 112.50 ± 31% -100.0% 0.00 interrupts.CPU133.PMI:Performance_monitoring_interrupts 105.33 ± 38% -100.0% 0.00 interrupts.CPU134.NMI:Non-maskable_interrupts 105.33 ± 38% -100.0% 0.00 interrupts.CPU134.PMI:Performance_monitoring_interrupts 102.33 ± 40% -100.0% 0.00 interrupts.CPU135.NMI:Non-maskable_interrupts 102.33 ± 40% -100.0% 0.00 interrupts.CPU135.PMI:Performance_monitoring_interrupts 102.83 ± 38% -100.0% 0.00 interrupts.CPU136.NMI:Non-maskable_interrupts 102.83 ± 38% -100.0% 0.00 interrupts.CPU136.PMI:Performance_monitoring_interrupts 102.33 ± 40% -100.0% 0.00 interrupts.CPU137.NMI:Non-maskable_interrupts 102.33 ± 40% -100.0% 0.00 interrupts.CPU137.PMI:Performance_monitoring_interrupts 104.00 ± 40% -100.0% 0.00 interrupts.CPU138.NMI:Non-maskable_interrupts 104.00 ± 40% -100.0% 0.00 interrupts.CPU138.PMI:Performance_monitoring_interrupts 110.00 ± 30% -100.0% 
0.00 interrupts.CPU139.NMI:Non-maskable_interrupts 110.00 ± 30% -100.0% 0.00 interrupts.CPU139.PMI:Performance_monitoring_interrupts 5200 ± 40% -100.0% 0.00 interrupts.CPU14.NMI:Non-maskable_interrupts 5200 ± 40% -100.0% 0.00 interrupts.CPU14.PMI:Performance_monitoring_interrupts 102.50 ± 40% -100.0% 0.00 interrupts.CPU140.NMI:Non-maskable_interrupts 102.50 ± 40% -100.0% 0.00 interrupts.CPU140.PMI:Performance_monitoring_interrupts 103.50 ± 40% -99.0% 1.00 ±223% interrupts.CPU141.NMI:Non-maskable_interrupts 103.50 ± 40% -99.0% 1.00 ±223% interrupts.CPU141.PMI:Performance_monitoring_interrupts 103.67 ± 41% -100.0% 0.00 interrupts.CPU142.NMI:Non-maskable_interrupts 103.67 ± 41% -100.0% 0.00 interrupts.CPU142.PMI:Performance_monitoring_interrupts 104.33 ± 41% -100.0% 0.00 interrupts.CPU143.NMI:Non-maskable_interrupts 104.33 ± 41% -100.0% 0.00 interrupts.CPU143.PMI:Performance_monitoring_interrupts 168.33 ± 42% -100.0% 0.00 interrupts.CPU144.NMI:Non-maskable_interrupts 168.33 ± 42% -100.0% 0.00 interrupts.CPU144.PMI:Performance_monitoring_interrupts 103.33 ± 39% -100.0% 0.00 interrupts.CPU145.NMI:Non-maskable_interrupts 103.33 ± 39% -100.0% 0.00 interrupts.CPU145.PMI:Performance_monitoring_interrupts 106.83 ± 38% -100.0% 0.00 interrupts.CPU146.NMI:Non-maskable_interrupts 106.83 ± 38% -100.0% 0.00 interrupts.CPU146.PMI:Performance_monitoring_interrupts 114.50 ± 31% -100.0% 0.00 interrupts.CPU147.NMI:Non-maskable_interrupts 114.50 ± 31% -100.0% 0.00 interrupts.CPU147.PMI:Performance_monitoring_interrupts 116.17 ± 30% -100.0% 0.00 interrupts.CPU148.NMI:Non-maskable_interrupts 116.17 ± 30% -100.0% 0.00 interrupts.CPU148.PMI:Performance_monitoring_interrupts 124.33 ± 34% -100.0% 0.00 interrupts.CPU149.NMI:Non-maskable_interrupts 124.33 ± 34% -100.0% 0.00 interrupts.CPU149.PMI:Performance_monitoring_interrupts 7696 ± 20% -100.0% 0.00 interrupts.CPU15.NMI:Non-maskable_interrupts 7696 ± 20% -100.0% 0.00 interrupts.CPU15.PMI:Performance_monitoring_interrupts 124.83 ± 31% -100.0% 
0.00 interrupts.CPU150.NMI:Non-maskable_interrupts 124.83 ± 31% -100.0% 0.00 interrupts.CPU150.PMI:Performance_monitoring_interrupts 124.00 ± 36% -100.0% 0.00 interrupts.CPU151.NMI:Non-maskable_interrupts 124.00 ± 36% -100.0% 0.00 interrupts.CPU151.PMI:Performance_monitoring_interrupts 114.33 ± 31% -100.0% 0.00 interrupts.CPU152.NMI:Non-maskable_interrupts 114.33 ± 31% -100.0% 0.00 interrupts.CPU152.PMI:Performance_monitoring_interrupts 118.17 ± 30% -100.0% 0.00 interrupts.CPU153.NMI:Non-maskable_interrupts 118.17 ± 30% -100.0% 0.00 interrupts.CPU153.PMI:Performance_monitoring_interrupts 116.33 ± 33% -100.0% 0.00 interrupts.CPU154.NMI:Non-maskable_interrupts 116.33 ± 33% -100.0% 0.00 interrupts.CPU154.PMI:Performance_monitoring_interrupts 118.17 ± 32% -100.0% 0.00 interrupts.CPU155.NMI:Non-maskable_interrupts 118.17 ± 32% -100.0% 0.00 interrupts.CPU155.PMI:Performance_monitoring_interrupts 121.83 ± 36% -100.0% 0.00 interrupts.CPU156.NMI:Non-maskable_interrupts 121.83 ± 36% -100.0% 0.00 interrupts.CPU156.PMI:Performance_monitoring_interrupts 124.33 ± 18% -100.0% 0.00 interrupts.CPU157.NMI:Non-maskable_interrupts 124.33 ± 18% -100.0% 0.00 interrupts.CPU157.PMI:Performance_monitoring_interrupts 131.67 ± 21% -100.0% 0.00 interrupts.CPU158.NMI:Non-maskable_interrupts 131.67 ± 21% -100.0% 0.00 interrupts.CPU158.PMI:Performance_monitoring_interrupts 123.33 ± 18% -100.0% 0.00 interrupts.CPU159.NMI:Non-maskable_interrupts 123.33 ± 18% -100.0% 0.00 interrupts.CPU159.PMI:Performance_monitoring_interrupts 164.33 ± 58% -100.0% 0.00 interrupts.CPU16.NMI:Non-maskable_interrupts 164.33 ± 58% -100.0% 0.00 interrupts.CPU16.PMI:Performance_monitoring_interrupts 125.33 ± 18% -100.0% 0.00 interrupts.CPU160.NMI:Non-maskable_interrupts 125.33 ± 18% -100.0% 0.00 interrupts.CPU160.PMI:Performance_monitoring_interrupts 126.50 ± 21% -100.0% 0.00 interrupts.CPU161.NMI:Non-maskable_interrupts 126.50 ± 21% -100.0% 0.00 interrupts.CPU161.PMI:Performance_monitoring_interrupts 124.00 ± 19% -99.2% 
1.00 ±223% interrupts.CPU162.NMI:Non-maskable_interrupts 124.00 ± 19% -99.2% 1.00 ±223% interrupts.CPU162.PMI:Performance_monitoring_interrupts 127.67 ± 21% -100.0% 0.00 interrupts.CPU163.NMI:Non-maskable_interrupts 127.67 ± 21% -100.0% 0.00 interrupts.CPU163.PMI:Performance_monitoring_interrupts 124.83 ± 22% -100.0% 0.00 interrupts.CPU164.NMI:Non-maskable_interrupts 124.83 ± 22% -100.0% 0.00 interrupts.CPU164.PMI:Performance_monitoring_interrupts 112.83 ± 31% -100.0% 0.00 interrupts.CPU165.NMI:Non-maskable_interrupts 112.83 ± 31% -100.0% 0.00 interrupts.CPU165.PMI:Performance_monitoring_interrupts 114.50 ± 31% -100.0% 0.00 interrupts.CPU166.NMI:Non-maskable_interrupts 114.50 ± 31% -100.0% 0.00 interrupts.CPU166.PMI:Performance_monitoring_interrupts 135.83 ± 35% -100.0% 0.00 interrupts.CPU167.NMI:Non-maskable_interrupts 135.83 ± 35% -100.0% 0.00 interrupts.CPU167.PMI:Performance_monitoring_interrupts 119.83 ± 44% -100.0% 0.00 interrupts.CPU168.NMI:Non-maskable_interrupts 119.83 ± 44% -100.0% 0.00 interrupts.CPU168.PMI:Performance_monitoring_interrupts 130.67 ± 16% -100.0% 0.00 interrupts.CPU169.NMI:Non-maskable_interrupts 130.67 ± 16% -100.0% 0.00 interrupts.CPU169.PMI:Performance_monitoring_interrupts 106.50 ± 9% -100.0% 0.00 interrupts.CPU17.NMI:Non-maskable_interrupts 106.50 ± 9% -100.0% 0.00 interrupts.CPU17.PMI:Performance_monitoring_interrupts 126.83 ± 17% -100.0% 0.00 interrupts.CPU170.NMI:Non-maskable_interrupts 126.83 ± 17% -100.0% 0.00 interrupts.CPU170.PMI:Performance_monitoring_interrupts 112.50 ± 21% -100.0% 0.00 interrupts.CPU171.NMI:Non-maskable_interrupts 112.50 ± 21% -100.0% 0.00 interrupts.CPU171.PMI:Performance_monitoring_interrupts 123.33 ± 16% -100.0% 0.00 interrupts.CPU172.NMI:Non-maskable_interrupts 123.33 ± 16% -100.0% 0.00 interrupts.CPU172.PMI:Performance_monitoring_interrupts 126.17 ± 15% -100.0% 0.00 interrupts.CPU173.NMI:Non-maskable_interrupts 126.17 ± 15% -100.0% 0.00 interrupts.CPU173.PMI:Performance_monitoring_interrupts 124.00 ± 
16% -100.0% 0.00 interrupts.CPU174.NMI:Non-maskable_interrupts 124.00 ± 16% -100.0% 0.00 interrupts.CPU174.PMI:Performance_monitoring_interrupts 161.17 ± 60% -100.0% 0.00 interrupts.CPU175.NMI:Non-maskable_interrupts 161.17 ± 60% -100.0% 0.00 interrupts.CPU175.PMI:Performance_monitoring_interrupts 125.17 ± 18% -100.0% 0.00 interrupts.CPU176.NMI:Non-maskable_interrupts 125.17 ± 18% -100.0% 0.00 interrupts.CPU176.PMI:Performance_monitoring_interrupts 179.67 ± 70% -100.0% 0.00 interrupts.CPU177.NMI:Non-maskable_interrupts 179.67 ± 70% -100.0% 0.00 interrupts.CPU177.PMI:Performance_monitoring_interrupts 124.50 ± 19% -100.0% 0.00 interrupts.CPU178.NMI:Non-maskable_interrupts 124.50 ± 19% -100.0% 0.00 interrupts.CPU178.PMI:Performance_monitoring_interrupts 127.67 ± 18% -100.0% 0.00 interrupts.CPU179.NMI:Non-maskable_interrupts 127.67 ± 18% -100.0% 0.00 interrupts.CPU179.PMI:Performance_monitoring_interrupts 195.67 ±100% -100.0% 0.00 interrupts.CPU18.NMI:Non-maskable_interrupts 195.67 ±100% -100.0% 0.00 interrupts.CPU18.PMI:Performance_monitoring_interrupts 125.00 ± 18% -100.0% 0.00 interrupts.CPU180.NMI:Non-maskable_interrupts 125.00 ± 18% -100.0% 0.00 interrupts.CPU180.PMI:Performance_monitoring_interrupts 124.33 ± 17% -100.0% 0.00 interrupts.CPU181.NMI:Non-maskable_interrupts 124.33 ± 17% -100.0% 0.00 interrupts.CPU181.PMI:Performance_monitoring_interrupts 139.33 ± 27% -100.0% 0.00 interrupts.CPU182.NMI:Non-maskable_interrupts 139.33 ± 27% -100.0% 0.00 interrupts.CPU182.PMI:Performance_monitoring_interrupts 132.67 ± 25% -100.0% 0.00 interrupts.CPU183.NMI:Non-maskable_interrupts 132.67 ± 25% -100.0% 0.00 interrupts.CPU183.PMI:Performance_monitoring_interrupts 123.67 ± 17% -100.0% 0.00 interrupts.CPU184.NMI:Non-maskable_interrupts 123.67 ± 17% -100.0% 0.00 interrupts.CPU184.PMI:Performance_monitoring_interrupts 127.50 ± 19% -100.0% 0.00 interrupts.CPU185.NMI:Non-maskable_interrupts 127.50 ± 19% -100.0% 0.00 interrupts.CPU185.PMI:Performance_monitoring_interrupts 128.67 ± 
17% -100.0% 0.00 interrupts.CPU186.NMI:Non-maskable_interrupts 128.67 ± 17% -100.0% 0.00 interrupts.CPU186.PMI:Performance_monitoring_interrupts 126.33 ± 17% -100.0% 0.00 interrupts.CPU187.NMI:Non-maskable_interrupts 126.33 ± 17% -100.0% 0.00 interrupts.CPU187.PMI:Performance_monitoring_interrupts 124.50 ± 17% -100.0% 0.00 interrupts.CPU188.NMI:Non-maskable_interrupts 124.50 ± 17% -100.0% 0.00 interrupts.CPU188.PMI:Performance_monitoring_interrupts 128.17 ± 23% -100.0% 0.00 interrupts.CPU189.NMI:Non-maskable_interrupts 128.17 ± 23% -100.0% 0.00 interrupts.CPU189.PMI:Performance_monitoring_interrupts 32977 ±216% -97.3% 887.50 interrupts.CPU19.CAL:Function_call_interrupts 111.33 ± 36% -100.0% 0.00 interrupts.CPU19.NMI:Non-maskable_interrupts 111.33 ± 36% -100.0% 0.00 interrupts.CPU19.PMI:Performance_monitoring_interrupts 124.00 ± 17% -100.0% 0.00 interrupts.CPU190.NMI:Non-maskable_interrupts 124.00 ± 17% -100.0% 0.00 interrupts.CPU190.PMI:Performance_monitoring_interrupts 542.50 ± 21% -100.0% 0.00 interrupts.CPU191.NMI:Non-maskable_interrupts 542.50 ± 21% -100.0% 0.00 interrupts.CPU191.PMI:Performance_monitoring_interrupts 7354 ± 36% -100.0% 0.00 interrupts.CPU2.NMI:Non-maskable_interrupts 7354 ± 36% -100.0% 0.00 interrupts.CPU2.PMI:Performance_monitoring_interrupts 97.67 ± 39% -100.0% 0.00 interrupts.CPU20.NMI:Non-maskable_interrupts 97.67 ± 39% -100.0% 0.00 interrupts.CPU20.PMI:Performance_monitoring_interrupts 87.67 ± 32% -100.0% 0.00 interrupts.CPU21.NMI:Non-maskable_interrupts 87.67 ± 32% -100.0% 0.00 interrupts.CPU21.PMI:Performance_monitoring_interrupts 82.50 ± 38% -100.0% 0.00 interrupts.CPU22.NMI:Non-maskable_interrupts 82.50 ± 38% -100.0% 0.00 interrupts.CPU22.PMI:Performance_monitoring_interrupts 89.00 ± 31% -100.0% 0.00 interrupts.CPU23.NMI:Non-maskable_interrupts 89.00 ± 31% -100.0% 0.00 interrupts.CPU23.PMI:Performance_monitoring_interrupts 98.17 ± 49% -100.0% 0.00 interrupts.CPU24.NMI:Non-maskable_interrupts 98.17 ± 49% -100.0% 0.00 
[per-CPU interrupt rows elided: interrupts.CPU{3..99}.NMI:Non-maskable_interrupts and
 interrupts.CPU{3..99}.PMI:Performance_monitoring_interrupts each drop -100.0% to 0.00]

    206627 ± 9%     -100.0%       4.00 ± 70%  interrupts.NMI:Non-maskable_interrupts
    206627 ± 9%     -100.0%       4.00 ± 70%  interrupts.PMI:Performance_monitoring_interrupts

[ASCII trend plots elided; panel titles and approximate ranges retained below.
 "+" marks the parent-commit series, "O" the patched series]

  will-it-scale.16.processes         parent ~4.6e+07..4.8e+07, patched ~4.1e+07..4.2e+07
  will-it-scale.per_process_ops      parent ~2.9e+06..3.0e+06, patched ~2.55e+06..2.65e+06
  will-it-scale.workload             parent ~4.6e+07..4.8e+07, patched ~4.1e+07..4.2e+07

  perf-sched.total_wait_time.max.ms / .average.ms
  perf-sched.total_sch_delay.max.ms / .average.ms
  perf-sched.total_wait_and_delay.count.ms / .max.ms / .average.ms
  perf-sched.{wait_time,sch_delay,wait_and_delay}.{avg,max,count}.ms for the paths:
      pipe_read.new_sync_read.vfs_read.ksys_read
      smpboot_thread_fn.kthread.ret_from_fork
      worker_thread.kthread.ret_from_fork
      do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64
      rcu_gp_kthread.kthread.ret_from_fork
      schedule_timeout.kcompactd.kthread.ret_from_fork
++.+++.+ +.+++.+ +++.+++.+ +.+| | + + + : : : : | | : : : : | 504.8 |-+ : : : : | | : : : : | 504.6 |-+ : : : : | | : : : | 504.4 |-+ : : : | | : : : | | : : : | 504.2 |-+ :: : | | :: + | 504 +-------------------------------------------------------------------+ perf-sched.wait_time.avg.ms.schedule_timeout.kcompactd.kthread.ret_from_fork 479 +-------------------------------------------------------------------+ | .+ | 478.8 |+.++ +++.+ +.+++.+++ ++.+++ ++.+++.+ +.+++.+++ : +.++ + + +.+| | : : : : : : : : : : : : : : : : :| 478.6 |-+ : : : : :: : : : : : : : : : : :| 478.4 |-+ :: : : + : : : : : : : :: :: : :| | :: :: : : : : : : : :: :: : :| 478.2 |-+ : : : : : : :: : :: : : :| | + : : : :: : :: : : : | 478 |-+ : :: : : :: :: : | | + :: : : :: :+ : | 477.8 |-+ : : : : :: + | 477.6 |-+ : : : : : | | : : + + + | 477.4 +-------------------------------------------------------------------+ perf-sched.sch_delay.max.ms.schedule_timeout.kcompactd.kthread.ret_from_fork 1.2 +---------------------------------------------------------------------+ | | 1 |-+ + | | : | | : | 0.8 |-+ : | | :: | 0.6 |-+ :: | | : : | 0.4 |-+ : : | | : : | | : : | 0.2 |-+ : : | | .+: : .+| 0 +---------------------------------------------------------------------+ perf-sched.sch_delay.avg.ms.schedule_timeout.kcompactd.kthread.ret_from_fork 0.04 +-------------------------------------------------------------------+ | | 0.035 |-+ + | | : | 0.03 |-+ : | | :: | 0.025 |-+ : : +| | : : :| 0.02 |-+ : : ::| | : : : | 0.015 |-++ + + + + + +: + : | | + : + ::+ + : + : :: :: : | 0.01 |++ ++ + + .+++.+ + ++.+++.+++ ++.+++.+ +.+++.+++.+++.+ + + | | ++ + | 0.005 +-------------------------------------------------------------------+ perf-sched.wait_and_delay.count.schedule_timeout.kcompactd.kthread.ret_from_fork 80 +--------------------------------------------------------------------+ | : : : : : : : : : : : : : : : : :| 79.5 |-+ : : : : : : : : : : : : : : : :| 79 |-+ : : : : + : : : : : : : : :: : :| | :: : : : : : : : 
: : :: :: : :| 78.5 |-+ :: :: : : : : : : : :: : :: :| | : :: : : : : : : : :: : : :| 78 |-+ + :: :: : : : : :: : : : | | : : :: :: :: :: : | 77.5 |-+ : : :: :: :: :: : | 77 |-+ + : :: :: :: :+ + | | : : : : :: | 76.5 |-+ : : : : :: | | : : : : : | 76 +--------------------------------------------------------------------+ perf-sched.wait_and_delay.max.ms.schedule_timeout.kcompactd.kthread.ret_from_for 505.2 +-------------------------------------------------------------------+ | .+ | 505 |+.+++.+++.+ +.+++.+++.+++.+++.+++. ++.+++.+ +.+++.+ +++.+++ : +.+| | + + + : : : : | | : : : : | 504.8 |-+ : : : : | | : : : : | 504.6 |-+ : : :: | | : : : | 504.4 |-+ : : : | | : : : | | : : : | 504.2 |-+ :: : | | :: + | 504 +-------------------------------------------------------------------+ perf-sched.wait_and_delay.avg.ms.schedule_timeout.kcompactd.kthread.ret_from_for 479 +-------------------------------------------------------------------+ | +. .+ | 478.8 |+.++ +++.+ +.++ +++ ++.+++ ++.+++.+ +.+++.+++ : +.++ + + +.+| | : : : : : : : : : : : : : : : : :| 478.6 |-+ : : : : :: : : : : : : : : : : :| 478.4 |-+ :: : : + : : : : : : : :: :: : :| | :: :: : : : : : : : :: : :: :| 478.2 |-+ : : : : : : :: : :: : : :| | + : :: : : :: : : : | 478 |-+ : :: : : :: :: : | | + :: : : :: :+ + | 477.8 |-+ : : : : :: | 477.6 |-+ : : : : : | | + + + + + | 477.4 +-------------------------------------------------------------------+ perf-sched.sch_delay.avg.ms.wait_for_partner.fifo_open.do_dentry_open.do_open.is 0.004 +------------------------------------------------------------------+ | : : | | : : | 0.0035 |-+ : : | | :: :: | | :: :: | | :: :: | 0.003 |-+ :: + + + :+.+ +| | : : : : : : : :| | : : :: : :: : : ::| 0.0025 |-+ : : :: : :: : : ::| | : : : : : : : : : : : | | : : : : : : : : : : : | | : : : : : : : : : :: | 0.002 +------------------------------------------------------------------+ 72 +--------------------------------------------------------------------+ |: :: : : : : :: : : : : :: | 
|: :: : : : : :: : : : : :: | 71.8 |:+ :: : : : : :: : : : : :: | |:: : : : : :: :: : : : : : :: : : | |:: : : : : :: :: : : : : : :: : : | 71.6 |:: : : : : :: :: : : : : : :: : : | |:: : : : : :: :: : : :: : : : : : : | 71.4 |-:: : : : : : : : : :: : : : : : : : | | :: : : : : : : : : :: : : : : : : : | | :: : : : : : : : : :: : : : : : : : | 71.2 |-+: : : : : : : : : : : : : : : : : | | : : : : : : : : : : : : : : : : : | | : : : : : : : : : : : : : : : : : | 71 +--------------------------------------------------------------------+ 0.004 +------------------------------------------------------------------+ | : : | | : : | 0.0035 |-+ : : | | :: :: | | :: :: | | :: :: | 0.003 |-+ :: + + + :+.+ +| | : : : : : : : :| | : : :: : :: : : ::| 0.0025 |-+ : : :: : :: : : ::| | : : : : : : : : : : : | | : : : : : : : : : : : | | : : : : : : : : : :: | 0.002 +------------------------------------------------------------------+ 7500 +--------------------------------------------------------------------+ |+ ++.++ +.+++.++ .+ +.+++.+++.+ +++.+++ ++.++ +++.++.+++.++ +| 7000 |:+ : +.+ + + : : : : : : : :| |: : : : : : : : : :| | : : : : : : : : : : | 6500 |-: : : : : : : : : : | | : : : : : : : : : : | 6000 |-:: :: :: :: :: | | :: :: :: :: :: | 5500 |-:: :: :: :: :: | | : : : : : | | : : : : : | 5000 |-++ + + + + | | | 4500 +--------------------------------------------------------------------+ 750 +---------------------------------------------------------------------+ | + | 700 |++ :: + + + + + + + | |: : + :+ :+ : : : +: :: + | 650 |:: : : : + : + : : : : :: :+ + | | : : : : :+ : : :: + : + : : : + + : | 600 |-: + ++ +: : : : : :: :: :: .+ : : : : +| | : : : ++. 
: : : : :+.+ :: :++ : : +: : :| 550 |-+++ : +.+ ++ :.+: : : +.: :: :: + : : | | :: + + : : + :: : :: | 500 |-+ + :: :: + :: | | : :: :: | 450 |-+ : : : | | + + + | 400 +---------------------------------------------------------------------+ 0.045 +-------------------------------------------------------------------+ | + | 0.04 |-+ : | 0.035 |-+ : | | + :: | 0.03 |-+ : +: | | : : : | 0.025 |++ + + + + :: : : | |: :: :: :: : + .+ :: : | 0.02 |-: +++. + : : : : : : + : : + :: + :: : +| 0.015 |-: + + ++ : ++ ++ ++.+ + : + + + : : +: : + : :+ | | : ++ + + : : :: ++ :+ +.+ :+ + + + + | 0.01 |-: : + : + + + | | : + | 0.005 +-------------------------------------------------------------------+ 70 +----------------------------------------------------------------------+ | + | 65 |-+ + : + + + + | | :: :: :+ : : : | | :: :: : : + : + : ++ :: | 60 |-+ : : + :+ + : : : : : :: :: | | : + :: : + : : + : + : : .+ + : : : :| 55 |:+: : : +. : + : : : : .+ :: : + :+ : : : + : +| |: : : : + : + : : :+ + :+ + ++ : :: + : | 50 |+.: :+ ++.+ : + + + +.+ : ::.+ | | + :: : : + + | | + :: | 45 |-+ : | | + | 40 +----------------------------------------------------------------------+ 7500 +--------------------------------------------------------------------+ |+ ++.++ +.+++.++ .+ +.+++.+++.+ +++.+++ ++.++ +++.++.+++.++ +| 7000 |:+ : +.+ + + : : : : : : : :| |: : : : : : : : : :| | : : : : : : : : : : | 6500 |-: : : : : : : : : : | | : : : : : : : : : : | 6000 |-:: :: :: :: :: | | :: :: :: :: :: | 5500 |-:: :: :: :: :: | | : : : : : | | : : : : : | 5000 |-++ + + + + | | | 4500 +--------------------------------------------------------------------+ 750 +---------------------------------------------------------------------+ | + | 700 |++ :: + + + + + + + | |: : + :+ :+ : : : +: :: + | 650 |:: : : : + : + : : : : :: :+ + | | : : : : :+ : : :: + : + : : : + + : | 600 |-: + ++ +: : : : : :: :: :: .+ : : : : +| | : : : ++. 
: : : : :+.+ :: :++ : : +: : :| 550 |-+++ : +.+ ++ :.+: : : +.: :: :: + : : | | :: + + : : + :: : :: | 500 |-+ + :: :: + :: | | : :: :: | 450 |-+ : : : | | + + + | 400 +---------------------------------------------------------------------+ 1200 +--------------------------------------------------------------------+ | + | 1100 |-+ : | 1000 |-+ :: + + + | | + :: : :: :: | 900 |-+ : :: + :: + :: :: | 800 |-+ + :: : : : ::+: : : : : | | + + :: : :+.+ :.+ : + :+ : + : + | 700 |-+ : + : :+ :: + : : +: : : + : : + | 600 |-+ + ++ :: :: : + : + ::+ : :+: | |+ + + : : :: : : + ++ + : + | 500 |-+ :+.++ + + + :: : ++. + +:| 400 |-++ +. : + + + +| | + | 300 +--------------------------------------------------------------------+ 45 +----------------------------------------------------------------------+ | + | 40 |-:: + + + + + | | :: +: : + : : :: | 35 |++ : + + : + : : : : + :+ + | | :+: : : : :: : :: : : + + : : : | 30 |-+ + : : : :: + :: :: :: : : : : : : :: | | :: :: + : : : :: : : : : :: :: : : : :: | 25 |-+ : :: : :: : : :: : : : : : : :: :: : : : : | | : :: : : :: : : : : : : : : : : : : :: : : :| 20 |-+ + :: : : : :: : : : : : : : : : : : +.: +| | : : : : :: : + : + : : :: : : : + :| 15 |-+ + : : + ++ : + ++ + +.+: ++ +: + | | : ++ :.+ + +. + + + | 10 +----------------------------------------------------------------------+ 1200 +--------------------------------------------------------------------+ | + | 1100 |-+ : | 1000 |-+ :: + + + | | + :: : :: :: | 900 |-+ : :: + :: + :: :: | 800 |-+ + :: : : : ::+: : : : : | | + + :: : :+.+ :.+ : + :+ : + : + | 700 |-+ : + : :+ :: + : : +: : : + : : + | 600 |-+ + ++ :: :: : + : + ::+ : :+: | |+ + + : : :: : : + ++ + : + | 500 |-+ :+.++ + + + :: : ++. + +:| 400 |-++ +. 
: + + + +| | + | 300 +--------------------------------------------------------------------+ 0.14 +--------------------------------------------------------------------+ | ++ +.+ + + + + + + + : + | 0.12 |-+ :: : : : : : : : : : : : | | :: : :: : : : : : : : : | | : : : : : :: : :: :: :: :: : ::: | 0.1 |-+ : : : + : :: : :: :: : ::: : : : | | : : : : :: : :: :: : ::: : : : | 0.08 |-+ : : : :: + : : : : : : : :: : : : : | | : : +: :: : : : : : : : : : : : + : | 0.06 |-+ +.+ : : :: :: : : : : : : : : + : : : | | : :: :::: : : : : : : : :: :: :| | + + : :: : : :+ +.: : + .+: : + :: :: :| 0.04 |+.++ + : + + : + + +.+ + :+ ++ + :+ ++ ++ +| | + + + + | 0.02 +--------------------------------------------------------------------+ 0.032 +-------------------------------------------------------------------+ | : : + | 0.03 |-+ + : : : + | | +.++ : :: :: : :+ | 0.028 |-+ + : : + :: :: :: :: :: | | :: + : : :: : :: : : +: : : | 0.026 |-+ + : : :: : : : : : : : : + : : : : | |+ :+.+ ++ + : + :: + : : : : : ::: : + : + | 0.024 |:+: + : :: :: : : : : : + + ::: : + | |: + +.: + :: + : ++ : + + ::: : +| 0.022 |-+ + : : .+: : +: + + :: :| | + :+ + :.+ + :: :| 0.02 |-+ + + ++ | | | 0.018 +-------------------------------------------------------------------+ 1000.15 +-----------------------------------------------------------------+ 1000.14 |-+ : + :| | + + : : :| 1000.13 |-+ : : : + + : ::| 1000.12 |-+ : : : : : : ::| | :: : : : : : ::| 1000.11 |-+ :: : : :: :: : ::| 1000.1 |-+ :: : : : :: :: :: : | 1000.09 |-+ : :: : : : : : : : : : : | | : :: : : : : : : : : :: | 1000.08 |-+ : : : : : : : : : + : :: | 1000.07 |-+ : : : .+ + : : + : : + : : :+ +.+ :+ | | : + : + :+ +: :.+ +.+:: :+.++:: :+.+ +.+ + : :: | 1000.06 |:+ ++ +.+ : + + + :+ : + : + ++ ++ : | 1000.05 +-----------------------------------------------------------------+ 860 +---------------------------------------------------------------------+ | : : : : : : : : : : : : :: | 840 |-: :: :: : : : :: : :: : : : : : | | :: : :: : : :: :: 
:: :: :: : : : : | 820 |-+++ : + : : : :: :: : : :: : : :: ::: : + | | : : : : : :: :: : : : : : : :: ::: : :: | 800 |-+ :: : : : : : : : : : : : : : : : :: : :: | | : :: : : : : : : : : : : : : : : : : :| 780 |-+ + : ++.++ : +++.++.+++ + ++.++ + + + : + : + +: +| | : : : : : : :: :: :: | 760 |-+ : : :: : :: :: : | | : + + + + + + | 740 |-+ : | | : | 720 +---------------------------------------------------------------------+ 25 +--------------------------------------------------------------------+ | : | 24.5 |-+ : | 24 |-+ : + + + + + + | | :: : : :: : : : | 23.5 |-+ :: : :: :: :: :: : | | :: :: : : : : :: :: :: | 23 |-+ + : : ++.++ :: +.+++.+++.+ + +.+++ + + + : + : + + : +| | : : : : : :: : : : : : : : : : : : : :: | 22.5 |-+ : : : : : : : :: : : : : : : :: ::: : :: | 22 |-+++ : + : : : : : :: : : : : : : :: ::: : + | | :: : :: : : :: : :: :: : : :: : : | 21.5 |-: :: : : : : : :: :: : : :: : : | | : : : : : : : : : : : : :: | 21 +--------------------------------------------------------------------+ 1000.2 +-----------------------------------------------------------------+ | + :| 1000.18 |-+ + : :| | : : ::| 1000.16 |-+ + + : + + : ::| 1000.14 |-+ : : : : : : ::| | :: : : : : :: : | 1000.12 |-+ :: : : : :: :: : : : | | : :: : : : :: :: : :: | 1000.1 |-+ : : : + : : + : : + : : ++ .+ :: | | : + : .+ :+: : : : :+. : : :+. 
: + + :+ | 1000.08 |-+ + +.+ + ++ + +.+ ++.+ : :: ++ : :: + ++.+ ++ :: | 1000.06 |:.++ :: :: : : : : + + | |+ + + + + + + | 1000.04 +-----------------------------------------------------------------+ 860 +---------------------------------------------------------------------+ | : : : : : : : : : : : : :: | 840 |-: :: :: : : : :: : :: : : : : : | | :: : :: : : :: :: :: :: :: : : : : | 820 |-+++ : + : : : :: :: : : :: : : :: ::: : + | | : : : : : :: :: : : : : : : :: ::: : :: | 800 |-+ :: : : : : : : : : : : : : : : : :: : :: | | : :: : : : : : : : : : : : : : : : : :| 780 |-+ + : ++.++ : +++.++.+++ + ++.++ + + + : + : + +: +| | : : : : : : :: :: :: | 760 |-+ : : :: : :: :: : | | : + + + + + + | 740 |-+ : | | : | 720 +---------------------------------------------------------------------+ 999.75 +------------------------------------------------------------------+ | + | 999.7 |-+ : | | :: + | | + +: : + + :: + + + + + | 999.65 |-+ : :: : : +: : ++. :: : : + : : + + :.+: | | .+ :: + : ::: :: + + : + :: : + : .+ + + : +| 999.6 |++ :: :: :: + : + : :: ++ : : :+ : ++ + :::| | + :+ + : : : : :: : : :: + + :: :::| 999.55 |-+ + : : : + :: + + + + | | : : : : | | :: + + | 999.5 |-+ ++ | | | 999.45 +------------------------------------------------------------------+ 900 +---------------------------------------------------------------------+ |: : : | 880 |:+ : : | |: : : | |: : : | 860 |:+ : : | |: :: | 840 |-+ :: | | :: | 820 |-+ :: | | : | | : | 800 |-+ + | | | 780 +---------------------------------------------------------------------+ 10.1 +-------------------------------------------------------------------+ 10.08 |-+ | | | 10.06 |-+ | 10.04 |-+ | 10.02 |-+ | 10 |+.+++.+++.+++.+++.+++.+++.+++.+++.+++.+++.+++.+++.+++.+++.+++.+++.+| | | 9.98 |-+ | 9.96 |-+ | 9.94 |-+ | 9.92 |-+ | | | 9.9 |-+ | 9.88 +-------------------------------------------------------------------+ 999.75 +------------------------------------------------------------------+ | : | 999.7 |-+ : + | | +: : : + | | 
+ :: : + + : :+. + + + + + : ++ | 999.65 |-+ : : + : : + :: + + : : + :: : + : .+ :+ : +| | .+ :: : ::+ + + : : :: : : : : .+ + + + :::| 999.6 |++ + :: + : + : : :++ : : :+ + : + : :::| | :+ : : : + :: + :: + + | 999.55 |-+ + : : : : + | | : : + + | | :: | 999.5 |-+ ++ | | | 999.45 +------------------------------------------------------------------+ 900 +---------------------------------------------------------------------+ |: : : | 880 |:+ : : | |: : : | |: : : | 860 |:+ : : | |: :: | 840 |-+ :: | | :: | 820 |-+ :: | | : | | : | 800 |-+ + | | | 780 +---------------------------------------------------------------------+ perf-sched.sch_delay.avg.ms.devkmsg_read.vfs_read.ksys_read.do_syscall_64 0.06 +--------------------------------------------------------------------+ | : : : : | 0.05 |-+ : : : : | | + :: :: :: : | | : :: :: :: :: | 0.04 |-+ : :: :: :: :: +| | :: :: : : : : :: :| 0.03 |-+ ::: : + : : : : + : : ::| | + : : : : : : : : : : : ::| 0.02 |-+: : : : : : : : : : : : : | | :: : : : : : : : : : : : : : | | : :: : : .+ +.+ : +.+ : ++ : .+ : : :: | 0.01 |++ :+ + ++. ++ + +. + .+ : + +. 
: + .+++ ++.++.+ + :+ | | + + + + + + + + + | 0 +--------------------------------------------------------------------+ perf-sched.wait_and_delay.count.devkmsg_read.vfs_read.ksys_read.do_syscall_64 8 +-----------------------------------------------------------------------+ | : : : : | | : : : : | 7 |-:: : :: + + + :: +.+ +| | :: : :: : : : :: : : :| | :: : :: : : : :: : : :| 6 |-: : : ::: : :: :: : :: : ::| |: : : :: : :: : : : : : :: :: | 5 |++ : : :: : :: : : : : : : :: | | : : : : : : : : : : : : :: | | : : : : : : : : : : : : : | 4 |-+ +.+ + : : + ++.++ +.+++.++ ++.++.+++.++.++ + + | | : : : : | | : : : : | 3 +-----------------------------------------------------------------------+ perf-sched.wait_and_delay.count.do_syslog.part.0.kmsg_read.vfs_read 8 +-----------------------------------------------------------------------+ | : : : : | | : : : : | 7 |-:: : :: + + + :: +.+ +| | :: : :: : : : :: : : :| | :: : :: : : : :: : : :| 6 |-: : : ::: : :: :: : :: : ::| |: : : :: : :: : : : : : :: :: | 5 |++ : : :: : :: : : : : : : :: | | : : : : : : : : : : : : :: | | : : : : : : : : : : : : : | 4 |-+ +.+ + : : + ++.++ +.+++.++ ++.++.+++.++.++ + + | | : : : : | | : : : : | 3 +-----------------------------------------------------------------------+ 0.09 +--------------------------------------------------------------------+ | + + + | 0.08 |-++ : : : | 0.07 |-+: : : : | | : :: :: :: | 0.06 |-:: :: :: :: | 0.05 |-:: :: :: :: | | :: : : : : : : + | 0.04 |-: : : : : : : : : | 0.03 |-: : : : : : : : + :: | |: : : : + : : +: : : : : | 0.02 |:+ : : : + :+: : :: : : :: : | 0.01 |++ : .++ ++. + +. ::.++ : + +++.: + ++.+++.+ .+ .+ : :+. 
| | ++ + +.++ + + +.++ + ++ + + + +| 0 +--------------------------------------------------------------------+ perf-sched.wait_time.max.ms.schedule_timeout.khugepaged.kthread.ret_from_fork 0.09 +--------------------------------------------------------------------+ | + | 0.08 |-+ : | 0.07 |-+ : | | : | 0.06 |-+ : | 0.05 |-+ :: | | : : | 0.04 |-+ : : | 0.03 |-+ : : | |+ + : + + | 0.02 |:+ + : : :: + + :: | 0.01 |-++ +.+ +.+ + + + : +. : : ++. :+ .++ + : : : +.+ +. +.+| | + + +.+ + + ++ + + +++ + ++.++ +++.+ + ++ | 0 +--------------------------------------------------------------------+ perf-sched.wait_time.avg.ms.schedule_timeout.khugepaged.kthread.ret_from_fork 0.09 +--------------------------------------------------------------------+ | + | 0.08 |-+ : | 0.07 |-+ : | | : | 0.06 |-+ : | 0.05 |-+ :: | | : : | 0.04 |-+ : : | 0.03 |-+ : : | |+ + : + + | 0.02 |:+ + : : :: + + :: | 0.01 |-++ +.+ +.+ + + + : +. : : ++. :+ .++ + : : : +.+ +. +.+| | + + +.+ + + ++ + + +++ + ++.++ +++.+ + ++ | 0 +--------------------------------------------------------------------+ [*] bisect-good sample [O] bisect-bad sample *************************************************************************************************** lkp-skl-fpga01: 104 threads Skylake with 192G memory ========================================================================================= compiler/cpufreq_governor/kconfig/mode/nr_task/rootfs/tbox_group/test/testcase/ucode: gcc-9/performance/x86_64-rhel-8.3/process/16/debian-10.4-x86_64-20200603.cgz/lkp-skl-fpga01/malloc1/will-it-scale/0x2006a0a commit: dde38d5b80 ("kernel: don't include PF_IO_WORKERs as part of same_thread_group()") 43b2a76b1a ("proc: don't show PF_IO_WORKER threads as threads in /proc/<pid>/task/") dde38d5b805358c7 43b2a76b1a5abcc9833b463bef1 ---------------- --------------------------- %stddev %change %stddev \ | \ 1105101 -11.1% 982154 ± 2% will-it-scale.16.processes 83.20 -1.4% 82.01 will-it-scale.16.processes_idle 69068 -11.1% 61384 ± 2% 
will-it-scale.per_process_ops 1105101 -11.1% 982154 ± 2% will-it-scale.workload 1.36 ± 15% -0.7 0.67 ± 4% mpstat.cpu.all.irq% 6.653e+08 -11.0% 5.921e+08 ± 2% numa-numastat.node0.local_node 6.653e+08 -11.0% 5.921e+08 ± 2% numa-numastat.node0.numa_hit 626019 ± 3% +223.0% 2022084 ± 22% numa-numastat.node1.local_node 677357 ± 4% +202.5% 2049009 ± 21% numa-numastat.node1.numa_hit 83.00 -1.2% 82.00 vmstat.cpu.id 1358743 +147.9% 3368229 ± 2% vmstat.memory.cache 15.17 ± 2% +12.1% 17.00 vmstat.procs.r 1602 +10.6% 1772 vmstat.system.cs 599024 ± 2% +70.6% 1022074 ± 38% numa-meminfo.node0.FilePages 1251096 ± 5% +48.3% 1855764 ± 20% numa-meminfo.node0.MemUsed 55683 ± 4% +2832.5% 1632907 ± 28% numa-meminfo.node1.Active 55560 ± 4% +2830.2% 1628037 ± 29% numa-meminfo.node1.Active(anon) 661513 ± 2% +238.7% 2240271 ± 20% numa-meminfo.node1.FilePages 1318940 ± 5% +137.5% 3132286 ± 13% numa-meminfo.node1.MemUsed 64592 ± 4% +2434.8% 1637306 ± 28% numa-meminfo.node1.Shmem 86167 ± 72% -40.6% 51159 ± 2% sched_debug.cfs_rq:/.load.max 27024 ± 20% -18.4% 22050 sched_debug.cfs_rq:/.load.stddev 490216 ± 4% +18.4% 580187 ± 5% sched_debug.cfs_rq:/.min_vruntime.avg 225.01 ± 12% -23.1% 172.92 ± 10% sched_debug.cfs_rq:/.util_est_enqueued.avg 348.14 ± 5% -23.6% 266.02 ± 7% sched_debug.cfs_rq:/.util_est_enqueued.stddev 1334 ± 25% -48.8% 682.65 ± 3% sched_debug.cpu.clock_task.stddev 0.16 ± 2% +11.0% 0.18 sched_debug.cpu.nr_running.avg 57321 ± 3% +3485.8% 2055409 ± 3% meminfo.Active 56950 ± 3% +3497.8% 2048955 ± 4% meminfo.Active(anon) 1260522 +158.6% 3260277 ± 2% meminfo.Cached 2367306 +88.5% 4462262 ± 2% meminfo.Committed_AS 2569992 +94.0% 4985892 meminfo.Memused 4496 +11.4% 5009 meminfo.PageTables 76116 ± 2% +2615.4% 2066827 ± 4% meminfo.Shmem 3023094 +64.9% 4985892 meminfo.max_used_kB 149755 ± 2% +70.6% 255457 ± 38% numa-vmstat.node0.nr_file_pages 3.314e+08 -10.4% 2.97e+08 ± 2% numa-vmstat.node0.numa_hit 3.313e+08 -10.4% 2.97e+08 ± 2% numa-vmstat.node0.numa_local 13864 ± 4% +2833.3% 406677 ± 29% 
numa-vmstat.node1.nr_active_anon 165371 ± 2% +238.5% 559735 ± 20% numa-vmstat.node1.nr_file_pages 16141 ± 4% +2433.9% 408994 ± 28% numa-vmstat.node1.nr_shmem 13864 ± 4% +2833.3% 406677 ± 29% numa-vmstat.node1.nr_zone_active_anon 1129707 ± 2% +59.3% 1799567 ± 9% numa-vmstat.node1.numa_hit 947670 ± 11% +68.0% 1591804 ± 11% numa-vmstat.node1.numa_local 31447 ± 2% +41.8% 44578 ± 3% slabinfo.filp.active_objs 993.50 ± 2% +40.7% 1398 ± 3% slabinfo.filp.active_slabs 31810 ± 2% +40.7% 44750 ± 3% slabinfo.filp.num_objs 993.50 ± 2% +40.7% 1398 ± 3% slabinfo.filp.num_slabs 8576 +52.7% 13093 ± 13% slabinfo.kmalloc-256.active_objs 268.17 +53.3% 411.17 ± 13% slabinfo.kmalloc-256.active_slabs 8605 +53.1% 13173 ± 13% slabinfo.kmalloc-256.num_objs 268.17 +53.3% 411.17 ± 13% slabinfo.kmalloc-256.num_slabs 13727 -35.8% 8815 slabinfo.proc_inode_cache.active_objs 13729 -35.8% 8815 slabinfo.proc_inode_cache.num_objs 22048 +36.6% 30115 slabinfo.radix_tree_node.active_objs 393.00 +36.8% 537.67 slabinfo.radix_tree_node.active_slabs 22048 +36.6% 30118 slabinfo.radix_tree_node.num_objs 393.00 +36.8% 537.67 slabinfo.radix_tree_node.num_slabs 14238 ± 3% +3503.0% 513000 ± 4% proc-vmstat.nr_active_anon 61574 +8.1% 66554 proc-vmstat.nr_anon_pages 4824998 -1.2% 4764837 proc-vmstat.nr_dirty_background_threshold 9661794 -1.2% 9541325 proc-vmstat.nr_dirty_threshold 315136 +158.9% 815833 ± 2% proc-vmstat.nr_file_pages 48538649 -1.2% 47933894 proc-vmstat.nr_free_pages 66233 +7.0% 70862 proc-vmstat.nr_inactive_anon 1122 +11.3% 1249 proc-vmstat.nr_page_table_pages 19033 ± 2% +2618.8% 517468 ± 4% proc-vmstat.nr_shmem 47871 +3.3% 49439 proc-vmstat.nr_slab_unreclaimable 14238 ± 3% +3503.0% 513000 ± 4% proc-vmstat.nr_zone_active_anon 66233 +7.0% 70862 proc-vmstat.nr_zone_inactive_anon 2897 ± 91% +114.1% 6202 ± 39% proc-vmstat.numa_hint_faults 6.656e+08 -10.8% 5.94e+08 proc-vmstat.numa_hit 6.655e+08 -10.8% 5.939e+08 proc-vmstat.numa_local 20412 ± 4% +236.7% 68727 ± 3% proc-vmstat.pgactivate 6.661e+08 -10.6% 
5.952e+08 proc-vmstat.pgalloc_normal 3.335e+08 -11.0% 2.967e+08 ± 2% proc-vmstat.pgfault 6.661e+08 -10.8% 5.94e+08 proc-vmstat.pgfree 8040 ± 9% +16.9% 9395 ± 9% softirqs.CPU1.RCU 7061 ± 7% +30.9% 9240 ± 12% softirqs.CPU16.RCU 7511 ± 10% +43.0% 10743 ± 19% softirqs.CPU17.RCU 7696 ± 10% +17.0% 9002 ± 11% softirqs.CPU2.RCU 7261 ± 13% +32.3% 9605 ± 15% softirqs.CPU20.RCU 6764 ± 13% +28.7% 8706 ± 8% softirqs.CPU21.RCU 5969 ± 8% +95.5% 11670 ± 30% softirqs.CPU27.RCU 5875 ± 4% +52.7% 8971 ± 31% softirqs.CPU29.RCU 6642 ± 7% +29.9% 8626 ± 13% softirqs.CPU45.RCU 6641 ± 7% +35.4% 8991 ± 22% softirqs.CPU47.RCU 6629 ± 7% +30.5% 8648 ± 15% softirqs.CPU48.RCU 6314 ± 8% +53.3% 9679 ± 31% softirqs.CPU49.RCU 17807 ± 7% +120.2% 39216 ± 4% softirqs.CPU51.SCHED 6839 ± 7% +38.3% 9462 ± 5% softirqs.CPU58.RCU 6665 ± 7% +33.8% 8921 ± 7% softirqs.CPU59.RCU 7542 ± 9% +17.3% 8847 ± 8% softirqs.CPU66.RCU 4833 ± 17% +78.2% 8612 ± 28% softirqs.CPU77.RCU 7677 ± 13% +24.4% 9552 ± 12% softirqs.CPU78.RCU 41305 -12.3% 36206 ± 8% softirqs.CPU78.SCHED 7588 ± 10% +21.5% 9222 ± 19% softirqs.CPU84.RCU 780375 ± 6% +19.5% 932332 ± 4% softirqs.RCU 43219 ± 7% +23.9% 53545 ± 2% softirqs.TIMER 52.18 ± 10% -52.2 0.00 perf-profile.calltrace.cycles-pp.__munmap 51.08 ± 10% -51.1 0.00 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.__munmap 50.08 ± 10% -50.1 0.00 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.__munmap 50.05 ± 10% -50.1 0.00 perf-profile.calltrace.cycles-pp.__x64_sys_munmap.do_syscall_64.entry_SYSCALL_64_after_hwframe.__munmap 49.94 ± 10% -49.9 0.00 perf-profile.calltrace.cycles-pp.__vm_munmap.__x64_sys_munmap.do_syscall_64.entry_SYSCALL_64_after_hwframe.__munmap 49.77 ± 10% -49.8 0.00 perf-profile.calltrace.cycles-pp.__do_munmap.__vm_munmap.__x64_sys_munmap.do_syscall_64.entry_SYSCALL_64_after_hwframe 48.99 ± 10% -49.0 0.00 perf-profile.calltrace.cycles-pp.unmap_region.__do_munmap.__vm_munmap.__x64_sys_munmap.do_syscall_64 37.69 ± 10% -37.7 0.00 
perf-profile.calltrace.cycles-pp.tlb_finish_mmu.unmap_region.__do_munmap.__vm_munmap.__x64_sys_munmap 37.63 ± 10% -37.6 0.00 perf-profile.calltrace.cycles-pp.tlb_flush_mmu.tlb_finish_mmu.unmap_region.__do_munmap.__vm_munmap 34.89 ± 19% -34.9 0.00 perf-profile.calltrace.cycles-pp.secondary_startup_64_no_verify 33.92 ± 17% -33.9 0.00 perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify 33.92 ± 17% -33.9 0.00 perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify 33.92 ± 17% -33.9 0.00 perf-profile.calltrace.cycles-pp.start_secondary.secondary_startup_64_no_verify 33.57 ± 17% -33.6 0.00 perf-profile.calltrace.cycles-pp.cpuidle_enter.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify 33.46 ± 17% -33.5 0.00 perf-profile.calltrace.cycles-pp.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry.start_secondary 33.02 ± 21% -33.0 0.00 perf-profile.calltrace.cycles-pp.intel_idle.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry 31.26 ± 9% -31.3 0.00 perf-profile.calltrace.cycles-pp.flush_tlb_mm_range.tlb_flush_mmu.tlb_finish_mmu.unmap_region.__do_munmap 30.90 ± 9% -30.9 0.00 perf-profile.calltrace.cycles-pp.flush_tlb_func_common.flush_tlb_mm_range.tlb_flush_mmu.tlb_finish_mmu.unmap_region 30.60 ± 9% -30.6 0.00 perf-profile.calltrace.cycles-pp.native_flush_tlb_one_user.flush_tlb_func_common.flush_tlb_mm_range.tlb_flush_mmu.tlb_finish_mmu 5.94 ± 11% -5.9 0.00 perf-profile.calltrace.cycles-pp.release_pages.tlb_flush_mmu.tlb_finish_mmu.unmap_region.__do_munmap 5.90 ± 10% -5.9 0.00 perf-profile.calltrace.cycles-pp.asm_exc_page_fault 5.77 ± 10% -5.8 0.00 perf-profile.calltrace.cycles-pp.unmap_vmas.unmap_region.__do_munmap.__vm_munmap.__x64_sys_munmap 5.60 ± 10% -5.6 0.00 perf-profile.calltrace.cycles-pp.unmap_page_range.unmap_vmas.unmap_region.__do_munmap.__vm_munmap 5.03 ± 10% -5.0 0.00 
perf-profile.calltrace.cycles-pp.exc_page_fault.asm_exc_page_fault 4.99 ± 10% -5.0 0.00 perf-profile.calltrace.cycles-pp.do_user_addr_fault.exc_page_fault.asm_exc_page_fault 54.84 ± 10% -54.8 0.00 perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe 52.90 ± 10% -52.9 0.00 perf-profile.children.cycles-pp.do_syscall_64 52.27 ± 10% -52.3 0.00 perf-profile.children.cycles-pp.__munmap 50.05 ± 10% -50.1 0.00 perf-profile.children.cycles-pp.__x64_sys_munmap 49.95 ± 10% -49.9 0.00 perf-profile.children.cycles-pp.__vm_munmap 49.78 ± 10% -49.8 0.00 perf-profile.children.cycles-pp.__do_munmap 49.00 ± 10% -49.0 0.00 perf-profile.children.cycles-pp.unmap_region 37.69 ± 10% -37.7 0.00 perf-profile.children.cycles-pp.tlb_finish_mmu 37.64 ± 10% -37.6 0.00 perf-profile.children.cycles-pp.tlb_flush_mmu 34.89 ± 19% -34.9 0.00 perf-profile.children.cycles-pp.secondary_startup_64_no_verify 34.89 ± 19% -34.9 0.00 perf-profile.children.cycles-pp.cpu_startup_entry 34.89 ± 19% -34.9 0.00 perf-profile.children.cycles-pp.do_idle 34.53 ± 19% -34.5 0.00 perf-profile.children.cycles-pp.cpuidle_enter 34.53 ± 19% -34.5 0.00 perf-profile.children.cycles-pp.cpuidle_enter_state 33.92 ± 17% -33.9 0.00 perf-profile.children.cycles-pp.start_secondary 33.11 ± 21% -33.1 0.00 perf-profile.children.cycles-pp.intel_idle 31.28 ± 9% -31.3 0.00 perf-profile.children.cycles-pp.flush_tlb_mm_range 30.95 ± 9% -30.9 0.00 perf-profile.children.cycles-pp.flush_tlb_func_common 30.65 ± 9% -30.7 0.00 perf-profile.children.cycles-pp.native_flush_tlb_one_user 6.63 ± 12% -6.6 0.00 perf-profile.children.cycles-pp.release_pages 5.96 ± 10% -6.0 0.00 perf-profile.children.cycles-pp.asm_exc_page_fault 5.78 ± 10% -5.8 0.00 perf-profile.children.cycles-pp.unmap_vmas 5.71 ± 10% -5.7 0.00 perf-profile.children.cycles-pp.unmap_page_range 5.05 ± 10% -5.1 0.00 perf-profile.children.cycles-pp.exc_page_fault 5.01 ± 10% -5.0 0.00 perf-profile.children.cycles-pp.do_user_addr_fault 4.68 ± 11% -4.7 0.00 
perf-profile.children.cycles-pp.__mmap 33.11 ± 21% -33.1 0.00 perf-profile.self.cycles-pp.intel_idle 30.60 ± 9% -30.6 0.00 perf-profile.self.cycles-pp.native_flush_tlb_one_user 5.794e+09 +10.5% 6.403e+09 perf-stat.i.branch-instructions 0.67 ± 37% +11.0 11.68 ± 6% perf-stat.i.cache-miss-rate% 1466858 ± 45% +1979.5% 30503394 ± 6% perf-stat.i.cache-misses 2.278e+08 ± 8% +14.8% 2.616e+08 ± 3% perf-stat.i.cache-references 1573 +10.8% 1742 perf-stat.i.context-switches 4.718e+10 +11.7% 5.27e+10 perf-stat.i.cpu-cycles 108.07 -1.8% 106.11 perf-stat.i.cpu-migrations 48430 ± 64% -96.3% 1798 ± 9% perf-stat.i.cycles-between-cache-misses 0.08 ± 11% -0.0 0.06 perf-stat.i.dTLB-load-miss-rate% 6.336e+09 +10.0% 6.967e+09 perf-stat.i.dTLB-loads 0.15 -0.0 0.11 perf-stat.i.dTLB-store-miss-rate% 4511491 -12.9% 3930479 perf-stat.i.dTLB-store-misses 2.97e+09 +16.4% 3.457e+09 perf-stat.i.dTLB-stores 3.67 ± 5% +0.6 4.28 ± 8% perf-stat.i.iTLB-load-miss-rate% 97452404 -9.9% 87781036 perf-stat.i.iTLB-loads 2.63e+10 +11.9% 2.944e+10 perf-stat.i.instructions 0.85 ± 13% -86.7% 0.11 ± 60% perf-stat.i.major-faults 0.45 +11.7% 0.51 perf-stat.i.metric.GHz 0.94 ± 20% -69.9% 0.28 ± 5% perf-stat.i.metric.K/sec 148.39 +11.4% 165.31 perf-stat.i.metric.M/sec 1104170 -11.0% 982353 ± 2% perf-stat.i.minor-faults 124679 ± 19% +4411.6% 5625000 ± 4% perf-stat.i.node-load-misses 18816 ± 18% +3435.6% 665298 ± 6% perf-stat.i.node-loads 91.87 +7.0 98.92 perf-stat.i.node-store-miss-rate% 31781 ± 7% +10338.3% 3317480 ± 8% perf-stat.i.node-store-misses 5872 ± 9% +393.6% 28983 ± 5% perf-stat.i.node-stores 1104171 -11.0% 982353 ± 2% perf-stat.i.page-faults 0.63 ± 41% +11.0 11.67 ± 6% perf-stat.overall.cache-miss-rate% 43058 ± 57% -96.0% 1734 ± 6% perf-stat.overall.cycles-between-cache-misses 0.08 ± 11% -0.0 0.06 perf-stat.overall.dTLB-load-miss-rate% 0.15 -0.0 0.11 perf-stat.overall.dTLB-store-miss-rate% 3.66 ± 5% +0.6 4.26 ± 8% perf-stat.overall.iTLB-load-miss-rate% 86.88 +2.5 89.40 
perf-stat.overall.node-load-miss-rate% 84.40 +14.7 99.13 perf-stat.overall.node-store-miss-rate% 7160811 +26.0% 9025782 perf-stat.overall.path-length 5.774e+09 +10.5% 6.382e+09 perf-stat.ps.branch-instructions 1461552 ± 45% +1979.8% 30397813 ± 6% perf-stat.ps.cache-misses 2.271e+08 ± 8% +14.8% 2.608e+08 ± 3% perf-stat.ps.cache-references 1568 +10.8% 1737 perf-stat.ps.context-switches 4.702e+10 +11.7% 5.252e+10 perf-stat.ps.cpu-cycles 107.68 -1.8% 105.75 perf-stat.ps.cpu-migrations 6.315e+09 +10.0% 6.943e+09 perf-stat.ps.dTLB-loads 4496294 -12.9% 3917284 ± 2% perf-stat.ps.dTLB-store-misses 2.96e+09 +16.4% 3.446e+09 perf-stat.ps.dTLB-stores 97123947 -9.9% 87486204 ± 2% perf-stat.ps.iTLB-loads 2.621e+10 +11.9% 2.934e+10 perf-stat.ps.instructions 0.84 ± 13% -86.7% 0.11 ± 60% perf-stat.ps.major-faults 1100449 -11.0% 979055 ± 2% perf-stat.ps.minor-faults 124211 ± 19% +4413.3% 5605997 ± 4% perf-stat.ps.node-load-misses 18709 ± 18% +3444.3% 663126 ± 6% perf-stat.ps.node-loads 31654 ± 7% +10343.2% 3305726 ± 8% perf-stat.ps.node-store-misses 5835 ± 9% +394.9% 28881 ± 5% perf-stat.ps.node-stores 1100450 -11.0% 979055 ± 2% perf-stat.ps.page-faults 7.914e+12 +12.0% 8.863e+12 perf-stat.total.instructions 0.01 ± 58% -100.0% 0.00 perf-sched.sch_delay.avg.ms.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown] 0.01 ± 48% -100.0% 0.00 perf-sched.sch_delay.avg.ms.devkmsg_read.vfs_read.ksys_read.do_syscall_64 0.01 ± 41% -100.0% 0.00 perf-sched.sch_delay.avg.ms.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep.do_syscall_64 0.01 ± 29% -100.0% 0.00 perf-sched.sch_delay.avg.ms.do_syslog.part.0.kmsg_read.vfs_read 0.01 ± 31% -100.0% 0.00 perf-sched.sch_delay.avg.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64 0.01 ± 74% -100.0% 0.00 perf-sched.sch_delay.avg.ms.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64 0.01 ± 64% -100.0% 0.00 perf-sched.sch_delay.avg.ms.futex_wait_queue_me.futex_wait.do_futex.__x64_sys_futex 0.00 ± 38% -100.0% 0.00 
perf-sched.sch_delay.avg.ms.pipe_read.new_sync_read.vfs_read.ksys_read 0.02 ±173% -100.0% 0.00 perf-sched.sch_delay.avg.ms.rcu_gp_kthread.kthread.ret_from_fork 0.01 ± 32% -100.0% 0.00 perf-sched.sch_delay.avg.ms.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait 0.01 ± 23% -100.0% 0.00 perf-sched.sch_delay.avg.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_select 0.00 ± 51% -100.0% 0.00 perf-sched.sch_delay.avg.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_sys_poll 0.01 ± 24% -100.0% 0.00 perf-sched.sch_delay.avg.ms.schedule_timeout.kcompactd.kthread.ret_from_fork 0.01 ±111% -100.0% 0.00 perf-sched.sch_delay.avg.ms.schedule_timeout.rcu_gp_kthread.kthread.ret_from_fork 0.01 ± 89% -100.0% 0.00 perf-sched.sch_delay.avg.ms.schedule_timeout.wait_for_completion.__flush_work.lru_add_drain_all 0.01 ± 41% -100.0% 0.00 perf-sched.sch_delay.avg.ms.smpboot_thread_fn.kthread.ret_from_fork 0.00 ± 11% -100.0% 0.00 perf-sched.sch_delay.avg.ms.wait_for_partner.fifo_open.do_dentry_open.do_open.isra 0.02 ± 20% -100.0% 0.00 perf-sched.sch_delay.avg.ms.worker_thread.kthread.ret_from_fork 0.01 ± 79% -100.0% 0.00 perf-sched.sch_delay.max.ms.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown] 0.02 ±127% -100.0% 0.00 perf-sched.sch_delay.max.ms.devkmsg_read.vfs_read.ksys_read.do_syscall_64 0.02 ± 55% -100.0% 0.00 perf-sched.sch_delay.max.ms.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep.do_syscall_64 0.02 ± 89% -100.0% 0.00 perf-sched.sch_delay.max.ms.do_syslog.part.0.kmsg_read.vfs_read 0.06 ± 62% -100.0% 0.00 perf-sched.sch_delay.max.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64 0.06 ± 85% -100.0% 0.00 perf-sched.sch_delay.max.ms.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64 0.01 ± 21% -100.0% 0.00 perf-sched.sch_delay.max.ms.exit_to_user_mode_prepare.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe.[unknown] 0.03 ±121% -100.0% 0.00 
perf-sched.sch_delay.max.ms.futex_wait_queue_me.futex_wait.do_futex.__x64_sys_futex 0.07 ± 65% -100.0% 0.00 perf-sched.sch_delay.max.ms.pipe_read.new_sync_read.vfs_read.ksys_read 0.04 ± 43% -100.0% 0.00 perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.stop_one_cpu.affine_move_task.__set_cpus_allowed_ptr 0.02 ±168% -100.0% 0.00 perf-sched.sch_delay.max.ms.rcu_gp_kthread.kthread.ret_from_fork 0.04 ± 80% -100.0% 0.00 perf-sched.sch_delay.max.ms.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait 0.01 ± 16% -100.0% 0.00 perf-sched.sch_delay.max.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_select 0.01 ± 51% -100.0% 0.00 perf-sched.sch_delay.max.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_sys_poll 0.02 ± 24% -100.0% 0.00 perf-sched.sch_delay.max.ms.schedule_timeout.kcompactd.kthread.ret_from_fork 6.20 ±108% -100.0% 0.00 perf-sched.sch_delay.max.ms.schedule_timeout.rcu_gp_kthread.kthread.ret_from_fork 0.01 ± 85% -100.0% 0.00 perf-sched.sch_delay.max.ms.schedule_timeout.wait_for_completion.__flush_work.lru_add_drain_all 1.42 ±217% -100.0% 0.00 perf-sched.sch_delay.max.ms.smpboot_thread_fn.kthread.ret_from_fork 0.01 ± 51% -100.0% 0.00 perf-sched.sch_delay.max.ms.wait_for_partner.fifo_open.do_dentry_open.do_open.isra 3.20 -100.0% 0.00 perf-sched.sch_delay.max.ms.worker_thread.kthread.ret_from_fork 0.01 ± 56% -100.0% 0.00 perf-sched.total_sch_delay.average.ms 8.10 ± 68% -100.0% 0.00 perf-sched.total_sch_delay.max.ms 205.13 ± 2% -100.0% 0.00 perf-sched.total_wait_and_delay.average.ms 8507 ± 2% -100.0% 0.00 perf-sched.total_wait_and_delay.count.ms 8963 ± 5% -100.0% 0.00 perf-sched.total_wait_and_delay.max.ms 205.12 ± 2% -100.0% 0.00 perf-sched.total_wait_time.average.ms 8963 ± 5% -100.0% 0.00 perf-sched.total_wait_time.max.ms 899.67 -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown] 1963 ± 41% -100.0% 0.00 
perf-sched.wait_and_delay.avg.ms.devkmsg_read.vfs_read.ksys_read.do_syscall_64 833.08 ± 4% -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep.do_syscall_64 1963 ± 41% -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.do_syslog.part.0.kmsg_read.vfs_read 272.32 ± 4% -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64 0.91 ± 4% -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64 69.10 ± 4% -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.pipe_read.new_sync_read.vfs_read.ksys_read 0.02 ± 5% -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.preempt_schedule_common.__cond_resched.remove_vma.__do_munmap.__vm_munmap 8.95 ± 8% -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.preempt_schedule_common.__cond_resched.stop_one_cpu.affine_move_task.__set_cpus_allowed_ptr 2.66 ± 9% -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.rcu_gp_kthread.kthread.ret_from_fork 580.43 ± 25% -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait 519.81 ± 12% -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_sys_poll 478.81 -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.schedule_timeout.kcompactd.kthread.ret_from_fork 6.18 ± 11% -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.schedule_timeout.rcu_gp_kthread.kthread.ret_from_fork 697.60 -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.smpboot_thread_fn.kthread.ret_from_fork 0.00 ± 11% -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.wait_for_partner.fifo_open.do_dentry_open.do_open.isra 543.97 ± 4% -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.worker_thread.kthread.ret_from_fork 10.00 -100.0% 0.00 perf-sched.wait_and_delay.count.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown] 5.17 ± 36% -100.0% 0.00 
perf-sched.wait_and_delay.count.devkmsg_read.vfs_read.ksys_read.do_syscall_64 21.67 ± 4% -100.0% 0.00 perf-sched.wait_and_delay.count.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep.do_syscall_64 5.17 ± 36% -100.0% 0.00 perf-sched.wait_and_delay.count.do_syslog.part.0.kmsg_read.vfs_read 247.00 -100.0% 0.00 perf-sched.wait_and_delay.count.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64 238.33 ± 4% -100.0% 0.00 perf-sched.wait_and_delay.count.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64 2342 ± 4% -100.0% 0.00 perf-sched.wait_and_delay.count.pipe_read.new_sync_read.vfs_read.ksys_read 265.00 ± 5% -100.0% 0.00 perf-sched.wait_and_delay.count.preempt_schedule_common.__cond_resched.remove_vma.__do_munmap.__vm_munmap 1200 ± 4% -100.0% 0.00 perf-sched.wait_and_delay.count.preempt_schedule_common.__cond_resched.stop_one_cpu.affine_move_task.__set_cpus_allowed_ptr 63.67 ± 13% -100.0% 0.00 perf-sched.wait_and_delay.count.rcu_gp_kthread.kthread.ret_from_fork 29.00 ± 33% -100.0% 0.00 perf-sched.wait_and_delay.count.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait 64.83 ± 5% -100.0% 0.00 perf-sched.wait_and_delay.count.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_sys_poll 40.00 -100.0% 0.00 perf-sched.wait_and_delay.count.schedule_timeout.kcompactd.kthread.ret_from_fork 1611 ± 11% -100.0% 0.00 perf-sched.wait_and_delay.count.schedule_timeout.rcu_gp_kthread.kthread.ret_from_fork 1408 ± 3% -100.0% 0.00 perf-sched.wait_and_delay.count.smpboot_thread_fn.kthread.ret_from_fork 72.00 -100.0% 0.00 perf-sched.wait_and_delay.count.wait_for_partner.fifo_open.do_dentry_open.do_open.isra 726.33 ± 3% -100.0% 0.00 perf-sched.wait_and_delay.count.worker_thread.kthread.ret_from_fork 999.73 -100.0% 0.00 perf-sched.wait_and_delay.max.ms.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown] 5749 ± 44% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.devkmsg_read.vfs_read.ksys_read.do_syscall_64 1000 -100.0% 
0.00 perf-sched.wait_and_delay.max.ms.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep.do_syscall_64 5749 ± 44% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.do_syslog.part.0.kmsg_read.vfs_read 4683 ± 54% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64 16.38 ± 2% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64 5751 ± 44% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.pipe_read.new_sync_read.vfs_read.ksys_read 0.11 ± 49% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.preempt_schedule_common.__cond_resched.remove_vma.__do_munmap.__vm_munmap 1807 ± 44% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.preempt_schedule_common.__cond_resched.stop_one_cpu.affine_move_task.__set_cpus_allowed_ptr 4.85 ± 7% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.rcu_gp_kthread.kthread.ret_from_fork 5338 ± 48% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait 6078 ± 15% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_sys_poll 505.00 -100.0% 0.00 perf-sched.wait_and_delay.max.ms.schedule_timeout.kcompactd.kthread.ret_from_fork 265.84 ± 15% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.schedule_timeout.rcu_gp_kthread.kthread.ret_from_fork 1193 ± 31% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.smpboot_thread_fn.kthread.ret_from_fork 0.01 ± 51% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.wait_for_partner.fifo_open.do_dentry_open.do_open.isra 8783 ± 7% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.worker_thread.kthread.ret_from_fork 899.66 -100.0% 0.00 perf-sched.wait_time.avg.ms.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown] 1963 ± 41% -100.0% 0.00 perf-sched.wait_time.avg.ms.devkmsg_read.vfs_read.ksys_read.do_syscall_64 833.07 ± 4% -100.0% 0.00 
perf-sched.wait_time.avg.ms.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep.do_syscall_64 1963 ± 41% -100.0% 0.00 perf-sched.wait_time.avg.ms.do_syslog.part.0.kmsg_read.vfs_read 272.31 ± 4% -100.0% 0.00 perf-sched.wait_time.avg.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64 0.90 ± 3% -100.0% 0.00 perf-sched.wait_time.avg.ms.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64 0.02 ± 32% -100.0% 0.00 perf-sched.wait_time.avg.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_exc_page_fault.[unknown] 0.02 ± 19% -100.0% 0.00 perf-sched.wait_time.avg.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_apic_timer_interrupt.[unknown] 0.08 ±163% -100.0% 0.00 perf-sched.wait_time.avg.ms.exit_to_user_mode_prepare.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe.[unknown] 69.09 ± 4% -100.0% 0.00 perf-sched.wait_time.avg.ms.pipe_read.new_sync_read.vfs_read.ksys_read 0.02 ± 47% -100.0% 0.00 perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.__alloc_pages_nodemask.pte_alloc_one.__pte_alloc 0.02 ± 17% -100.0% 0.00 perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.__anon_vma_prepare.do_anonymous_page.__handle_mm_fault 72.45 ±223% -100.0% 0.00 perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.do_user_addr_fault.exc_page_fault.asm_exc_page_fault 0.02 ± 72% -100.0% 0.00 perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.down_write_killable.__vm_munmap.__x64_sys_munmap 0.02 ± 60% -100.0% 0.00 perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.down_write_killable.vm_mmap_pgoff.ksys_mmap_pgoff 0.02 ± 55% -100.0% 0.00 perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.__anon_vma_prepare.do_anonymous_page 0.01 ± 58% -100.0% 0.00 perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.vm_area_alloc.mmap_region 0.02 ± 5% -100.0% 0.00 
perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.remove_vma.__do_munmap.__vm_munmap 8.95 ± 8% -100.0% 0.00 perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.stop_one_cpu.affine_move_task.__set_cpus_allowed_ptr 0.02 ± 19% -100.0% 0.00 perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.unmap_page_range.unmap_vmas.unmap_region 0.02 ± 48% -100.0% 0.00 perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.unmap_vmas.unmap_region.__do_munmap 0.01 ± 63% -100.0% 0.00 perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.zap_pte_range.unmap_page_range.unmap_vmas 2.64 ± 9% -100.0% 0.00 perf-sched.wait_time.avg.ms.rcu_gp_kthread.kthread.ret_from_fork 580.43 ± 25% -100.0% 0.00 perf-sched.wait_time.avg.ms.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait 519.81 ± 12% -100.0% 0.00 perf-sched.wait_time.avg.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_sys_poll 478.80 -100.0% 0.00 perf-sched.wait_time.avg.ms.schedule_timeout.kcompactd.kthread.ret_from_fork 0.03 ±100% -100.0% 0.00 perf-sched.wait_time.avg.ms.schedule_timeout.khugepaged.kthread.ret_from_fork 6.17 ± 11% -100.0% 0.00 perf-sched.wait_time.avg.ms.schedule_timeout.rcu_gp_kthread.kthread.ret_from_fork 697.59 -100.0% 0.00 perf-sched.wait_time.avg.ms.smpboot_thread_fn.kthread.ret_from_fork 543.95 ± 4% -100.0% 0.00 perf-sched.wait_time.avg.ms.worker_thread.kthread.ret_from_fork 999.72 -100.0% 0.00 perf-sched.wait_time.max.ms.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown] 5749 ± 44% -100.0% 0.00 perf-sched.wait_time.max.ms.devkmsg_read.vfs_read.ksys_read.do_syscall_64 1000 -100.0% 0.00 perf-sched.wait_time.max.ms.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep.do_syscall_64 5749 ± 44% -100.0% 0.00 perf-sched.wait_time.max.ms.do_syslog.part.0.kmsg_read.vfs_read 4683 ± 54% -100.0% 0.00 
perf-sched.wait_time.max.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64 16.38 ± 2% -100.0% 0.00 perf-sched.wait_time.max.ms.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64 0.04 ± 11% -100.0% 0.00 perf-sched.wait_time.max.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_exc_page_fault.[unknown] 0.07 ± 57% -100.0% 0.00 perf-sched.wait_time.max.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_apic_timer_interrupt.[unknown] 1.22 ±208% -100.0% 0.00 perf-sched.wait_time.max.ms.exit_to_user_mode_prepare.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe.[unknown] 5751 ± 44% -100.0% 0.00 perf-sched.wait_time.max.ms.pipe_read.new_sync_read.vfs_read.ksys_read 0.04 ± 41% -100.0% 0.00 perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.__alloc_pages_nodemask.pte_alloc_one.__pte_alloc 0.07 ± 65% -100.0% 0.00 perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.__anon_vma_prepare.do_anonymous_page.__handle_mm_fault 1086 ±223% -100.0% 0.00 perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.do_user_addr_fault.exc_page_fault.asm_exc_page_fault 0.05 ±112% -100.0% 0.00 perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.down_write_killable.__vm_munmap.__x64_sys_munmap 0.06 ± 83% -100.0% 0.00 perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.down_write_killable.vm_mmap_pgoff.ksys_mmap_pgoff 0.03 ± 37% -100.0% 0.00 perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.__anon_vma_prepare.do_anonymous_page 0.03 ± 59% -100.0% 0.00 perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.vm_area_alloc.mmap_region 0.11 ± 49% -100.0% 0.00 perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.remove_vma.__do_munmap.__vm_munmap 1807 ± 44% -100.0% 0.00 perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.stop_one_cpu.affine_move_task.__set_cpus_allowed_ptr 0.08 ± 59% -100.0% 
0.00 perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.unmap_page_range.unmap_vmas.unmap_region 0.09 ± 70% -100.0% 0.00 perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.unmap_vmas.unmap_region.__do_munmap 0.02 ± 75% -100.0% 0.00 perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.zap_pte_range.unmap_page_range.unmap_vmas 4.83 ± 7% -100.0% 0.00 perf-sched.wait_time.max.ms.rcu_gp_kthread.kthread.ret_from_fork 5338 ± 48% -100.0% 0.00 perf-sched.wait_time.max.ms.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait 6078 ± 15% -100.0% 0.00 perf-sched.wait_time.max.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_sys_poll 505.00 -100.0% 0.00 perf-sched.wait_time.max.ms.schedule_timeout.kcompactd.kthread.ret_from_fork 0.03 ±100% -100.0% 0.00 perf-sched.wait_time.max.ms.schedule_timeout.khugepaged.kthread.ret_from_fork 265.82 ± 15% -100.0% 0.00 perf-sched.wait_time.max.ms.schedule_timeout.rcu_gp_kthread.kthread.ret_from_fork 1193 ± 31% -100.0% 0.00 perf-sched.wait_time.max.ms.smpboot_thread_fn.kthread.ret_from_fork 8783 ± 7% -100.0% 0.00 perf-sched.wait_time.max.ms.worker_thread.kthread.ret_from_fork 676.83 ±104% +927.6% 6954 ±136% interrupts.39:PCI-MSI.67633154-edge.eth0-TxRx-1 5953 ± 35% -100.0% 0.00 interrupts.CPU0.NMI:Non-maskable_interrupts 5953 ± 35% -100.0% 0.00 interrupts.CPU0.PMI:Performance_monitoring_interrupts 6263 ± 21% -100.0% 0.00 interrupts.CPU1.NMI:Non-maskable_interrupts 6263 ± 21% -100.0% 0.00 interrupts.CPU1.PMI:Performance_monitoring_interrupts 6259 ± 36% -100.0% 0.00 interrupts.CPU10.NMI:Non-maskable_interrupts 6259 ± 36% -100.0% 0.00 interrupts.CPU10.PMI:Performance_monitoring_interrupts 101.00 ± 33% -100.0% 0.00 interrupts.CPU100.NMI:Non-maskable_interrupts 101.00 ± 33% -100.0% 0.00 interrupts.CPU100.PMI:Performance_monitoring_interrupts 124.33 ± 52% -100.0% 0.00 interrupts.CPU101.NMI:Non-maskable_interrupts 124.33 ± 52% -100.0% 0.00 
interrupts.CPU101.PMI:Performance_monitoring_interrupts 122.67 ± 64% -100.0% 0.00 interrupts.CPU102.NMI:Non-maskable_interrupts 122.67 ± 64% -100.0% 0.00 interrupts.CPU102.PMI:Performance_monitoring_interrupts 418.33 ± 32% -100.0% 0.00 interrupts.CPU103.NMI:Non-maskable_interrupts 418.33 ± 32% -100.0% 0.00 interrupts.CPU103.PMI:Performance_monitoring_interrupts 5750 ± 38% -100.0% 0.00 interrupts.CPU11.NMI:Non-maskable_interrupts 5750 ± 38% -100.0% 0.00 interrupts.CPU11.PMI:Performance_monitoring_interrupts 5182 ± 53% -100.0% 0.00 interrupts.CPU12.NMI:Non-maskable_interrupts 5182 ± 53% -100.0% 0.00 interrupts.CPU12.PMI:Performance_monitoring_interrupts 4527 ± 38% -100.0% 0.00 interrupts.CPU13.NMI:Non-maskable_interrupts 4527 ± 38% -100.0% 0.00 interrupts.CPU13.PMI:Performance_monitoring_interrupts 5809 ± 45% -100.0% 0.00 interrupts.CPU14.NMI:Non-maskable_interrupts 5809 ± 45% -100.0% 0.00 interrupts.CPU14.PMI:Performance_monitoring_interrupts 4868 ± 41% -100.0% 0.00 interrupts.CPU15.NMI:Non-maskable_interrupts 4868 ± 41% -100.0% 0.00 interrupts.CPU15.PMI:Performance_monitoring_interrupts 2.00 ±119% +4000.0% 82.00 ±131% interrupts.CPU19.RES:Rescheduling_interrupts 4642 ± 50% -100.0% 0.00 interrupts.CPU2.NMI:Non-maskable_interrupts 4642 ± 50% -100.0% 0.00 interrupts.CPU2.PMI:Performance_monitoring_interrupts 1.00 ±115% +7700.0% 78.00 ±138% interrupts.CPU20.RES:Rescheduling_interrupts 75.83 ± 29% -100.0% 0.00 interrupts.CPU22.NMI:Non-maskable_interrupts 75.83 ± 29% -100.0% 0.00 interrupts.CPU22.PMI:Performance_monitoring_interrupts 77.83 ± 25% -100.0% 0.00 interrupts.CPU24.NMI:Non-maskable_interrupts 77.83 ± 25% -100.0% 0.00 interrupts.CPU24.PMI:Performance_monitoring_interrupts 121.00 ± 41% -100.0% 0.00 interrupts.CPU26.NMI:Non-maskable_interrupts 121.00 ± 41% -100.0% 0.00 interrupts.CPU26.PMI:Performance_monitoring_interrupts 193.50 ± 94% -100.0% 0.00 interrupts.CPU27.NMI:Non-maskable_interrupts 193.50 ± 94% -100.0% 0.00 
interrupts.CPU27.PMI:Performance_monitoring_interrupts 99.00 ± 35% -100.0% 0.00 interrupts.CPU28.NMI:Non-maskable_interrupts 99.00 ± 35% -100.0% 0.00 interrupts.CPU28.PMI:Performance_monitoring_interrupts 191.33 ±108% -100.0% 0.00 interrupts.CPU29.NMI:Non-maskable_interrupts 191.33 ±108% -100.0% 0.00 interrupts.CPU29.PMI:Performance_monitoring_interrupts 4235 ± 46% -100.0% 0.00 interrupts.CPU3.NMI:Non-maskable_interrupts 4235 ± 46% -100.0% 0.00 interrupts.CPU3.PMI:Performance_monitoring_interrupts 195.33 ±116% -100.0% 0.00 interrupts.CPU30.NMI:Non-maskable_interrupts 195.33 ±116% -100.0% 0.00 interrupts.CPU30.PMI:Performance_monitoring_interrupts 676.83 ±104% +927.6% 6954 ±136% interrupts.CPU31.39:PCI-MSI.67633154-edge.eth0-TxRx-1 95.50 ± 34% -100.0% 0.00 interrupts.CPU31.NMI:Non-maskable_interrupts 95.50 ± 34% -100.0% 0.00 interrupts.CPU31.PMI:Performance_monitoring_interrupts 215.50 ±111% -100.0% 0.00 interrupts.CPU32.NMI:Non-maskable_interrupts 215.50 ±111% -100.0% 0.00 interrupts.CPU32.PMI:Performance_monitoring_interrupts 548.33 ± 5% +62.7% 892.17 ± 30% interrupts.CPU33.CAL:Function_call_interrupts 106.50 ± 29% -100.0% 0.00 interrupts.CPU33.NMI:Non-maskable_interrupts 106.50 ± 29% -100.0% 0.00 interrupts.CPU33.PMI:Performance_monitoring_interrupts 98.50 ± 36% -100.0% 0.00 interrupts.CPU34.NMI:Non-maskable_interrupts 98.50 ± 36% -100.0% 0.00 interrupts.CPU34.PMI:Performance_monitoring_interrupts 97.50 ± 41% -100.0% 0.00 interrupts.CPU35.NMI:Non-maskable_interrupts 97.50 ± 41% -100.0% 0.00 interrupts.CPU35.PMI:Performance_monitoring_interrupts 179.00 ±102% -100.0% 0.00 interrupts.CPU36.NMI:Non-maskable_interrupts 179.00 ±102% -100.0% 0.00 interrupts.CPU36.PMI:Performance_monitoring_interrupts 108.50 ± 23% -100.0% 0.00 interrupts.CPU37.NMI:Non-maskable_interrupts 108.50 ± 23% -100.0% 0.00 interrupts.CPU37.PMI:Performance_monitoring_interrupts 103.00 ± 28% -100.0% 0.00 interrupts.CPU38.NMI:Non-maskable_interrupts 103.00 ± 28% -100.0% 0.00 
interrupts.CPU38.PMI:Performance_monitoring_interrupts 93.50 ± 33% -100.0% 0.00 interrupts.CPU39.NMI:Non-maskable_interrupts 93.50 ± 33% -100.0% 0.00 interrupts.CPU39.PMI:Performance_monitoring_interrupts 3954 ± 46% -100.0% 0.00 interrupts.CPU4.NMI:Non-maskable_interrupts 3954 ± 46% -100.0% 0.00 interrupts.CPU4.PMI:Performance_monitoring_interrupts 93.67 ± 31% -100.0% 0.00 interrupts.CPU40.NMI:Non-maskable_interrupts 93.67 ± 31% -100.0% 0.00 interrupts.CPU40.PMI:Performance_monitoring_interrupts 102.33 ± 43% -100.0% 0.00 interrupts.CPU41.NMI:Non-maskable_interrupts 102.33 ± 43% -100.0% 0.00 interrupts.CPU41.PMI:Performance_monitoring_interrupts 93.00 ± 29% -100.0% 0.00 interrupts.CPU42.NMI:Non-maskable_interrupts 93.00 ± 29% -100.0% 0.00 interrupts.CPU42.PMI:Performance_monitoring_interrupts 99.83 ± 33% -100.0% 0.00 interrupts.CPU43.NMI:Non-maskable_interrupts 99.83 ± 33% -100.0% 0.00 interrupts.CPU43.PMI:Performance_monitoring_interrupts 124.83 ± 38% -100.0% 0.00 interrupts.CPU44.NMI:Non-maskable_interrupts 124.83 ± 38% -100.0% 0.00 interrupts.CPU44.PMI:Performance_monitoring_interrupts 101.83 ± 37% -100.0% 0.00 interrupts.CPU45.NMI:Non-maskable_interrupts 101.83 ± 37% -100.0% 0.00 interrupts.CPU45.PMI:Performance_monitoring_interrupts 102.00 ± 41% -100.0% 0.00 interrupts.CPU46.NMI:Non-maskable_interrupts 102.00 ± 41% -100.0% 0.00 interrupts.CPU46.PMI:Performance_monitoring_interrupts 96.50 ± 34% -100.0% 0.00 interrupts.CPU47.NMI:Non-maskable_interrupts 96.50 ± 34% -100.0% 0.00 interrupts.CPU47.PMI:Performance_monitoring_interrupts 105.33 ± 28% -100.0% 0.00 interrupts.CPU48.NMI:Non-maskable_interrupts 105.33 ± 28% -100.0% 0.00 interrupts.CPU48.PMI:Performance_monitoring_interrupts 139.67 ± 56% -100.0% 0.00 interrupts.CPU49.NMI:Non-maskable_interrupts 139.67 ± 56% -100.0% 0.00 interrupts.CPU49.PMI:Performance_monitoring_interrupts 4293 ± 48% -100.0% 0.00 interrupts.CPU5.NMI:Non-maskable_interrupts 4293 ± 48% -100.0% 0.00 
interrupts.CPU5.PMI:Performance_monitoring_interrupts 173.17 ±113% -100.0% 0.00 interrupts.CPU50.NMI:Non-maskable_interrupts 173.17 ±113% -100.0% 0.00 interrupts.CPU50.PMI:Performance_monitoring_interrupts 270.33 ± 28% -100.0% 0.00 interrupts.CPU51.NMI:Non-maskable_interrupts 270.33 ± 28% -100.0% 0.00 interrupts.CPU51.PMI:Performance_monitoring_interrupts 5508 ± 39% -100.0% 0.00 interrupts.CPU52.NMI:Non-maskable_interrupts 5508 ± 39% -100.0% 0.00 interrupts.CPU52.PMI:Performance_monitoring_interrupts 4786 ± 40% -100.0% 0.00 interrupts.CPU53.NMI:Non-maskable_interrupts 4786 ± 40% -100.0% 0.00 interrupts.CPU53.PMI:Performance_monitoring_interrupts 6828 ± 17% -100.0% 0.00 interrupts.CPU54.NMI:Non-maskable_interrupts 6828 ± 17% -100.0% 0.00 interrupts.CPU54.PMI:Performance_monitoring_interrupts 6512 ± 24% -100.0% 0.00 interrupts.CPU55.NMI:Non-maskable_interrupts 6512 ± 24% -100.0% 0.00 interrupts.CPU55.PMI:Performance_monitoring_interrupts 6860 ± 26% -100.0% 1.00 ±223% interrupts.CPU56.NMI:Non-maskable_interrupts 6860 ± 26% -100.0% 1.00 ±223% interrupts.CPU56.PMI:Performance_monitoring_interrupts 6435 ± 27% -100.0% 0.00 interrupts.CPU57.NMI:Non-maskable_interrupts 6435 ± 27% -100.0% 0.00 interrupts.CPU57.PMI:Performance_monitoring_interrupts 5567 ± 36% -100.0% 0.00 interrupts.CPU58.NMI:Non-maskable_interrupts 5567 ± 36% -100.0% 0.00 interrupts.CPU58.PMI:Performance_monitoring_interrupts 5703 ± 39% -100.0% 0.00 interrupts.CPU59.NMI:Non-maskable_interrupts 5703 ± 39% -100.0% 0.00 interrupts.CPU59.PMI:Performance_monitoring_interrupts 5098 ± 43% -100.0% 0.00 interrupts.CPU6.NMI:Non-maskable_interrupts 5098 ± 43% -100.0% 0.00 interrupts.CPU6.PMI:Performance_monitoring_interrupts 6022 ± 27% -100.0% 0.00 interrupts.CPU60.NMI:Non-maskable_interrupts 6022 ± 27% -100.0% 0.00 interrupts.CPU60.PMI:Performance_monitoring_interrupts 5896 ± 37% -100.0% 0.00 interrupts.CPU61.NMI:Non-maskable_interrupts 5896 ± 37% -100.0% 0.00 interrupts.CPU61.PMI:Performance_monitoring_interrupts 
      5451 ± 36%  -100.0%  0.00  interrupts.CPU62.NMI:Non-maskable_interrupts
      5451 ± 36%  -100.0%  0.00  interrupts.CPU62.PMI:Performance_monitoring_interrupts
      5960 ± 27%  -100.0%  0.00  interrupts.CPU63.NMI:Non-maskable_interrupts
      5960 ± 27%  -100.0%  0.00  interrupts.CPU63.PMI:Performance_monitoring_interrupts
      6291 ± 28%  -100.0%  0.00  interrupts.CPU64.NMI:Non-maskable_interrupts
      6291 ± 28%  -100.0%  0.00  interrupts.CPU64.PMI:Performance_monitoring_interrupts
      5867 ± 38%  -100.0%  0.00  interrupts.CPU65.NMI:Non-maskable_interrupts
      5867 ± 38%  -100.0%  0.00  interrupts.CPU65.PMI:Performance_monitoring_interrupts
      5665 ± 33%  -100.0%  0.00  interrupts.CPU66.NMI:Non-maskable_interrupts
      5665 ± 33%  -100.0%  0.00  interrupts.CPU66.PMI:Performance_monitoring_interrupts
      5844 ± 27%  -100.0%  1.00 ±223%  interrupts.CPU67.NMI:Non-maskable_interrupts
      5844 ± 27%  -100.0%  1.00 ±223%  interrupts.CPU67.PMI:Performance_monitoring_interrupts
      5506 ± 36%  -100.0%  0.00  interrupts.CPU7.NMI:Non-maskable_interrupts
      5506 ± 36%  -100.0%  0.00  interrupts.CPU7.PMI:Performance_monitoring_interrupts
     76.50 ± 28%  -100.0%  0.00  interrupts.CPU70.NMI:Non-maskable_interrupts
     76.50 ± 28%  -100.0%  0.00  interrupts.CPU70.PMI:Performance_monitoring_interrupts
     77.50 ± 21%  -100.0%  0.00  interrupts.CPU71.NMI:Non-maskable_interrupts
     77.50 ± 21%  -100.0%  0.00  interrupts.CPU71.PMI:Performance_monitoring_interrupts
    101.17 ± 52%  -100.0%  0.00  interrupts.CPU74.NMI:Non-maskable_interrupts
    101.17 ± 52%  -100.0%  0.00  interrupts.CPU74.PMI:Performance_monitoring_interrupts
     79.83 ± 26%  -100.0%  0.00  interrupts.CPU75.NMI:Non-maskable_interrupts
     79.83 ± 26%  -100.0%  0.00  interrupts.CPU75.PMI:Performance_monitoring_interrupts
     83.33 ± 12%  -100.0%  0.00  interrupts.CPU76.NMI:Non-maskable_interrupts
     83.33 ± 12%  -100.0%  0.00  interrupts.CPU76.PMI:Performance_monitoring_interrupts
    126.83 ± 30%  -100.0%  0.00  interrupts.CPU78.NMI:Non-maskable_interrupts
    126.83 ± 30%  -100.0%  0.00  interrupts.CPU78.PMI:Performance_monitoring_interrupts
    164.33 ± 77%  -100.0%  0.00  interrupts.CPU79.NMI:Non-maskable_interrupts
    164.33 ± 77%  -100.0%  0.00  interrupts.CPU79.PMI:Performance_monitoring_interrupts
      5231 ± 44%  -100.0%  0.00  interrupts.CPU8.NMI:Non-maskable_interrupts
      5231 ± 44%  -100.0%  0.00  interrupts.CPU8.PMI:Performance_monitoring_interrupts
     98.50 ± 47%  -100.0%  0.00  interrupts.CPU80.NMI:Non-maskable_interrupts
     98.50 ± 47%  -100.0%  0.00  interrupts.CPU80.PMI:Performance_monitoring_interrupts
    184.83 ± 91%  -100.0%  0.00  interrupts.CPU81.NMI:Non-maskable_interrupts
    184.83 ± 91%  -100.0%  0.00  interrupts.CPU81.PMI:Performance_monitoring_interrupts
    144.50 ± 78%  -100.0%  0.00  interrupts.CPU82.NMI:Non-maskable_interrupts
    144.50 ± 78%  -100.0%  0.00  interrupts.CPU82.PMI:Performance_monitoring_interrupts
    102.83 ± 40%  -100.0%  0.00  interrupts.CPU83.NMI:Non-maskable_interrupts
    102.83 ± 40%  -100.0%  0.00  interrupts.CPU83.PMI:Performance_monitoring_interrupts
    138.50 ± 75%  -100.0%  0.00  interrupts.CPU84.NMI:Non-maskable_interrupts
    138.50 ± 75%  -100.0%  0.00  interrupts.CPU84.PMI:Performance_monitoring_interrupts
     99.17 ± 49%  -100.0%  0.00  interrupts.CPU85.NMI:Non-maskable_interrupts
     99.17 ± 49%  -100.0%  0.00  interrupts.CPU85.PMI:Performance_monitoring_interrupts
     92.33 ± 46%  -100.0%  0.00  interrupts.CPU86.NMI:Non-maskable_interrupts
     92.33 ± 46%  -100.0%  0.00  interrupts.CPU86.PMI:Performance_monitoring_interrupts
     98.67 ± 53%  -100.0%  0.00  interrupts.CPU87.NMI:Non-maskable_interrupts
     98.67 ± 53%  -100.0%  0.00  interrupts.CPU87.PMI:Performance_monitoring_interrupts
    143.50 ± 74%  -100.0%  0.00  interrupts.CPU88.NMI:Non-maskable_interrupts
    143.50 ± 74%  -100.0%  0.00  interrupts.CPU88.PMI:Performance_monitoring_interrupts
     97.50 ± 34%  -100.0%  0.00  interrupts.CPU89.NMI:Non-maskable_interrupts
     97.50 ± 34%  -100.0%  0.00  interrupts.CPU89.PMI:Performance_monitoring_interrupts
      5814 ± 35%  -100.0%  0.00  interrupts.CPU9.NMI:Non-maskable_interrupts
      5814 ± 35%  -100.0%  0.00  interrupts.CPU9.PMI:Performance_monitoring_interrupts
     94.67 ± 37%  -100.0%  0.00  interrupts.CPU90.NMI:Non-maskable_interrupts
     94.67 ± 37%  -100.0%  0.00  interrupts.CPU90.PMI:Performance_monitoring_interrupts
     98.33 ± 38%  -100.0%  0.00  interrupts.CPU91.NMI:Non-maskable_interrupts
     98.33 ± 38%  -100.0%  0.00  interrupts.CPU91.PMI:Performance_monitoring_interrupts
     98.33 ± 35%  -100.0%  0.00  interrupts.CPU92.NMI:Non-maskable_interrupts
     98.33 ± 35%  -100.0%  0.00  interrupts.CPU92.PMI:Performance_monitoring_interrupts
     98.67 ± 37%  -100.0%  0.00  interrupts.CPU93.NMI:Non-maskable_interrupts
     98.67 ± 37%  -100.0%  0.00  interrupts.CPU93.PMI:Performance_monitoring_interrupts
    100.33 ± 25%  -100.0%  0.00  interrupts.CPU94.NMI:Non-maskable_interrupts
    100.33 ± 25%  -100.0%  0.00  interrupts.CPU94.PMI:Performance_monitoring_interrupts
    122.17 ± 20%  -100.0%  0.00  interrupts.CPU95.NMI:Non-maskable_interrupts
    122.17 ± 20%  -100.0%  0.00  interrupts.CPU95.PMI:Performance_monitoring_interrupts
    128.00 ± 44%  -100.0%  0.00  interrupts.CPU96.NMI:Non-maskable_interrupts
    128.00 ± 44%  -100.0%  0.00  interrupts.CPU96.PMI:Performance_monitoring_interrupts
    120.33 ± 40%   -99.2%  1.00 ±223%  interrupts.CPU97.NMI:Non-maskable_interrupts
    120.33 ± 40%   -99.2%  1.00 ±223%  interrupts.CPU97.PMI:Performance_monitoring_interrupts
    100.50 ± 38%  -100.0%  0.00  interrupts.CPU98.NMI:Non-maskable_interrupts
    100.50 ± 38%  -100.0%  0.00  interrupts.CPU98.PMI:Performance_monitoring_interrupts
     97.33 ± 36%  -100.0%  0.00  interrupts.CPU99.NMI:Non-maskable_interrupts
     97.33 ± 36%  -100.0%  0.00  interrupts.CPU99.PMI:Performance_monitoring_interrupts
    186821 ± 14%  -100.0%  4.00 ± 70%  interrupts.NMI:Non-maskable_interrupts
    186821 ± 14%  -100.0%  4.00 ± 70%  interrupts.PMI:Performance_monitoring_interrupts

***************************************************************************************************
lkp-skl-fpga01: 104 threads Skylake with 192G memory
=========================================================================================
compiler/cpufreq_governor/kconfig/mode/nr_task/rootfs/tbox_group/test/testcase/ucode:
  gcc-9/performance/x86_64-rhel-8.3/process/100%/debian-10.4-x86_64-20200603.cgz/lkp-skl-fpga01/eventfd1/will-it-scale/0x2006a0a

commit:
  dde38d5b80 ("kernel: don't include PF_IO_WORKERs as part of same_thread_group()")
  43b2a76b1a ("proc: don't show PF_IO_WORKER threads as threads in /proc/<pid>/task/")

dde38d5b805358c7 43b2a76b1a5abcc9833b463bef1
---------------- ---------------------------
         %stddev     %change         %stddev
             \          |                \

  53817815            -6.3%   53817815 -> 50434581  will-it-scale.104.processes
    517478            -6.3%     484947  will-it-scale.per_process_ops
  53817815            -6.3%   50434581  will-it-scale.workload
      3.61 ± 4%   +2.8  6.38  mpstat.cpu.all.idle%
      0.02 ± 15%  +0.0  0.03  mpstat.cpu.all.soft%
   1335406 ± 6%   +43.6%  1917517 ± 15%  numa-meminfo.node0.MemUsed
     26350 ± 8%   -32.8%    17695 ± 18%  numa-meminfo.node1.Mapped
    351261 ± 8%   +91.6%   672866 ± 13%  numa-numastat.node0.local_node
    396995        +84.4%   731886 ± 12%  numa-numastat.node0.numa_hit
    965410 ± 9%   +23.1%  1188249 ± 7%   numa-vmstat.node0.numa_hit
      6639 ± 8%   -32.7%     4466 ± 18%  numa-vmstat.node1.nr_mapped
  14273927 ± 17%  +200.1%  42839522 ±114%  cpuidle.C6.time
     46336 ± 7%   +63.0%    75510 ± 68%  cpuidle.C6.usage
     12753 ± 6%   +100.7%   25591 ± 73%  cpuidle.POLL.time
     55.00         -3.6%     53.00  vmstat.cpu.sy
     43.00         +4.7%     45.00  vmstat.cpu.us
   1537032        +38.4%   2127542  vmstat.memory.cache
      1502 ± 5%   +40.3%      2107  vmstat.system.cs
    212844         -2.4%    207815  vmstat.system.in
      8605 ± 2%   +13.3%      9747 ± 3%  slabinfo.kmalloc-256.active_objs
      8654 ± 2%   +13.2%      9795 ± 3%  slabinfo.kmalloc-256.num_objs
     14367        -37.7%      8957  slabinfo.proc_inode_cache.active_objs
    299.50        -38.0%    185.83  slabinfo.proc_inode_cache.active_slabs
     14395        -37.8%      8957  slabinfo.proc_inode_cache.num_objs
    299.50        -38.0%    185.83  slabinfo.proc_inode_cache.num_slabs
    224400       +269.5%    829186 ± 2%  meminfo.Active
    223868       +270.3%    828946 ± 2%  meminfo.Active(anon)
   1437431        +41.2%   2029667  meminfo.Cached
   2408715        +27.3%   3065135  meminfo.Committed_AS
     51176        -26.0%     37863  meminfo.Mapped
   2783939        +35.1%   3760068  meminfo.Memused
    251360       +237.0%    846957 ± 2%  meminfo.Shmem
   3254687        +15.5%   3760328  meminfo.max_used_kB
    217046 ± 5%   +122.4%   482701 ± 45%  sched_debug.cfs_rq:/.min_vruntime.stddev
      0.27 ±122%  +481.4%     1.57 ± 44%  sched_debug.cfs_rq:/.removed.runnable_avg.avg
      2.64 ±122%  +303.8%    10.65 ± 19%  sched_debug.cfs_rq:/.removed.runnable_avg.stddev
      0.27 ±122%  +481.4%     1.57 ± 44%  sched_debug.cfs_rq:/.removed.util_avg.avg
      2.64 ±122%  +303.8%    10.65 ± 19%  sched_debug.cfs_rq:/.removed.util_avg.stddev
      1499 ± 4%   +30.4%      1955 ± 4%   sched_debug.cfs_rq:/.runnable_avg.max
     95.35 ± 7%   +87.9%    179.14 ± 7%   sched_debug.cfs_rq:/.runnable_avg.stddev
    216964 ± 5%   +122.5%   482737 ± 45%  sched_debug.cfs_rq:/.spread0.stddev
      1093 ± 3%   +19.2%      1304 ± 7%   sched_debug.cfs_rq:/.util_avg.max
    796.11 ± 5%   -25.3%    594.42 ± 6%   sched_debug.cfs_rq:/.util_avg.min
     56.73 ± 10%  +62.3%     92.05 ± 8%   sched_debug.cfs_rq:/.util_avg.stddev
    668.86 ± 4%   +90.2%      1272 ± 2%   sched_debug.cfs_rq:/.util_est_enqueued.max
     78.80 ± 3%   +74.1%    137.19 ± 3%   sched_debug.cfs_rq:/.util_est_enqueued.stddev
     11.98 ± 5%   +26.0%     15.09 ± 6%   sched_debug.cpu.clock.stddev
      0.16 ± 11%  +27.3%      0.20 ± 6%   sched_debug.cpu.nr_running.stddev
      3824 ± 3%   +22.4%      4681        sched_debug.cpu.nr_switches.avg
      1470 ± 4%   +34.5%      1978 ± 12%  sched_debug.cpu.nr_switches.min
     55875       +271.0%    207304 ± 2%   proc-vmstat.nr_active_anon
     70608         +6.7%     75350        proc-vmstat.nr_anon_pages
    359234        +41.3%    507487        proc-vmstat.nr_file_pages
     77322         +3.1%     79694        proc-vmstat.nr_inactive_anon
     12886        -25.6%      9591        proc-vmstat.nr_mapped
      2518         +4.6%      2634        proc-vmstat.nr_page_table_pages
     62716       +237.7%    211808 ± 2%   proc-vmstat.nr_shmem
     24871         -4.3%     23800        proc-vmstat.nr_slab_reclaimable
     55875       +271.0%    207304 ± 2%   proc-vmstat.nr_zone_active_anon
     77322         +3.1%     79694        proc-vmstat.nr_zone_inactive_anon
     14803 ± 29%   -62.5%     5557 ± 62%  proc-vmstat.numa_hint_faults
     13291 ± 28%   -78.1%     2913 ± 64%  proc-vmstat.numa_hint_faults_local
    943105        +45.2%   1369352        proc-vmstat.numa_hit
    849367        +50.2%   1275624 ± 2%   proc-vmstat.numa_local
    148542 ± 12%   -64.5%    52803 ± 31%  proc-vmstat.numa_pte_updates
     86501        -67.2%     28371 ± 2%   proc-vmstat.pgactivate
    993722
    +55.6%   1546560 ± 2%   proc-vmstat.pgalloc_normal
    977248        -12.4%    856153        proc-vmstat.pgfault
    883680         +7.2%    947019 ± 4%   proc-vmstat.pgfree
     55.30  -55.3  0.00  perf-profile.calltrace.cycles-pp.read
     45.47  -45.5  0.00  perf-profile.calltrace.cycles-pp.write
     32.09  -32.1  0.00  perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.read
     25.67  -25.7  0.00  perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.write
     18.87  -18.9  0.00  perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.read
     17.08  -17.1  0.00  perf-profile.calltrace.cycles-pp.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe.read
     15.43  -15.4  0.00  perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_safe_stack.read
     13.05  -13.0  0.00  perf-profile.calltrace.cycles-pp.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe.write
     12.95  -12.9  0.00  perf-profile.calltrace.cycles-pp.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe.read
     12.64  -12.6  0.00  perf-profile.calltrace.cycles-pp.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe.read
     12.10  -12.1  0.00  perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
     10.85  -10.8  0.00  perf-profile.calltrace.cycles-pp.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
      9.56   -9.6  0.00  perf-profile.calltrace.cycles-pp.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
      8.90   -8.9  0.00  perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_safe_stack.write
      8.07   -8.1  0.00  perf-profile.calltrace.cycles-pp.new_sync_read.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
      7.57   -7.6  0.00  perf-profile.calltrace.cycles-pp.syscall_return_via_sysret.write
      7.48   -7.5  0.00  perf-profile.calltrace.cycles-pp.syscall_return_via_sysret.read
      6.75   -6.8  0.00  perf-profile.calltrace.cycles-pp.eventfd_read.new_sync_read.vfs_read.ksys_read.do_syscall_64
      6.32   -6.3  0.00  perf-profile.calltrace.cycles-pp.__entry_text_start.read
      6.30   -6.3  0.00  perf-profile.calltrace.cycles-pp.__entry_text_start.write
      5.92   -5.9  0.00  perf-profile.calltrace.cycles-pp.eventfd_write.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
     57.92  -57.9  0.00  perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
     55.52  -55.5  0.00  perf-profile.children.cycles-pp.read
     45.70  -45.7  0.00  perf-profile.children.cycles-pp.write
     31.08  -31.1  0.00  perf-profile.children.cycles-pp.do_syscall_64
     25.84  -25.8  0.00  perf-profile.children.cycles-pp.syscall_exit_to_user_mode
     22.59  -22.6  0.00  perf-profile.children.cycles-pp.__entry_text_start
     17.14  -17.1  0.00  perf-profile.children.cycles-pp.ksys_read
     16.66  -16.7  0.00  perf-profile.children.cycles-pp.syscall_return_via_sysret
     13.08  -13.1  0.00  perf-profile.children.cycles-pp.vfs_read
     12.68  -12.7  0.00  perf-profile.children.cycles-pp.entry_SYSCALL_64_safe_stack
     10.91  -10.9  0.00  perf-profile.children.cycles-pp.ksys_write
      9.69   -9.7  0.00  perf-profile.children.cycles-pp.vfs_write
      8.13   -8.1  0.00  perf-profile.children.cycles-pp.new_sync_read
      6.96   -7.0  0.00  perf-profile.children.cycles-pp.eventfd_read
      6.05   -6.1  0.00  perf-profile.children.cycles-pp.eventfd_write
      5.20   -5.2  0.00  perf-profile.children.cycles-pp.security_file_permission
     24.51  -24.5  0.00  perf-profile.self.cycles-pp.syscall_exit_to_user_mode
     20.88  -20.9  0.00  perf-profile.self.cycles-pp.__entry_text_start
     16.64  -16.6  0.00  perf-profile.self.cycles-pp.syscall_return_via_sysret
      0.06 ± 4%   +338.4%      0.26 ± 3%   perf-stat.i.MPKI
 1.897e+10         -3.8%  1.825e+10        perf-stat.i.branch-instructions
      1.73         -0.0        1.69        perf-stat.i.branch-miss-rate%
 3.262e+08         -5.6%  3.078e+08        perf-stat.i.branch-misses
    555892 ± 4%   +628.5%   4049656 ± 44%  perf-stat.i.cache-misses
   4726936 ± 5%   +389.3%  23128136 ± 4%   perf-stat.i.cache-references
      1469 ± 5%    +41.4%      2078        perf-stat.i.context-switches
      2.98         +4.6%        3.11       perf-stat.i.cpi
    649902 ± 5%    -84.5%    100578 ± 65%  perf-stat.i.cycles-between-cache-misses
      0.39         -0.0         0.38       perf-stat.i.dTLB-load-miss-rate%
 1.074e+08         -6.1%  1.008e+08
perf-stat.i.dTLB-load-misses
 2.749e+10         -4.5%  2.625e+10        perf-stat.i.dTLB-loads
      0.00         +0.0         0.00       perf-stat.i.dTLB-store-miss-rate%
    102626         +3.6%     106273        perf-stat.i.dTLB-store-misses
  1.86e+10         -4.6%  1.774e+10        perf-stat.i.dTLB-stores
 1.084e+08         -5.9%   1.02e+08        perf-stat.i.iTLB-load-misses
 9.457e+10         -3.8%  9.094e+10        perf-stat.i.instructions
      1036         -7.8%     955.05        perf-stat.i.instructions-per-iTLB-miss
      0.34         -4.3%       0.32        perf-stat.i.ipc
      0.91 ± 5%    -68.9%      0.28 ± 22%  perf-stat.i.major-faults
      1.99 ± 4%    -65.9%      0.68 ± 36%  perf-stat.i.metric.K/sec
    625.66         -4.3%     598.87        perf-stat.i.metric.M/sec
      3112        -13.0%       2709        perf-stat.i.minor-faults
    103775 ± 3%   +559.2%    684048 ± 43%  perf-stat.i.node-load-misses
     21080 ± 5%  +1052.0%    242839 ± 61%  perf-stat.i.node-store-misses
      9471 ± 5%    +92.5%     18236 ± 6%   perf-stat.i.node-stores
      3113        -13.0%       2709        perf-stat.i.page-faults
      0.05 ± 5%   +403.5%      0.25 ± 4%   perf-stat.overall.MPKI
      1.72         -0.0         1.69       perf-stat.overall.branch-miss-rate%
      2.98         +4.6%        3.12       perf-stat.overall.cpi
    500646 ± 3%    -80.4%     98361 ± 64%  perf-stat.overall.cycles-between-cache-misses
      0.39         -0.0         0.38       perf-stat.overall.dTLB-load-miss-rate%
      0.00         +0.0         0.00       perf-stat.overall.dTLB-store-miss-rate%
    872.51         +2.2%     891.54        perf-stat.overall.instructions-per-iTLB-miss
      0.34         -4.4%       0.32        perf-stat.overall.ipc
     69.85        +17.4       87.30 ± 3%   perf-stat.overall.node-load-miss-rate%
    529805         +2.4%     542598        perf-stat.overall.path-length
 1.891e+10         -3.8%  1.819e+10        perf-stat.ps.branch-instructions
 3.251e+08         -5.6%  3.068e+08        perf-stat.ps.branch-misses
    562046 ± 3%   +618.1%   4036217 ± 44%  perf-stat.ps.cache-misses
   4761203 ± 5%   +384.2%  23051468 ± 4%   perf-stat.ps.cache-references
      1464 ± 5%    +41.4%      2071        perf-stat.ps.context-switches
  1.07e+08         -6.1%  1.005e+08        perf-stat.ps.dTLB-load-misses
  2.74e+10         -4.5%  2.617e+10        perf-stat.ps.dTLB-loads
    102534         +3.3%     105923        perf-stat.ps.dTLB-store-misses
 1.854e+10         -4.6%  1.768e+10        perf-stat.ps.dTLB-stores
  1.08e+08         -5.9%  1.017e+08        perf-stat.ps.iTLB-load-misses
 9.425e+10         -3.8%  9.064e+10        perf-stat.ps.instructions
      0.91 ± 5%    -68.9%      0.28 ± 22%  perf-stat.ps.major-faults
      3112        -13.2%       2701        perf-stat.ps.minor-faults
    103489 ± 3%   +558.8%    681779 ± 43%  perf-stat.ps.node-load-misses
     21019 ± 5%  +1051.4%    242030 ± 61%  perf-stat.ps.node-store-misses
      9655 ± 5%    +88.4%     18191 ± 6%   perf-stat.ps.node-stores
      3113        -13.2%       2701        perf-stat.ps.page-faults
 2.851e+13         -4.0%  2.737e+13        perf-stat.total.instructions
     29272 ± 20%  +217.2%     92856 ± 68%  softirqs.CPU0.RCU
     28556 ± 22%   +75.4%     50096 ± 14%  softirqs.CPU1.RCU
     25887 ± 26%   +81.2%     46903 ± 8%   softirqs.CPU10.RCU
     25505 ± 21%  +109.4%     53408 ± 28%  softirqs.CPU100.RCU
     25573 ± 21%  +162.1%     67035 ± 77%  softirqs.CPU101.RCU
     25640 ± 21%   +79.3%     45965 ± 10%  softirqs.CPU102.RCU
     26246 ± 22%   +70.8%     44817 ± 9%   softirqs.CPU103.RCU
     26572 ± 23%   +95.4%     51920 ± 16%  softirqs.CPU11.RCU
     26735 ± 22%   +79.1%     47872 ± 13%  softirqs.CPU12.RCU
     26905 ± 23%   +81.3%     48783 ± 15%  softirqs.CPU13.RCU
     26212 ± 25%   +81.4%     47540 ± 17%  softirqs.CPU14.RCU
     30184 ± 21%   +79.6%     54204 ± 14%  softirqs.CPU15.RCU
     30139 ± 21%   +70.9%     51501 ± 5%   softirqs.CPU16.RCU
     30448 ± 21%   +67.6%     51036 ± 6%   softirqs.CPU17.RCU
     30542 ± 22%   +77.7%     54274 ± 13%  softirqs.CPU18.RCU
     30572 ± 21%   +75.3%     53595 ± 8%   softirqs.CPU19.RCU
     27505 ± 23%   +59.9%     43983        softirqs.CPU2.RCU
     30494 ± 21%   +69.0%     51532 ± 10%  softirqs.CPU20.RCU
     30404 ± 22%   +74.1%     52921 ± 10%  softirqs.CPU21.RCU
     30124 ± 21%   +58.1%     47615 ± 9%   softirqs.CPU22.RCU
     30208 ± 22%   +70.2%     51406 ± 7%   softirqs.CPU23.RCU
     30500 ± 22%   +80.9%     55170 ± 11%  softirqs.CPU24.RCU
     30498 ± 22%   +70.0%     51841 ± 9%   softirqs.CPU25.RCU
     30289 ± 20%   +67.2%     50632 ± 16%  softirqs.CPU26.RCU
     28512 ± 22%   +57.4%     44882 ± 6%   softirqs.CPU27.RCU
     27424 ± 24%   +71.2%     46950 ± 19%  softirqs.CPU28.RCU
     27144 ± 24%   +85.1%     50233 ± 17%  softirqs.CPU29.RCU
     27428 ± 23%  +167.5%     73379 ± 71%  softirqs.CPU3.RCU
     28605 ± 23%   +80.0%     51501 ± 19%  softirqs.CPU30.RCU
     28492 ± 22%   +68.5%     48006 ± 6%   softirqs.CPU31.RCU
     25999 ± 6%    +96.0%     50956 ± 12%  softirqs.CPU32.RCU
     29537 ± 22%   +98.2%     58554 ± 42%  softirqs.CPU33.RCU
     27675 ± 26%   +67.1%     46239        softirqs.CPU34.RCU
     28291 ± 23%   +63.6%
     46274        softirqs.CPU35.RCU
     28351 ± 22%   +93.7%     54930 ± 19%  softirqs.CPU36.RCU
     28464 ± 22%   +81.1%     51539 ± 14%  softirqs.CPU37.RCU
     27484 ± 24%   +70.2%     46785 ± 15%  softirqs.CPU38.RCU
     28024 ± 22%   +62.7%     45599        softirqs.CPU39.RCU
     26917 ± 22%   +61.0%     43345 ± 5%   softirqs.CPU4.RCU
     28337 ± 22%   +84.7%     52325 ± 12%  softirqs.CPU40.RCU
     28016 ± 22%   +62.4%     45501        softirqs.CPU41.RCU
     28151 ± 22%   +98.1%     55757 ± 15%  softirqs.CPU42.RCU
     28436 ± 22%   +76.5%     50190 ± 12%  softirqs.CPU43.RCU
     28569 ± 21%  +172.5%     77851 ± 74%  softirqs.CPU44.RCU
     27278 ± 23%   +71.2%     46688 ± 10%  softirqs.CPU45.RCU
     27129 ± 26%   +78.2%     48341 ± 19%  softirqs.CPU46.RCU
     26888 ± 22%   +71.4%     46081 ± 14%  softirqs.CPU47.RCU
     27117 ± 22%  +100.5%     54364 ± 27%  softirqs.CPU48.RCU
     27341 ± 22%  +147.5%     67674 ± 77%  softirqs.CPU49.RCU
     26690 ± 23%   +67.4%     44668 ± 17%  softirqs.CPU5.RCU
     27347 ± 23%   +69.9%     46451 ± 8%   softirqs.CPU50.RCU
     27667 ± 23%   +65.8%     45881 ± 10%  softirqs.CPU51.RCU
     31835 ± 18%  +179.5%     88967 ± 64%  softirqs.CPU52.RCU
     33461 ± 18%   +63.1%     54589 ± 8%   softirqs.CPU53.RCU
     30080 ± 24%   +59.8%     48060 ± 2%   softirqs.CPU54.RCU
     30214 ± 20%  +158.6%     78143 ± 64%  softirqs.CPU55.RCU
     29822 ± 20%   +57.3%     46914 ± 13%  softirqs.CPU56.RCU
     29623 ± 20%   +72.3%     51053 ± 15%  softirqs.CPU57.RCU
     29602 ± 20%   +66.1%     49167 ± 2%   softirqs.CPU58.RCU
     29167 ± 21%   +68.3%     49077 ± 12%  softirqs.CPU59.RCU
     26548 ± 22%   +65.7%     43979 ± 3%   softirqs.CPU6.RCU
     26874 ± 22%   +70.5%     45825 ± 5%   softirqs.CPU60.RCU
     26636 ± 22%   +78.1%     47440 ± 14%  softirqs.CPU61.RCU
     26156 ± 25%   +85.3%     48456 ± 10%  softirqs.CPU62.RCU
     26814 ± 23%   +92.8%     51700 ± 14%  softirqs.CPU63.RCU
     27189 ± 22%   +84.2%     50080 ± 16%  softirqs.CPU64.RCU
     26912 ± 23%   +88.2%     50645 ± 14%  softirqs.CPU65.RCU
     26746 ± 23%   +81.1%     48444 ± 12%  softirqs.CPU66.RCU
     27265 ± 21%   +80.9%     49332 ± 11%  softirqs.CPU67.RCU
     27233 ± 22%   +75.9%     47915 ± 8%   softirqs.CPU68.RCU
     26988 ± 23%   +73.6%     46859 ± 5%   softirqs.CPU69.RCU
     27187 ± 22%   +59.3%     43299 ± 15%  softirqs.CPU7.RCU
     27143 ± 22%   +81.7%     49309 ± 10%  softirqs.CPU70.RCU
     27192 ± 22%   +91.1%     51955 ± 15%  softirqs.CPU71.RCU
     26927 ± 23%   +76.0%     47381 ± 11%  softirqs.CPU72.RCU
     26744 ± 23%   +77.8%     47540 ± 7%   softirqs.CPU73.RCU
     26600 ± 23%   +72.3%     45837 ± 4%   softirqs.CPU74.RCU
     25739 ± 20%   +82.1%     46883 ± 11%  softirqs.CPU75.RCU
     25979 ± 19%   +79.8%     46706 ± 9%   softirqs.CPU76.RCU
     26063 ± 19%   +81.5%     47295 ± 14%  softirqs.CPU77.RCU
     32477 ± 18%   +59.1%     51671 ± 25%  softirqs.CPU78.RCU
     32395 ± 18%   +50.0%     48577 ± 6%   softirqs.CPU79.RCU
     27059 ± 22%   +65.2%     44713 ± 4%   softirqs.CPU8.RCU
     29857 ± 20%   +68.1%     50192 ± 19%  softirqs.CPU80.RCU
     28839 ± 21%   +84.8%     53291 ± 20%  softirqs.CPU81.RCU
     28496 ± 20%   +76.8%     50394 ± 18%  softirqs.CPU82.RCU
     28388 ± 20%   +66.6%     47286 ± 4%   softirqs.CPU83.RCU
     27419 ± 15%   +83.1%     50211 ± 10%  softirqs.CPU84.RCU
     27740 ± 21%  +105.0%     56870 ± 41%  softirqs.CPU85.RCU
     27446 ± 23%   +69.0%     46371        softirqs.CPU86.RCU
     28118 ± 22%   +63.5%     45970        softirqs.CPU87.RCU
     27945 ± 21%  +100.2%     55955 ± 23%  softirqs.CPU88.RCU
     28060 ± 21%   +86.2%     52260 ± 14%  softirqs.CPU89.RCU
     26821 ± 22%   +71.5%     45988 ± 14%  softirqs.CPU9.RCU
     25391 ± 22%   +75.7%     44618 ± 14%  softirqs.CPU90.RCU
     25372 ± 22%   +66.8%     42331        softirqs.CPU91.RCU
     25418 ± 21%   +85.1%     47051 ± 10%  softirqs.CPU92.RCU
     25647 ± 21%   +65.2%     42374        softirqs.CPU93.RCU
     25635 ± 21%   +90.0%     48706 ± 10%  softirqs.CPU94.RCU
     25658 ± 21%   +85.6%     47613 ± 16%  softirqs.CPU95.RCU
     25595 ± 21%  +193.0%     74990 ± 79%  softirqs.CPU96.RCU
     25644 ± 21%   +80.8%     46355 ± 11%  softirqs.CPU97.RCU
     25404 ± 21%   +87.6%     47661 ± 21%  softirqs.CPU98.RCU
     25554 ± 21%   +72.6%     44106 ± 10%  softirqs.CPU99.RCU
   2897508 ± 21%   +83.5%   5316315        softirqs.RCU
     40871 ± 12%   +43.0%     58456 ± 2%   softirqs.TIMER
      0.02 ± 2%   -100.0%      0.00  perf-sched.sch_delay.avg.ms.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown]
      0.92 ±217%  -100.0%      0.00  perf-sched.sch_delay.avg.ms.devkmsg_read.vfs_read.ksys_read.do_syscall_64
      0.02 ± 5%   -100.0%      0.00  perf-sched.sch_delay.avg.ms.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep.do_syscall_64
      4.58 ± 60%  -100.0%      0.00
perf-sched.sch_delay.avg.ms.do_syslog.part.0.kmsg_read.vfs_read
      0.60 ± 18%  -100.0%      0.00  perf-sched.sch_delay.avg.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64
      0.03 ± 66%  -100.0%      0.00  perf-sched.sch_delay.avg.ms.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64
      0.03 ± 26%  -100.0%      0.00  perf-sched.sch_delay.avg.ms.futex_wait_queue_me.futex_wait.do_futex.__x64_sys_futex
      0.01 ± 9%   -100.0%      0.00  perf-sched.sch_delay.avg.ms.pipe_read.new_sync_read.vfs_read.ksys_read
      9.01        -100.0%      0.00  perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.stop_one_cpu.sched_exec.bprm_execve
      0.02 ± 2%   -100.0%      0.00  perf-sched.sch_delay.avg.ms.rcu_gp_kthread.kthread.ret_from_fork
      0.28 ±134%  -100.0%      0.00  perf-sched.sch_delay.avg.ms.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait
      0.02 ± 9%   -100.0%      0.00  perf-sched.sch_delay.avg.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_select
      0.00        -100.0%      0.00  perf-sched.sch_delay.avg.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_sys_poll
      0.02 ± 4%   -100.0%      0.00  perf-sched.sch_delay.avg.ms.schedule_timeout.kcompactd.kthread.ret_from_fork
      0.02        -100.0%      0.00  perf-sched.sch_delay.avg.ms.schedule_timeout.rcu_gp_kthread.kthread.ret_from_fork
      0.01 ± 4%   -100.0%      0.00  perf-sched.sch_delay.avg.ms.smpboot_thread_fn.kthread.ret_from_fork
      0.97 ± 15%  -100.0%      0.00  perf-sched.sch_delay.avg.ms.wait_for_partner.fifo_open.do_dentry_open.do_open.isra
      0.04 ± 24%  -100.0%      0.00  perf-sched.sch_delay.avg.ms.worker_thread.kthread.ret_from_fork
      0.02 ± 7%   -100.0%      0.00  perf-sched.sch_delay.max.ms.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown]
      2.07 ±220%  -100.0%      0.00  perf-sched.sch_delay.max.ms.devkmsg_read.vfs_read.ksys_read.do_syscall_64
      0.02 ± 24%  -100.0%      0.00  perf-sched.sch_delay.max.ms.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep.do_syscall_64
     14.48 ± 56%  -100.0%      0.00  perf-sched.sch_delay.max.ms.do_syslog.part.0.kmsg_read.vfs_read
     14.35 ± 26%  -100.0%      0.00  perf-sched.sch_delay.max.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64
      2.30 ±219%  -100.0%      0.00  perf-sched.sch_delay.max.ms.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64
      5.48 ±102%  -100.0%      0.00  perf-sched.sch_delay.max.ms.exit_to_user_mode_prepare.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe.[unknown]
      0.04 ± 46%  -100.0%      0.00  perf-sched.sch_delay.max.ms.futex_wait_queue_me.futex_wait.do_futex.__x64_sys_futex
      0.22 ± 36%  -100.0%      0.00  perf-sched.sch_delay.max.ms.pipe_read.new_sync_read.vfs_read.ksys_read
      0.04 ± 45%  -100.0%      0.00  perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.stop_one_cpu.affine_move_task.__set_cpus_allowed_ptr
     22.37 ± 19%  -100.0%      0.00  perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.stop_one_cpu.sched_exec.bprm_execve
      0.03 ± 17%  -100.0%      0.00  perf-sched.sch_delay.max.ms.rcu_gp_kthread.kthread.ret_from_fork
      4.26 ±131%  -100.0%      0.00  perf-sched.sch_delay.max.ms.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait
      0.02 ± 10%  -100.0%      0.00  perf-sched.sch_delay.max.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_select
      0.03 ± 28%  -100.0%      0.00  perf-sched.sch_delay.max.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_sys_poll
      0.02 ± 22%  -100.0%      0.00  perf-sched.sch_delay.max.ms.schedule_timeout.kcompactd.kthread.ret_from_fork
      0.05 ± 66%  -100.0%      0.00  perf-sched.sch_delay.max.ms.schedule_timeout.rcu_gp_kthread.kthread.ret_from_fork
      0.04 ± 61%  -100.0%      0.00  perf-sched.sch_delay.max.ms.smpboot_thread_fn.kthread.ret_from_fork
     14.51 ± 23%  -100.0%      0.00  perf-sched.sch_delay.max.ms.wait_for_partner.fifo_open.do_dentry_open.do_open.isra
      9.11 ± 43%  -100.0%      0.00  perf-sched.sch_delay.max.ms.worker_thread.kthread.ret_from_fork
      0.18 ± 6%   -100.0%      0.00  perf-sched.total_sch_delay.average.ms
     22.51 ± 19%  -100.0%      0.00  perf-sched.total_sch_delay.max.ms
    145.21 ± 9%   -100.0%      0.00  perf-sched.total_wait_and_delay.average.ms
     12308 ± 6%
   -100.0%      0.00  perf-sched.total_wait_and_delay.count.ms
      8429 ± 9%   -100.0%      0.00  perf-sched.total_wait_and_delay.max.ms
    145.02 ± 9%   -100.0%      0.00  perf-sched.total_wait_time.average.ms
      8429 ± 9%   -100.0%      0.00  perf-sched.total_wait_time.max.ms
    882.81 ± 4%   -100.0%      0.00  perf-sched.wait_and_delay.avg.ms.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown]
    917.42 ± 28%  -100.0%      0.00  perf-sched.wait_and_delay.avg.ms.devkmsg_read.vfs_read.ksys_read.do_syscall_64
    803.52 ± 4%   -100.0%      0.00  perf-sched.wait_and_delay.avg.ms.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep.do_syscall_64
      1002 ± 31%  -100.0%      0.00  perf-sched.wait_and_delay.avg.ms.do_syslog.part.0.kmsg_read.vfs_read
    265.93 ± 3%   -100.0%      0.00  perf-sched.wait_and_delay.avg.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64
    109.91 ± 7%   -100.0%      0.00  perf-sched.wait_and_delay.avg.ms.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64
      0.24 ± 19%  -100.0%      0.00  perf-sched.wait_and_delay.avg.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_apic_timer_interrupt.[unknown]
      0.16 ± 16%  -100.0%      0.00  perf-sched.wait_and_delay.avg.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_call_function_single.[unknown]
      0.12 ± 35%  -100.0%      0.00  perf-sched.wait_and_delay.avg.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_reschedule_ipi.[unknown]
      0.27 ± 44%  -100.0%      0.00  perf-sched.wait_and_delay.avg.ms.exit_to_user_mode_prepare.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe.[unknown]
    208.74 ± 4%   -100.0%      0.00  perf-sched.wait_and_delay.avg.ms.pipe_read.new_sync_read.vfs_read.ksys_read
      8.98        -100.0%      0.00  perf-sched.wait_and_delay.avg.ms.preempt_schedule_common.__cond_resched.stop_one_cpu.affine_move_task.__set_cpus_allowed_ptr
      9.02        -100.0%      0.00  perf-sched.wait_and_delay.avg.ms.preempt_schedule_common.__cond_resched.stop_one_cpu.sched_exec.bprm_execve
      2.64 ± 4%   -100.0%      0.00  perf-sched.wait_and_delay.avg.ms.rcu_gp_kthread.kthread.ret_from_fork
    526.58 ± 47%  -100.0%      0.00  perf-sched.wait_and_delay.avg.ms.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait
    157.83 ± 5%   -100.0%      0.00  perf-sched.wait_and_delay.avg.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_sys_poll
    478.58        -100.0%      0.00  perf-sched.wait_and_delay.avg.ms.schedule_timeout.kcompactd.kthread.ret_from_fork
      6.98 ± 20%  -100.0%      0.00  perf-sched.wait_and_delay.avg.ms.schedule_timeout.rcu_gp_kthread.kthread.ret_from_fork
    586.89 ± 2%   -100.0%      0.00  perf-sched.wait_and_delay.avg.ms.smpboot_thread_fn.kthread.ret_from_fork
    584.74 ± 11%  -100.0%      0.00  perf-sched.wait_and_delay.avg.ms.worker_thread.kthread.ret_from_fork
     10.00        -100.0%      0.00  perf-sched.wait_and_delay.count.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown]
      4.67 ± 10%  -100.0%      0.00  perf-sched.wait_and_delay.count.devkmsg_read.vfs_read.ksys_read.do_syscall_64
     22.50 ± 4%   -100.0%      0.00  perf-sched.wait_and_delay.count.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep.do_syscall_64
      4.33 ± 10%  -100.0%      0.00  perf-sched.wait_and_delay.count.do_syslog.part.0.kmsg_read.vfs_read
    236.83        -100.0%      0.00  perf-sched.wait_and_delay.count.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64
    268.00        -100.0%      0.00  perf-sched.wait_and_delay.count.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64
      1662 ± 16%  -100.0%      0.00  perf-sched.wait_and_delay.count.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_apic_timer_interrupt.[unknown]
    238.17 ± 4%   -100.0%      0.00  perf-sched.wait_and_delay.count.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_call_function_single.[unknown]
    181.33 ± 13%  -100.0%      0.00  perf-sched.wait_and_delay.count.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_reschedule_ipi.[unknown]
      3252 ± 4%   -100.0%      0.00  perf-sched.wait_and_delay.count.exit_to_user_mode_prepare.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe.[unknown]
    624.83 ± 3%   -100.0%      0.00
perf-sched.wait_and_delay.count.pipe_read.new_sync_read.vfs_read.ksys_read
      1248        -100.0%      0.00  perf-sched.wait_and_delay.count.preempt_schedule_common.__cond_resched.stop_one_cpu.affine_move_task.__set_cpus_allowed_ptr
    212.33        -100.0%      0.00  perf-sched.wait_and_delay.count.preempt_schedule_common.__cond_resched.stop_one_cpu.sched_exec.bprm_execve
    111.00 ± 47%  -100.0%      0.00  perf-sched.wait_and_delay.count.rcu_gp_kthread.kthread.ret_from_fork
     19.50 ± 51%  -100.0%      0.00  perf-sched.wait_and_delay.count.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait
    158.50 ± 6%   -100.0%      0.00  perf-sched.wait_and_delay.count.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_sys_poll
     39.67        -100.0%      0.00  perf-sched.wait_and_delay.count.schedule_timeout.kcompactd.kthread.ret_from_fork
      1469 ± 26%  -100.0%      0.00  perf-sched.wait_and_delay.count.schedule_timeout.rcu_gp_kthread.kthread.ret_from_fork
      1834 ± 2%   -100.0%      0.00  perf-sched.wait_and_delay.count.smpboot_thread_fn.kthread.ret_from_fork
    601.00 ± 11%  -100.0%      0.00  perf-sched.wait_and_delay.count.worker_thread.kthread.ret_from_fork
    999.44        -100.0%      0.00  perf-sched.wait_and_delay.max.ms.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown]
      2750 ± 28%  -100.0%      0.00  perf-sched.wait_and_delay.max.ms.devkmsg_read.vfs_read.ksys_read.do_syscall_64
      1000        -100.0%      0.00  perf-sched.wait_and_delay.max.ms.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep.do_syscall_64
      2763 ± 28%  -100.0%      0.00  perf-sched.wait_and_delay.max.ms.do_syslog.part.0.kmsg_read.vfs_read
      1841 ±102%  -100.0%      0.00  perf-sched.wait_and_delay.max.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64
      1021        -100.0%      0.00  perf-sched.wait_and_delay.max.ms.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64
     23.20 ± 62%  -100.0%      0.00  perf-sched.wait_and_delay.max.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_apic_timer_interrupt.[unknown]
      8.54 ± 46%  -100.0%      0.00  perf-sched.wait_and_delay.max.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_call_function_single.[unknown]
      6.14 ± 99%  -100.0%      0.00  perf-sched.wait_and_delay.max.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_reschedule_ipi.[unknown]
    206.18 ±172%  -100.0%      0.00  perf-sched.wait_and_delay.max.ms.exit_to_user_mode_prepare.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe.[unknown]
      2752 ± 28%  -100.0%      0.00  perf-sched.wait_and_delay.max.ms.pipe_read.new_sync_read.vfs_read.ksys_read
      2177        -100.0%      0.00  perf-sched.wait_and_delay.max.ms.preempt_schedule_common.__cond_resched.stop_one_cpu.affine_move_task.__set_cpus_allowed_ptr
     22.37 ± 19%  -100.0%      0.00  perf-sched.wait_and_delay.max.ms.preempt_schedule_common.__cond_resched.stop_one_cpu.sched_exec.bprm_execve
      5.02        -100.0%      0.00  perf-sched.wait_and_delay.max.ms.rcu_gp_kthread.kthread.ret_from_fork
      4004 ± 60%  -100.0%      0.00  perf-sched.wait_and_delay.max.ms.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait
      5816        -100.0%      0.00  perf-sched.wait_and_delay.max.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_sys_poll
    505.01        -100.0%      0.00  perf-sched.wait_and_delay.max.ms.schedule_timeout.kcompactd.kthread.ret_from_fork
    362.67 ± 42%  -100.0%      0.00  perf-sched.wait_and_delay.max.ms.schedule_timeout.rcu_gp_kthread.kthread.ret_from_fork
      7190 ± 18%  -100.0%      0.00  perf-sched.wait_and_delay.max.ms.smpboot_thread_fn.kthread.ret_from_fork
      7974 ± 11%  -100.0%      0.00  perf-sched.wait_and_delay.max.ms.worker_thread.kthread.ret_from_fork
    882.79 ± 4%   -100.0%      0.00  perf-sched.wait_time.avg.ms.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown]
    916.50 ± 28%  -100.0%      0.00  perf-sched.wait_time.avg.ms.devkmsg_read.vfs_read.ksys_read.do_syscall_64
    803.50 ± 4%   -100.0%      0.00  perf-sched.wait_time.avg.ms.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep.do_syscall_64
    997.70 ± 31%  -100.0%      0.00  perf-sched.wait_time.avg.ms.do_syslog.part.0.kmsg_read.vfs_read
    265.33 ± 3%
   -100.0%      0.00  perf-sched.wait_time.avg.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64
    109.88 ± 7%   -100.0%      0.00  perf-sched.wait_time.avg.ms.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64
      0.24 ± 19%  -100.0%      0.00  perf-sched.wait_time.avg.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_apic_timer_interrupt.[unknown]
      0.16 ± 16%  -100.0%      0.00  perf-sched.wait_time.avg.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_call_function_single.[unknown]
      0.10 ± 86%  -100.0%      0.00  perf-sched.wait_time.avg.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_irq_work.[unknown]
      0.12 ± 35%  -100.0%      0.00  perf-sched.wait_time.avg.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_reschedule_ipi.[unknown]
      0.27 ± 45%  -100.0%      0.00  perf-sched.wait_time.avg.ms.exit_to_user_mode_prepare.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe.[unknown]
    208.72 ± 4%   -100.0%      0.00  perf-sched.wait_time.avg.ms.pipe_read.new_sync_read.vfs_read.ksys_read
      9.91 ± 20%  -100.0%      0.00  perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.dput.__fput.task_work_run
    439.75 ±115%  -100.0%      0.00  perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.generic_perform_write.__generic_file_write_iter.generic_file_write_iter
      8.98        -100.0%      0.00  perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.stop_one_cpu.affine_move_task.__set_cpus_allowed_ptr
      2.62 ± 4%   -100.0%      0.00  perf-sched.wait_time.avg.ms.rcu_gp_kthread.kthread.ret_from_fork
    526.30 ± 47%  -100.0%      0.00  perf-sched.wait_time.avg.ms.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait
    157.83 ± 5%   -100.0%      0.00  perf-sched.wait_time.avg.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_sys_poll
    478.56        -100.0%      0.00  perf-sched.wait_time.avg.ms.schedule_timeout.kcompactd.kthread.ret_from_fork
      3.13 ± 44%  -100.0%      0.00  perf-sched.wait_time.avg.ms.schedule_timeout.khugepaged.kthread.ret_from_fork
      6.96 ± 20%  -100.0%      0.00  perf-sched.wait_time.avg.ms.schedule_timeout.rcu_gp_kthread.kthread.ret_from_fork
    586.88 ± 2%   -100.0%      0.00  perf-sched.wait_time.avg.ms.smpboot_thread_fn.kthread.ret_from_fork
      2.18 ± 19%  -100.0%      0.00  perf-sched.wait_time.avg.ms.wait_for_partner.fifo_open.do_dentry_open.do_open.isra
    584.70 ± 11%  -100.0%      0.00  perf-sched.wait_time.avg.ms.worker_thread.kthread.ret_from_fork
    999.42        -100.0%      0.00  perf-sched.wait_time.max.ms.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown]
      2748 ± 28%  -100.0%      0.00  perf-sched.wait_time.max.ms.devkmsg_read.vfs_read.ksys_read.do_syscall_64
      1000        -100.0%      0.00  perf-sched.wait_time.max.ms.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep.do_syscall_64
      2754 ± 28%  -100.0%      0.00  perf-sched.wait_time.max.ms.do_syslog.part.0.kmsg_read.vfs_read
      1841 ±102%  -100.0%      0.00  perf-sched.wait_time.max.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64
      1021        -100.0%      0.00  perf-sched.wait_time.max.ms.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64
     23.20 ± 62%  -100.0%      0.00  perf-sched.wait_time.max.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_apic_timer_interrupt.[unknown]
      8.54 ± 46%  -100.0%      0.00  perf-sched.wait_time.max.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_call_function_single.[unknown]
      1.24 ±105%  -100.0%      0.00  perf-sched.wait_time.max.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_irq_work.[unknown]
      6.14 ± 99%  -100.0%      0.00  perf-sched.wait_time.max.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_reschedule_ipi.[unknown]
    206.17 ±172%  -100.0%      0.00  perf-sched.wait_time.max.ms.exit_to_user_mode_prepare.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe.[unknown]
      2752 ± 28%  -100.0%      0.00  perf-sched.wait_time.max.ms.pipe_read.new_sync_read.vfs_read.ksys_read
     11.86 ± 16%  -100.0%      0.00  perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.dput.__fput.task_work_run
      3351 ± 99%  -100.0%      0.00
perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.generic_perform_write.__generic_file_write_iter.generic_file_write_iter 2177 -100.0% 0.00 perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.stop_one_cpu.affine_move_task.__set_cpus_allowed_ptr 5.00 -100.0% 0.00 perf-sched.wait_time.max.ms.rcu_gp_kthread.kthread.ret_from_fork 4004 ± 60% -100.0% 0.00 perf-sched.wait_time.max.ms.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait 5816 -100.0% 0.00 perf-sched.wait_time.max.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_sys_poll 504.99 -100.0% 0.00 perf-sched.wait_time.max.ms.schedule_timeout.kcompactd.kthread.ret_from_fork 3.13 ± 44% -100.0% 0.00 perf-sched.wait_time.max.ms.schedule_timeout.khugepaged.kthread.ret_from_fork 362.65 ± 42% -100.0% 0.00 perf-sched.wait_time.max.ms.schedule_timeout.rcu_gp_kthread.kthread.ret_from_fork 7190 ± 18% -100.0% 0.00 perf-sched.wait_time.max.ms.smpboot_thread_fn.kthread.ret_from_fork 12.80 -100.0% 0.00 perf-sched.wait_time.max.ms.wait_for_partner.fifo_open.do_dentry_open.do_open.isra 7974 ± 11% -100.0% 0.00 perf-sched.wait_time.max.ms.worker_thread.kthread.ret_from_fork 104127 -31.8% 71024 ± 2% interrupts.CAL:Function_call_interrupts 7546 -100.0% 0.00 interrupts.CPU0.NMI:Non-maskable_interrupts 7546 -100.0% 0.00 interrupts.CPU0.PMI:Performance_monitoring_interrupts 1652 ± 23% -54.3% 755.83 ± 37% interrupts.CPU1.CAL:Function_call_interrupts 5650 ± 33% -100.0% 0.00 interrupts.CPU1.NMI:Non-maskable_interrupts 5650 ± 33% -100.0% 0.00 interrupts.CPU1.PMI:Performance_monitoring_interrupts 677.33 ± 29% -38.5% 416.83 ± 16% interrupts.CPU1.RES:Rescheduling_interrupts 6911 ± 20% -100.0% 0.00 interrupts.CPU10.NMI:Non-maskable_interrupts 6911 ± 20% -100.0% 0.00 interrupts.CPU10.PMI:Performance_monitoring_interrupts 329.33 +99.6% 657.33 ± 38% interrupts.CPU10.RES:Rescheduling_interrupts 7180 ± 20% -100.0% 0.00 interrupts.CPU100.NMI:Non-maskable_interrupts 7180 ± 20% 
-100.0% 0.00 interrupts.CPU100.PMI:Performance_monitoring_interrupts 331.33 ± 7% +56.6% 519.00 ± 40% interrupts.CPU100.RES:Rescheduling_interrupts 7181 ± 20% -100.0% 0.00 interrupts.CPU101.NMI:Non-maskable_interrupts 7181 ± 20% -100.0% 0.00 interrupts.CPU101.PMI:Performance_monitoring_interrupts 324.67 ± 4% +40.1% 454.83 ± 13% interrupts.CPU101.RES:Rescheduling_interrupts 7181 ± 20% -100.0% 0.00 interrupts.CPU102.NMI:Non-maskable_interrupts 7181 ± 20% -100.0% 0.00 interrupts.CPU102.PMI:Performance_monitoring_interrupts 7177 ± 20% -100.0% 0.00 interrupts.CPU103.NMI:Non-maskable_interrupts 7177 ± 20% -100.0% 0.00 interrupts.CPU103.PMI:Performance_monitoring_interrupts 6911 ± 20% -100.0% 0.00 interrupts.CPU11.NMI:Non-maskable_interrupts 6911 ± 20% -100.0% 0.00 interrupts.CPU11.PMI:Performance_monitoring_interrupts 6909 ± 20% -100.0% 0.00 interrupts.CPU12.NMI:Non-maskable_interrupts 6909 ± 20% -100.0% 0.00 interrupts.CPU12.PMI:Performance_monitoring_interrupts 339.33 ± 6% +38.2% 468.83 ± 35% interrupts.CPU12.RES:Rescheduling_interrupts 6914 ± 20% -100.0% 0.00 interrupts.CPU13.NMI:Non-maskable_interrupts 6914 ± 20% -100.0% 0.00 interrupts.CPU13.PMI:Performance_monitoring_interrupts 340.83 ± 7% +22.1% 416.17 ± 11% interrupts.CPU13.RES:Rescheduling_interrupts 7549 -100.0% 0.00 interrupts.CPU14.NMI:Non-maskable_interrupts 7549 -100.0% 0.00 interrupts.CPU14.PMI:Performance_monitoring_interrupts 7542 -100.0% 0.00 interrupts.CPU15.NMI:Non-maskable_interrupts 7542 -100.0% 0.00 interrupts.CPU15.PMI:Performance_monitoring_interrupts 7555 -100.0% 0.00 interrupts.CPU16.NMI:Non-maskable_interrupts 7555 -100.0% 0.00 interrupts.CPU16.PMI:Performance_monitoring_interrupts 324.83 +29.8% 421.50 ± 13% interrupts.CPU16.RES:Rescheduling_interrupts 6289 ± 28% -100.0% 0.00 interrupts.CPU17.NMI:Non-maskable_interrupts 6289 ± 28% -100.0% 0.00 interrupts.CPU17.PMI:Performance_monitoring_interrupts 321.67 +18.7% 381.83 ± 4% interrupts.CPU17.RES:Rescheduling_interrupts 6919 ± 20% -100.0% 0.00 
interrupts.CPU18.NMI:Non-maskable_interrupts 6919 ± 20% -100.0% 0.00 interrupts.CPU18.PMI:Performance_monitoring_interrupts 324.17 +73.3% 561.67 ± 33% interrupts.CPU18.RES:Rescheduling_interrupts 6914 ± 20% -100.0% 0.00 interrupts.CPU19.NMI:Non-maskable_interrupts 6914 ± 20% -100.0% 0.00 interrupts.CPU19.PMI:Performance_monitoring_interrupts 321.17 +28.2% 411.83 ± 12% interrupts.CPU19.RES:Rescheduling_interrupts 1214 ± 42% -55.4% 541.50 ± 16% interrupts.CPU2.CAL:Function_call_interrupts 5651 ± 33% -100.0% 0.00 interrupts.CPU2.NMI:Non-maskable_interrupts 5651 ± 33% -100.0% 0.00 interrupts.CPU2.PMI:Performance_monitoring_interrupts 501.17 ± 18% -24.5% 378.50 ± 13% interrupts.CPU2.RES:Rescheduling_interrupts 6914 ± 20% -100.0% 0.00 interrupts.CPU20.NMI:Non-maskable_interrupts 6914 ± 20% -100.0% 0.00 interrupts.CPU20.PMI:Performance_monitoring_interrupts 319.17 +19.1% 380.17 ± 5% interrupts.CPU20.RES:Rescheduling_interrupts 7544 -100.0% 0.00 interrupts.CPU21.NMI:Non-maskable_interrupts 7544 -100.0% 0.00 interrupts.CPU21.PMI:Performance_monitoring_interrupts 332.00 ± 7% +23.4% 409.67 ± 15% interrupts.CPU21.RES:Rescheduling_interrupts 7544 -100.0% 0.00 interrupts.CPU22.NMI:Non-maskable_interrupts 7544 -100.0% 0.00 interrupts.CPU22.PMI:Performance_monitoring_interrupts 318.00 +52.5% 485.00 ± 41% interrupts.CPU22.RES:Rescheduling_interrupts 7544 -100.0% 0.00 interrupts.CPU23.NMI:Non-maskable_interrupts 7544 -100.0% 0.00 interrupts.CPU23.PMI:Performance_monitoring_interrupts 328.17 ± 6% +17.1% 384.33 ± 7% interrupts.CPU23.RES:Rescheduling_interrupts 6914 ± 20% -100.0% 0.00 interrupts.CPU24.NMI:Non-maskable_interrupts 6914 ± 20% -100.0% 0.00 interrupts.CPU24.PMI:Performance_monitoring_interrupts 322.33 +17.9% 380.17 ± 7% interrupts.CPU24.RES:Rescheduling_interrupts 5655 ± 33% -100.0% 0.00 interrupts.CPU25.NMI:Non-maskable_interrupts 5655 ± 33% -100.0% 0.00 interrupts.CPU25.PMI:Performance_monitoring_interrupts 1325 ± 23% -46.9% 704.17 ± 39% 
interrupts.CPU26.CAL:Function_call_interrupts 6526 ± 28% -100.0% 0.00 interrupts.CPU26.NMI:Non-maskable_interrupts 6526 ± 28% -100.0% 0.00 interrupts.CPU26.PMI:Performance_monitoring_interrupts 1712 ± 39% -66.3% 577.83 ± 3% interrupts.CPU27.CAL:Function_call_interrupts 6527 ± 28% -100.0% 1.00 ±223% interrupts.CPU27.NMI:Non-maskable_interrupts 6527 ± 28% -100.0% 1.00 ±223% interrupts.CPU27.PMI:Performance_monitoring_interrupts 732.83 ± 41% -54.4% 334.50 ± 5% interrupts.CPU27.RES:Rescheduling_interrupts 1110 ± 26% -47.9% 578.67 ± 3% interrupts.CPU28.CAL:Function_call_interrupts 6526 ± 28% -100.0% 0.00 interrupts.CPU28.NMI:Non-maskable_interrupts 6526 ± 28% -100.0% 0.00 interrupts.CPU28.PMI:Performance_monitoring_interrupts 461.67 ± 17% -27.7% 334.00 ± 5% interrupts.CPU28.RES:Rescheduling_interrupts 675.50 ± 5% -16.7% 563.00 ± 4% interrupts.CPU29.CAL:Function_call_interrupts 7180 ± 20% -100.0% 0.00 interrupts.CPU29.NMI:Non-maskable_interrupts 7180 ± 20% -100.0% 0.00 interrupts.CPU29.PMI:Performance_monitoring_interrupts 6281 ± 28% -100.0% 0.00 interrupts.CPU3.NMI:Non-maskable_interrupts 6281 ± 28% -100.0% 0.00 interrupts.CPU3.PMI:Performance_monitoring_interrupts 7181 ± 20% -100.0% 1.00 ±223% interrupts.CPU30.NMI:Non-maskable_interrupts 7181 ± 20% -100.0% 1.00 ±223% interrupts.CPU30.PMI:Performance_monitoring_interrupts 7181 ± 20% -100.0% 0.00 interrupts.CPU31.NMI:Non-maskable_interrupts 7181 ± 20% -100.0% 0.00 interrupts.CPU31.PMI:Performance_monitoring_interrupts 7180 ± 20% -100.0% 0.00 interrupts.CPU32.NMI:Non-maskable_interrupts 7180 ± 20% -100.0% 0.00 interrupts.CPU32.PMI:Performance_monitoring_interrupts 6528 ± 28% -100.0% 0.00 interrupts.CPU33.NMI:Non-maskable_interrupts 6528 ± 28% -100.0% 0.00 interrupts.CPU33.PMI:Performance_monitoring_interrupts 7182 ± 20% -100.0% 0.00 interrupts.CPU34.NMI:Non-maskable_interrupts 7182 ± 20% -100.0% 0.00 interrupts.CPU34.PMI:Performance_monitoring_interrupts 5875 ± 33% -100.0% 0.00 interrupts.CPU35.NMI:Non-maskable_interrupts 
5875 ± 33% -100.0% 0.00 interrupts.CPU35.PMI:Performance_monitoring_interrupts 5874 ± 33% -100.0% 0.00 interrupts.CPU36.NMI:Non-maskable_interrupts 5874 ± 33% -100.0% 0.00 interrupts.CPU36.PMI:Performance_monitoring_interrupts 5873 ± 33% -100.0% 0.00 interrupts.CPU37.NMI:Non-maskable_interrupts 5873 ± 33% -100.0% 0.00 interrupts.CPU37.PMI:Performance_monitoring_interrupts 5873 ± 33% -100.0% 0.00 interrupts.CPU38.NMI:Non-maskable_interrupts 5873 ± 33% -100.0% 0.00 interrupts.CPU38.PMI:Performance_monitoring_interrupts 5222 ± 35% -100.0% 0.00 interrupts.CPU39.NMI:Non-maskable_interrupts 5222 ± 35% -100.0% 0.00 interrupts.CPU39.PMI:Performance_monitoring_interrupts 7538 -100.0% 0.00 interrupts.CPU4.NMI:Non-maskable_interrupts 7538 -100.0% 0.00 interrupts.CPU4.PMI:Performance_monitoring_interrupts 6528 ± 28% -100.0% 0.00 interrupts.CPU40.NMI:Non-maskable_interrupts 6528 ± 28% -100.0% 0.00 interrupts.CPU40.PMI:Performance_monitoring_interrupts 315.00 +17.4% 369.83 ± 12% interrupts.CPU40.RES:Rescheduling_interrupts 6528 ± 28% -100.0% 0.00 interrupts.CPU41.NMI:Non-maskable_interrupts 6528 ± 28% -100.0% 0.00 interrupts.CPU41.PMI:Performance_monitoring_interrupts 6528 ± 28% -100.0% 0.00 interrupts.CPU42.NMI:Non-maskable_interrupts 6528 ± 28% -100.0% 0.00 interrupts.CPU42.PMI:Performance_monitoring_interrupts 7181 ± 20% -100.0% 0.00 interrupts.CPU43.NMI:Non-maskable_interrupts 7181 ± 20% -100.0% 0.00 interrupts.CPU43.PMI:Performance_monitoring_interrupts 314.33 +25.9% 395.67 ± 28% interrupts.CPU43.RES:Rescheduling_interrupts 6529 ± 28% -100.0% 0.00 interrupts.CPU44.NMI:Non-maskable_interrupts 6529 ± 28% -100.0% 0.00 interrupts.CPU44.PMI:Performance_monitoring_interrupts 313.17 +10.8% 347.00 ± 7% interrupts.CPU44.RES:Rescheduling_interrupts 6527 ± 28% -100.0% 0.00 interrupts.CPU45.NMI:Non-maskable_interrupts 6527 ± 28% -100.0% 0.00 interrupts.CPU45.PMI:Performance_monitoring_interrupts 6527 ± 28% -100.0% 0.00 interrupts.CPU46.NMI:Non-maskable_interrupts 6527 ± 28% -100.0% 
0.00 interrupts.CPU46.PMI:Performance_monitoring_interrupts 6528 ± 28% -100.0% 0.00 interrupts.CPU47.NMI:Non-maskable_interrupts 6528 ± 28% -100.0% 0.00 interrupts.CPU47.PMI:Performance_monitoring_interrupts 6528 ± 28% -100.0% 0.00 interrupts.CPU48.NMI:Non-maskable_interrupts 6528 ± 28% -100.0% 0.00 interrupts.CPU48.PMI:Performance_monitoring_interrupts 317.00 ± 3% +9.1% 346.00 ± 5% interrupts.CPU48.RES:Rescheduling_interrupts 5874 ± 33% -100.0% 0.00 interrupts.CPU49.NMI:Non-maskable_interrupts 5874 ± 33% -100.0% 0.00 interrupts.CPU49.PMI:Performance_monitoring_interrupts 318.67 ± 3% +23.7% 394.33 ± 25% interrupts.CPU49.RES:Rescheduling_interrupts 6910 ± 20% -100.0% 0.00 interrupts.CPU5.NMI:Non-maskable_interrupts 6910 ± 20% -100.0% 0.00 interrupts.CPU5.PMI:Performance_monitoring_interrupts 5874 ± 33% -100.0% 0.00 interrupts.CPU50.NMI:Non-maskable_interrupts 5874 ± 33% -100.0% 0.00 interrupts.CPU50.PMI:Performance_monitoring_interrupts 7179 ± 20% -100.0% 0.00 interrupts.CPU51.NMI:Non-maskable_interrupts 7179 ± 20% -100.0% 0.00 interrupts.CPU51.PMI:Performance_monitoring_interrupts 1361 ± 16% -49.9% 682.00 ± 15% interrupts.CPU52.CAL:Function_call_interrupts 6916 ± 20% -100.0% 0.00 interrupts.CPU52.NMI:Non-maskable_interrupts 6916 ± 20% -100.0% 0.00 interrupts.CPU52.PMI:Performance_monitoring_interrupts 5463 ± 43% -85.3% 801.83 ± 59% interrupts.CPU53.CAL:Function_call_interrupts 7545 -100.0% 0.00 interrupts.CPU53.NMI:Non-maskable_interrupts 7545 -100.0% 0.00 interrupts.CPU53.PMI:Performance_monitoring_interrupts 6538 ± 41% -89.4% 694.00 ± 28% interrupts.CPU54.CAL:Function_call_interrupts 7541 -100.0% 0.00 interrupts.CPU54.NMI:Non-maskable_interrupts 7541 -100.0% 0.00 interrupts.CPU54.PMI:Performance_monitoring_interrupts 2701 ± 33% -70.1% 808.67 ± 33% interrupts.CPU55.CAL:Function_call_interrupts 6913 ± 20% -100.0% 0.00 interrupts.CPU55.NMI:Non-maskable_interrupts 6913 ± 20% -100.0% 0.00 interrupts.CPU55.PMI:Performance_monitoring_interrupts 6921 ± 20% -100.0% 0.00 
interrupts.CPU56.NMI:Non-maskable_interrupts 6921 ± 20% -100.0% 0.00 interrupts.CPU56.PMI:Performance_monitoring_interrupts 6912 ± 20% -100.0% 0.00 interrupts.CPU57.NMI:Non-maskable_interrupts 6912 ± 20% -100.0% 0.00 interrupts.CPU57.PMI:Performance_monitoring_interrupts 6913 ± 20% -100.0% 0.00 interrupts.CPU58.NMI:Non-maskable_interrupts 6913 ± 20% -100.0% 0.00 interrupts.CPU58.PMI:Performance_monitoring_interrupts 6283 ± 28% -100.0% 0.00 interrupts.CPU59.NMI:Non-maskable_interrupts 6283 ± 28% -100.0% 0.00 interrupts.CPU59.PMI:Performance_monitoring_interrupts 398.83 ± 11% +385.6% 1936 ± 87% interrupts.CPU59.RES:Rescheduling_interrupts 6913 ± 20% -100.0% 0.00 interrupts.CPU6.NMI:Non-maskable_interrupts 6913 ± 20% -100.0% 0.00 interrupts.CPU6.PMI:Performance_monitoring_interrupts 6910 ± 20% -100.0% 0.00 interrupts.CPU60.NMI:Non-maskable_interrupts 6910 ± 20% -100.0% 0.00 interrupts.CPU60.PMI:Performance_monitoring_interrupts 6910 ± 20% -100.0% 0.00 interrupts.CPU61.NMI:Non-maskable_interrupts 6910 ± 20% -100.0% 0.00 interrupts.CPU61.PMI:Performance_monitoring_interrupts 581.67 ± 42% +177.0% 1611 ± 46% interrupts.CPU61.RES:Rescheduling_interrupts 6281 ± 28% -100.0% 0.00 interrupts.CPU62.NMI:Non-maskable_interrupts 6281 ± 28% -100.0% 0.00 interrupts.CPU62.PMI:Performance_monitoring_interrupts 547.83 ± 14% +41.8% 777.00 ± 20% interrupts.CPU63.CAL:Function_call_interrupts 7536 -100.0% 0.00 interrupts.CPU63.NMI:Non-maskable_interrupts 7536 -100.0% 0.00 interrupts.CPU63.PMI:Performance_monitoring_interrupts 347.83 ± 10% +470.8% 1985 ± 46% interrupts.CPU63.RES:Rescheduling_interrupts 7537 -100.0% 0.00 interrupts.CPU64.NMI:Non-maskable_interrupts 7537 -100.0% 0.00 interrupts.CPU64.PMI:Performance_monitoring_interrupts 7537 -100.0% 0.00 interrupts.CPU65.NMI:Non-maskable_interrupts 7537 -100.0% 0.00 interrupts.CPU65.PMI:Performance_monitoring_interrupts 7538 -100.0% 0.00 interrupts.CPU66.NMI:Non-maskable_interrupts 7538 -100.0% 0.00 
interrupts.CPU66.PMI:Performance_monitoring_interrupts 7543 -100.0% 0.00 interrupts.CPU67.NMI:Non-maskable_interrupts 7543 -100.0% 0.00 interrupts.CPU67.PMI:Performance_monitoring_interrupts 385.17 ± 28% +410.0% 1964 ± 55% interrupts.CPU67.RES:Rescheduling_interrupts 7536 -100.0% 0.00 interrupts.CPU68.NMI:Non-maskable_interrupts 7536 -100.0% 0.00 interrupts.CPU68.PMI:Performance_monitoring_interrupts 347.00 ± 13% +370.2% 1631 ± 79% interrupts.CPU68.RES:Rescheduling_interrupts 7537 -100.0% 0.00 interrupts.CPU69.NMI:Non-maskable_interrupts 7537 -100.0% 0.00 interrupts.CPU69.PMI:Performance_monitoring_interrupts 376.67 ± 30% +524.3% 2351 ±108% interrupts.CPU69.RES:Rescheduling_interrupts 6911 ± 20% -100.0% 0.00 interrupts.CPU7.NMI:Non-maskable_interrupts 6911 ± 20% -100.0% 0.00 interrupts.CPU7.PMI:Performance_monitoring_interrupts 6276 ± 28% -100.0% 0.00 interrupts.CPU70.NMI:Non-maskable_interrupts 6276 ± 28% -100.0% 0.00 interrupts.CPU70.PMI:Performance_monitoring_interrupts 439.17 ± 52% +374.2% 2082 ± 43% interrupts.CPU70.RES:Rescheduling_interrupts 571.83 +81.9% 1040 ± 59% interrupts.CPU71.CAL:Function_call_interrupts 6276 ± 28% -100.0% 0.00 interrupts.CPU71.NMI:Non-maskable_interrupts 6276 ± 28% -100.0% 0.00 interrupts.CPU71.PMI:Performance_monitoring_interrupts 325.50 ± 2% +525.2% 2035 ± 43% interrupts.CPU71.RES:Rescheduling_interrupts 6276 ± 28% -100.0% 0.00 interrupts.CPU72.NMI:Non-maskable_interrupts 6276 ± 28% -100.0% 0.00 interrupts.CPU72.PMI:Performance_monitoring_interrupts 6276 ± 28% -100.0% 0.00 interrupts.CPU73.NMI:Non-maskable_interrupts 6276 ± 28% -100.0% 0.00 interrupts.CPU73.PMI:Performance_monitoring_interrupts 567.50 +52.8% 867.17 ± 34% interrupts.CPU74.CAL:Function_call_interrupts 6908 ± 20% -100.0% 0.00 interrupts.CPU74.NMI:Non-maskable_interrupts 6908 ± 20% -100.0% 0.00 interrupts.CPU74.PMI:Performance_monitoring_interrupts 321.00 +685.6% 2521 ± 68% interrupts.CPU74.RES:Rescheduling_interrupts 6907 ± 20% -100.0% 0.00 
interrupts.CPU75.NMI:Non-maskable_interrupts 6907 ± 20% -100.0% 0.00 interrupts.CPU75.PMI:Performance_monitoring_interrupts 6906 ± 20% -100.0% 1.00 ±223% interrupts.CPU76.NMI:Non-maskable_interrupts 6906 ± 20% -100.0% 1.00 ±223% interrupts.CPU76.PMI:Performance_monitoring_interrupts 323.00 ± 2% +724.5% 2663 ±110% interrupts.CPU76.RES:Rescheduling_interrupts 7534 -100.0% 0.00 interrupts.CPU77.NMI:Non-maskable_interrupts 7534 -100.0% 0.00 interrupts.CPU77.PMI:Performance_monitoring_interrupts 348.50 ± 18% +833.9% 3254 ± 63% interrupts.CPU77.RES:Rescheduling_interrupts 1637 ± 25% -57.0% 703.50 ± 29% interrupts.CPU78.CAL:Function_call_interrupts 7833 -100.0% 0.00 interrupts.CPU78.NMI:Non-maskable_interrupts 7833 -100.0% 0.00 interrupts.CPU78.PMI:Performance_monitoring_interrupts 6155 ± 29% -86.1% 858.50 ± 49% interrupts.CPU79.CAL:Function_call_interrupts 7834 -100.0% 0.00 interrupts.CPU79.NMI:Non-maskable_interrupts 7834 -100.0% 0.00 interrupts.CPU79.PMI:Performance_monitoring_interrupts 6915 ± 20% -100.0% 0.00 interrupts.CPU8.NMI:Non-maskable_interrupts 6915 ± 20% -100.0% 0.00 interrupts.CPU8.PMI:Performance_monitoring_interrupts 6996 ± 48% -88.5% 805.50 ± 43% interrupts.CPU80.CAL:Function_call_interrupts 7835 -100.0% 0.00 interrupts.CPU80.NMI:Non-maskable_interrupts 7835 -100.0% 0.00 interrupts.CPU80.PMI:Performance_monitoring_interrupts 2300 ± 58% -71.8% 649.67 ± 58% interrupts.CPU80.RES:Rescheduling_interrupts 1938 ± 43% -70.1% 579.17 ± 5% interrupts.CPU81.CAL:Function_call_interrupts 7834 -100.0% 0.00 interrupts.CPU81.NMI:Non-maskable_interrupts 7834 -100.0% 0.00 interrupts.CPU81.PMI:Performance_monitoring_interrupts 1473 ± 58% -58.7% 608.17 ± 12% interrupts.CPU82.CAL:Function_call_interrupts 7834 -100.0% 0.00 interrupts.CPU82.NMI:Non-maskable_interrupts 7834 -100.0% 0.00 interrupts.CPU82.PMI:Performance_monitoring_interrupts 801.67 ± 24% -27.1% 584.67 ± 7% interrupts.CPU83.CAL:Function_call_interrupts 7834 -100.0% 0.00 interrupts.CPU83.NMI:Non-maskable_interrupts 
7834 -100.0% 0.00 interrupts.CPU83.PMI:Performance_monitoring_interrupts 7833 -100.0% 0.00 interrupts.CPU84.NMI:Non-maskable_interrupts 7833 -100.0% 0.00 interrupts.CPU84.PMI:Performance_monitoring_interrupts 7182 ± 20% -100.0% 0.00 interrupts.CPU85.NMI:Non-maskable_interrupts 7182 ± 20% -100.0% 0.00 interrupts.CPU85.PMI:Performance_monitoring_interrupts 7181 ± 20% -100.0% 0.00 interrupts.CPU86.NMI:Non-maskable_interrupts 7181 ± 20% -100.0% 0.00 interrupts.CPU86.PMI:Performance_monitoring_interrupts 6528 ± 28% -100.0% 0.00 interrupts.CPU87.NMI:Non-maskable_interrupts 6528 ± 28% -100.0% 0.00 interrupts.CPU87.PMI:Performance_monitoring_interrupts 7180 ± 20% -100.0% 0.00 interrupts.CPU88.NMI:Non-maskable_interrupts 7180 ± 20% -100.0% 0.00 interrupts.CPU88.PMI:Performance_monitoring_interrupts 7834 -100.0% 0.00 interrupts.CPU89.NMI:Non-maskable_interrupts 7834 -100.0% 0.00 interrupts.CPU89.PMI:Performance_monitoring_interrupts 7539 -100.0% 0.00 interrupts.CPU9.NMI:Non-maskable_interrupts 7539 -100.0% 0.00 interrupts.CPU9.PMI:Performance_monitoring_interrupts 7835 -100.0% 1.00 ±223% interrupts.CPU90.NMI:Non-maskable_interrupts 7835 -100.0% 1.00 ±223% interrupts.CPU90.PMI:Performance_monitoring_interrupts 7835 -100.0% 0.00 interrupts.CPU91.NMI:Non-maskable_interrupts 7835 -100.0% 0.00 interrupts.CPU91.PMI:Performance_monitoring_interrupts 7836 -100.0% 0.00 interrupts.CPU92.NMI:Non-maskable_interrupts 7836 -100.0% 0.00 interrupts.CPU92.PMI:Performance_monitoring_interrupts 333.17 ± 5% +89.7% 632.00 ± 38% interrupts.CPU92.RES:Rescheduling_interrupts 7834 -100.0% 0.00 interrupts.CPU93.NMI:Non-maskable_interrupts 7834 -100.0% 0.00 interrupts.CPU93.PMI:Performance_monitoring_interrupts 334.67 ± 9% +59.2% 532.83 ± 26% interrupts.CPU93.RES:Rescheduling_interrupts 7834 -100.0% 0.00 interrupts.CPU94.NMI:Non-maskable_interrupts 7834 -100.0% 0.00 interrupts.CPU94.PMI:Performance_monitoring_interrupts 7834 -100.0% 0.00 interrupts.CPU95.NMI:Non-maskable_interrupts 7834 -100.0% 0.00 
interrupts.CPU95.PMI:Performance_monitoring_interrupts 7834 -100.0% 0.00 interrupts.CPU96.NMI:Non-maskable_interrupts 7834 -100.0% 0.00 interrupts.CPU96.PMI:Performance_monitoring_interrupts 326.33 ± 2% +205.5% 996.83 ±103% interrupts.CPU96.RES:Rescheduling_interrupts 7833 -100.0% 0.00 interrupts.CPU97.NMI:Non-maskable_interrupts 7833 -100.0% 0.00 interrupts.CPU97.PMI:Performance_monitoring_interrupts 7834 -100.0% 0.00 interrupts.CPU98.NMI:Non-maskable_interrupts 7834 -100.0% 0.00 interrupts.CPU98.PMI:Performance_monitoring_interrupts 7833 -100.0% 0.00 interrupts.CPU99.NMI:Non-maskable_interrupts 7833 -100.0% 0.00 interrupts.CPU99.PMI:Performance_monitoring_interrupts 192.67 ± 4% -100.0% 0.00 interrupts.IWI:IRQ_work_interrupts 729455 ± 4% -100.0% 4.00 ± 70% interrupts.NMI:Non-maskable_interrupts 729455 ± 4% -100.0% 4.00 ± 70% interrupts.PMI:Performance_monitoring_interrupts 51460 ± 7% +86.2% 95806 ± 2% interrupts.RES:Rescheduling_interrupts 314.50 ± 3% -28.1% 226.17 ± 30% interrupts.TLB:TLB_shootdowns *************************************************************************************************** lkp-ivb-2ep1: 48 threads Intel(R) Xeon(R) CPU E5-2697 v2 @ 2.70GHz with 112G memory ========================================================================================= compiler/cpufreq_governor/kconfig/mode/nr_task/rootfs/tbox_group/test/testcase/ucode: gcc-9/performance/x86_64-rhel-8.3/thread/100%/debian-10.4-x86_64-20200603.cgz/lkp-ivb-2ep1/poll1/will-it-scale/0x42e commit: dde38d5b80 ("kernel: don't include PF_IO_WORKERs as part of same_thread_group()") 43b2a76b1a ("proc: don't show PF_IO_WORKER threads as threads in /proc/<pid>/task/") dde38d5b805358c7 43b2a76b1a5abcc9833b463bef1 ---------------- --------------------------- %stddev %change %stddev \ | \ 59738282 -2.2% 58422246 will-it-scale.48.threads 1244547 -2.2% 1217129 will-it-scale.per_thread_ops 59738282 -2.2% 58422246 will-it-scale.workload 2.54 +3.5 6.04 ± 2% mpstat.cpu.all.idle% 0.02 ± 14% +0.0 0.04 
mpstat.cpu.all.soft% 1379034 +56.6% 2159868 vmstat.memory.cache 1272 ± 2% +50.3% 1912 vmstat.system.cs 328771 ± 6% +75.5% 577027 ± 20% numa-numastat.node0.local_node 364262 ± 5% +65.8% 603798 ± 21% numa-numastat.node0.numa_hit 356834 ± 5% +68.2% 600293 ± 20% numa-numastat.node1.local_node 364821 ± 5% +69.1% 616999 ± 21% numa-numastat.node1.numa_hit 101943 +765.8% 882625 ± 4% meminfo.Active 101943 +765.8% 882625 ± 4% meminfo.Active(anon) 1297433 +60.0% 2075764 meminfo.Cached 815318 +100.9% 1638128 ± 2% meminfo.Committed_AS 2185803 +43.6% 3138095 meminfo.Memused 116458 +668.3% 894794 ± 4% meminfo.Shmem 2409774 +30.2% 3138133 meminfo.max_used_kB 13355 ± 41% +822.5% 123204 ± 15% numa-vmstat.node0.nr_active_anon 161959 ± 3% +67.0% 270513 ± 7% numa-vmstat.node0.nr_file_pages 14956 ± 39% +736.5% 125114 ± 15% numa-vmstat.node0.nr_shmem 13355 ± 41% +822.5% 123204 ± 15% numa-vmstat.node0.nr_zone_active_anon 12128 ± 46% +704.6% 97588 ± 11% numa-vmstat.node1.nr_active_anon 162378 ± 3% +53.1% 248564 ± 5% numa-vmstat.node1.nr_file_pages 14137 ± 42% +598.3% 98721 ± 11% numa-vmstat.node1.nr_shmem 12128 ± 46% +704.6% 97588 ± 11% numa-vmstat.node1.nr_zone_active_anon 5488 ± 3% +33.7% 7337 ± 3% slabinfo.kmalloc-256.active_objs 5529 ± 3% +33.7% 7392 ± 3% slabinfo.kmalloc-256.num_objs 9068 ± 2% -9.6% 8198 slabinfo.proc_inode_cache.active_objs 9068 ± 2% -9.6% 8198 slabinfo.proc_inode_cache.num_objs 17898 +15.2% 20618 slabinfo.radix_tree_node.active_objs 638.83 +15.2% 736.00 slabinfo.radix_tree_node.active_slabs 17898 +15.2% 20618 slabinfo.radix_tree_node.num_objs 638.83 +15.2% 736.00 slabinfo.radix_tree_node.num_slabs 53322 ± 41% +823.0% 492188 ± 15% numa-meminfo.node0.Active 53322 ± 41% +823.0% 492188 ± 15% numa-meminfo.node0.Active(anon) 647688 ± 3% +67.0% 1081422 ± 7% numa-meminfo.node0.FilePages 1128100 ± 3% +42.7% 1609970 ± 5% numa-meminfo.node0.MemUsed 59674 ± 39% +737.6% 499822 ± 15% numa-meminfo.node0.Shmem 48465 ± 46% +704.4% 389842 ± 11% numa-meminfo.node1.Active 48465 ± 46% 
+704.4% 389842 ± 11% numa-meminfo.node1.Active(anon) 649545 ± 3% +53.0% 993748 ± 5% numa-meminfo.node1.FilePages 1057395 ± 3% +44.5% 1527564 ± 4% numa-meminfo.node1.MemUsed 56584 ± 42% +597.0% 394377 ± 11% numa-meminfo.node1.Shmem 9204 ± 93% +100.9% 18488 ± 64% sched_debug.cfs_rq:/.load.stddev 1403 ± 4% +39.6% 1958 ± 2% sched_debug.cfs_rq:/.runnable_avg.max 104.09 ± 5% +127.0% 236.30 ± 3% sched_debug.cfs_rq:/.runnable_avg.stddev 1112 ± 2% +19.1% 1325 ± 2% sched_debug.cfs_rq:/.util_avg.max 835.58 -28.3% 598.81 ± 5% sched_debug.cfs_rq:/.util_avg.min 61.94 ± 5% +87.5% 116.16 ± 5% sched_debug.cfs_rq:/.util_avg.stddev 1022 +60.9% 1645 ± 12% sched_debug.cfs_rq:/.util_est_enqueued.max 105.35 ± 65% +112.0% 223.36 ± 12% sched_debug.cfs_rq:/.util_est_enqueued.stddev 707908 ± 9% +24.2% 879248 ± 5% sched_debug.cpu.avg_idle.avg 1063850 ± 13% +37.8% 1465737 ± 6% sched_debug.cpu.avg_idle.max 126226 ± 11% +26.1% 159118 ± 15% sched_debug.cpu.avg_idle.stddev 0.00 ± 20% -18.6% 0.00 ± 2% sched_debug.cpu.next_balance.stddev 0.23 ± 5% +28.7% 0.29 ± 4% sched_debug.cpu.nr_running.stddev 6079 ± 2% +32.7% 8069 sched_debug.cpu.nr_switches.avg 20560 ± 5% -25.0% 15421 ± 7% sched_debug.cpu.nr_switches.max 2039 ± 17% +95.6% 3988 ± 13% sched_debug.cpu.nr_switches.min 4387 ± 3% -42.5% 2520 ± 19% sched_debug.cpu.nr_switches.stddev 25464 +766.1% 220557 ± 4% proc-vmstat.nr_active_anon 63374 +2.9% 65240 proc-vmstat.nr_anon_pages 324325 +60.0% 518843 proc-vmstat.nr_file_pages 66898 +1.8% 68101 proc-vmstat.nr_inactive_anon 9338 -5.6% 8819 proc-vmstat.nr_mapped 1084 +9.0% 1182 proc-vmstat.nr_page_table_pages 29081 +668.9% 223601 ± 4% proc-vmstat.nr_shmem 25464 +766.1% 220557 ± 4% proc-vmstat.nr_zone_active_anon 66898 +1.8% 68101 proc-vmstat.nr_zone_inactive_anon 10303 ± 28% -56.9% 4446 ± 52% proc-vmstat.numa_hint_faults_local 760864 +64.3% 1249911 ± 2% proc-vmstat.numa_hit 717323 +68.2% 1206367 ± 2% proc-vmstat.numa_local 145574 ± 13% -42.7% 83394 ± 28% proc-vmstat.numa_pte_updates 37598 -20.5% 29891 ± 
4% proc-vmstat.pgactivate 816352 +71.9% 1403351 ± 2% proc-vmstat.pgalloc_normal 893418 -6.9% 831535 proc-vmstat.pgfault 762113 +14.3% 871330 ± 2% proc-vmstat.pgfree 50605 -6.2% 47486 ± 3% proc-vmstat.pgreuse 98.56 -98.6 0.00 perf-profile.calltrace.cycles-pp.__poll 51.59 -51.6 0.00 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.__poll 32.22 -32.2 0.00 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_safe_stack.__poll 26.60 -26.6 0.00 perf-profile.calltrace.cycles-pp.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe.__poll 23.39 -23.4 0.00 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.__poll 20.60 -20.6 0.00 perf-profile.calltrace.cycles-pp.__x64_sys_poll.do_syscall_64.entry_SYSCALL_64_after_hwframe.__poll 17.01 -17.0 0.00 perf-profile.calltrace.cycles-pp.do_sys_poll.__x64_sys_poll.do_syscall_64.entry_SYSCALL_64_after_hwframe.__poll 14.54 -14.5 0.00 perf-profile.calltrace.cycles-pp.syscall_return_via_sysret.__poll 13.78 -13.8 0.00 perf-profile.calltrace.cycles-pp.__entry_text_start.__poll 98.80 -98.8 0.00 perf-profile.children.cycles-pp.__poll 51.95 -52.0 0.00 perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe 27.71 -27.7 0.00 perf-profile.children.cycles-pp.__entry_text_start 26.72 -26.7 0.00 perf-profile.children.cycles-pp.syscall_exit_to_user_mode 23.59 -23.6 0.00 perf-profile.children.cycles-pp.do_syscall_64 20.71 -20.7 0.00 perf-profile.children.cycles-pp.__x64_sys_poll 17.47 -17.5 0.00 perf-profile.children.cycles-pp.do_sys_poll 16.54 -16.5 0.00 perf-profile.children.cycles-pp.entry_SYSCALL_64_safe_stack 16.32 -16.3 0.00 perf-profile.children.cycles-pp.syscall_return_via_sysret 25.93 -25.9 0.00 perf-profile.self.cycles-pp.__entry_text_start 25.01 -25.0 0.00 perf-profile.self.cycles-pp.syscall_exit_to_user_mode 16.28 -16.3 0.00 perf-profile.self.cycles-pp.syscall_return_via_sysret 6.99 -7.0 0.00 perf-profile.self.cycles-pp.do_sys_poll 40603 ± 14% +119.9% 89296 ± 21% softirqs.CPU0.RCU 39894 ± 
14% +72.5% 68812 ± 16% softirqs.CPU1.RCU 35093 ± 15% +80.3% 63288 ± 14% softirqs.CPU10.RCU 35614 ± 15% +90.9% 67979 ± 11% softirqs.CPU11.RCU 35000 ± 12% +79.5% 62812 ± 7% softirqs.CPU12.RCU 31579 ± 11% +98.5% 62671 ± 12% softirqs.CPU13.RCU 32174 ± 10% +114.7% 69079 ± 13% softirqs.CPU14.RCU 31104 ± 11% +109.4% 65136 ± 9% softirqs.CPU15.RCU 36318 ± 14% +92.3% 69837 ± 7% softirqs.CPU16.RCU 32951 ± 18% +105.5% 67709 ± 13% softirqs.CPU17.RCU 36326 ± 14% +90.5% 69212 ± 6% softirqs.CPU18.RCU 34802 ± 13% +89.3% 65899 ± 15% softirqs.CPU19.RCU 38578 ± 20% +90.1% 73339 ± 19% softirqs.CPU2.RCU 35302 ± 14% +87.9% 66341 ± 7% softirqs.CPU20.RCU 35113 ± 16% +93.3% 67861 softirqs.CPU21.RCU 35292 ± 15% +90.3% 67170 ± 3% softirqs.CPU22.RCU 35612 ± 16% +82.5% 64992 ± 5% softirqs.CPU23.RCU 31301 ± 16% +149.5% 78086 ± 21% softirqs.CPU24.RCU 34412 ± 14% +87.9% 64657 ± 20% softirqs.CPU25.RCU 32752 ± 14% +101.7% 66067 ± 15% softirqs.CPU26.RCU 34903 ± 16% +79.1% 62519 ± 15% softirqs.CPU27.RCU 34158 ± 16% +76.9% 60417 ± 9% softirqs.CPU28.RCU 33748 ± 17% +89.0% 63791 ± 20% softirqs.CPU29.RCU 38876 ± 15% +73.5% 67453 ± 9% softirqs.CPU3.RCU 33417 ± 16% +113.8% 71458 ± 17% softirqs.CPU30.RCU 33281 ± 16% +83.8% 61177 ± 15% softirqs.CPU31.RCU 29887 ± 16% +80.3% 53893 ± 12% softirqs.CPU32.RCU 30644 ± 16% +94.5% 59592 ± 6% softirqs.CPU33.RCU 31044 ± 20% +69.1% 52510 ± 18% softirqs.CPU34.RCU 29665 ± 14% +86.4% 55307 ± 12% softirqs.CPU35.RCU 35628 ± 16% +83.5% 65376 ± 12% softirqs.CPU36.RCU 32107 ± 18% +103.8% 65444 ± 10% softirqs.CPU37.RCU 34464 ± 14% +100.6% 69129 ± 7% softirqs.CPU38.RCU 34575 ± 15% +92.1% 66419 ± 13% softirqs.CPU39.RCU 37352 ± 16% +75.6% 65588 ± 9% softirqs.CPU4.RCU 38060 ± 15% +87.5% 71346 ± 4% softirqs.CPU40.RCU 32929 ± 23% +101.2% 66264 ± 17% softirqs.CPU41.RCU 37962 ± 13% +90.0% 72112 ± 5% softirqs.CPU42.RCU 36473 ± 14% +88.1% 68587 ± 9% softirqs.CPU43.RCU 36698 ± 14% +81.1% 66468 ± 3% softirqs.CPU44.RCU 36159 ± 16% +85.3% 67014 ± 3% softirqs.CPU45.RCU 36154 ± 16% +86.9% 67582 
[per-CPU softirqs.CPU*.RCU entries omitted; each CPU's RCU softirq count rose by roughly +70% to +95%]
   1677186 ± 14%     +90.5%    3194899        softirqs.RCU
     33514 ±  3%     +52.1%      50986        softirqs.TIMER
      0.09 ± 10%    +204.1%       0.26 ±  6%  perf-stat.i.MPKI
    246018 ±  2%    +332.2%    1063300 ±  7%  perf-stat.i.cache-misses
   2755412 ±  3%    +299.4%   11004931 ±  6%  perf-stat.i.cache-references
      1244 ±  2%     +51.6%       1886        perf-stat.i.context-switches
    111.57 ±  6%     +37.7%     153.66 ±  2%  perf-stat.i.cpu-migrations
    955562 ±  3%     -85.0%     143699 ±  8%  perf-stat.i.cycles-between-cache-misses
 4.299e+10            +3.4%  4.445e+10        perf-stat.i.instructions
      0.30            +3.5%       0.31        perf-stat.i.ipc
     93609 ±  2%    +365.9%     436129 ± 11%  perf-stat.i.node-load-misses
     50561 ±  5%    +725.2%     417237 ±  7%  perf-stat.i.node-store-misses
      2859            -7.2%       2654        perf-stat.i.page-faults
[remaining perf-stat.i.*, perf-stat.overall.*, and perf-stat.ps.* entries omitted; they follow the same pattern as the instantaneous counters above]
    128311           -24.4%      96996 ±  2%  interrupts.CAL:Function_call_interrupts
[per-CPU interrupts.CPU*.{CAL,NMI,PMI,RES,TLB} entries omitted; NMI/PMI counts drop -100.0% on every CPU, CAL counts fall, and RES counts rise]
    314820 ±  7%    -100.0%       1.17 ±223%  interrupts.NMI:Non-maskable_interrupts
    314820 ±  7%    -100.0%       1.17 ±223%  interrupts.PMI:Performance_monitoring_interrupts
     27370 ±  6%    +104.9%      56082 ±  2%  interrupts.RES:Rescheduling_interrupts
[perf-sched.sch_delay.*, perf-sched.wait_and_delay.*, and perf-sched.wait_time.* entries omitted; every one drops -100.0% to 0.00, i.e. no perf-sched data was recorded for the second run]
***************************************************************************************************
lkp-ivb-2ep1: 48 threads Intel(R) Xeon(R) CPU E5-2697 v2 @ 2.70GHz with 112G memory
=========================================================================================
compiler/cpufreq_governor/kconfig/mode/nr_task/rootfs/tbox_group/test/testcase/ucode:
  gcc-9/performance/x86_64-rhel-8.3/process/50%/debian-10.4-x86_64-20200603.cgz/lkp-ivb-2ep1/pread1/will-it-scale/0x42e

commit:
  dde38d5b80 ("kernel: don't include PF_IO_WORKERs as part of same_thread_group()")
  43b2a76b1a ("proc: don't show PF_IO_WORKER threads as threads in /proc/<pid>/task/")

dde38d5b805358c7 43b2a76b1a5abcc9833b463bef1
---------------- ---------------------------
         %stddev     %change         %stddev
             \          |                \
  32174247            -2.6%   31345253        will-it-scale.24.processes
     49.34            -8.3%      45.25        will-it-scale.24.processes_idle
   1340593            -2.6%    1306051        will-it-scale.per_process_ops
  32174247            -2.6%   31345253        will-it-scale.workload
     61834         +2002.0%    1299755        meminfo.Active
     61834         +2002.0%    1299755        meminfo.Active(anon)
   1255800           +98.5%    2492662        meminfo.Cached
    441192          +290.3%    1722149        meminfo.Committed_AS
     74825         +1653.0%    1311691        meminfo.Shmem
      1225 ±  2%     +47.6%       1808 ±  2%  vmstat.system.cs
[numa-numastat.*, vmstat.*, sched_debug.*, slabinfo.*, numa-meminfo.*, numa-vmstat.*, and proc-vmstat.* entries omitted; they show the same active-anon/shmem growth and increased NUMA traffic]
[perf-profile.calltrace/children/self entries omitted; every entry (e.g. the __libc_pread call chains) drops -100.0% to 0.00, consistent with the NMI/PMI counts above, i.e. no profile samples were recorded for the second run]
[per-CPU softirqs.CPU*.RCU entries omitted (report truncated here); per-CPU RCU softirq counts again rise, by roughly +60% to +950%]
+97.5% 34370 ± 19% softirqs.CPU26.RCU 16930 ± 13% +588.6% 116579 ±153% softirqs.CPU27.RCU 16811 ± 21% +117.9% 36632 ± 18% softirqs.CPU28.RCU 18904 ± 21% +537.8% 120568 ±149% softirqs.CPU29.RCU 23340 ± 24% +88.6% 44009 ± 9% softirqs.CPU3.RCU 16236 ± 19% +100.2% 32511 ± 26% softirqs.CPU30.RCU 17583 ± 22% +87.9% 33031 ± 18% softirqs.CPU31.RCU 15133 ± 17% +97.0% 29817 ± 10% softirqs.CPU32.RCU 14155 ± 6% +142.6% 34335 ± 19% softirqs.CPU33.RCU 14907 ± 8% +105.2% 30596 ± 17% softirqs.CPU34.RCU 17290 ± 26% +106.8% 35749 ± 18% softirqs.CPU35.RCU 17657 ± 26% +89.5% 33455 ± 13% softirqs.CPU36.RCU 15973 ± 12% +123.0% 35617 ± 18% softirqs.CPU37.RCU 16076 ± 7% +625.1% 116569 ±156% softirqs.CPU38.RCU 17260 ± 13% +123.5% 38585 ± 12% softirqs.CPU39.RCU 22100 ± 24% +116.2% 47774 ± 33% softirqs.CPU4.RCU 17436 ± 15% +128.0% 39762 ± 17% softirqs.CPU40.RCU 18958 ± 30% +538.8% 121115 ±145% softirqs.CPU41.RCU 19060 ± 14% +110.9% 40191 ± 17% softirqs.CPU42.RCU 17989 ± 14% +123.7% 40249 ± 21% softirqs.CPU43.RCU 18994 ± 16% +106.2% 39168 ± 22% softirqs.CPU44.RCU 18178 ± 11% +104.4% 37160 ± 23% softirqs.CPU45.RCU 19523 ± 12% +90.3% 37155 ± 11% softirqs.CPU46.RCU 19627 ± 20% +61.9% 31774 ± 15% softirqs.CPU47.RCU 20905 ± 17% +104.9% 42825 ± 19% softirqs.CPU5.RCU 21597 ± 24% +90.4% 41126 ± 27% softirqs.CPU6.RCU 21684 ± 19% +365.1% 100860 ±117% softirqs.CPU7.RCU 22081 ± 14% +112.4% 46911 ± 22% softirqs.CPU8.RCU 933853 ± 8% +198.4% 2786674 ± 6% softirqs.RCU 25875 ± 5% +66.9% 43189 ± 4% softirqs.TIMER 0.24 ± 15% +108.0% 0.51 ± 6% perf-stat.i.MPKI 1.079e+10 +4.6% 1.128e+10 perf-stat.i.branch-instructions 0.94 -0.0 0.91 perf-stat.i.branch-miss-rate% 1.003e+08 +2.0% 1.023e+08 perf-stat.i.branch-misses 1.90 ± 13% +10.3 12.23 ± 9% perf-stat.i.cache-miss-rate% 271081 ± 3% +1125.1% 3321051 ± 8% perf-stat.i.cache-misses 12121406 ± 15% +124.5% 27215732 ± 7% perf-stat.i.cache-references 1200 ± 2% +48.6% 1783 ± 2% perf-stat.i.context-switches 1.38 +3.2% 1.42 perf-stat.i.cpi 7.223e+10 +8.0% 7.801e+10 
perf-stat.i.cpu-cycles 71.77 -2.8% 69.73 perf-stat.i.cpu-migrations 391619 ± 2% -92.1% 30968 ± 50% perf-stat.i.cycles-between-cache-misses 0.34 -0.0 0.33 perf-stat.i.dTLB-store-miss-rate% 64281882 -1.2% 63510875 perf-stat.i.dTLB-store-misses 33566536 -4.4% 32101494 perf-stat.i.iTLB-load-misses 5.247e+10 +4.6% 5.489e+10 perf-stat.i.instructions 1577 +9.5% 1726 perf-stat.i.instructions-per-iTLB-miss 0.73 -3.1% 0.70 perf-stat.i.ipc 0.40 ± 4% -78.5% 0.09 ± 34% perf-stat.i.major-faults 1.50 +8.0% 1.63 perf-stat.i.metric.GHz 0.72 ± 9% -78.5% 0.15 perf-stat.i.metric.K/sec 1123 +1.4% 1138 perf-stat.i.metric.M/sec 2891 -3.6% 2786 perf-stat.i.minor-faults 47.36 +1.6 48.98 perf-stat.i.node-load-miss-rate% 107976 ± 3% +1660.0% 1900404 ± 11% perf-stat.i.node-load-misses 130609 +1396.4% 1954489 ± 11% perf-stat.i.node-loads 34.59 ± 6% +11.5 46.05 ± 4% perf-stat.i.node-store-miss-rate% 59311 ± 7% +2382.6% 1472493 ± 27% perf-stat.i.node-store-misses 119688 +1282.0% 1654044 ± 25% perf-stat.i.node-stores 2891 -3.6% 2786 perf-stat.i.page-faults 0.23 ± 15% +114.6% 0.50 ± 7% perf-stat.overall.MPKI 0.93 -0.0 0.91 perf-stat.overall.branch-miss-rate% 2.29 ± 15% +10.0 12.25 ± 9% perf-stat.overall.cache-miss-rate% 1.38 +3.2% 1.42 perf-stat.overall.cpi 266141 ± 3% -91.1% 23675 ± 8% perf-stat.overall.cycles-between-cache-misses 0.34 -0.0 0.33 perf-stat.overall.dTLB-store-miss-rate% 1563 +9.4% 1710 perf-stat.overall.instructions-per-iTLB-miss 0.73 -3.1% 0.70 perf-stat.overall.ipc 45.24 +4.1 49.29 perf-stat.overall.node-load-miss-rate% 33.09 ± 4% +13.9 46.98 perf-stat.overall.node-store-miss-rate% 490908 +7.4% 527414 perf-stat.overall.path-length 1.075e+10 +4.6% 1.124e+10 perf-stat.ps.branch-instructions 99957025 +2.0% 1.02e+08 perf-stat.ps.branch-misses 270839 ± 3% +1122.1% 3309953 ± 8% perf-stat.ps.cache-misses 12084255 ± 15% +124.5% 27126455 ± 7% perf-stat.ps.cache-references 1196 ± 2% +48.5% 1777 ± 2% perf-stat.ps.context-switches 7.199e+10 +8.0% 7.774e+10 perf-stat.ps.cpu-cycles 71.54 -2.8% 
69.51 perf-stat.ps.cpu-migrations 64064667 -1.2% 63294229 perf-stat.ps.dTLB-store-misses 33453056 -4.4% 31991765 perf-stat.ps.iTLB-load-misses 5.229e+10 +4.6% 5.471e+10 perf-stat.ps.instructions 0.40 ± 4% -78.4% 0.09 ± 34% perf-stat.ps.major-faults 2882 -3.6% 2777 perf-stat.ps.minor-faults 107680 ± 3% +1658.7% 1893794 ± 11% perf-stat.ps.node-load-misses 130298 +1394.8% 1947735 ± 11% perf-stat.ps.node-loads 59232 ± 7% +2377.4% 1467437 ± 27% perf-stat.ps.node-store-misses 119549 +1278.9% 1648476 ± 25% perf-stat.ps.node-stores 2882 -3.7% 2777 perf-stat.ps.page-faults 1.579e+13 +4.7% 1.653e+13 perf-stat.total.instructions 57963 -6.5% 54206 interrupts.CAL:Function_call_interrupts 2708 ± 34% -100.0% 0.00 interrupts.CPU0.NMI:Non-maskable_interrupts 2708 ± 34% -100.0% 0.00 interrupts.CPU0.PMI:Performance_monitoring_interrupts 3773 ± 61% -100.0% 0.00 interrupts.CPU1.NMI:Non-maskable_interrupts 3773 ± 61% -100.0% 0.00 interrupts.CPU1.PMI:Performance_monitoring_interrupts 5117 ± 48% -100.0% 0.00 interrupts.CPU10.NMI:Non-maskable_interrupts 5117 ± 48% -100.0% 0.00 interrupts.CPU10.PMI:Performance_monitoring_interrupts 3932 ± 31% -100.0% 0.00 interrupts.CPU11.NMI:Non-maskable_interrupts 3932 ± 31% -100.0% 0.00 interrupts.CPU11.PMI:Performance_monitoring_interrupts 4655 ± 60% -100.0% 0.00 interrupts.CPU12.NMI:Non-maskable_interrupts 4655 ± 60% -100.0% 0.00 interrupts.CPU12.PMI:Performance_monitoring_interrupts 4903 ± 51% -100.0% 0.00 interrupts.CPU13.NMI:Non-maskable_interrupts 4903 ± 51% -100.0% 0.00 interrupts.CPU13.PMI:Performance_monitoring_interrupts 4380 ± 64% -100.0% 0.00 interrupts.CPU14.NMI:Non-maskable_interrupts 4380 ± 64% -100.0% 0.00 interrupts.CPU14.PMI:Performance_monitoring_interrupts 3342 ± 54% -100.0% 0.00 interrupts.CPU15.NMI:Non-maskable_interrupts 3342 ± 54% -100.0% 0.00 interrupts.CPU15.PMI:Performance_monitoring_interrupts 4933 ± 50% -100.0% 0.00 interrupts.CPU16.NMI:Non-maskable_interrupts 4933 ± 50% -100.0% 0.00 
interrupts.CPU16.PMI:Performance_monitoring_interrupts 4272 ± 67% -100.0% 0.00 interrupts.CPU17.NMI:Non-maskable_interrupts 4272 ± 67% -100.0% 0.00 interrupts.CPU17.PMI:Performance_monitoring_interrupts 4640 ± 37% -100.0% 0.00 interrupts.CPU18.NMI:Non-maskable_interrupts 4640 ± 37% -100.0% 0.00 interrupts.CPU18.PMI:Performance_monitoring_interrupts 3997 ± 28% -100.0% 0.00 interrupts.CPU19.NMI:Non-maskable_interrupts 3997 ± 28% -100.0% 0.00 interrupts.CPU19.PMI:Performance_monitoring_interrupts 4229 ± 47% -100.0% 0.00 interrupts.CPU2.NMI:Non-maskable_interrupts 4229 ± 47% -100.0% 0.00 interrupts.CPU2.PMI:Performance_monitoring_interrupts 3888 ± 55% -100.0% 0.00 interrupts.CPU20.NMI:Non-maskable_interrupts 3888 ± 55% -100.0% 0.00 interrupts.CPU20.PMI:Performance_monitoring_interrupts 5098 ± 48% -100.0% 0.00 interrupts.CPU21.NMI:Non-maskable_interrupts 5098 ± 48% -100.0% 0.00 interrupts.CPU21.PMI:Performance_monitoring_interrupts 4788 ± 51% -100.0% 0.00 interrupts.CPU22.NMI:Non-maskable_interrupts 4788 ± 51% -100.0% 0.00 interrupts.CPU22.PMI:Performance_monitoring_interrupts 4535 ± 42% -100.0% 0.00 interrupts.CPU23.NMI:Non-maskable_interrupts 4535 ± 42% -100.0% 0.00 interrupts.CPU23.PMI:Performance_monitoring_interrupts 6687 ± 34% -100.0% 0.00 interrupts.CPU24.NMI:Non-maskable_interrupts 6687 ± 34% -100.0% 0.00 interrupts.CPU24.PMI:Performance_monitoring_interrupts 3988 ± 52% -100.0% 0.00 interrupts.CPU25.NMI:Non-maskable_interrupts 3988 ± 52% -100.0% 0.00 interrupts.CPU25.PMI:Performance_monitoring_interrupts 2905 ± 37% -100.0% 0.00 interrupts.CPU26.NMI:Non-maskable_interrupts 2905 ± 37% -100.0% 0.00 interrupts.CPU26.PMI:Performance_monitoring_interrupts 4086 ± 66% -100.0% 0.00 interrupts.CPU27.NMI:Non-maskable_interrupts 4086 ± 66% -100.0% 0.00 interrupts.CPU27.PMI:Performance_monitoring_interrupts 5137 ± 53% -100.0% 0.00 interrupts.CPU28.NMI:Non-maskable_interrupts 5137 ± 53% -100.0% 0.00 interrupts.CPU28.PMI:Performance_monitoring_interrupts 4698 ± 43% -100.0% 
0.00 interrupts.CPU29.NMI:Non-maskable_interrupts 4698 ± 43% -100.0% 0.00 interrupts.CPU29.PMI:Performance_monitoring_interrupts 6014 ± 39% -100.0% 0.00 interrupts.CPU3.NMI:Non-maskable_interrupts 6014 ± 39% -100.0% 0.00 interrupts.CPU3.PMI:Performance_monitoring_interrupts 3091 ± 30% -100.0% 0.00 interrupts.CPU30.NMI:Non-maskable_interrupts 3091 ± 30% -100.0% 0.00 interrupts.CPU30.PMI:Performance_monitoring_interrupts 4042 ± 52% -100.0% 0.00 interrupts.CPU31.NMI:Non-maskable_interrupts 4042 ± 52% -100.0% 0.00 interrupts.CPU31.PMI:Performance_monitoring_interrupts 2411 ± 43% -100.0% 0.00 interrupts.CPU32.NMI:Non-maskable_interrupts 2411 ± 43% -100.0% 0.00 interrupts.CPU32.PMI:Performance_monitoring_interrupts 1840 ± 26% -37.6% 1148 ± 23% interrupts.CPU33.CAL:Function_call_interrupts 2570 ± 96% -100.0% 0.00 interrupts.CPU33.NMI:Non-maskable_interrupts 2570 ± 96% -100.0% 0.00 interrupts.CPU33.PMI:Performance_monitoring_interrupts 100.00 ± 16% +77.2% 177.17 ± 28% interrupts.CPU33.RES:Rescheduling_interrupts 2373 ± 42% -100.0% 0.00 interrupts.CPU34.NMI:Non-maskable_interrupts 2373 ± 42% -100.0% 0.00 interrupts.CPU34.PMI:Performance_monitoring_interrupts 3532 ± 73% -100.0% 0.00 interrupts.CPU35.NMI:Non-maskable_interrupts 3532 ± 73% -100.0% 0.00 interrupts.CPU35.PMI:Performance_monitoring_interrupts 3293 ± 74% -100.0% 0.00 interrupts.CPU36.NMI:Non-maskable_interrupts 3293 ± 74% -100.0% 0.00 interrupts.CPU36.PMI:Performance_monitoring_interrupts 2805 ± 42% -100.0% 0.00 interrupts.CPU37.NMI:Non-maskable_interrupts 2805 ± 42% -100.0% 0.00 interrupts.CPU37.PMI:Performance_monitoring_interrupts 4824 ± 42% -100.0% 0.00 interrupts.CPU38.NMI:Non-maskable_interrupts 4824 ± 42% -100.0% 0.00 interrupts.CPU38.PMI:Performance_monitoring_interrupts 120.00 ± 52% +89.4% 227.33 ± 23% interrupts.CPU38.RES:Rescheduling_interrupts 4038 ± 52% -100.0% 0.00 interrupts.CPU39.NMI:Non-maskable_interrupts 4038 ± 52% -100.0% 0.00 interrupts.CPU39.PMI:Performance_monitoring_interrupts 3067 ± 39% 
-100.0% 0.00 interrupts.CPU4.NMI:Non-maskable_interrupts 3067 ± 39% -100.0% 0.00 interrupts.CPU4.PMI:Performance_monitoring_interrupts 4436 ± 60% -100.0% 1.00 ±223% interrupts.CPU40.NMI:Non-maskable_interrupts 4436 ± 60% -100.0% 1.00 ±223% interrupts.CPU40.PMI:Performance_monitoring_interrupts 4425 ± 40% -100.0% 0.00 interrupts.CPU41.NMI:Non-maskable_interrupts 4425 ± 40% -100.0% 0.00 interrupts.CPU41.PMI:Performance_monitoring_interrupts 3077 ± 79% -100.0% 0.00 interrupts.CPU42.NMI:Non-maskable_interrupts 3077 ± 79% -100.0% 0.00 interrupts.CPU42.PMI:Performance_monitoring_interrupts 2626 ± 32% -100.0% 0.00 interrupts.CPU43.NMI:Non-maskable_interrupts 2626 ± 32% -100.0% 0.00 interrupts.CPU43.PMI:Performance_monitoring_interrupts 5302 ± 43% -100.0% 0.00 interrupts.CPU44.NMI:Non-maskable_interrupts 5302 ± 43% -100.0% 0.00 interrupts.CPU44.PMI:Performance_monitoring_interrupts 2654 ± 35% -100.0% 0.00 interrupts.CPU45.NMI:Non-maskable_interrupts 2654 ± 35% -100.0% 0.00 interrupts.CPU45.PMI:Performance_monitoring_interrupts 3091 ± 41% -100.0% 0.00 interrupts.CPU46.NMI:Non-maskable_interrupts 3091 ± 41% -100.0% 0.00 interrupts.CPU46.PMI:Performance_monitoring_interrupts 3730 ± 44% -100.0% 0.00 interrupts.CPU47.NMI:Non-maskable_interrupts 3730 ± 44% -100.0% 0.00 interrupts.CPU47.PMI:Performance_monitoring_interrupts 4877 ± 53% -100.0% 0.00 interrupts.CPU5.NMI:Non-maskable_interrupts 4877 ± 53% -100.0% 0.00 interrupts.CPU5.PMI:Performance_monitoring_interrupts 3268 ± 29% -100.0% 0.00 interrupts.CPU6.NMI:Non-maskable_interrupts 3268 ± 29% -100.0% 0.00 interrupts.CPU6.PMI:Performance_monitoring_interrupts 5376 ± 50% -100.0% 0.00 interrupts.CPU7.NMI:Non-maskable_interrupts 5376 ± 50% -100.0% 0.00 interrupts.CPU7.PMI:Performance_monitoring_interrupts 3774 ± 23% -100.0% 0.00 interrupts.CPU8.NMI:Non-maskable_interrupts 3774 ± 23% -100.0% 0.00 interrupts.CPU8.PMI:Performance_monitoring_interrupts 6783 ± 32% -100.0% 0.00 interrupts.CPU9.NMI:Non-maskable_interrupts 6783 ± 32% 
-100.0% 0.00 interrupts.CPU9.PMI:Performance_monitoring_interrupts 289.33 ± 5% -21.8% 226.17 ± 15% interrupts.CPU9.RES:Rescheduling_interrupts 196194 ± 3% -100.0% 1.00 ±223% interrupts.NMI:Non-maskable_interrupts 196194 ± 3% -100.0% 1.00 ±223% interrupts.PMI:Performance_monitoring_interrupts 0.01 ± 5% -100.0% 0.00 perf-sched.sch_delay.avg.ms.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown] 0.01 ± 7% -100.0% 0.00 perf-sched.sch_delay.avg.ms.devkmsg_read.vfs_read.ksys_read.do_syscall_64 0.01 ± 5% -100.0% 0.00 perf-sched.sch_delay.avg.ms.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep.do_syscall_64 0.01 ± 8% -100.0% 0.00 perf-sched.sch_delay.avg.ms.do_syslog.part.0.kmsg_read.vfs_read 0.01 -100.0% 0.00 perf-sched.sch_delay.avg.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64 0.01 -100.0% 0.00 perf-sched.sch_delay.avg.ms.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64 0.01 ± 12% -100.0% 0.00 perf-sched.sch_delay.avg.ms.futex_wait_queue_me.futex_wait.do_futex.__x64_sys_futex 0.00 -100.0% 0.00 perf-sched.sch_delay.avg.ms.pipe_read.new_sync_read.vfs_read.ksys_read 0.01 ± 40% -100.0% 0.00 perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.stop_one_cpu.sched_exec.bprm_execve 0.01 ± 7% -100.0% 0.00 perf-sched.sch_delay.avg.ms.rcu_gp_kthread.kthread.ret_from_fork 0.01 ± 5% -100.0% 0.00 perf-sched.sch_delay.avg.ms.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait 0.01 ± 26% -100.0% 0.00 perf-sched.sch_delay.avg.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_select 0.00 ± 14% -100.0% 0.00 perf-sched.sch_delay.avg.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_sys_poll 0.01 ± 6% -100.0% 0.00 perf-sched.sch_delay.avg.ms.schedule_timeout.kcompactd.kthread.ret_from_fork 0.01 ± 7% -100.0% 0.00 perf-sched.sch_delay.avg.ms.schedule_timeout.rcu_gp_kthread.kthread.ret_from_fork 0.02 ± 40% -100.0% 0.00 
perf-sched.sch_delay.avg.ms.smpboot_thread_fn.kthread.ret_from_fork 0.01 ± 8% -100.0% 0.00 perf-sched.sch_delay.avg.ms.wait_for_partner.fifo_open.do_dentry_open.do_open.isra 0.04 ± 4% -100.0% 0.00 perf-sched.sch_delay.avg.ms.worker_thread.kthread.ret_from_fork 0.01 ± 21% -100.0% 0.00 perf-sched.sch_delay.max.ms.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown] 0.02 ± 9% -100.0% 0.00 perf-sched.sch_delay.max.ms.devkmsg_read.vfs_read.ksys_read.do_syscall_64 0.02 ± 45% -100.0% 0.00 perf-sched.sch_delay.max.ms.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep.do_syscall_64 0.02 ± 17% -100.0% 0.00 perf-sched.sch_delay.max.ms.do_syslog.part.0.kmsg_read.vfs_read 0.03 ± 25% -100.0% 0.00 perf-sched.sch_delay.max.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64 0.04 ± 44% -100.0% 0.00 perf-sched.sch_delay.max.ms.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64 0.01 ± 20% -100.0% 0.00 perf-sched.sch_delay.max.ms.exit_to_user_mode_prepare.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe.[unknown] 0.02 ± 21% -100.0% 0.00 perf-sched.sch_delay.max.ms.futex_wait_queue_me.futex_wait.do_futex.__x64_sys_futex 0.04 ± 14% -100.0% 0.00 perf-sched.sch_delay.max.ms.pipe_read.new_sync_read.vfs_read.ksys_read 0.02 ± 11% -100.0% 0.00 perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.stop_one_cpu.affine_move_task.__set_cpus_allowed_ptr 0.02 ± 48% -100.0% 0.00 perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.stop_one_cpu.sched_exec.bprm_execve 0.01 ± 30% -100.0% 0.00 perf-sched.sch_delay.max.ms.rcu_gp_kthread.kthread.ret_from_fork 0.02 ± 21% -100.0% 0.00 perf-sched.sch_delay.max.ms.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait 0.01 ± 23% -100.0% 0.00 perf-sched.sch_delay.max.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_select 0.01 ± 10% -100.0% 0.00 perf-sched.sch_delay.max.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_sys_poll 
0.02 ± 41% -100.0% 0.00 perf-sched.sch_delay.max.ms.schedule_timeout.kcompactd.kthread.ret_from_fork 0.03 ± 31% -100.0% 0.00 perf-sched.sch_delay.max.ms.schedule_timeout.rcu_gp_kthread.kthread.ret_from_fork 6.02 ± 51% -100.0% 0.00 perf-sched.sch_delay.max.ms.smpboot_thread_fn.kthread.ret_from_fork 0.02 ± 7% -100.0% 0.00 perf-sched.sch_delay.max.ms.wait_for_partner.fifo_open.do_dentry_open.do_open.isra 12.45 -100.0% 0.00 perf-sched.sch_delay.max.ms.worker_thread.kthread.ret_from_fork 0.01 ± 9% -100.0% 0.00 perf-sched.total_sch_delay.average.ms 12.45 -100.0% 0.00 perf-sched.total_sch_delay.max.ms 162.59 ± 7% -100.0% 0.00 perf-sched.total_wait_and_delay.average.ms 6367 ± 9% -100.0% 0.00 perf-sched.total_wait_and_delay.count.ms 7735 ± 4% -100.0% 0.00 perf-sched.total_wait_and_delay.max.ms 162.58 ± 7% -100.0% 0.00 perf-sched.total_wait_time.average.ms 7735 ± 4% -100.0% 0.00 perf-sched.total_wait_time.max.ms 899.64 -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown] 469.21 ± 73% -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.devkmsg_read.vfs_read.ksys_read.do_syscall_64 808.95 ± 4% -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep.do_syscall_64 469.22 ± 73% -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.do_syslog.part.0.kmsg_read.vfs_read 272.98 ± 5% -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64 1.43 -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64 0.09 ± 61% -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_apic_timer_interrupt.[unknown] 0.04 ± 16% -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.exit_to_user_mode_prepare.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe.[unknown] 71.33 ± 2% -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.pipe_read.new_sync_read.vfs_read.ksys_read 0.04 
± 22% -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.preempt_schedule_common.__cond_resched.pagecache_get_page.shmem_getpage_gfp.shmem_file_read_iter 17.39 -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.preempt_schedule_common.__cond_resched.stop_one_cpu.affine_move_task.__set_cpus_allowed_ptr 587.99 ± 9% -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_sys_poll 478.78 -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.schedule_timeout.kcompactd.kthread.ret_from_fork 9.25 ± 31% -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.schedule_timeout.rcu_gp_kthread.kthread.ret_from_fork 652.76 -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.smpboot_thread_fn.kthread.ret_from_fork 0.01 ± 8% -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.wait_for_partner.fifo_open.do_dentry_open.do_open.isra 528.39 ± 2% -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.worker_thread.kthread.ret_from_fork 10.00 -100.0% 0.00 perf-sched.wait_and_delay.count.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown] 7.33 ± 6% -100.0% 0.00 perf-sched.wait_and_delay.count.devkmsg_read.vfs_read.ksys_read.do_syscall_64 22.33 ± 4% -100.0% 0.00 perf-sched.wait_and_delay.count.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep.do_syscall_64 7.33 ± 6% -100.0% 0.00 perf-sched.wait_and_delay.count.do_syslog.part.0.kmsg_read.vfs_read 246.67 -100.0% 0.00 perf-sched.wait_and_delay.count.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64 245.67 -100.0% 0.00 perf-sched.wait_and_delay.count.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64 47.83 ± 35% -100.0% 0.00 perf-sched.wait_and_delay.count.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_apic_timer_interrupt.[unknown] 185.00 ± 10% -100.0% 0.00 perf-sched.wait_and_delay.count.exit_to_user_mode_prepare.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe.[unknown] 2200 -100.0% 0.00 perf-sched.wait_and_delay.count.pipe_read.new_sync_read.vfs_read.ksys_read 
113.33 ± 11% -100.0% 0.00 perf-sched.wait_and_delay.count.preempt_schedule_common.__cond_resched.pagecache_get_page.shmem_getpage_gfp.shmem_file_read_iter 576.00 -100.0% 0.00 perf-sched.wait_and_delay.count.preempt_schedule_common.__cond_resched.stop_one_cpu.affine_move_task.__set_cpus_allowed_ptr 50.67 ± 11% -100.0% 0.00 perf-sched.wait_and_delay.count.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_sys_poll 40.00 -100.0% 0.00 perf-sched.wait_and_delay.count.schedule_timeout.kcompactd.kthread.ret_from_fork 1261 ± 52% -100.0% 0.00 perf-sched.wait_and_delay.count.schedule_timeout.rcu_gp_kthread.kthread.ret_from_fork 680.33 -100.0% 0.00 perf-sched.wait_and_delay.count.smpboot_thread_fn.kthread.ret_from_fork 72.00 -100.0% 0.00 perf-sched.wait_and_delay.count.wait_for_partner.fifo_open.do_dentry_open.do_open.isra 456.50 -100.0% 0.00 perf-sched.wait_and_delay.count.worker_thread.kthread.ret_from_fork 999.63 -100.0% 0.00 perf-sched.wait_and_delay.max.ms.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown] 3039 ± 72% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.devkmsg_read.vfs_read.ksys_read.do_syscall_64 1000 -100.0% 0.00 perf-sched.wait_and_delay.max.ms.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep.do_syscall_64 3039 ± 72% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.do_syslog.part.0.kmsg_read.vfs_read 3228 ± 60% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64 31.13 -100.0% 0.00 perf-sched.wait_and_delay.max.ms.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64 1.00 ±178% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_apic_timer_interrupt.[unknown] 0.58 ±115% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.exit_to_user_mode_prepare.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe.[unknown] 3041 ± 72% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.pipe_read.new_sync_read.vfs_read.ksys_read 
0.17 ± 52% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.preempt_schedule_common.__cond_resched.pagecache_get_page.shmem_getpage_gfp.shmem_file_read_iter 1029 ± 2% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.preempt_schedule_common.__cond_resched.stop_one_cpu.affine_move_task.__set_cpus_allowed_ptr 6493 ± 16% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_sys_poll 504.51 -100.0% 0.00 perf-sched.wait_and_delay.max.ms.schedule_timeout.kcompactd.kthread.ret_from_fork 289.34 ± 25% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.schedule_timeout.rcu_gp_kthread.kthread.ret_from_fork 2708 ± 61% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.smpboot_thread_fn.kthread.ret_from_fork 0.02 ± 7% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.wait_for_partner.fifo_open.do_dentry_open.do_open.isra 6444 ± 17% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.worker_thread.kthread.ret_from_fork 899.63 -100.0% 0.00 perf-sched.wait_time.avg.ms.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown] 469.20 ± 73% -100.0% 0.00 perf-sched.wait_time.avg.ms.devkmsg_read.vfs_read.ksys_read.do_syscall_64 808.94 ± 4% -100.0% 0.00 perf-sched.wait_time.avg.ms.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep.do_syscall_64 469.20 ± 73% -100.0% 0.00 perf-sched.wait_time.avg.ms.do_syslog.part.0.kmsg_read.vfs_read 272.97 ± 5% -100.0% 0.00 perf-sched.wait_time.avg.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64 1.41 -100.0% 0.00 perf-sched.wait_time.avg.ms.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64 0.09 ± 61% -100.0% 0.00 perf-sched.wait_time.avg.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_apic_timer_interrupt.[unknown] 0.06 ± 90% -100.0% 0.00 perf-sched.wait_time.avg.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_call_function_single.[unknown] 0.04 ± 21% -100.0% 0.00 
perf-sched.wait_time.avg.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_reschedule_ipi.[unknown] 0.04 ± 16% -100.0% 0.00 perf-sched.wait_time.avg.ms.exit_to_user_mode_prepare.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe.[unknown] 537.84 ±134% -100.0% 0.00 perf-sched.wait_time.avg.ms.futex_wait_queue_me.futex_wait.do_futex.__x64_sys_futex 71.33 ± 2% -100.0% 0.00 perf-sched.wait_time.avg.ms.pipe_read.new_sync_read.vfs_read.ksys_read 0.03 -100.0% 0.00 perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.copy_page_to_iter.shmem_file_read_iter.new_sync_read 0.04 ± 22% -100.0% 0.00 perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.pagecache_get_page.shmem_getpage_gfp.shmem_file_read_iter 17.39 -100.0% 0.00 perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.stop_one_cpu.affine_move_task.__set_cpus_allowed_ptr 2.56 ± 13% -100.0% 0.00 perf-sched.wait_time.avg.ms.rcu_gp_kthread.kthread.ret_from_fork 365.76 ± 36% -100.0% 0.00 perf-sched.wait_time.avg.ms.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait 587.98 ± 9% -100.0% 0.00 perf-sched.wait_time.avg.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_sys_poll 478.78 -100.0% 0.00 perf-sched.wait_time.avg.ms.schedule_timeout.kcompactd.kthread.ret_from_fork 0.02 ± 39% -100.0% 0.00 perf-sched.wait_time.avg.ms.schedule_timeout.khugepaged.kthread.ret_from_fork 9.25 ± 31% -100.0% 0.00 perf-sched.wait_time.avg.ms.schedule_timeout.rcu_gp_kthread.kthread.ret_from_fork 652.75 -100.0% 0.00 perf-sched.wait_time.avg.ms.smpboot_thread_fn.kthread.ret_from_fork 528.35 ± 2% -100.0% 0.00 perf-sched.wait_time.avg.ms.worker_thread.kthread.ret_from_fork 999.62 -100.0% 0.00 perf-sched.wait_time.max.ms.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown] 3039 ± 72% -100.0% 0.00 perf-sched.wait_time.max.ms.devkmsg_read.vfs_read.ksys_read.do_syscall_64 1000 -100.0% 0.00 
perf-sched.wait_time.max.ms.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep.do_syscall_64 3039 ± 72% -100.0% 0.00 perf-sched.wait_time.max.ms.do_syslog.part.0.kmsg_read.vfs_read 3228 ± 60% -100.0% 0.00 perf-sched.wait_time.max.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64 31.12 -100.0% 0.00 perf-sched.wait_time.max.ms.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64 1.00 ±178% -100.0% 0.00 perf-sched.wait_time.max.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_apic_timer_interrupt.[unknown] 0.06 ± 90% -100.0% 0.00 perf-sched.wait_time.max.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_call_function_single.[unknown] 0.09 ± 44% -100.0% 0.00 perf-sched.wait_time.max.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_reschedule_ipi.[unknown] 0.58 ±115% -100.0% 0.00 perf-sched.wait_time.max.ms.exit_to_user_mode_prepare.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe.[unknown] 2081 ±138% -100.0% 0.00 perf-sched.wait_time.max.ms.futex_wait_queue_me.futex_wait.do_futex.__x64_sys_futex 3041 ± 72% -100.0% 0.00 perf-sched.wait_time.max.ms.pipe_read.new_sync_read.vfs_read.ksys_read 0.10 ± 41% -100.0% 0.00 perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.copy_page_to_iter.shmem_file_read_iter.new_sync_read 0.17 ± 52% -100.0% 0.00 perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.pagecache_get_page.shmem_getpage_gfp.shmem_file_read_iter 1029 ± 2% -100.0% 0.00 perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.stop_one_cpu.affine_move_task.__set_cpus_allowed_ptr 4.49 ± 11% -100.0% 0.00 perf-sched.wait_time.max.ms.rcu_gp_kthread.kthread.ret_from_fork 3794 ± 70% -100.0% 0.00 perf-sched.wait_time.max.ms.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait 6493 ± 16% -100.0% 0.00 perf-sched.wait_time.max.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_sys_poll 504.50 -100.0% 0.00 
perf-sched.wait_time.max.ms.schedule_timeout.kcompactd.kthread.ret_from_fork
      0.02 ± 39%    -100.0%       0.00        perf-sched.wait_time.max.ms.schedule_timeout.khugepaged.kthread.ret_from_fork
    289.33 ± 25%    -100.0%       0.00        perf-sched.wait_time.max.ms.schedule_timeout.rcu_gp_kthread.kthread.ret_from_fork
      2708 ± 61%    -100.0%       0.00        perf-sched.wait_time.max.ms.smpboot_thread_fn.kthread.ret_from_fork
      6444 ± 17%    -100.0%       0.00        perf-sched.wait_time.max.ms.worker_thread.kthread.ret_from_fork

***************************************************************************************************
lkp-skl-fpga01: 104 threads Skylake with 192G memory
=========================================================================================
compiler/cpufreq_governor/kconfig/mode/nr_task/rootfs/tbox_group/test/testcase/ucode:
  gcc-9/performance/x86_64-rhel-8.3/process/50%/debian-10.4-x86_64-20200603.cgz/lkp-skl-fpga01/brk1/will-it-scale/0x2006a0a

commit:
  dde38d5b80 ("kernel: don't include PF_IO_WORKERs as part of same_thread_group()")
  43b2a76b1a ("proc: don't show PF_IO_WORKER threads as threads in /proc/<pid>/task/")

dde38d5b805358c7 43b2a76b1a5abcc9833b463bef1
---------------- ---------------------------
         %stddev     %change         %stddev
             \          |                \
  66531819           -15.3%   56377631        will-it-scale.52.processes
     49.47            -3.8%      47.59        will-it-scale.52.processes_idle
   1279457           -15.3%    1084184        will-it-scale.per_process_ops
      9487            +0.9%       9569        will-it-scale.time.maximum_resident_set_size
  66531819           -15.3%   56377631        will-it-scale.workload
     52376 ±  7%     +32.8%      69564 ± 27%  cpuidle.POLL.time
      0.04 ±  5%      +0.0        0.05 ±  6%  mpstat.cpu.all.soft%
     27.94            +3.2       31.18        mpstat.cpu.all.sys%
     20.15            -3.0       17.13        mpstat.cpu.all.usr%
    534.28 ±  5%     -41.2%     314.38 ±  6%  sched_debug.cfs_rq:/.util_est_enqueued.avg
    423.00 ±  2%     -29.6%     297.65 ±  3%  sched_debug.cfs_rq:/.util_est_enqueued.stddev
      3100 ± 17%     +42.9%       4430 ± 16%  sched_debug.cpu.nr_switches.stddev
    357400 ±  8%    +116.1%     772492 ± 11%  numa-numastat.node0.local_node
    408607          +101.1%     821517 ±  8%  numa-numastat.node0.numa_hit
    431312 ±  6%     +73.2%
747209 ± 7% numa-numastat.node1.local_node 473757 +67.1% 791872 ± 4% numa-numastat.node1.numa_hit 494.33 ± 9% +35972.6% 178318 ± 34% numa-vmstat.node0.nr_active_anon 151965 ± 2% +118.3% 331721 ± 18% numa-vmstat.node0.nr_file_pages 3549 ± 25% +5006.2% 181234 ± 33% numa-vmstat.node0.nr_shmem 494.33 ± 9% +35972.6% 178318 ± 34% numa-vmstat.node0.nr_zone_active_anon 994186 ± 6% +25.6% 1248383 ± 10% numa-vmstat.node0.numa_hit 50.00 -4.0% 48.00 vmstat.cpu.id 20.00 -11.7% 17.67 ± 2% vmstat.cpu.us 1433837 +79.8% 2577361 vmstat.memory.cache 1572 ± 5% +31.2% 2063 ± 4% vmstat.system.cs 211278 -1.8% 207503 vmstat.system.in 2088 ± 9% +34105.3% 714491 ± 34% numa-meminfo.node0.Active 1982 ± 9% +35929.8% 714170 ± 34% numa-meminfo.node0.Active(anon) 607865 ± 2% +118.4% 1327776 ± 18% numa-meminfo.node0.FilePages 1359425 ± 3% +65.5% 2249289 ± 12% numa-meminfo.node0.MemUsed 14202 ± 25% +5010.6% 725830 ± 33% numa-meminfo.node0.Shmem 1323966 ± 3% +50.8% 1996033 ± 12% numa-meminfo.node1.MemUsed 130438 ± 3% +876.0% 1273092 ± 3% meminfo.Active 130230 ± 3% +877.2% 1272567 ± 3% meminfo.Active(anon) 1334231 +85.6% 2476409 meminfo.Cached 971466 +123.4% 2170088 ± 2% meminfo.Committed_AS 2683686 +58.2% 4245240 meminfo.Memused 151555 ± 2% +751.5% 1290535 ± 3% meminfo.Shmem 3145259 +35.0% 4245265 meminfo.max_used_kB 8674 ± 3% +9.2% 9469 ± 2% slabinfo.kmalloc-256.active_objs 8722 ± 3% +9.7% 9564 ± 2% slabinfo.kmalloc-256.num_objs 959.83 ± 8% +15.3% 1107 ± 5% slabinfo.mnt_cache.active_objs 959.83 ± 8% +15.3% 1107 ± 5% slabinfo.mnt_cache.num_objs 14118 -36.3% 8989 slabinfo.proc_inode_cache.active_objs 293.50 -36.4% 186.67 slabinfo.proc_inode_cache.active_slabs 14118 -36.3% 8989 slabinfo.proc_inode_cache.num_objs 293.50 -36.4% 186.67 slabinfo.proc_inode_cache.num_slabs 22359 +19.9% 26804 slabinfo.radix_tree_node.active_objs 22359 +19.9% 26804 slabinfo.radix_tree_node.num_objs 32514 ± 3% +877.6% 317847 ± 3% proc-vmstat.nr_active_anon 65254 +8.2% 70573 proc-vmstat.nr_anon_pages 333544 +85.5% 618811 
proc-vmstat.nr_file_pages 70493 +6.3% 74935 proc-vmstat.nr_inactive_anon 10402 -7.8% 9585 proc-vmstat.nr_mapped 1551 +7.4% 1665 proc-vmstat.nr_page_table_pages 37874 ± 2% +751.1% 322340 ± 3% proc-vmstat.nr_shmem 24876 -2.6% 24238 proc-vmstat.nr_slab_reclaimable 48365 +1.5% 49089 proc-vmstat.nr_slab_unreclaimable 32514 ± 3% +877.6% 317847 ± 3% proc-vmstat.nr_zone_active_anon 70493 +6.3% 74935 proc-vmstat.nr_zone_inactive_anon 7610 ± 40% -41.4% 4459 ± 53% proc-vmstat.numa_hint_faults 4133 ± 8% -48.5% 2129 ± 9% proc-vmstat.numa_hint_faults_local 912421 +80.1% 1643260 ± 2% proc-vmstat.numa_hit 818742 +89.3% 1549535 ± 2% proc-vmstat.numa_local 85725 ± 18% -59.3% 34864 ± 30% proc-vmstat.numa_pte_updates 49420 ± 2% -13.4% 42800 ± 3% proc-vmstat.pgactivate 969571 +91.4% 1856181 ± 4% proc-vmstat.pgalloc_normal 941570 -7.5% 871253 proc-vmstat.pgfault 898227 +14.9% 1031957 ± 6% proc-vmstat.pgfree 19059 ± 33% +287.5% 73861 ±128% softirqs.CPU100.RCU 25846 ± 36% -65.1% 9030 ± 79% softirqs.CPU16.SCHED 25015 ± 30% +51.9% 38003 ± 17% softirqs.CPU19.RCU 22651 ± 15% +36.8% 30997 ± 15% softirqs.CPU2.RCU 23715 ± 22% +69.0% 40085 ± 9% softirqs.CPU24.RCU 24337 ± 28% +74.8% 42545 ± 47% softirqs.CPU31.RCU 20850 ± 19% +66.3% 34668 ± 20% softirqs.CPU32.RCU 22721 ± 21% +57.4% 35756 ± 17% softirqs.CPU40.RCU 17821 ± 27% +87.9% 33480 ± 12% softirqs.CPU46.RCU 21298 ± 22% +49.7% 31874 ± 15% softirqs.CPU50.RCU 18539 ± 21% +58.8% 29435 ± 21% softirqs.CPU55.RCU 19198 ± 20% +48.3% 28475 ± 19% softirqs.CPU57.RCU 17888 ± 25% +61.7% 28930 ± 25% softirqs.CPU60.RCU 19030 ± 25% +66.1% 31604 ± 24% softirqs.CPU61.RCU 18768 ± 23% +60.9% 30204 ± 24% softirqs.CPU63.RCU 20254 ± 30% +431.3% 107621 ±162% softirqs.CPU64.RCU 19895 ± 20% +47.9% 29429 ± 20% softirqs.CPU65.RCU 18800 ± 23% +65.0% 31020 ± 21% softirqs.CPU66.RCU 19145 ± 22% +63.5% 31294 ± 25% softirqs.CPU74.RCU 18727 ± 25% +367.0% 87458 ±151% softirqs.CPU77.RCU 22560 ± 22% +51.6% 34208 ± 14% softirqs.CPU81.RCU 17142 ± 25% +65.2% 28315 ± 23% 
softirqs.CPU85.RCU 18137 ± 23% +58.5% 28747 ± 28% softirqs.CPU89.RCU 17197 ± 25% +83.1% 31484 ± 34% softirqs.CPU91.RCU 18710 ± 22% +61.9% 30283 ± 19% softirqs.CPU94.RCU 18735 ± 20% +33.9% 25080 ± 21% softirqs.CPU95.RCU 17769 ± 51% -60.3% 7047 ± 44% softirqs.NET_RX 2222678 ± 29% +80.1% 4003205 ± 12% softirqs.RCU 35500 ± 12% +44.8% 51399 ± 3% softirqs.TIMER 67.45 ± 9% -67.4 0.00 perf-profile.calltrace.cycles-pp.brk 48.31 ± 9% -48.3 0.00 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.brk 32.16 ± 20% -32.2 0.00 perf-profile.calltrace.cycles-pp.secondary_startup_64_no_verify 32.09 ± 20% -32.1 0.00 perf-profile.calltrace.cycles-pp.intel_idle.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry 31.51 ± 20% -31.5 0.00 perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify 31.51 ± 20% -31.5 0.00 perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify 31.51 ± 20% -31.5 0.00 perf-profile.calltrace.cycles-pp.start_secondary.secondary_startup_64_no_verify 31.51 ± 20% -31.5 0.00 perf-profile.calltrace.cycles-pp.cpuidle_enter.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify 31.51 ± 20% -31.5 0.00 perf-profile.calltrace.cycles-pp.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry.start_secondary 31.32 ± 9% -31.3 0.00 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.brk 30.75 ± 9% -30.8 0.00 perf-profile.calltrace.cycles-pp.__x64_sys_brk.do_syscall_64.entry_SYSCALL_64_after_hwframe.brk 16.21 ± 9% -16.2 0.00 perf-profile.calltrace.cycles-pp.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe.brk 15.04 ± 9% -15.0 0.00 perf-profile.calltrace.cycles-pp.do_brk_flags.__x64_sys_brk.do_syscall_64.entry_SYSCALL_64_after_hwframe.brk 11.50 ± 9% -11.5 0.00 perf-profile.calltrace.cycles-pp.__do_munmap.__x64_sys_brk.do_syscall_64.entry_SYSCALL_64_after_hwframe.brk 8.87 ± 9% -8.9 0.00 
perf-profile.calltrace.cycles-pp.__entry_text_start.brk 7.98 ± 9% -8.0 0.00 perf-profile.calltrace.cycles-pp.syscall_return_via_sysret.brk 5.55 ± 9% -5.6 0.00 perf-profile.calltrace.cycles-pp.perf_event_mmap.do_brk_flags.__x64_sys_brk.do_syscall_64.entry_SYSCALL_64_after_hwframe 5.21 ± 9% -5.2 0.00 perf-profile.calltrace.cycles-pp.unmap_region.__do_munmap.__x64_sys_brk.do_syscall_64.entry_SYSCALL_64_after_hwframe 67.49 ± 9% -67.5 0.00 perf-profile.children.cycles-pp.brk 48.47 ± 9% -48.5 0.00 perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe 32.16 ± 20% -32.2 0.00 perf-profile.children.cycles-pp.secondary_startup_64_no_verify 32.16 ± 20% -32.2 0.00 perf-profile.children.cycles-pp.cpu_startup_entry 32.16 ± 20% -32.2 0.00 perf-profile.children.cycles-pp.do_idle 32.16 ± 20% -32.2 0.00 perf-profile.children.cycles-pp.cpuidle_enter 32.16 ± 20% -32.2 0.00 perf-profile.children.cycles-pp.cpuidle_enter_state 32.16 ± 20% -32.2 0.00 perf-profile.children.cycles-pp.intel_idle 31.51 ± 20% -31.5 0.00 perf-profile.children.cycles-pp.start_secondary 31.43 ± 9% -31.4 0.00 perf-profile.children.cycles-pp.do_syscall_64 30.84 ± 9% -30.8 0.00 perf-profile.children.cycles-pp.__x64_sys_brk 16.29 ± 9% -16.3 0.00 perf-profile.children.cycles-pp.syscall_exit_to_user_mode 15.10 ± 9% -15.1 0.00 perf-profile.children.cycles-pp.do_brk_flags 11.55 ± 9% -11.6 0.00 perf-profile.children.cycles-pp.__do_munmap 9.08 ± 10% -9.1 0.00 perf-profile.children.cycles-pp.syscall_return_via_sysret 8.47 ± 9% -8.5 0.00 perf-profile.children.cycles-pp.__entry_text_start 5.63 ± 9% -5.6 0.00 perf-profile.children.cycles-pp.perf_event_mmap 5.29 ± 9% -5.3 0.00 perf-profile.children.cycles-pp.unmap_region 32.16 ± 20% -32.2 0.00 perf-profile.self.cycles-pp.intel_idle 15.96 ± 9% -16.0 0.00 perf-profile.self.cycles-pp.syscall_exit_to_user_mode 9.05 ± 9% -9.0 0.00 perf-profile.self.cycles-pp.syscall_return_via_sysret 7.34 ± 9% -7.3 0.00 perf-profile.self.cycles-pp.__entry_text_start 0.06 ± 23% +849.3% 0.52 ± 
27% perf-stat.i.MPKI 2.817e+10 -7.5% 2.605e+10 perf-stat.i.branch-instructions 0.70 +0.1 0.79 perf-stat.i.branch-miss-rate% 1.965e+08 +4.2% 2.047e+08 perf-stat.i.branch-misses 1348212 ± 93% +927.5% 13853011 ± 20% perf-stat.i.cache-misses 6337510 ± 28% +909.9% 64003716 ± 27% perf-stat.i.cache-references 1541 ± 5% +31.9% 2033 ± 4% perf-stat.i.context-switches 1.08 +12.9% 1.22 perf-stat.i.cpi 1.448e+11 +4.4% 1.512e+11 perf-stat.i.cpu-cycles 132.05 -4.9% 125.55 ± 4% perf-stat.i.cpu-migrations 224930 ± 19% -94.6% 12223 ± 28% perf-stat.i.cycles-between-cache-misses 0.17 -0.0 0.16 perf-stat.i.dTLB-load-miss-rate% 66339784 -14.7% 56566077 perf-stat.i.dTLB-load-misses 3.941e+10 -8.0% 3.624e+10 perf-stat.i.dTLB-loads 52761 -11.0% 46938 perf-stat.i.dTLB-store-misses 2.163e+10 -7.8% 1.994e+10 perf-stat.i.dTLB-stores 68108407 -15.1% 57828873 ± 2% perf-stat.i.iTLB-load-misses 1.34e+11 -7.6% 1.239e+11 perf-stat.i.instructions 1997 +8.9% 2174 ± 2% perf-stat.i.instructions-per-iTLB-miss 0.93 -11.4% 0.82 perf-stat.i.ipc 0.72 -94.5% 0.04 ± 33% perf-stat.i.major-faults 1.39 +4.4% 1.45 perf-stat.i.metric.GHz 857.73 -7.7% 791.38 perf-stat.i.metric.M/sec 3021 -7.7% 2788 perf-stat.i.minor-faults 89.68 -3.9 85.77 perf-stat.i.node-load-miss-rate% 315756 ±104% +546.6% 2041712 ± 20% perf-stat.i.node-load-misses 55314 ±104% +530.4% 348680 ± 25% perf-stat.i.node-loads 34490 ± 45% +2319.4% 834464 ± 31% perf-stat.i.node-store-misses 8436 ± 5% +145.9% 20748 ± 6% perf-stat.i.node-stores 3021 -7.7% 2788 perf-stat.i.page-faults 0.05 ± 28% +987.0% 0.52 ± 27% perf-stat.overall.MPKI 0.70 +0.1 0.79 perf-stat.overall.branch-miss-rate% 1.08 +13.0% 1.22 perf-stat.overall.cpi 166572 ± 41% -93.1% 11460 ± 23% perf-stat.overall.cycles-between-cache-misses 0.17 -0.0 0.16 perf-stat.overall.dTLB-load-miss-rate% 0.00 -0.0 0.00 perf-stat.overall.dTLB-store-miss-rate% 1968 +8.9% 2143 perf-stat.overall.instructions-per-iTLB-miss 0.93 -11.5% 0.82 perf-stat.overall.ipc 78.34 ± 6% +18.6 96.97 ± 2% 
perf-stat.overall.node-store-miss-rate% 606198 +9.2% 661776 perf-stat.overall.path-length 2.807e+10 -7.5% 2.597e+10 perf-stat.ps.branch-instructions 1.959e+08 +4.1% 2.04e+08 perf-stat.ps.branch-misses 1346058 ± 93% +925.6% 13805032 ± 20% perf-stat.ps.cache-misses 6345091 ± 28% +905.4% 63791617 ± 27% perf-stat.ps.cache-references 1536 ± 5% +31.9% 2026 ± 4% perf-stat.ps.context-switches 1.443e+11 +4.4% 1.507e+11 perf-stat.ps.cpu-cycles 132.04 -5.2% 125.17 ± 4% perf-stat.ps.cpu-migrations 66096851 -14.7% 56375461 perf-stat.ps.dTLB-load-misses 3.927e+10 -8.0% 3.612e+10 perf-stat.ps.dTLB-loads 52611 -11.1% 46790 perf-stat.ps.dTLB-store-misses 2.155e+10 -7.8% 1.987e+10 perf-stat.ps.dTLB-stores 67859804 -15.1% 57635424 ± 2% perf-stat.ps.iTLB-load-misses 1.336e+11 -7.5% 1.235e+11 perf-stat.ps.instructions 0.72 -94.4% 0.04 ± 33% perf-stat.ps.major-faults 3011 -7.7% 2779 perf-stat.ps.minor-faults 314425 ±104% +547.1% 2034667 ± 20% perf-stat.ps.node-load-misses 55599 ±103% +525.0% 347495 ± 25% perf-stat.ps.node-loads 34364 ± 45% +2319.6% 831507 ± 31% perf-stat.ps.node-store-misses 8502 ± 5% +143.5% 20703 ± 6% perf-stat.ps.node-stores 3012 -7.7% 2779 perf-stat.ps.page-faults 4.033e+13 -7.5% 3.731e+13 perf-stat.total.instructions 0.01 ± 9% -100.0% 0.00 perf-sched.sch_delay.avg.ms.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown] 0.01 ± 11% -100.0% 0.00 perf-sched.sch_delay.avg.ms.devkmsg_read.vfs_read.ksys_read.do_syscall_64 0.01 ± 6% -100.0% 0.00 perf-sched.sch_delay.avg.ms.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep.do_syscall_64 0.01 ± 13% -100.0% 0.00 perf-sched.sch_delay.avg.ms.do_syslog.part.0.kmsg_read.vfs_read 0.01 ± 5% -100.0% 0.00 perf-sched.sch_delay.avg.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64 0.01 ± 3% -100.0% 0.00 perf-sched.sch_delay.avg.ms.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64 0.01 ± 20% -100.0% 0.00 perf-sched.sch_delay.avg.ms.futex_wait_queue_me.futex_wait.do_futex.__x64_sys_futex 0.00 
-100.0% 0.00 perf-sched.sch_delay.avg.ms.pipe_read.new_sync_read.vfs_read.ksys_read 0.01 ± 7% -100.0% 0.00 perf-sched.sch_delay.avg.ms.rcu_gp_kthread.kthread.ret_from_fork 0.01 ± 15% -100.0% 0.00 perf-sched.sch_delay.avg.ms.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait 0.01 ± 21% -100.0% 0.00 perf-sched.sch_delay.avg.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_select 0.00 ± 28% -100.0% 0.00 perf-sched.sch_delay.avg.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_sys_poll 0.01 -100.0% 0.00 perf-sched.sch_delay.avg.ms.schedule_timeout.kcompactd.kthread.ret_from_fork 0.01 -100.0% 0.00 perf-sched.sch_delay.avg.ms.schedule_timeout.rcu_gp_kthread.kthread.ret_from_fork 0.01 ± 11% -100.0% 0.00 perf-sched.sch_delay.avg.ms.schedule_timeout.wait_for_completion.__flush_work.lru_add_drain_all 0.01 -100.0% 0.00 perf-sched.sch_delay.avg.ms.smpboot_thread_fn.kthread.ret_from_fork 0.01 ± 6% -100.0% 0.00 perf-sched.sch_delay.avg.ms.wait_for_partner.fifo_open.do_dentry_open.do_open.isra 0.02 ± 16% -100.0% 0.00 perf-sched.sch_delay.avg.ms.worker_thread.kthread.ret_from_fork 0.01 ± 53% -100.0% 0.00 perf-sched.sch_delay.max.ms.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown] 0.01 ± 23% -100.0% 0.00 perf-sched.sch_delay.max.ms.devkmsg_read.vfs_read.ksys_read.do_syscall_64 0.01 ± 14% -100.0% 0.00 perf-sched.sch_delay.max.ms.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep.do_syscall_64 0.02 ± 27% -100.0% 0.00 perf-sched.sch_delay.max.ms.do_syslog.part.0.kmsg_read.vfs_read 0.03 ± 64% -100.0% 0.00 perf-sched.sch_delay.max.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64 0.02 ± 12% -100.0% 0.00 perf-sched.sch_delay.max.ms.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64 0.01 ± 19% -100.0% 0.00 perf-sched.sch_delay.max.ms.exit_to_user_mode_prepare.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe.[unknown] 0.01 ± 18% -100.0% 0.00 
perf-sched.sch_delay.max.ms.futex_wait_queue_me.futex_wait.do_futex.__x64_sys_futex 0.03 ± 10% -100.0% 0.00 perf-sched.sch_delay.max.ms.pipe_read.new_sync_read.vfs_read.ksys_read 0.02 ± 15% -100.0% 0.00 perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.stop_one_cpu.affine_move_task.__set_cpus_allowed_ptr 0.01 ± 19% -100.0% 0.00 perf-sched.sch_delay.max.ms.rcu_gp_kthread.kthread.ret_from_fork 0.03 ± 78% -100.0% 0.00 perf-sched.sch_delay.max.ms.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait 0.01 ± 29% -100.0% 0.00 perf-sched.sch_delay.max.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_select 0.01 ± 51% -100.0% 0.00 perf-sched.sch_delay.max.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_sys_poll 0.01 ± 42% -100.0% 0.00 perf-sched.sch_delay.max.ms.schedule_timeout.kcompactd.kthread.ret_from_fork 0.02 ± 26% -100.0% 0.00 perf-sched.sch_delay.max.ms.schedule_timeout.rcu_gp_kthread.kthread.ret_from_fork 0.01 ± 11% -100.0% 0.00 perf-sched.sch_delay.max.ms.schedule_timeout.wait_for_completion.__flush_work.lru_add_drain_all 0.02 ± 22% -100.0% 0.00 perf-sched.sch_delay.max.ms.smpboot_thread_fn.kthread.ret_from_fork 0.02 ± 66% -100.0% 0.00 perf-sched.sch_delay.max.ms.wait_for_partner.fifo_open.do_dentry_open.do_open.isra 5.62 ± 2% -100.0% 0.00 perf-sched.sch_delay.max.ms.worker_thread.kthread.ret_from_fork 0.01 -100.0% 0.00 perf-sched.total_sch_delay.average.ms 5.62 ± 2% -100.0% 0.00 perf-sched.total_sch_delay.max.ms 195.80 ± 6% -100.0% 0.00 perf-sched.total_wait_and_delay.average.ms 8759 ± 6% -100.0% 0.00 perf-sched.total_wait_and_delay.count.ms 8233 ± 6% -100.0% 0.00 perf-sched.total_wait_and_delay.max.ms 195.80 ± 6% -100.0% 0.00 perf-sched.total_wait_time.average.ms 8233 ± 6% -100.0% 0.00 perf-sched.total_wait_time.max.ms 899.53 -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown] 1969 -100.0% 0.00 
perf-sched.wait_and_delay.avg.ms.devkmsg_read.vfs_read.ksys_read.do_syscall_64 809.06 ± 4% -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep.do_syscall_64 1969 -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.do_syslog.part.0.kmsg_read.vfs_read 269.47 ± 4% -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64 1.43 -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64 0.04 ± 18% -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_apic_timer_interrupt.[unknown] 0.03 ± 5% -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.exit_to_user_mode_prepare.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe.[unknown] 64.05 -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.pipe_read.new_sync_read.vfs_read.ksys_read 0.04 ± 30% -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.preempt_schedule_common.__cond_resched.down_write_killable.__x64_sys_brk.do_syscall_64 0.04 ± 16% -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.vm_area_alloc.do_brk_flags 7.87 ± 6% -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.preempt_schedule_common.__cond_resched.stop_one_cpu.affine_move_task.__set_cpus_allowed_ptr 0.03 ± 6% -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.preempt_schedule_common.__cond_resched.unmap_vmas.unmap_region.__do_munmap 2.56 ± 8% -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.rcu_gp_kthread.kthread.ret_from_fork 595.03 ± 12% -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait 326.53 ± 27% -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_sys_poll 478.79 -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.schedule_timeout.kcompactd.kthread.ret_from_fork 8.32 ± 25% -100.0% 0.00 
perf-sched.wait_and_delay.avg.ms.schedule_timeout.rcu_gp_kthread.kthread.ret_from_fork 676.61 ± 2% -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.smpboot_thread_fn.kthread.ret_from_fork 0.01 ± 6% -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.wait_for_partner.fifo_open.do_dentry_open.do_open.isra 623.29 ± 4% -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.worker_thread.kthread.ret_from_fork 10.00 -100.0% 0.00 perf-sched.wait_and_delay.count.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown] 4.00 -100.0% 0.00 perf-sched.wait_and_delay.count.devkmsg_read.vfs_read.ksys_read.do_syscall_64 22.33 ± 4% -100.0% 0.00 perf-sched.wait_and_delay.count.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep.do_syscall_64 4.00 -100.0% 0.00 perf-sched.wait_and_delay.count.do_syslog.part.0.kmsg_read.vfs_read 247.00 -100.0% 0.00 perf-sched.wait_and_delay.count.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64 246.00 -100.0% 0.00 perf-sched.wait_and_delay.count.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64 118.83 ± 38% -100.0% 0.00 perf-sched.wait_and_delay.count.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_apic_timer_interrupt.[unknown] 342.83 ± 14% -100.0% 0.00 perf-sched.wait_and_delay.count.exit_to_user_mode_prepare.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe.[unknown] 2518 -100.0% 0.00 perf-sched.wait_and_delay.count.pipe_read.new_sync_read.vfs_read.ksys_read 115.33 ± 6% -100.0% 0.00 perf-sched.wait_and_delay.count.preempt_schedule_common.__cond_resched.down_write_killable.__x64_sys_brk.do_syscall_64 61.83 ± 7% -100.0% 0.00 perf-sched.wait_and_delay.count.preempt_schedule_common.__cond_resched.kmem_cache_alloc.vm_area_alloc.do_brk_flags 1248 -100.0% 0.00 perf-sched.wait_and_delay.count.preempt_schedule_common.__cond_resched.stop_one_cpu.affine_move_task.__set_cpus_allowed_ptr 54.17 ± 12% -100.0% 0.00 perf-sched.wait_and_delay.count.preempt_schedule_common.__cond_resched.unmap_vmas.unmap_region.__do_munmap 143.67 
± 55% -100.0% 0.00 perf-sched.wait_and_delay.count.rcu_gp_kthread.kthread.ret_from_fork 23.67 ± 45% -100.0% 0.00 perf-sched.wait_and_delay.count.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait 83.83 ± 25% -100.0% 0.00 perf-sched.wait_and_delay.count.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_sys_poll 40.00 -100.0% 0.00 perf-sched.wait_and_delay.count.schedule_timeout.kcompactd.kthread.ret_from_fork 1272 ± 37% -100.0% 0.00 perf-sched.wait_and_delay.count.schedule_timeout.rcu_gp_kthread.kthread.ret_from_fork 1464 -100.0% 0.00 perf-sched.wait_and_delay.count.smpboot_thread_fn.kthread.ret_from_fork 72.00 -100.0% 0.00 perf-sched.wait_and_delay.count.wait_for_partner.fifo_open.do_dentry_open.do_open.isra 575.00 ± 8% -100.0% 0.00 perf-sched.wait_and_delay.count.worker_thread.kthread.ret_from_fork 999.50 -100.0% 0.00 perf-sched.wait_and_delay.max.ms.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown] 7815 -100.0% 0.00 perf-sched.wait_and_delay.max.ms.devkmsg_read.vfs_read.ksys_read.do_syscall_64 1000 -100.0% 0.00 perf-sched.wait_and_delay.max.ms.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep.do_syscall_64 7815 -100.0% 0.00 perf-sched.wait_and_delay.max.ms.do_syslog.part.0.kmsg_read.vfs_read 3095 ± 56% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64 33.25 ± 2% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64 0.88 ±146% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_apic_timer_interrupt.[unknown] 0.19 ±106% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.exit_to_user_mode_prepare.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe.[unknown] 7819 -100.0% 0.00 perf-sched.wait_and_delay.max.ms.pipe_read.new_sync_read.vfs_read.ksys_read 0.78 ±149% -100.0% 0.00 
perf-sched.wait_and_delay.max.ms.preempt_schedule_common.__cond_resched.down_write_killable.__x64_sys_brk.do_syscall_64 0.25 ±144% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.vm_area_alloc.do_brk_flags 1172 ± 32% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.preempt_schedule_common.__cond_resched.stop_one_cpu.affine_move_task.__set_cpus_allowed_ptr 0.06 ± 27% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.preempt_schedule_common.__cond_resched.unmap_vmas.unmap_region.__do_munmap 5.00 -100.0% 0.00 perf-sched.wait_and_delay.max.ms.rcu_gp_kthread.kthread.ret_from_fork 7407 -100.0% 0.00 perf-sched.wait_and_delay.max.ms.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait 5250 ± 36% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_sys_poll 504.84 -100.0% 0.00 perf-sched.wait_and_delay.max.ms.schedule_timeout.kcompactd.kthread.ret_from_fork 243.33 ± 50% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.schedule_timeout.rcu_gp_kthread.kthread.ret_from_fork 2729 ± 47% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.smpboot_thread_fn.kthread.ret_from_fork 0.03 ± 63% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.wait_for_partner.fifo_open.do_dentry_open.do_open.isra 7699 ± 14% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.worker_thread.kthread.ret_from_fork 899.53 -100.0% 0.00 perf-sched.wait_time.avg.ms.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown] 1969 -100.0% 0.00 perf-sched.wait_time.avg.ms.devkmsg_read.vfs_read.ksys_read.do_syscall_64 809.06 ± 4% -100.0% 0.00 perf-sched.wait_time.avg.ms.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep.do_syscall_64 1969 -100.0% 0.00 perf-sched.wait_time.avg.ms.do_syslog.part.0.kmsg_read.vfs_read 269.46 ± 4% -100.0% 0.00 perf-sched.wait_time.avg.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64 1.42 -100.0% 0.00 
perf-sched.wait_time.avg.ms.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64 0.04 ± 18% -100.0% 0.00 perf-sched.wait_time.avg.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_apic_timer_interrupt.[unknown] 0.03 ± 29% -100.0% 0.00 perf-sched.wait_time.avg.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_reschedule_ipi.[unknown] 0.03 ± 4% -100.0% 0.00 perf-sched.wait_time.avg.ms.exit_to_user_mode_prepare.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe.[unknown] 28.97 ± 27% -100.0% 0.00 perf-sched.wait_time.avg.ms.futex_wait_queue_me.futex_wait.do_futex.__x64_sys_futex 64.05 -100.0% 0.00 perf-sched.wait_time.avg.ms.pipe_read.new_sync_read.vfs_read.ksys_read 0.04 ± 30% -100.0% 0.00 perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.down_write_killable.__x64_sys_brk.do_syscall_64 0.04 ± 16% -100.0% 0.00 perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.vm_area_alloc.do_brk_flags 0.03 ± 8% -100.0% 0.00 perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.remove_vma.__do_munmap.__x64_sys_brk 7.87 ± 6% -100.0% 0.00 perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.stop_one_cpu.affine_move_task.__set_cpus_allowed_ptr 0.04 ± 6% -100.0% 0.00 perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.unmap_page_range.unmap_vmas.unmap_region 0.03 ± 6% -100.0% 0.00 perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.unmap_vmas.unmap_region.__do_munmap 2.56 ± 8% -100.0% 0.00 perf-sched.wait_time.avg.ms.rcu_gp_kthread.kthread.ret_from_fork 595.02 ± 12% -100.0% 0.00 perf-sched.wait_time.avg.ms.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait 326.53 ± 27% -100.0% 0.00 perf-sched.wait_time.avg.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_sys_poll 478.79 -100.0% 0.00 perf-sched.wait_time.avg.ms.schedule_timeout.kcompactd.kthread.ret_from_fork 0.01 ± 63% -100.0% 0.00 
perf-sched.wait_time.avg.ms.schedule_timeout.khugepaged.kthread.ret_from_fork 8.31 ± 25% -100.0% 0.00 perf-sched.wait_time.avg.ms.schedule_timeout.rcu_gp_kthread.kthread.ret_from_fork 676.60 ± 2% -100.0% 0.00 perf-sched.wait_time.avg.ms.smpboot_thread_fn.kthread.ret_from_fork 623.27 ± 4% -100.0% 0.00 perf-sched.wait_time.avg.ms.worker_thread.kthread.ret_from_fork 999.50 -100.0% 0.00 perf-sched.wait_time.max.ms.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown] 7815 -100.0% 0.00 perf-sched.wait_time.max.ms.devkmsg_read.vfs_read.ksys_read.do_syscall_64 1000 -100.0% 0.00 perf-sched.wait_time.max.ms.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep.do_syscall_64 7815 -100.0% 0.00 perf-sched.wait_time.max.ms.do_syslog.part.0.kmsg_read.vfs_read 3095 ± 56% -100.0% 0.00 perf-sched.wait_time.max.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64 33.24 ± 2% -100.0% 0.00 perf-sched.wait_time.max.ms.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64 0.88 ±146% -100.0% 0.00 perf-sched.wait_time.max.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_apic_timer_interrupt.[unknown] 0.10 ±106% -100.0% 0.00 perf-sched.wait_time.max.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_reschedule_ipi.[unknown] 0.19 ±106% -100.0% 0.00 perf-sched.wait_time.max.ms.exit_to_user_mode_prepare.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe.[unknown] 57.94 ± 27% -100.0% 0.00 perf-sched.wait_time.max.ms.futex_wait_queue_me.futex_wait.do_futex.__x64_sys_futex 7819 -100.0% 0.00 perf-sched.wait_time.max.ms.pipe_read.new_sync_read.vfs_read.ksys_read 0.78 ±149% -100.0% 0.00 perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.down_write_killable.__x64_sys_brk.do_syscall_64 0.25 ±144% -100.0% 0.00 perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.vm_area_alloc.do_brk_flags 0.05 -100.0% 0.00 
perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.remove_vma.__do_munmap.__x64_sys_brk 1172 ± 32% -100.0% 0.00 perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.stop_one_cpu.affine_move_task.__set_cpus_allowed_ptr 0.09 ± 37% -100.0% 0.00 perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.unmap_page_range.unmap_vmas.unmap_region 0.06 ± 27% -100.0% 0.00 perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.unmap_vmas.unmap_region.__do_munmap 4.99 -100.0% 0.00 perf-sched.wait_time.max.ms.rcu_gp_kthread.kthread.ret_from_fork 7407 -100.0% 0.00 perf-sched.wait_time.max.ms.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait 5250 ± 36% -100.0% 0.00 perf-sched.wait_time.max.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_sys_poll 504.83 -100.0% 0.00 perf-sched.wait_time.max.ms.schedule_timeout.kcompactd.kthread.ret_from_fork 0.01 ± 63% -100.0% 0.00 perf-sched.wait_time.max.ms.schedule_timeout.khugepaged.kthread.ret_from_fork 243.32 ± 50% -100.0% 0.00 perf-sched.wait_time.max.ms.schedule_timeout.rcu_gp_kthread.kthread.ret_from_fork 2729 ± 47% -100.0% 0.00 perf-sched.wait_time.max.ms.smpboot_thread_fn.kthread.ret_from_fork 7699 ± 14% -100.0% 0.00 perf-sched.wait_time.max.ms.worker_thread.kthread.ret_from_fork 76758 ± 2% -8.6% 70119 ± 2% interrupts.CAL:Function_call_interrupts 4021 ± 30% -100.0% 0.00 interrupts.CPU0.NMI:Non-maskable_interrupts 4021 ± 30% -100.0% 0.00 interrupts.CPU0.PMI:Performance_monitoring_interrupts 5976 ± 37% -100.0% 0.00 interrupts.CPU1.NMI:Non-maskable_interrupts 5976 ± 37% -100.0% 0.00 interrupts.CPU1.PMI:Performance_monitoring_interrupts 5375 ± 45% -100.0% 0.00 interrupts.CPU10.NMI:Non-maskable_interrupts 5375 ± 45% -100.0% 0.00 interrupts.CPU10.PMI:Performance_monitoring_interrupts 4174 ± 47% -100.0% 0.00 interrupts.CPU100.NMI:Non-maskable_interrupts 4174 ± 47% -100.0% 0.00 interrupts.CPU100.PMI:Performance_monitoring_interrupts 5327 ± 42% 
-100.0% 0.00 interrupts.CPU101.NMI:Non-maskable_interrupts 5327 ± 42% -100.0% 0.00 interrupts.CPU101.PMI:Performance_monitoring_interrupts 4581 ± 38% -100.0% 0.00 interrupts.CPU102.NMI:Non-maskable_interrupts 4581 ± 38% -100.0% 0.00 interrupts.CPU102.PMI:Performance_monitoring_interrupts 5337 ± 26% -100.0% 0.00 interrupts.CPU103.NMI:Non-maskable_interrupts 5337 ± 26% -100.0% 0.00 interrupts.CPU103.PMI:Performance_monitoring_interrupts 3615 ± 45% -100.0% 0.00 interrupts.CPU11.NMI:Non-maskable_interrupts 3615 ± 45% -100.0% 0.00 interrupts.CPU11.PMI:Performance_monitoring_interrupts 4389 ± 57% -100.0% 0.00 interrupts.CPU12.NMI:Non-maskable_interrupts 4389 ± 57% -100.0% 0.00 interrupts.CPU12.PMI:Performance_monitoring_interrupts 5230 ± 44% -100.0% 0.00 interrupts.CPU13.NMI:Non-maskable_interrupts 5230 ± 44% -100.0% 0.00 interrupts.CPU13.PMI:Performance_monitoring_interrupts 4857 ± 37% -100.0% 0.00 interrupts.CPU14.NMI:Non-maskable_interrupts 4857 ± 37% -100.0% 0.00 interrupts.CPU14.PMI:Performance_monitoring_interrupts 4571 ± 51% -100.0% 0.00 interrupts.CPU15.NMI:Non-maskable_interrupts 4571 ± 51% -100.0% 0.00 interrupts.CPU15.PMI:Performance_monitoring_interrupts 769.50 ± 20% -28.1% 553.00 ± 8% interrupts.CPU16.CAL:Function_call_interrupts 3950 ± 54% -100.0% 0.00 interrupts.CPU16.NMI:Non-maskable_interrupts 3950 ± 54% -100.0% 0.00 interrupts.CPU16.PMI:Performance_monitoring_interrupts 4485 ± 30% -100.0% 0.00 interrupts.CPU17.NMI:Non-maskable_interrupts 4485 ± 30% -100.0% 0.00 interrupts.CPU17.PMI:Performance_monitoring_interrupts 5139 ± 35% -100.0% 0.00 interrupts.CPU18.NMI:Non-maskable_interrupts 5139 ± 35% -100.0% 0.00 interrupts.CPU18.PMI:Performance_monitoring_interrupts 5525 ± 45% -100.0% 0.00 interrupts.CPU19.NMI:Non-maskable_interrupts 5525 ± 45% -100.0% 0.00 interrupts.CPU19.PMI:Performance_monitoring_interrupts 6327 ± 30% -100.0% 0.00 interrupts.CPU2.NMI:Non-maskable_interrupts 6327 ± 30% -100.0% 0.00 interrupts.CPU2.PMI:Performance_monitoring_interrupts 
    732.67 ± 24%     -21.5%     575.17 ± 12%  interrupts.CPU20.CAL:Function_call_interrupts
      3597 ± 54%    -100.0%       0.00        interrupts.CPU20.NMI:Non-maskable_interrupts
      3597 ± 54%    -100.0%       0.00        interrupts.CPU20.PMI:Performance_monitoring_interrupts
      4856 ± 42%    -100.0%       0.00        interrupts.CPU21.NMI:Non-maskable_interrupts
      4856 ± 42%    -100.0%       0.00        interrupts.CPU21.PMI:Performance_monitoring_interrupts
      6101 ± 38%    -100.0%       0.00        interrupts.CPU22.NMI:Non-maskable_interrupts
      6101 ± 38%    -100.0%       0.00        interrupts.CPU22.PMI:Performance_monitoring_interrupts
      4139 ± 57%    -100.0%       0.00        interrupts.CPU23.NMI:Non-maskable_interrupts
      4139 ± 57%    -100.0%       0.00        interrupts.CPU23.PMI:Performance_monitoring_interrupts
    647.00 ± 14%     -15.8%     544.67 ±  3%  interrupts.CPU24.CAL:Function_call_interrupts
      5617 ± 37%    -100.0%       0.00        interrupts.CPU24.NMI:Non-maskable_interrupts
      5617 ± 37%    -100.0%       0.00        interrupts.CPU24.PMI:Performance_monitoring_interrupts
      6282 ± 27%    -100.0%       0.00        interrupts.CPU25.NMI:Non-maskable_interrupts
      6282 ± 27%    -100.0%       0.00        interrupts.CPU25.PMI:Performance_monitoring_interrupts
      5144 ± 54%    -100.0%       0.00        interrupts.CPU26.NMI:Non-maskable_interrupts
      5144 ± 54%    -100.0%       0.00        interrupts.CPU26.PMI:Performance_monitoring_interrupts
      6256 ± 27%    -100.0%       0.00        interrupts.CPU27.NMI:Non-maskable_interrupts
      6256 ± 27%    -100.0%       0.00        interrupts.CPU27.PMI:Performance_monitoring_interrupts
      5805 ± 38%    -100.0%       0.00        interrupts.CPU28.NMI:Non-maskable_interrupts
      5805 ± 38%    -100.0%       0.00        interrupts.CPU28.PMI:Performance_monitoring_interrupts
      3944 ± 55%    -100.0%       0.00        interrupts.CPU29.NMI:Non-maskable_interrupts
      3944 ± 55%    -100.0%       0.00        interrupts.CPU29.PMI:Performance_monitoring_interrupts
      5923 ± 33%    -100.0%       0.00        interrupts.CPU3.NMI:Non-maskable_interrupts
      5923 ± 33%    -100.0%       0.00        interrupts.CPU3.PMI:Performance_monitoring_interrupts
      4233 ± 47%    -100.0%       0.00        interrupts.CPU30.NMI:Non-maskable_interrupts
      4233 ± 47%    -100.0%       0.00        interrupts.CPU30.PMI:Performance_monitoring_interrupts
      7332 ±  9%    -100.0%       0.00        interrupts.CPU31.NMI:Non-maskable_interrupts
      7332 ±  9%    -100.0%       0.00        interrupts.CPU31.PMI:Performance_monitoring_interrupts
      4973 ± 44%    -100.0%       0.00        interrupts.CPU32.NMI:Non-maskable_interrupts
      4973 ± 44%    -100.0%       0.00        interrupts.CPU32.PMI:Performance_monitoring_interrupts
      7439 ± 10%    -100.0%       0.00        interrupts.CPU33.NMI:Non-maskable_interrupts
      7439 ± 10%    -100.0%       0.00        interrupts.CPU33.PMI:Performance_monitoring_interrupts
      7900          -100.0%       0.00        interrupts.CPU34.NMI:Non-maskable_interrupts
      7900          -100.0%       0.00        interrupts.CPU34.PMI:Performance_monitoring_interrupts
      6148 ± 31%    -100.0%       0.00        interrupts.CPU35.NMI:Non-maskable_interrupts
      6148 ± 31%    -100.0%       0.00        interrupts.CPU35.PMI:Performance_monitoring_interrupts
      6697 ± 28%    -100.0%       0.00        interrupts.CPU36.NMI:Non-maskable_interrupts
      6697 ± 28%    -100.0%       0.00        interrupts.CPU36.PMI:Performance_monitoring_interrupts
      6696 ± 28%    -100.0%       0.00        interrupts.CPU37.NMI:Non-maskable_interrupts
      6696 ± 28%    -100.0%       0.00        interrupts.CPU37.PMI:Performance_monitoring_interrupts
      7901          -100.0%       0.00        interrupts.CPU38.NMI:Non-maskable_interrupts
      7901          -100.0%       0.00        interrupts.CPU38.PMI:Performance_monitoring_interrupts
      7061 ± 26%    -100.0%       0.00        interrupts.CPU39.NMI:Non-maskable_interrupts
      7061 ± 26%    -100.0%       0.00        interrupts.CPU39.PMI:Performance_monitoring_interrupts
      5894 ± 24%    -100.0%       0.00        interrupts.CPU4.NMI:Non-maskable_interrupts
      5894 ± 24%    -100.0%       0.00        interrupts.CPU4.PMI:Performance_monitoring_interrupts
      6487 ± 30%    -100.0%       0.00        interrupts.CPU40.NMI:Non-maskable_interrupts
      6487 ± 30%    -100.0%       0.00        interrupts.CPU40.PMI:Performance_monitoring_interrupts
      6538 ± 29%    -100.0%       0.00        interrupts.CPU41.NMI:Non-maskable_interrupts
      6538 ± 29%    -100.0%       0.00        interrupts.CPU41.PMI:Performance_monitoring_interrupts
      6221 ± 38%    -100.0%       0.00        interrupts.CPU42.NMI:Non-maskable_interrupts
      6221 ± 38%    -100.0%       0.00        interrupts.CPU42.PMI:Performance_monitoring_interrupts
      6657 ± 28%    -100.0%       0.00        interrupts.CPU43.NMI:Non-maskable_interrupts
      6657 ± 28%    -100.0%       0.00        interrupts.CPU43.PMI:Performance_monitoring_interrupts
      6069 ± 32%    -100.0%       0.00        interrupts.CPU44.NMI:Non-maskable_interrupts
      6069 ± 32%    -100.0%       0.00        interrupts.CPU44.PMI:Performance_monitoring_interrupts
      7011 ± 26%    -100.0%       0.00        interrupts.CPU45.NMI:Non-maskable_interrupts
      7011 ± 26%    -100.0%       0.00        interrupts.CPU45.PMI:Performance_monitoring_interrupts
      5375 ± 35%    -100.0%       0.00        interrupts.CPU46.NMI:Non-maskable_interrupts
      5375 ± 35%    -100.0%       0.00        interrupts.CPU46.PMI:Performance_monitoring_interrupts
      5413 ± 35%    -100.0%       0.00        interrupts.CPU47.NMI:Non-maskable_interrupts
      5413 ± 35%    -100.0%       0.00        interrupts.CPU47.PMI:Performance_monitoring_interrupts
      7061 ± 26%    -100.0%       0.00        interrupts.CPU48.NMI:Non-maskable_interrupts
      7061 ± 26%    -100.0%       0.00        interrupts.CPU48.PMI:Performance_monitoring_interrupts
      5134 ± 33%    -100.0%       0.00        interrupts.CPU49.NMI:Non-maskable_interrupts
      5134 ± 33%    -100.0%       0.00        interrupts.CPU49.PMI:Performance_monitoring_interrupts
      5876 ± 36%    -100.0%       0.00        interrupts.CPU5.NMI:Non-maskable_interrupts
      5876 ± 36%    -100.0%       0.00        interrupts.CPU5.PMI:Performance_monitoring_interrupts
      6389 ± 35%    -100.0%       0.00        interrupts.CPU50.NMI:Non-maskable_interrupts
      6389 ± 35%    -100.0%       0.00        interrupts.CPU50.PMI:Performance_monitoring_interrupts
      5871 ± 26%    -100.0%       0.00        interrupts.CPU51.NMI:Non-maskable_interrupts
      5871 ± 26%    -100.0%       0.00        interrupts.CPU51.PMI:Performance_monitoring_interrupts
      6226 ± 27%    -100.0%       0.00        interrupts.CPU52.NMI:Non-maskable_interrupts
      6226 ± 27%    -100.0%       0.00        interrupts.CPU52.PMI:Performance_monitoring_interrupts
      4904 ± 33%    -100.0%       0.00        interrupts.CPU53.NMI:Non-maskable_interrupts
      4904 ± 33%    -100.0%       0.00        interrupts.CPU53.PMI:Performance_monitoring_interrupts
      3705 ± 42%    -100.0%       0.00        interrupts.CPU54.NMI:Non-maskable_interrupts
      3705 ± 42%    -100.0%       0.00        interrupts.CPU54.PMI:Performance_monitoring_interrupts
      4316 ± 42%    -100.0%       0.00        interrupts.CPU55.NMI:Non-maskable_interrupts
      4316 ± 42%    -100.0%       0.00        interrupts.CPU55.PMI:Performance_monitoring_interrupts
      4496 ± 50%    -100.0%       0.00        interrupts.CPU56.NMI:Non-maskable_interrupts
      4496 ± 50%    -100.0%       0.00        interrupts.CPU56.PMI:Performance_monitoring_interrupts
      5172 ± 50%    -100.0%       0.00        interrupts.CPU57.NMI:Non-maskable_interrupts
      5172 ± 50%    -100.0%       0.00        interrupts.CPU57.PMI:Performance_monitoring_interrupts
      4153 ± 45%    -100.0%       1.00 ±223%  interrupts.CPU58.NMI:Non-maskable_interrupts
      4153 ± 45%    -100.0%       1.00 ±223%  interrupts.CPU58.PMI:Performance_monitoring_interrupts
      4949 ± 44%    -100.0%       0.00        interrupts.CPU59.NMI:Non-maskable_interrupts
      4949 ± 44%    -100.0%       0.00        interrupts.CPU59.PMI:Performance_monitoring_interrupts
      6086 ± 28%    -100.0%       0.00        interrupts.CPU6.NMI:Non-maskable_interrupts
      6086 ± 28%    -100.0%       0.00        interrupts.CPU6.PMI:Performance_monitoring_interrupts
      4156 ± 45%    -100.0%       0.00        interrupts.CPU60.NMI:Non-maskable_interrupts
      4156 ± 45%    -100.0%       0.00        interrupts.CPU60.PMI:Performance_monitoring_interrupts
      4486 ± 50%    -100.0%       0.00        interrupts.CPU61.NMI:Non-maskable_interrupts
      4486 ± 50%    -100.0%       0.00        interrupts.CPU61.PMI:Performance_monitoring_interrupts
      1459 ±121%     -62.1%     553.67 ± 11%  interrupts.CPU62.CAL:Function_call_interrupts
      4623 ± 40%    -100.0%       0.00        interrupts.CPU62.NMI:Non-maskable_interrupts
      4623 ± 40%    -100.0%       0.00        interrupts.CPU62.PMI:Performance_monitoring_interrupts
      5745 ± 32%    -100.0%       0.00        interrupts.CPU63.NMI:Non-maskable_interrupts
      5745 ± 32%    -100.0%       0.00        interrupts.CPU63.PMI:Performance_monitoring_interrupts
      5320 ± 35%    -100.0%       0.00        interrupts.CPU64.NMI:Non-maskable_interrupts
      5320 ± 35%    -100.0%       0.00        interrupts.CPU64.PMI:Performance_monitoring_interrupts
      4127 ± 36%    -100.0%       0.00        interrupts.CPU65.NMI:Non-maskable_interrupts
      4127 ± 36%    -100.0%       0.00        interrupts.CPU65.PMI:Performance_monitoring_interrupts
      4024 ± 51%    -100.0%       0.00        interrupts.CPU66.NMI:Non-maskable_interrupts
      4024 ± 51%    -100.0%       0.00        interrupts.CPU66.PMI:Performance_monitoring_interrupts
      4786 ± 36%    -100.0%       0.00        interrupts.CPU67.NMI:Non-maskable_interrupts
      4786 ± 36%    -100.0%       0.00        interrupts.CPU67.PMI:Performance_monitoring_interrupts
      6864 ± 26%    -100.0%       0.00        interrupts.CPU68.NMI:Non-maskable_interrupts
      6864 ± 26%    -100.0%       0.00        interrupts.CPU68.PMI:Performance_monitoring_interrupts
    183.00 ± 39%     -65.5%      63.17 ± 80%  interrupts.CPU68.RES:Rescheduling_interrupts
      5280 ± 41%    -100.0%       0.00        interrupts.CPU69.NMI:Non-maskable_interrupts
      5280 ± 41%    -100.0%       0.00        interrupts.CPU69.PMI:Performance_monitoring_interrupts
      5936 ± 33%    -100.0%       0.00        interrupts.CPU7.NMI:Non-maskable_interrupts
      5936 ± 33%    -100.0%       0.00        interrupts.CPU7.PMI:Performance_monitoring_interrupts
      3813 ± 35%    -100.0%       1.00 ±223%  interrupts.CPU70.NMI:Non-maskable_interrupts
      3813 ± 35%    -100.0%       1.00 ±223%  interrupts.CPU70.PMI:Performance_monitoring_interrupts
      5284 ± 44%    -100.0%       0.00        interrupts.CPU71.NMI:Non-maskable_interrupts
      5284 ± 44%    -100.0%       0.00        interrupts.CPU71.PMI:Performance_monitoring_interrupts
      6574 ± 27%    -100.0%       0.00        interrupts.CPU72.NMI:Non-maskable_interrupts
      6574 ± 27%    -100.0%       0.00        interrupts.CPU72.PMI:Performance_monitoring_interrupts
      4859 ± 35%    -100.0%       0.00        interrupts.CPU73.NMI:Non-maskable_interrupts
      4859 ± 35%    -100.0%       0.00        interrupts.CPU73.PMI:Performance_monitoring_interrupts
      4462 ± 36%    -100.0%       0.00        interrupts.CPU74.NMI:Non-maskable_interrupts
      4462 ± 36%    -100.0%       0.00        interrupts.CPU74.PMI:Performance_monitoring_interrupts
      6674 ± 26%    -100.0%       0.00        interrupts.CPU75.NMI:Non-maskable_interrupts
      6674 ± 26%    -100.0%       0.00        interrupts.CPU75.PMI:Performance_monitoring_interrupts
      4385 ± 49%    -100.0%       0.00        interrupts.CPU76.NMI:Non-maskable_interrupts
      4385 ± 49%    -100.0%       0.00        interrupts.CPU76.PMI:Performance_monitoring_interrupts
    730.50 ± 17%     -21.4%     574.50 ±  6%  interrupts.CPU77.CAL:Function_call_interrupts
      5248 ± 45%    -100.0%       0.00        interrupts.CPU77.NMI:Non-maskable_interrupts
      5248 ± 45%    -100.0%       0.00        interrupts.CPU77.PMI:Performance_monitoring_interrupts
      6331 ± 28%    -100.0%       0.00        interrupts.CPU78.NMI:Non-maskable_interrupts
      6331 ± 28%    -100.0%       0.00        interrupts.CPU78.PMI:Performance_monitoring_interrupts
      4801 ± 38%    -100.0%       0.00        interrupts.CPU79.NMI:Non-maskable_interrupts
      4801 ± 38%    -100.0%       0.00        interrupts.CPU79.PMI:Performance_monitoring_interrupts
      5608 ± 37%    -100.0%       0.00        interrupts.CPU8.NMI:Non-maskable_interrupts
      5608 ± 37%    -100.0%       0.00        interrupts.CPU8.PMI:Performance_monitoring_interrupts
      5432 ± 45%    -100.0%       1.00 ±223%  interrupts.CPU80.NMI:Non-maskable_interrupts
      5432 ± 45%    -100.0%       1.00 ±223%  interrupts.CPU80.PMI:Performance_monitoring_interrupts
      6875 ± 22%    -100.0%       1.00 ±223%  interrupts.CPU81.NMI:Non-maskable_interrupts
      6875 ± 22%    -100.0%       1.00 ±223%  interrupts.CPU81.PMI:Performance_monitoring_interrupts
      6375 ± 34%    -100.0%       0.00        interrupts.CPU82.NMI:Non-maskable_interrupts
      6375 ± 34%    -100.0%       0.00        interrupts.CPU82.PMI:Performance_monitoring_interrupts
      3909 ± 43%    -100.0%       0.00        interrupts.CPU83.NMI:Non-maskable_interrupts
      3909 ± 43%    -100.0%       0.00        interrupts.CPU83.PMI:Performance_monitoring_interrupts
      6750 ± 26%    -100.0%       0.00        interrupts.CPU84.NMI:Non-maskable_interrupts
      6750 ± 26%    -100.0%       0.00        interrupts.CPU84.PMI:Performance_monitoring_interrupts
      4277 ± 47%    -100.0%       0.00        interrupts.CPU85.NMI:Non-maskable_interrupts
      4277 ± 47%    -100.0%       0.00        interrupts.CPU85.PMI:Performance_monitoring_interrupts
      3813 ± 35%    -100.0%       0.00        interrupts.CPU86.NMI:Non-maskable_interrupts
      3813 ± 35%    -100.0%       0.00        interrupts.CPU86.PMI:Performance_monitoring_interrupts
      4906 ± 43%    -100.0%       1.00 ±223%  interrupts.CPU87.NMI:Non-maskable_interrupts
      4906 ± 43%    -100.0%       1.00 ±223%  interrupts.CPU87.PMI:Performance_monitoring_interrupts
      5015 ± 45%    -100.0%       0.00        interrupts.CPU88.NMI:Non-maskable_interrupts
      5015 ± 45%    -100.0%       0.00        interrupts.CPU88.PMI:Performance_monitoring_interrupts
      4539 ± 52%    -100.0%       0.00        interrupts.CPU89.NMI:Non-maskable_interrupts
      4539 ± 52%    -100.0%       0.00        interrupts.CPU89.PMI:Performance_monitoring_interrupts
      4803 ± 43%    -100.0%       0.00        interrupts.CPU9.NMI:Non-maskable_interrupts
      4803 ± 43%    -100.0%       0.00        interrupts.CPU9.PMI:Performance_monitoring_interrupts
      3339 ± 31%    -100.0%       0.00        interrupts.CPU90.NMI:Non-maskable_interrupts
      3339 ± 31%    -100.0%       0.00        interrupts.CPU90.PMI:Performance_monitoring_interrupts
      3704 ± 50%    -100.0%       0.00        interrupts.CPU91.NMI:Non-maskable_interrupts
      3704 ± 50%    -100.0%       0.00        interrupts.CPU91.PMI:Performance_monitoring_interrupts
      4157 ± 44%    -100.0%       0.00        interrupts.CPU92.NMI:Non-maskable_interrupts
      4157 ± 44%    -100.0%       0.00        interrupts.CPU92.PMI:Performance_monitoring_interrupts
      3429 ± 37%    -100.0%       0.00        interrupts.CPU93.NMI:Non-maskable_interrupts
      3429 ± 37%    -100.0%       0.00        interrupts.CPU93.PMI:Performance_monitoring_interrupts
      4539 ± 52%    -100.0%       0.00        interrupts.CPU94.NMI:Non-maskable_interrupts
      4539 ± 52%    -100.0%       0.00        interrupts.CPU94.PMI:Performance_monitoring_interrupts
      4583 ± 41%    -100.0%       0.00        interrupts.CPU95.NMI:Non-maskable_interrupts
      4583 ± 41%    -100.0%       0.00        interrupts.CPU95.PMI:Performance_monitoring_interrupts
      5170 ± 44%    -100.0%       0.00        interrupts.CPU96.NMI:Non-maskable_interrupts
      5170 ± 44%    -100.0%       0.00        interrupts.CPU96.PMI:Performance_monitoring_interrupts
      4205 ± 46%    -100.0%       1.00 ±223%  interrupts.CPU97.NMI:Non-maskable_interrupts
      4205 ± 46%    -100.0%       1.00 ±223%  interrupts.CPU97.PMI:Performance_monitoring_interrupts
      5019 ± 39%    -100.0%       0.00        interrupts.CPU98.NMI:Non-maskable_interrupts
      5019 ± 39%    -100.0%       0.00        interrupts.CPU98.PMI:Performance_monitoring_interrupts
      4982 ± 38%    -100.0%       0.00        interrupts.CPU99.NMI:Non-maskable_interrupts
      4982 ± 38%    -100.0%       0.00        interrupts.CPU99.PMI:Performance_monitoring_interrupts
    107.67 ± 16%    -100.0%       0.00        interrupts.IWI:IRQ_work_interrupts
    550173 ±  9%    -100.0%       6.00        interrupts.NMI:Non-maskable_interrupts
    550173 ±  9%    -100.0%       6.00        interrupts.PMI:Performance_monitoring_interrupts

***************************************************************************************************
lkp-hsw-4ex1: 144 threads Intel(R) Xeon(R) CPU E7-8890 v3 @ 2.50GHz with 512G memory
=========================================================================================
compiler/cpufreq_governor/kconfig/mode/nr_task/rootfs/tbox_group/test/testcase/ucode:
  gcc-9/performance/x86_64-rhel-8.3/thread/16/debian-10.4-x86_64-20200603.cgz/lkp-hsw-4ex1/pipe1/will-it-scale/0x16

commit:
  dde38d5b80 ("kernel: don't include PF_IO_WORKERs as part of same_thread_group()")
  43b2a76b1a ("proc: don't show PF_IO_WORKER threads as threads in /proc/<pid>/task/")

dde38d5b805358c7 43b2a76b1a5abcc9833b463bef1
---------------- ---------------------------
         %stddev     %change         %stddev
             \          |                \
  10347715             -7.1%    9613244        will-it-scale.16.threads
    646731             -7.1%     600827        will-it-scale.per_thread_ops
  10347715             -7.1%    9613244        will-it-scale.workload
   2161039 ±140%      -89.9%     217227 ± 11%  cpuidle.POLL.time
    778537 ± 94%      -90.1%      76755 ± 15%  cpuidle.POLL.usage
      2.35 ±  5%       -0.6        1.75 ±  3%  mpstat.cpu.all.irq%
      0.09 ±  3%       -0.0        0.08 ±  2%  mpstat.cpu.all.soft%
      5.16             +0.5        5.67        mpstat.cpu.all.sys%
    126150 ± 67%     +455.9%     701316 ± 13%  numa-numastat.node2.local_node
    190120 ± 44%     +302.2%     764691 ± 11%  numa-numastat.node2.numa_hit
    343721 ± 40%     +124.8%     772821 ± 19%  numa-numastat.node3.local_node
    411537 ± 35%     +104.5%     841447 ± 16%  numa-numastat.node3.numa_hit
     15406 ±  9%      -19.7%      12379 ± 13%  sched_debug.cfs_rq:/.load.avg
      1538 ±  9%      -28.6%       1098 ± 10%  sched_debug.cpu.clock_task.stddev
      3291            +20.7%       3971 ±  5%  sched_debug.cpu.nr_switches.avg
    -16.72            -28.2%     -12.00        sched_debug.cpu.nr_uninterruptible.min
     86.17             -1.4%      85.00        vmstat.cpu.id
   1367031           +179.8%    3824534        vmstat.memory.cache
     16.00            +12.5%      18.00        vmstat.procs.r
      1773            +41.5%       2508        vmstat.system.cs
     49509 ±  3%    +4948.7%    2499569 ±  2%  meminfo.Active
     49509 ±  3%    +4948.7%    2499569 ±  2%  meminfo.Active(anon)
   1253240           +195.4%    3702564        meminfo.Cached
    532084           +471.8%    3042485        meminfo.Committed_AS
   3151775            +94.6%    6133656        meminfo.Memused
      4357            +12.6%       4906        meminfo.PageTables
     71968 ±  2%    +3403.3%    2521291 ±  2%  meminfo.Shmem
   3803846            +61.3%    6134154        meminfo.max_used_kB
     54196 ±  4%      +25.7%      68114 ±  4%  slabinfo.filp.active_objs
    855.67 ±  4%      +26.0%       1078 ±  4%  slabinfo.filp.active_slabs
     54795 ±  4%      +26.0%      69029 ±  4%  slabinfo.filp.num_objs
    855.67 ±  4%      +26.0%       1078 ±  4%  slabinfo.filp.num_slabs
     19864            -29.1%      14091        slabinfo.proc_inode_cache.active_objs
    413.17            -28.8%     294.17        slabinfo.proc_inode_cache.active_slabs
     19865            -28.8%      14140        slabinfo.proc_inode_cache.num_objs
    413.17            -28.8%     294.17        slabinfo.proc_inode_cache.num_slabs
     24290            +36.2%      33073        slabinfo.radix_tree_node.active_objs
    433.50            +36.2%     590.33        slabinfo.radix_tree_node.active_slabs
     24290            +36.2%      33073        slabinfo.radix_tree_node.num_objs
    433.50            +36.2%     590.33        slabinfo.radix_tree_node.num_slabs
    243.00 ± 67%   +12967.1%      31753 ± 95%  numa-vmstat.node0.nr_active_anon
      1329 ±107%    +2462.5%      34073 ± 87%  numa-vmstat.node0.nr_shmem
    243.00 ± 67%   +12967.1%      31753 ± 95%  numa-vmstat.node0.nr_zone_active_anon
    390.50 ± 70%   +51012.8%     199595 ± 33%  numa-vmstat.node1.nr_active_anon
     74888 ±  3%     +267.5%     275184 ± 24%  numa-vmstat.node1.nr_file_pages
      2116 ± 81%    +9380.3%     200665 ± 33%  numa-vmstat.node1.nr_shmem
    390.50 ± 70%   +51012.8%     199595 ± 33%  numa-vmstat.node1.nr_zone_active_anon
    219.50 ± 74%   +87889.5%     193137 ± 42%  numa-vmstat.node2.nr_active_anon
     75475 ±  5%     +250.7%     264662 ± 31%  numa-vmstat.node2.nr_file_pages
      1579 ± 57%   +12147.1%     193483 ± 42%  numa-vmstat.node2.nr_shmem
    219.50 ± 74%   +87889.5%     193137 ± 42%  numa-vmstat.node2.nr_zone_active_anon
     11535 ±  4%    +1630.4%     199614 ± 60%  numa-vmstat.node3.nr_active_anon
     85408 ±  3%     +226.2%     278565 ± 42%  numa-vmstat.node3.nr_file_pages
     12935 ± 14%    +1456.2%     201307 ± 59%  numa-vmstat.node3.nr_shmem
     11535 ±  4%    +1630.4%     199614 ± 60%  numa-vmstat.node3.nr_zone_active_anon
    575672 ± 16%      +57.8%     908500 ±  8%  numa-vmstat.node3.numa_hit
    439401 ± 23%      +70.1%     747636 ± 11%  numa-vmstat.node3.numa_local
     12360 ±  3%    +4953.6%     624669 ±  2%  proc-vmstat.nr_active_anon
     64392             +8.8%      70043        proc-vmstat.nr_anon_pages
    313288           +195.4%     925417        proc-vmstat.nr_file_pages
     69914             +7.7%      75279        proc-vmstat.nr_inactive_anon
      1091            +12.4%       1226        proc-vmstat.nr_page_table_pages
     17970 ±  2%    +3406.4%     630099 ±  2%  proc-vmstat.nr_shmem
     53352             +4.7%      55880        proc-vmstat.nr_slab_unreclaimable
     12360 ±  3%    +4953.6%     624669 ±  2%  proc-vmstat.nr_zone_active_anon
     69914             +7.7%      75279        proc-vmstat.nr_zone_inactive_anon
   1201307           +109.1%    2512032        proc-vmstat.numa_hit
    968119           +135.4%    2278868        proc-vmstat.numa_local
     13054 ± 49%      -95.5%     588.67 ± 34%  proc-vmstat.numa_pages_migrated
     17335 ±  4%     +382.6%      83662 ±  2%  proc-vmstat.pgactivate
   1269551           +125.3%    2860508 ±  2%  proc-vmstat.pgalloc_normal
   1172329             -2.8%    1138971        proc-vmstat.pgfault
   1183638            +10.8%    1311710 ±  5%  proc-vmstat.pgfree
     13054 ± 49%      -95.5%     588.67 ± 34%  proc-vmstat.pgmigrate_success
     38556 ± 13%     +163.9%     101756 ± 30%  syscalls.sys_close.max
    568.17            -22.5%     440.50 ±  6%  syscalls.sys_close.med
  17718871 ± 11%    -4.3e+06   13405659 ± 21%  syscalls.sys_close.noise.2%
  17644152 ± 11%    -4.3e+06   13344394 ± 22%  syscalls.sys_close.noise.5%
    103226 ± 13%     +267.9%     379798 ± 12%  syscalls.sys_openat.max
      3284            +72.8%       5675 ±  6%  syscalls.sys_openat.med
  99825845 ±  3%    -4.1e+07   58578210 ± 13%  syscalls.sys_openat.noise.100%
 1.404e+08 ±  3%    -3.3e+07  1.078e+08 ± 13%  syscalls.sys_openat.noise.2%
 1.332e+08 ±  3%    -4.5e+07   87987720 ± 18%  syscalls.sys_openat.noise.25%
   1.4e+08 ±  3%    -3.4e+07  1.065e+08 ± 14%  syscalls.sys_openat.noise.5%
 1.183e+08 ±  3%    -4.4e+07   74351118 ± 24%  syscalls.sys_openat.noise.50%
 1.066e+08 ±  3%    -3.9e+07   67567568 ± 24%  syscalls.sys_openat.noise.75%
 1.074e+11 ±  6%    -3.8e+10  6.928e+10 ± 19%  syscalls.sys_read.noise.100%
 1.074e+11 ±  6%    -3.8e+10  6.928e+10 ± 19%  syscalls.sys_read.noise.2%
 1.074e+11 ±  6%    -3.8e+10  6.928e+10 ± 19%  syscalls.sys_read.noise.25%
 1.074e+11 ±  6%    -3.8e+10  6.928e+10 ± 19%  syscalls.sys_read.noise.5%
 1.074e+11 ±  6%    -3.8e+10  6.928e+10 ± 19%  syscalls.sys_read.noise.50%
 1.074e+11 ±  6%    -3.8e+10  6.928e+10 ± 19%  syscalls.sys_read.noise.75%
    973.33 ± 67%   +12957.0%     127087 ± 95%  numa-meminfo.node0.Active
    973.33 ± 67%   +12957.0%     127087 ± 95%  numa-meminfo.node0.Active(anon)
    973600 ±  8%      +18.3%    1152023 ±  8%  numa-meminfo.node0.MemUsed
      5320 ±107%    +2463.2%     136369 ± 87%  numa-meminfo.node0.Shmem
      1563 ± 70%   +51094.1%     800163 ± 33%  numa-meminfo.node1.Active
      1563 ± 70%   +51094.1%     800163 ± 33%  numa-meminfo.node1.Active(anon)
    299552 ±  3%     +268.1%    1102521 ± 24%  numa-meminfo.node1.FilePages
    693218 ±  6%     +133.1%    1615690 ± 16%  numa-meminfo.node1.MemUsed
      8467 ± 81%    +9400.8%     804445 ± 33%  numa-meminfo.node1.Shmem
    879.67 ± 73%   +87936.5%     774427 ± 42%  numa-meminfo.node2.Active
    879.67 ± 73%   +87936.5%     774427 ± 42%  numa-meminfo.node2.Active(anon)
    301902 ±  5%     +251.3%    1060528 ± 31%  numa-meminfo.node2.FilePages
    728495 ± 15%     +119.9%    1601674 ± 20%  numa-meminfo.node2.MemUsed
      6321 ± 57%   +12173.6%     775813 ± 42%  numa-meminfo.node2.Shmem
     46166 ±  4%    +1633.7%     800379 ± 60%  numa-meminfo.node3.Active
     46166 ±  4%    +1633.7%     800379 ± 60%  numa-meminfo.node3.Active(anon)
    341668 ±  3%     +226.7%    1116179 ± 42%  numa-meminfo.node3.FilePages
    756726 ±  7%     +133.5%    1766769 ± 28%  numa-meminfo.node3.MemUsed
     51779 ± 14%    +1458.8%     807147 ± 59%  numa-meminfo.node3.Shmem
     36.03 ± 18%      -36.0        0.00        perf-profile.calltrace.cycles-pp.secondary_startup_64_no_verify
     34.77 ± 16%      -34.8        0.00        perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
     34.77 ± 16%      -34.8        0.00        perf-profile.calltrace.cycles-pp.start_secondary.secondary_startup_64_no_verify
     34.76 ± 16%      -34.8        0.00        perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
     33.91 ± 16%      -33.9        0.00        perf-profile.calltrace.cycles-pp.cpuidle_enter.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
     33.74 ± 17%      -33.7        0.00        perf-profile.calltrace.cycles-pp.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry.start_secondary
     32.55 ± 21%      -32.5        0.00        perf-profile.calltrace.cycles-pp.intel_idle.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry
     31.89 ± 10%      -31.9        0.00        perf-profile.calltrace.cycles-pp.__libc_read
     31.50 ± 10%      -31.5        0.00        perf-profile.calltrace.cycles-pp.__libc_write
     24.59 ± 10%      -24.6        0.00        perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.__libc_read
     24.14 ± 10%      -24.1        0.00        perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.__libc_write
     12.22 ± 10%      -12.2        0.00        perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.__libc_read
     12.04 ± 10%      -12.0        0.00        perf-profile.calltrace.cycles-pp.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe.__libc_write
     11.97 ± 10%      -12.0        0.00        perf-profile.calltrace.cycles-pp.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe.__libc_read
     11.73 ± 10%      -11.7        0.00        perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.__libc_write
      8.45 ± 10%       -8.4        0.00        perf-profile.calltrace.cycles-pp.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe.__libc_read
      7.88 ±  9%       -7.9        0.00        perf-profile.calltrace.cycles-pp.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.__libc_write
      7.20 ± 10%       -7.2        0.00        perf-profile.calltrace.cycles-pp.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe.__libc_read
      6.64 ±  9%       -6.6        0.00        perf-profile.calltrace.cycles-pp.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.__libc_write
      5.12 ± 10%       -5.1        0.00        perf-profile.calltrace.cycles-pp.new_sync_read.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
      4.97 ± 10%       -5.0        0.00        perf-profile.calltrace.cycles-pp.new_sync_write.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
     48.91 ± 10%      -48.9        0.00        perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
     36.03 ± 18%      -36.0        0.00        perf-profile.children.cycles-pp.secondary_startup_64_no_verify
     36.03 ± 18%      -36.0        0.00        perf-profile.children.cycles-pp.cpu_startup_entry
     36.03 ± 18%      -36.0        0.00        perf-profile.children.cycles-pp.do_idle
     35.17 ± 19%      -35.2        0.00        perf-profile.children.cycles-pp.cpuidle_enter
     35.16 ± 19%      -35.2        0.00        perf-profile.children.cycles-pp.cpuidle_enter_state
     34.77 ± 16%      -34.8        0.00        perf-profile.children.cycles-pp.start_secondary
     32.55 ± 21%      -32.5        0.00        perf-profile.children.cycles-pp.intel_idle
     32.38 ± 10%      -32.4        0.00        perf-profile.children.cycles-pp.__libc_read
     31.78 ± 10%      -31.8        0.00        perf-profile.children.cycles-pp.__libc_write
     24.08 ± 10%      -24.1        0.00        perf-profile.children.cycles-pp.syscall_exit_to_user_mode
     24.06 ± 10%      -24.1        0.00        perf-profile.children.cycles-pp.do_syscall_64
      8.49 ± 10%       -8.5        0.00        perf-profile.children.cycles-pp.ksys_read
      7.92 ±  9%       -7.9        0.00        perf-profile.children.cycles-pp.ksys_write
      7.27 ± 10%       -7.3        0.00        perf-profile.children.cycles-pp.vfs_read
      6.70 ±  9%       -6.7        0.00        perf-profile.children.cycles-pp.vfs_write
      6.68 ± 10%       -6.7        0.00        perf-profile.children.cycles-pp.syscall_return_via_sysret
      6.38 ± 10%       -6.4        0.00        perf-profile.children.cycles-pp.syscall_trace_enter
      6.34 ± 10%       -6.3        0.00        perf-profile.children.cycles-pp.__entry_text_start
      6.15 ± 10%       -6.1        0.00        perf-profile.children.cycles-pp.trace_buffer_lock_reserve
      5.96 ± 10%       -6.0        0.00        perf-profile.children.cycles-pp.ftrace_syscall_enter
      5.66 ± 10%       -5.7        0.00        perf-profile.children.cycles-pp.syscall_exit_to_user_mode_prepare
      5.42 ± 11%       -5.4        0.00        perf-profile.children.cycles-pp.ring_buffer_lock_reserve
      5.29 ± 10%       -5.3        0.00        perf-profile.children.cycles-pp.ftrace_syscall_exit
      5.16 ± 11%       -5.2        0.00        perf-profile.children.cycles-pp.new_sync_read
      5.03 ±  9%       -5.0        0.00        perf-profile.children.cycles-pp.new_sync_write
      4.69 ± 10%       -4.7        0.00        perf-profile.children.cycles-pp.pipe_read
     32.55 ± 21%      -32.5        0.00        perf-profile.self.cycles-pp.intel_idle
     18.11 ± 10%      -18.1        0.00        perf-profile.self.cycles-pp.syscall_exit_to_user_mode
      6.65 ± 10%       -6.7        0.00        perf-profile.self.cycles-pp.syscall_return_via_sysret
      5.45 ± 10%       -5.5        0.00        perf-profile.self.cycles-pp.__entry_text_start
      1.86             +9.5%       2.04        perf-stat.i.MPKI
 7.787e+09            +13.4%  8.828e+09        perf-stat.i.branch-instructions
      1.27             -0.1        1.14        perf-stat.i.branch-miss-rate%
      0.34 ±  9%       +5.1        5.46 ± 13%  perf-stat.i.cache-miss-rate%
    245601 ±  9%    +1822.1%    4720745 ± 13%  perf-stat.i.cache-misses
  70035252            +24.3%   87072170        perf-stat.i.cache-references
      1724            +42.7%       2461        perf-stat.i.context-switches
 5.293e+10            +12.2%  5.936e+10        perf-stat.i.cpu-cycles
    251199 ± 10%      -88.3%      29314 ± 26%  perf-stat.i.cycles-between-cache-misses
      0.21             -0.0        0.19        perf-stat.i.dTLB-load-miss-rate%
 1.139e+10             +9.2%  1.244e+10        perf-stat.i.dTLB-loads
      0.26             -0.0        0.23        perf-stat.i.dTLB-store-miss-rate%
 8.301e+09             +7.0%  8.884e+09        perf-stat.i.dTLB-stores
     86.29             -5.0       81.30 ±  2%  perf-stat.i.iTLB-load-miss-rate%
   3599988 ± 13%      +44.8%    5213290 ± 14%  perf-stat.i.iTLB-loads
 3.804e+10            +13.4%  4.312e+10        perf-stat.i.instructions
      1686 ±  2%      +13.5%       1913 ±  4%  perf-stat.i.instructions-per-iTLB-miss
      1.02 ±  2%      -94.3%       0.06 ± 44%  perf-stat.i.major-faults
      0.37            +12.2%       0.41        perf-stat.i.metric.GHz
      1.09 ± 10%      -24.1%       0.83 ± 14%  perf-stat.i.metric.K/sec
    191.47             +9.8%     210.19        perf-stat.i.metric.M/sec
      3770             -3.1%       3653        perf-stat.i.minor-faults
     94.62             +3.2       97.82        perf-stat.i.node-load-miss-rate%
    155456 ± 10%    +1936.0%    3165118 ± 16%  perf-stat.i.node-load-misses
     13817 ±  8%     +243.8%      47505 ± 43%  perf-stat.i.node-loads
     46.19 ± 12%      +39.2       85.35 ±  2%  perf-stat.i.node-store-miss-rate%
     32493 ± 13%    +4121.7%    1371781 ± 11%  perf-stat.i.node-store-misses
     40563 ± 14%     +220.5%     130023 ±  8%  perf-stat.i.node-stores
      3771             -3.1%       3653        perf-stat.i.page-faults
      1.84             +9.7%       2.02        perf-stat.overall.MPKI
      1.26             -0.1        1.13        perf-stat.overall.branch-miss-rate%
      0.35 ±  9%       +5.1        5.42 ± 13%  perf-stat.overall.cache-miss-rate%
    216967 ±  8%      -94.1%      12875 ± 16%  perf-stat.overall.cycles-between-cache-misses
      0.21             -0.0        0.19        perf-stat.overall.dTLB-load-miss-rate%
      0.26             -0.0        0.23        perf-stat.overall.dTLB-store-miss-rate%
     86.34             -5.0       81.34 ±  2%  perf-stat.overall.iTLB-load-miss-rate%
      1677 ±  2%      +13.5%       1904 ±  4%  perf-stat.overall.instructions-per-iTLB-miss
     91.77             +6.7       98.44        perf-stat.overall.node-load-miss-rate%
     44.57 ± 11%      +46.7       91.25        perf-stat.overall.node-store-miss-rate%
   1106216            +22.1%    1350408        perf-stat.overall.path-length
  7.76e+09            +13.4%  8.798e+09        perf-stat.ps.branch-instructions
    245116 ±  9%    +1819.1%    4704109 ± 13%  perf-stat.ps.cache-misses
  69795445            +24.3%   86773526        perf-stat.ps.cache-references
      1719            +42.7%       2452        perf-stat.ps.context-switches
 5.275e+10            +12.2%  5.916e+10        perf-stat.ps.cpu-cycles
 1.135e+10             +9.2%   1.24e+10        perf-stat.ps.dTLB-loads
 8.273e+09             +7.0%  8.853e+09        perf-stat.ps.dTLB-stores
   3587022 ± 13%      +44.8%    5195656 ± 14%  perf-stat.ps.iTLB-loads
 3.792e+10            +13.3%  4.298e+10        perf-stat.ps.instructions
      1.01 ±  2%      -94.3%       0.06 ± 44%  perf-stat.ps.major-faults
      3757             -3.1%       3640        perf-stat.ps.minor-faults
    155084 ± 10%    +1933.7%    3153975 ± 16%  perf-stat.ps.node-load-misses
     13831 ±  8%     +242.2%      47330 ± 43%  perf-stat.ps.node-loads
     32417 ± 13%    +4116.6%    1366944 ± 11%  perf-stat.ps.node-store-misses
     40467 ± 14%     +220.2%     129580 ±  8%  perf-stat.ps.node-stores
      3758             -3.1%       3640        perf-stat.ps.page-faults
 1.145e+13            +13.4%  1.298e+13        perf-stat.total.instructions
      0.02 ± 23%     -100.0%       0.00        perf-sched.sch_delay.avg.ms.__traceiter_sched_switch.__traceiter_sched_switch.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe
      0.04 ± 18%     -100.0%       0.00        perf-sched.sch_delay.avg.ms.__traceiter_sched_switch.__traceiter_sched_switch.devkmsg_read.vfs_read.ksys_read
      0.01 ± 21%     -100.0%       0.00        perf-sched.sch_delay.avg.ms.__traceiter_sched_switch.__traceiter_sched_switch.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep
      0.03 ± 32%     -100.0%       0.00        perf-sched.sch_delay.avg.ms.__traceiter_sched_switch.__traceiter_sched_switch.do_syslog.part.0
      0.01 ±  7%     -100.0%       0.00        perf-sched.sch_delay.avg.ms.__traceiter_sched_switch.__traceiter_sched_switch.do_task_dead.do_exit.do_group_exit
      0.01 ± 19%     -100.0%       0.00        perf-sched.sch_delay.avg.ms.__traceiter_sched_switch.__traceiter_sched_switch.do_wait.kernel_wait4.__do_sys_wait4
      0.03 ± 37%     -100.0%       0.00        perf-sched.sch_delay.avg.ms.__traceiter_sched_switch.__traceiter_sched_switch.futex_wait_queue_me.futex_wait.do_futex
      0.00 ± 14%     -100.0%       0.00        perf-sched.sch_delay.avg.ms.__traceiter_sched_switch.__traceiter_sched_switch.pipe_read.new_sync_read.vfs_read
      0.05 ± 34%     -100.0%       0.00        perf-sched.sch_delay.avg.ms.__traceiter_sched_switch.__traceiter_sched_switch.rcu_gp_kthread.kthread.ret_from_fork
      0.02 ± 24%     -100.0%       0.00        perf-sched.sch_delay.avg.ms.__traceiter_sched_switch.__traceiter_sched_switch.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait
      0.01 ± 20%     -100.0%       0.00        perf-sched.sch_delay.avg.ms.__traceiter_sched_switch.__traceiter_sched_switch.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop
      0.02 ± 31%     -100.0%       0.00        perf-sched.sch_delay.avg.ms.__traceiter_sched_switch.__traceiter_sched_switch.schedule_timeout.io_schedule_timeout.wait_for_completion_io
      0.02 ± 61%     -100.0%       0.00        perf-sched.sch_delay.avg.ms.__traceiter_sched_switch.__traceiter_sched_switch.schedule_timeout.kcompactd.kthread
      0.04 ± 43%     -100.0%       0.00        perf-sched.sch_delay.avg.ms.__traceiter_sched_switch.__traceiter_sched_switch.schedule_timeout.rcu_gp_kthread.kthread
      0.02 ± 40%     -100.0%       0.00        perf-sched.sch_delay.avg.ms.__traceiter_sched_switch.__traceiter_sched_switch.schedule_timeout.wait_for_completion.__flush_work
      0.01 ±  4%     -100.0%       0.00        perf-sched.sch_delay.avg.ms.__traceiter_sched_switch.__traceiter_sched_switch.smpboot_thread_fn.kthread.ret_from_fork
      0.00 ± 14%     -100.0%       0.00        perf-sched.sch_delay.avg.ms.__traceiter_sched_switch.__traceiter_sched_switch.wait_for_partner.fifo_open.do_dentry_open
      1.91 ±218%     -100.0%       0.00        perf-sched.sch_delay.avg.ms.__traceiter_sched_switch.__traceiter_sched_switch.worker_thread.kthread.ret_from_fork
      0.03 ± 42%     -100.0%       0.00        perf-sched.sch_delay.max.ms.__traceiter_sched_switch.__traceiter_sched_switch.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe
      0.06 ± 19%     -100.0%       0.00        perf-sched.sch_delay.max.ms.__traceiter_sched_switch.__traceiter_sched_switch.devkmsg_read.vfs_read.ksys_read
      0.03 ± 49%     -100.0%       0.00        perf-sched.sch_delay.max.ms.__traceiter_sched_switch.__traceiter_sched_switch.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep
      0.05 ± 23%     -100.0%       0.00        perf-sched.sch_delay.max.ms.__traceiter_sched_switch.__traceiter_sched_switch.do_syslog.part.0
      0.05 ±  3%     -100.0%       0.00        perf-sched.sch_delay.max.ms.__traceiter_sched_switch.__traceiter_sched_switch.do_task_dead.do_exit.do_group_exit
      0.06 ±  9%     -100.0%       0.00        perf-sched.sch_delay.max.ms.__traceiter_sched_switch.__traceiter_sched_switch.do_wait.kernel_wait4.__do_sys_wait4
      0.03 ± 47%     -100.0%       0.00        perf-sched.sch_delay.max.ms.__traceiter_sched_switch.__traceiter_sched_switch.exit_to_user_mode_prepare.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe
      0.05 ± 33%     -100.0%       0.00        perf-sched.sch_delay.max.ms.__traceiter_sched_switch.__traceiter_sched_switch.futex_wait_queue_me.futex_wait.do_futex
      0.06 ±  8%     -100.0%       0.00        perf-sched.sch_delay.max.ms.__traceiter_sched_switch.__traceiter_sched_switch.pipe_read.new_sync_read.vfs_read
      0.04 ± 12%     -100.0%       0.00        perf-sched.sch_delay.max.ms.__traceiter_sched_switch.__traceiter_sched_switch.preempt_schedule_common.__cond_resched.stop_one_cpu
      0.07 ± 30%     -100.0%       0.00        perf-sched.sch_delay.max.ms.__traceiter_sched_switch.__traceiter_sched_switch.rcu_gp_kthread.kthread.ret_from_fork
      0.05 ± 20%     -100.0%       0.00        perf-sched.sch_delay.max.ms.__traceiter_sched_switch.__traceiter_sched_switch.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait
      0.04 ± 29%     -100.0%       0.00        perf-sched.sch_delay.max.ms.__traceiter_sched_switch.__traceiter_sched_switch.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop
      0.02 ± 31%     -100.0%       0.00        perf-sched.sch_delay.max.ms.__traceiter_sched_switch.__traceiter_sched_switch.schedule_timeout.io_schedule_timeout.wait_for_completion_io
      0.55 ±208%     -100.0%       0.00        perf-sched.sch_delay.max.ms.__traceiter_sched_switch.__traceiter_sched_switch.schedule_timeout.kcompactd.kthread
      6.73 ±140%     -100.0%       0.00        perf-sched.sch_delay.max.ms.__traceiter_sched_switch.__traceiter_sched_switch.schedule_timeout.rcu_gp_kthread.kthread
      0.02 ± 51%     -100.0%       0.00        perf-sched.sch_delay.max.ms.__traceiter_sched_switch.__traceiter_sched_switch.schedule_timeout.wait_for_completion.__flush_work
      6.77 ±  9%     -100.0%       0.00        perf-sched.sch_delay.max.ms.__traceiter_sched_switch.__traceiter_sched_switch.smpboot_thread_fn.kthread.ret_from_fork
      0.03 ± 28%     -100.0%       0.00        perf-sched.sch_delay.max.ms.__traceiter_sched_switch.__traceiter_sched_switch.wait_for_partner.fifo_open.do_dentry_open
    298.90 ±213%     -100.0%       0.00        perf-sched.sch_delay.max.ms.__traceiter_sched_switch.__traceiter_sched_switch.worker_thread.kthread.ret_from_fork
      0.14 ±206%     -100.0%       0.00        perf-sched.total_sch_delay.average.ms
    300.24 ±212%     -100.0%       0.00        perf-sched.total_sch_delay.max.ms
    215.33           -100.0%       0.00        perf-sched.total_wait_and_delay.average.ms
      9201           -100.0%       0.00        perf-sched.total_wait_and_delay.count.ms
      8311 ±  8%     -100.0%       0.00        perf-sched.total_wait_and_delay.max.ms
    215.19           -100.0%       0.00        perf-sched.total_wait_time.average.ms
      8311 ±  8%     -100.0%       0.00        perf-sched.total_wait_time.max.ms
    899.66           -100.0%       0.00        perf-sched.wait_and_delay.avg.ms.__traceiter_sched_switch.__traceiter_sched_switch.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe
    259.95 ± 13%     -100.0%       0.00        perf-sched.wait_and_delay.avg.ms.__traceiter_sched_switch.__traceiter_sched_switch.devkmsg_read.vfs_read.ksys_read
    781.14 ±  5%     -100.0%       0.00        perf-sched.wait_and_delay.avg.ms.__traceiter_sched_switch.__traceiter_sched_switch.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep
    259.95 ± 13%     -100.0%       0.00        perf-sched.wait_and_delay.avg.ms.__traceiter_sched_switch.__traceiter_sched_switch.do_syslog.part.0
    211.48 ±  5%     -100.0%       0.00        perf-sched.wait_and_delay.avg.ms.__traceiter_sched_switch.__traceiter_sched_switch.do_task_dead.do_exit.do_group_exit
      0.88           -100.0%       0.00        perf-sched.wait_and_delay.avg.ms.__traceiter_sched_switch.__traceiter_sched_switch.do_wait.kernel_wait4.__do_sys_wait4
      0.05 ± 26%     -100.0%       0.00        perf-sched.wait_and_delay.avg.ms.__traceiter_sched_switch.__traceiter_sched_switch.exit_to_user_mode_prepare.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe
     53.04           -100.0%       0.00        perf-sched.wait_and_delay.avg.ms.__traceiter_sched_switch.__traceiter_sched_switch.pipe_read.new_sync_read.vfs_read
      0.05 ± 16%     -100.0%       0.00        perf-sched.wait_and_delay.avg.ms.__traceiter_sched_switch.__traceiter_sched_switch.preempt_schedule_common.__cond_resched.mutex_lock
      5.67 ±  3%     -100.0%       0.00        perf-sched.wait_and_delay.avg.ms.__traceiter_sched_switch.__traceiter_sched_switch.preempt_schedule_common.__cond_resched.stop_one_cpu
      2.82 ±  4%     -100.0%       0.00        perf-sched.wait_and_delay.avg.ms.__traceiter_sched_switch.__traceiter_sched_switch.rcu_gp_kthread.kthread.ret_from_fork
    580.11 ±  6%     -100.0%       0.00        perf-sched.wait_and_delay.avg.ms.__traceiter_sched_switch.__traceiter_sched_switch.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop
    677.37 ± 30%     -100.0%       0.00        perf-sched.wait_and_delay.avg.ms.__traceiter_sched_switch.__traceiter_sched_switch.schedule_timeout.io_schedule_timeout.wait_for_completion_io
    478.71           -100.0%       0.00        perf-sched.wait_and_delay.avg.ms.__traceiter_sched_switch.__traceiter_sched_switch.schedule_timeout.kcompactd.kthread
     12.51 ±  3%     -100.0%       0.00        perf-sched.wait_and_delay.avg.ms.__traceiter_sched_switch.__traceiter_sched_switch.schedule_timeout.rcu_gp_kthread.kthread
    687.31 ±  2%     -100.0%       0.00        perf-sched.wait_and_delay.avg.ms.__traceiter_sched_switch.__traceiter_sched_switch.smpboot_thread_fn.kthread.ret_from_fork
      0.00 ± 14%     -100.0%       0.00        perf-sched.wait_and_delay.avg.ms.__traceiter_sched_switch.__traceiter_sched_switch.wait_for_partner.fifo_open.do_dentry_open
    438.87 ± 11%     -100.0%       0.00        perf-sched.wait_and_delay.avg.ms.__traceiter_sched_switch.__traceiter_sched_switch.worker_thread.kthread.ret_from_fork
     10.00           -100.0%       0.00        perf-sched.wait_and_delay.count.__traceiter_sched_switch.__traceiter_sched_switch.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe
      6.33 ± 11%     -100.0%       0.00        perf-sched.wait_and_delay.count.__traceiter_sched_switch.__traceiter_sched_switch.devkmsg_read.vfs_read.ksys_read
     23.17 ±  5%     -100.0%       0.00        perf-sched.wait_and_delay.count.__traceiter_sched_switch.__traceiter_sched_switch.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep
      6.33 ± 11%     -100.0%       0.00        perf-sched.wait_and_delay.count.__traceiter_sched_switch.__traceiter_sched_switch.do_syslog.part.0
    313.50           -100.0%       0.00        perf-sched.wait_and_delay.count.__traceiter_sched_switch.__traceiter_sched_switch.do_task_dead.do_exit.do_group_exit
    310.83           -100.0%       0.00        perf-sched.wait_and_delay.count.__traceiter_sched_switch.__traceiter_sched_switch.do_wait.kernel_wait4.__do_sys_wait4
     69.00 ± 43%     -100.0%       0.00        perf-sched.wait_and_delay.count.__traceiter_sched_switch.__traceiter_sched_switch.exit_to_user_mode_prepare.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe
      2916           -100.0%       0.00        perf-sched.wait_and_delay.count.__traceiter_sched_switch.__traceiter_sched_switch.pipe_read.new_sync_read.vfs_read
     82.83 ± 13%     -100.0%       0.00        perf-sched.wait_and_delay.count.__traceiter_sched_switch.__traceiter_sched_switch.preempt_schedule_common.__cond_resched.mutex_lock
      1655 ±  4%     -100.0%       0.00        perf-sched.wait_and_delay.count.__traceiter_sched_switch.__traceiter_sched_switch.preempt_schedule_common.__cond_resched.stop_one_cpu
     83.50 ±  7%     -100.0%       0.00        perf-sched.wait_and_delay.count.__traceiter_sched_switch.__traceiter_sched_switch.rcu_gp_kthread.kthread.ret_from_fork
     59.50 ±  6%     -100.0%       0.00        perf-sched.wait_and_delay.count.__traceiter_sched_switch.__traceiter_sched_switch.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop
      4.83 ±  7%     -100.0%       0.00        perf-sched.wait_and_delay.count.__traceiter_sched_switch.__traceiter_sched_switch.schedule_timeout.io_schedule_timeout.wait_for_completion_io
     79.67           -100.0%       0.00        perf-sched.wait_and_delay.count.__traceiter_sched_switch.__traceiter_sched_switch.schedule_timeout.kcompactd.kthread
    782.17 ±  3%     -100.0%       0.00        perf-sched.wait_and_delay.count.__traceiter_sched_switch.__traceiter_sched_switch.schedule_timeout.rcu_gp_kthread.kthread
      1947 ±  3%     -100.0%       0.00        perf-sched.wait_and_delay.count.__traceiter_sched_switch.__traceiter_sched_switch.smpboot_thread_fn.kthread.ret_from_fork
     73.00           -100.0%       0.00
perf-sched.wait_and_delay.count.__traceiter_sched_switch.__traceiter_sched_switch.wait_for_partner.fifo_open.do_dentry_open 658.50 ± 2% -100.0% 0.00 perf-sched.wait_and_delay.count.__traceiter_sched_switch.__traceiter_sched_switch.worker_thread.kthread.ret_from_fork 999.71 -100.0% 0.00 perf-sched.wait_and_delay.max.ms.__traceiter_sched_switch.__traceiter_sched_switch.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe 1537 ± 5% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.__traceiter_sched_switch.__traceiter_sched_switch.devkmsg_read.vfs_read.ksys_read 1000 -100.0% 0.00 perf-sched.wait_and_delay.max.ms.__traceiter_sched_switch.__traceiter_sched_switch.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep 1537 ± 5% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.__traceiter_sched_switch.__traceiter_sched_switch.do_syslog.part.0 1966 ± 59% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.__traceiter_sched_switch.__traceiter_sched_switch.do_task_dead.do_exit.do_group_exit 20.30 -100.0% 0.00 perf-sched.wait_and_delay.max.ms.__traceiter_sched_switch.__traceiter_sched_switch.do_wait.kernel_wait4.__do_sys_wait4 0.48 ±164% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.__traceiter_sched_switch.__traceiter_sched_switch.exit_to_user_mode_prepare.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe 1540 ± 5% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.__traceiter_sched_switch.__traceiter_sched_switch.pipe_read.new_sync_read.vfs_read 0.12 ± 46% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.__traceiter_sched_switch.__traceiter_sched_switch.preempt_schedule_common.__cond_resched.mutex_lock 1071 ± 14% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.__traceiter_sched_switch.__traceiter_sched_switch.preempt_schedule_common.__cond_resched.stop_one_cpu 4.54 ± 10% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.__traceiter_sched_switch.__traceiter_sched_switch.rcu_gp_kthread.kthread.ret_from_fork 7489 -100.0% 0.00 
perf-sched.wait_and_delay.max.ms.__traceiter_sched_switch.__traceiter_sched_switch.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop 1151 ± 23% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.__traceiter_sched_switch.__traceiter_sched_switch.schedule_timeout.io_schedule_timeout.wait_for_completion_io 505.63 -100.0% 0.00 perf-sched.wait_and_delay.max.ms.__traceiter_sched_switch.__traceiter_sched_switch.schedule_timeout.kcompactd.kthread 376.36 ± 10% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.__traceiter_sched_switch.__traceiter_sched_switch.schedule_timeout.rcu_gp_kthread.kthread 1691 ± 88% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.__traceiter_sched_switch.__traceiter_sched_switch.smpboot_thread_fn.kthread.ret_from_fork 0.03 ± 28% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.__traceiter_sched_switch.__traceiter_sched_switch.wait_for_partner.fifo_open.do_dentry_open 7711 ± 21% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.__traceiter_sched_switch.__traceiter_sched_switch.worker_thread.kthread.ret_from_fork 899.64 -100.0% 0.00 perf-sched.wait_time.avg.ms.__traceiter_sched_switch.__traceiter_sched_switch.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe 259.91 ± 13% -100.0% 0.00 perf-sched.wait_time.avg.ms.__traceiter_sched_switch.__traceiter_sched_switch.devkmsg_read.vfs_read.ksys_read 781.13 ± 5% -100.0% 0.00 perf-sched.wait_time.avg.ms.__traceiter_sched_switch.__traceiter_sched_switch.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep 259.91 ± 13% -100.0% 0.00 perf-sched.wait_time.avg.ms.__traceiter_sched_switch.__traceiter_sched_switch.do_syslog.part.0 211.47 ± 5% -100.0% 0.00 perf-sched.wait_time.avg.ms.__traceiter_sched_switch.__traceiter_sched_switch.do_task_dead.do_exit.do_group_exit 0.86 -100.0% 0.00 perf-sched.wait_time.avg.ms.__traceiter_sched_switch.__traceiter_sched_switch.do_wait.kernel_wait4.__do_sys_wait4 0.06 ± 49% -100.0% 0.00 
perf-sched.wait_time.avg.ms.__traceiter_sched_switch.__traceiter_sched_switch.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_apic_timer_interrupt 0.04 ± 11% -100.0% 0.00 perf-sched.wait_time.avg.ms.__traceiter_sched_switch.__traceiter_sched_switch.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_irq_work 0.06 ± 32% -100.0% 0.00 perf-sched.wait_time.avg.ms.__traceiter_sched_switch.__traceiter_sched_switch.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_reschedule_ipi 0.05 ± 26% -100.0% 0.00 perf-sched.wait_time.avg.ms.__traceiter_sched_switch.__traceiter_sched_switch.exit_to_user_mode_prepare.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe 208.50 ±196% -100.0% 0.00 perf-sched.wait_time.avg.ms.__traceiter_sched_switch.__traceiter_sched_switch.futex_wait_queue_me.futex_wait.do_futex 53.03 -100.0% 0.00 perf-sched.wait_time.avg.ms.__traceiter_sched_switch.__traceiter_sched_switch.pipe_read.new_sync_read.vfs_read 0.05 ± 23% -100.0% 0.00 perf-sched.wait_time.avg.ms.__traceiter_sched_switch.__traceiter_sched_switch.preempt_schedule_common.__cond_resched.copy_page_from_iter 0.06 ± 47% -100.0% 0.00 perf-sched.wait_time.avg.ms.__traceiter_sched_switch.__traceiter_sched_switch.preempt_schedule_common.__cond_resched.copy_page_to_iter 0.05 ± 16% -100.0% 0.00 perf-sched.wait_time.avg.ms.__traceiter_sched_switch.__traceiter_sched_switch.preempt_schedule_common.__cond_resched.mutex_lock 5.67 ± 3% -100.0% 0.00 perf-sched.wait_time.avg.ms.__traceiter_sched_switch.__traceiter_sched_switch.preempt_schedule_common.__cond_resched.stop_one_cpu 2.77 ± 4% -100.0% 0.00 perf-sched.wait_time.avg.ms.__traceiter_sched_switch.__traceiter_sched_switch.rcu_gp_kthread.kthread.ret_from_fork 247.85 ± 48% -100.0% 0.00 perf-sched.wait_time.avg.ms.__traceiter_sched_switch.__traceiter_sched_switch.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait 580.11 ± 6% -100.0% 0.00 
perf-sched.wait_time.avg.ms.__traceiter_sched_switch.__traceiter_sched_switch.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop 677.35 ± 30% -100.0% 0.00 perf-sched.wait_time.avg.ms.__traceiter_sched_switch.__traceiter_sched_switch.schedule_timeout.io_schedule_timeout.wait_for_completion_io 478.69 -100.0% 0.00 perf-sched.wait_time.avg.ms.__traceiter_sched_switch.__traceiter_sched_switch.schedule_timeout.kcompactd.kthread 0.01 ±110% -100.0% 0.00 perf-sched.wait_time.avg.ms.__traceiter_sched_switch.__traceiter_sched_switch.schedule_timeout.khugepaged.kthread 12.47 ± 3% -100.0% 0.00 perf-sched.wait_time.avg.ms.__traceiter_sched_switch.__traceiter_sched_switch.schedule_timeout.rcu_gp_kthread.kthread 687.30 ± 2% -100.0% 0.00 perf-sched.wait_time.avg.ms.__traceiter_sched_switch.__traceiter_sched_switch.smpboot_thread_fn.kthread.ret_from_fork 436.96 ± 11% -100.0% 0.00 perf-sched.wait_time.avg.ms.__traceiter_sched_switch.__traceiter_sched_switch.worker_thread.kthread.ret_from_fork 999.69 -100.0% 0.00 perf-sched.wait_time.max.ms.__traceiter_sched_switch.__traceiter_sched_switch.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe 1537 ± 5% -100.0% 0.00 perf-sched.wait_time.max.ms.__traceiter_sched_switch.__traceiter_sched_switch.devkmsg_read.vfs_read.ksys_read 1000 -100.0% 0.00 perf-sched.wait_time.max.ms.__traceiter_sched_switch.__traceiter_sched_switch.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep 1537 ± 5% -100.0% 0.00 perf-sched.wait_time.max.ms.__traceiter_sched_switch.__traceiter_sched_switch.do_syslog.part.0 1966 ± 59% -100.0% 0.00 perf-sched.wait_time.max.ms.__traceiter_sched_switch.__traceiter_sched_switch.do_task_dead.do_exit.do_group_exit 20.30 -100.0% 0.00 perf-sched.wait_time.max.ms.__traceiter_sched_switch.__traceiter_sched_switch.do_wait.kernel_wait4.__do_sys_wait4 0.46 ±162% -100.0% 0.00 
perf-sched.wait_time.max.ms.__traceiter_sched_switch.__traceiter_sched_switch.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_apic_timer_interrupt 0.05 ± 13% -100.0% 0.00 perf-sched.wait_time.max.ms.__traceiter_sched_switch.__traceiter_sched_switch.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_irq_work 0.13 ±115% -100.0% 0.00 perf-sched.wait_time.max.ms.__traceiter_sched_switch.__traceiter_sched_switch.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_reschedule_ipi 0.48 ±164% -100.0% 0.00 perf-sched.wait_time.max.ms.__traceiter_sched_switch.__traceiter_sched_switch.exit_to_user_mode_prepare.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe 1152 ±213% -100.0% 0.00 perf-sched.wait_time.max.ms.__traceiter_sched_switch.__traceiter_sched_switch.futex_wait_queue_me.futex_wait.do_futex 1540 ± 5% -100.0% 0.00 perf-sched.wait_time.max.ms.__traceiter_sched_switch.__traceiter_sched_switch.pipe_read.new_sync_read.vfs_read 0.08 ± 71% -100.0% 0.00 perf-sched.wait_time.max.ms.__traceiter_sched_switch.__traceiter_sched_switch.preempt_schedule_common.__cond_resched.copy_page_from_iter 0.09 ± 73% -100.0% 0.00 perf-sched.wait_time.max.ms.__traceiter_sched_switch.__traceiter_sched_switch.preempt_schedule_common.__cond_resched.copy_page_to_iter 0.12 ± 46% -100.0% 0.00 perf-sched.wait_time.max.ms.__traceiter_sched_switch.__traceiter_sched_switch.preempt_schedule_common.__cond_resched.mutex_lock 1070 ± 14% -100.0% 0.00 perf-sched.wait_time.max.ms.__traceiter_sched_switch.__traceiter_sched_switch.preempt_schedule_common.__cond_resched.stop_one_cpu 4.50 ± 10% -100.0% 0.00 perf-sched.wait_time.max.ms.__traceiter_sched_switch.__traceiter_sched_switch.rcu_gp_kthread.kthread.ret_from_fork 2024 ± 94% -100.0% 0.00 perf-sched.wait_time.max.ms.__traceiter_sched_switch.__traceiter_sched_switch.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait 7489 -100.0% 0.00 
perf-sched.wait_time.max.ms.__traceiter_sched_switch.__traceiter_sched_switch.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop 1151 ± 23% -100.0% 0.00 perf-sched.wait_time.max.ms.__traceiter_sched_switch.__traceiter_sched_switch.schedule_timeout.io_schedule_timeout.wait_for_completion_io 505.62 -100.0% 0.00 perf-sched.wait_time.max.ms.__traceiter_sched_switch.__traceiter_sched_switch.schedule_timeout.kcompactd.kthread 0.01 ±110% -100.0% 0.00 perf-sched.wait_time.max.ms.__traceiter_sched_switch.__traceiter_sched_switch.schedule_timeout.khugepaged.kthread 376.31 ± 10% -100.0% 0.00 perf-sched.wait_time.max.ms.__traceiter_sched_switch.__traceiter_sched_switch.schedule_timeout.rcu_gp_kthread.kthread 1691 ± 88% -100.0% 0.00 perf-sched.wait_time.max.ms.__traceiter_sched_switch.__traceiter_sched_switch.smpboot_thread_fn.kthread.ret_from_fork 7711 ± 21% -100.0% 0.00 perf-sched.wait_time.max.ms.__traceiter_sched_switch.__traceiter_sched_switch.worker_thread.kthread.ret_from_fork 19423 ± 16% +181.4% 54663 ± 9% softirqs.CPU0.RCU 17368 ± 14% +203.7% 52743 ± 7% softirqs.CPU1.RCU 17971 ± 17% +220.1% 57519 ± 4% softirqs.CPU10.RCU 14509 ± 18% +188.4% 41847 ± 18% softirqs.CPU100.RCU 16450 ± 21% +192.8% 48170 ± 9% softirqs.CPU101.RCU 16134 ± 22% +191.2% 46984 ± 12% softirqs.CPU102.RCU 15942 ± 23% +210.5% 49505 ± 15% softirqs.CPU103.RCU 14345 ± 18% +222.9% 46315 ± 13% softirqs.CPU104.RCU 15933 ± 22% +194.7% 46949 ± 6% softirqs.CPU105.RCU 16305 ± 22% +205.6% 49821 ± 12% softirqs.CPU106.RCU 15880 ± 24% +195.4% 46907 ± 6% softirqs.CPU107.RCU 10021 ± 14% +802.9% 90484 ± 21% softirqs.CPU108.RCU 9881 ± 11% +346.2% 44092 ± 30% softirqs.CPU109.RCU 18504 ± 11% +229.5% 60977 ± 4% softirqs.CPU11.RCU 8863 ± 8% +273.6% 33113 ± 23% softirqs.CPU110.RCU 9235 ± 7% +231.7% 30633 ± 18% softirqs.CPU111.RCU 12750 ± 17% +283.6% 48906 ± 25% softirqs.CPU112.RCU 12897 ± 17% +240.0% 43852 ± 11% softirqs.CPU113.RCU 8779 ± 5% +231.7% 29116 ± 13% softirqs.CPU114.RCU 12658 ± 26% +248.9% 44164 ± 22% 
softirqs.CPU115.RCU 11441 ± 11% +247.2% 39726 ± 10% softirqs.CPU116.RCU 13127 ± 15% +233.2% 43736 ± 12% softirqs.CPU117.RCU 15022 ± 18% +222.7% 48483 ± 6% softirqs.CPU118.RCU 16019 ± 21% +222.8% 51715 ± 9% softirqs.CPU119.RCU 18445 ± 13% +229.4% 60762 ± 5% softirqs.CPU12.RCU 16411 ± 22% +272.3% 61102 ± 22% softirqs.CPU120.RCU 15637 ± 21% +249.0% 54573 ± 15% softirqs.CPU121.RCU 13994 ± 20% +245.8% 48393 ± 13% softirqs.CPU122.RCU 16481 ± 17% +240.0% 56032 ± 8% softirqs.CPU123.RCU 16313 ± 22% +244.0% 56119 ± 11% softirqs.CPU124.RCU 15193 ± 19% +238.4% 51414 ± 15% softirqs.CPU125.RCU 9449 ± 22% +684.6% 74140 ± 29% softirqs.CPU126.RCU 8026 ± 7% +341.6% 35447 ± 32% softirqs.CPU127.RCU 9925 ± 12% +316.8% 41374 ± 26% softirqs.CPU128.RCU 10407 ± 10% +284.2% 39989 ± 17% softirqs.CPU129.RCU 18780 ± 8% +211.9% 58578 ± 8% softirqs.CPU13.RCU 12368 ± 14% +234.7% 41402 ± 8% softirqs.CPU130.RCU 13023 ± 13% +212.5% 40692 ± 12% softirqs.CPU131.RCU 12325 ± 16% +210.6% 38280 ± 13% softirqs.CPU132.RCU 12901 ± 17% +328.4% 55266 ± 25% softirqs.CPU133.RCU 11583 ± 18% +274.1% 43338 ± 14% softirqs.CPU134.RCU 12702 ± 14% +269.0% 46874 ± 13% softirqs.CPU135.RCU 14088 ± 18% +234.1% 47075 ± 9% softirqs.CPU136.RCU 15252 ± 19% +254.4% 54060 ± 17% softirqs.CPU137.RCU 14751 ± 21% +247.2% 51225 ± 13% softirqs.CPU138.RCU 14324 ± 20% +241.1% 48856 ± 9% softirqs.CPU139.RCU 16718 ± 24% +256.5% 59604 ± 4% softirqs.CPU14.RCU 13748 ± 16% +249.7% 48071 ± 9% softirqs.CPU140.RCU 13603 ± 22% +276.9% 51270 ± 7% softirqs.CPU141.RCU 15426 ± 19% +231.4% 51124 ± 8% softirqs.CPU142.RCU 14208 ± 17% +246.0% 49156 ± 9% softirqs.CPU143.RCU 18660 ± 10% +208.6% 57578 ± 7% softirqs.CPU15.RCU 7546 ± 10% +338.4% 33085 ± 56% softirqs.CPU16.RCU 8489 ± 7% +304.3% 34320 ± 55% softirqs.CPU17.RCU 15143 ± 21% +1246.1% 203848 ± 12% softirqs.CPU18.RCU 40325 ± 3% -21.5% 31673 ± 7% softirqs.CPU18.SCHED 12313 ± 12% +435.6% 65947 ± 21% softirqs.CPU19.RCU 17395 ± 9% +207.3% 53452 ± 9% softirqs.CPU2.RCU 10912 ± 12% +233.8% 36423 ± 9% 
softirqs.CPU20.RCU 11902 ± 9% +246.3% 41217 ± 15% softirqs.CPU21.RCU 14465 ± 18% +275.2% 54269 ± 46% softirqs.CPU22.RCU 14245 ± 18% +261.1% 51444 ± 24% softirqs.CPU23.RCU 13959 ± 21% +162.6% 36661 ± 14% softirqs.CPU24.RCU 14336 ± 18% +177.9% 39840 ± 12% softirqs.CPU25.RCU 12796 ± 15% +273.2% 47753 ± 27% softirqs.CPU26.RCU 14604 ± 20% +248.6% 50910 ± 34% softirqs.CPU27.RCU 15773 ± 19% +181.0% 44330 ± 11% softirqs.CPU28.RCU 16353 ± 20% +225.3% 53200 ± 15% softirqs.CPU29.RCU 16735 ± 17% +243.5% 57482 ± 4% softirqs.CPU3.RCU 16013 ± 24% +190.4% 46509 ± 6% softirqs.CPU30.RCU 16103 ± 21% +242.3% 55121 ± 36% softirqs.CPU31.RCU 9610 ± 11% +349.1% 43162 ± 50% softirqs.CPU32.RCU 11333 ± 12% +233.9% 37847 ± 15% softirqs.CPU33.RCU 11324 ± 14% +247.2% 39320 ± 9% softirqs.CPU34.RCU 11168 ± 15% +210.2% 34641 ± 20% softirqs.CPU35.RCU 14767 ± 20% +1293.9% 205842 ± 19% softirqs.CPU36.RCU 12741 ± 15% +449.9% 70065 ± 13% softirqs.CPU37.RCU 11457 ± 12% +379.8% 54980 ± 31% softirqs.CPU38.RCU 12440 ± 15% +307.9% 50748 ± 16% softirqs.CPU39.RCU 17911 ± 9% +184.9% 51035 ± 19% softirqs.CPU4.RCU 13898 ± 19% +301.5% 55802 ± 21% softirqs.CPU40.RCU 14268 ± 15% +244.9% 49204 ± 20% softirqs.CPU41.RCU 9776 ± 4% +260.8% 35272 ± 24% softirqs.CPU42.RCU 12653 ± 13% +275.9% 47564 ± 20% softirqs.CPU43.RCU 12635 ± 15% +267.8% 46472 ± 11% softirqs.CPU44.RCU 14269 ± 17% +234.5% 47725 ± 15% softirqs.CPU45.RCU 14850 ± 17% +229.5% 48938 ± 14% softirqs.CPU46.RCU 14489 ± 20% +235.0% 48536 ± 14% softirqs.CPU47.RCU 12510 ± 16% +294.0% 49293 ± 20% softirqs.CPU48.RCU 11967 ± 9% +237.0% 40330 ± 14% softirqs.CPU49.RCU 18541 ± 14% +195.2% 54740 ± 6% softirqs.CPU5.RCU 10043 ± 8% +296.6% 39831 ± 32% softirqs.CPU50.RCU 12549 ± 17% +243.7% 43135 ± 8% softirqs.CPU51.RCU 13016 ± 16% +231.6% 43156 ± 15% softirqs.CPU52.RCU 11202 ± 13% +244.3% 38567 ± 19% softirqs.CPU53.RCU 11751 ± 24% +1443.5% 181385 ± 11% softirqs.CPU54.RCU 40026 ± 2% -17.4% 33063 ± 6% softirqs.CPU54.SCHED 10619 ± 13% +557.8% 69857 ± 30% softirqs.CPU55.RCU 
11473 ± 20% +328.5% 49168 ± 29% softirqs.CPU56.RCU 10539 ± 11% +337.0% 46052 ± 19% softirqs.CPU57.RCU 12270 ± 13% +256.0% 43680 ± 16% softirqs.CPU58.RCU 12315 ± 14% +246.1% 42625 ± 18% softirqs.CPU59.RCU 17153 ± 13% +216.0% 54205 ± 7% softirqs.CPU6.RCU 11927 ± 13% +289.7% 46480 ± 24% softirqs.CPU60.RCU 12446 ± 17% +315.3% 51691 ± 48% softirqs.CPU61.RCU 11852 ± 17% +267.2% 43518 ± 23% softirqs.CPU62.RCU 12387 ± 13% +251.1% 43496 ± 11% softirqs.CPU63.RCU 10996 ± 14% +232.5% 36563 ± 18% softirqs.CPU64.RCU 12143 ± 23% +262.1% 43967 ± 16% softirqs.CPU65.RCU 12599 ± 19% +274.8% 47226 ± 22% softirqs.CPU66.RCU 11427 ± 16% +285.8% 44090 ± 24% softirqs.CPU67.RCU 10829 ± 11% +257.9% 38760 ± 25% softirqs.CPU68.RCU 12273 ± 14% +262.0% 44435 ± 9% softirqs.CPU69.RCU 17499 ± 8% +227.2% 57257 ± 6% softirqs.CPU7.RCU 13200 ± 17% +232.9% 43942 ± 7% softirqs.CPU70.RCU 10909 ± 11% +240.2% 37115 ± 10% softirqs.CPU71.RCU 16827 ± 21% +144.9% 41204 softirqs.CPU71.SCHED 13148 ± 5% +222.0% 42332 ± 4% softirqs.CPU72.RCU 12524 ± 17% +233.6% 41781 ± 3% softirqs.CPU73.RCU 11992 ± 9% +228.0% 39332 ± 4% softirqs.CPU74.RCU 12262 ± 10% +224.1% 39743 ± 6% softirqs.CPU75.RCU 12221 ± 8% +215.6% 38564 ± 5% softirqs.CPU76.RCU 12596 ± 8% +213.9% 39538 ± 7% softirqs.CPU77.RCU 12097 ± 7% +228.9% 39794 ± 4% softirqs.CPU78.RCU 11903 ± 7% +218.4% 37895 ± 6% softirqs.CPU79.RCU 17361 ± 12% +221.1% 55751 ± 7% softirqs.CPU8.RCU 12445 ± 8% +221.8% 40052 ± 7% softirqs.CPU80.RCU 12004 ± 7% +225.0% 39015 ± 4% softirqs.CPU81.RCU 12000 ± 6% +229.3% 39517 ± 8% softirqs.CPU82.RCU 12504 ± 9% +219.9% 40003 ± 6% softirqs.CPU83.RCU 12440 ± 8% +216.1% 39328 ± 4% softirqs.CPU84.RCU 12262 ± 8% +230.4% 40513 ± 7% softirqs.CPU85.RCU 12229 ± 9% +225.8% 39845 ± 8% softirqs.CPU86.RCU 11983 ± 7% +234.5% 40084 ± 5% softirqs.CPU87.RCU 7843 ± 8% +252.7% 27661 ± 27% softirqs.CPU88.RCU 8306 ± 12% +243.3% 28516 ± 23% softirqs.CPU89.RCU 17622 ± 11% +230.0% 58148 ± 4% softirqs.CPU9.RCU 12073 ± 23% +619.4% 86857 ± 23% softirqs.CPU90.RCU 10332 ± 
19% +290.2% 40318 ± 27% softirqs.CPU91.RCU 8983 ± 8% +221.7% 28896 ± 17% softirqs.CPU92.RCU 9696 ± 10% +239.8% 32944 ± 15% softirqs.CPU93.RCU 10886 ± 12% +245.6% 37620 ± 21% softirqs.CPU94.RCU 10441 ± 15% +214.6% 32849 ± 16% softirqs.CPU95.RCU 12834 ± 20% +158.0% 33113 ± 9% softirqs.CPU96.RCU 13146 ± 17% +159.9% 34160 ± 8% softirqs.CPU97.RCU 10393 ± 8% +267.4% 38183 ± 19% softirqs.CPU98.RCU 12492 ± 17% +277.9% 47207 ± 26% softirqs.CPU99.RCU 1898417 ± 11% +276.1% 7139397 softirqs.RCU 24540 ± 2% +96.5% 48221 softirqs.TIMER 4440 ± 54% -100.0% 0.00 interrupts.CPU0.NMI:Non-maskable_interrupts 4440 ± 54% -100.0% 0.00 interrupts.CPU0.PMI:Performance_monitoring_interrupts 2777 ±126% -62.7% 1037 ± 6% interrupts.CPU1.CAL:Function_call_interrupts 6995 ± 26% -100.0% 0.00 interrupts.CPU1.NMI:Non-maskable_interrupts 6995 ± 26% -100.0% 0.00 interrupts.CPU1.PMI:Performance_monitoring_interrupts 5665 ± 38% -100.0% 0.00 interrupts.CPU10.NMI:Non-maskable_interrupts 5665 ± 38% -100.0% 0.00 interrupts.CPU10.PMI:Performance_monitoring_interrupts 149.50 ± 35% -100.0% 0.00 interrupts.CPU100.NMI:Non-maskable_interrupts 149.50 ± 35% -100.0% 0.00 interrupts.CPU100.PMI:Performance_monitoring_interrupts 118.17 ± 35% -100.0% 0.00 interrupts.CPU101.NMI:Non-maskable_interrupts 118.17 ± 35% -100.0% 0.00 interrupts.CPU101.PMI:Performance_monitoring_interrupts 162.17 ± 31% -100.0% 0.00 interrupts.CPU102.NMI:Non-maskable_interrupts 162.17 ± 31% -100.0% 0.00 interrupts.CPU102.PMI:Performance_monitoring_interrupts 141.33 ± 32% -100.0% 0.00 interrupts.CPU103.NMI:Non-maskable_interrupts 141.33 ± 32% -100.0% 0.00 interrupts.CPU103.PMI:Performance_monitoring_interrupts 152.50 ± 37% -100.0% 0.00 interrupts.CPU104.NMI:Non-maskable_interrupts 152.50 ± 37% -100.0% 0.00 interrupts.CPU104.PMI:Performance_monitoring_interrupts 151.67 ± 37% -100.0% 0.00 interrupts.CPU105.NMI:Non-maskable_interrupts 151.67 ± 37% -100.0% 0.00 interrupts.CPU105.PMI:Performance_monitoring_interrupts 152.67 ± 37% -100.0% 0.00 
interrupts.CPU106.NMI:Non-maskable_interrupts 152.67 ± 37% -100.0% 0.00 interrupts.CPU106.PMI:Performance_monitoring_interrupts 203.50 ± 34% -100.0% 0.00 interrupts.CPU107.NMI:Non-maskable_interrupts 203.50 ± 34% -100.0% 0.00 interrupts.CPU107.PMI:Performance_monitoring_interrupts 223.83 ± 40% -100.0% 0.00 interrupts.CPU108.NMI:Non-maskable_interrupts 223.83 ± 40% -100.0% 0.00 interrupts.CPU108.PMI:Performance_monitoring_interrupts 174.50 ± 29% -99.4% 1.00 ±223% interrupts.CPU109.NMI:Non-maskable_interrupts 174.50 ± 29% -99.4% 1.00 ±223% interrupts.CPU109.PMI:Performance_monitoring_interrupts 4690 ± 55% -100.0% 0.00 interrupts.CPU11.NMI:Non-maskable_interrupts 4690 ± 55% -100.0% 0.00 interrupts.CPU11.PMI:Performance_monitoring_interrupts 147.67 ± 28% -100.0% 0.00 interrupts.CPU110.NMI:Non-maskable_interrupts 147.67 ± 28% -100.0% 0.00 interrupts.CPU110.PMI:Performance_monitoring_interrupts 149.67 ± 38% -100.0% 0.00 interrupts.CPU111.NMI:Non-maskable_interrupts 149.67 ± 38% -100.0% 0.00 interrupts.CPU111.PMI:Performance_monitoring_interrupts 133.17 ± 36% -100.0% 0.00 interrupts.CPU112.NMI:Non-maskable_interrupts 133.17 ± 36% -100.0% 0.00 interrupts.CPU112.PMI:Performance_monitoring_interrupts 160.83 ± 26% -100.0% 0.00 interrupts.CPU113.NMI:Non-maskable_interrupts 160.83 ± 26% -100.0% 0.00 interrupts.CPU113.PMI:Performance_monitoring_interrupts 129.33 ± 38% -100.0% 0.00 interrupts.CPU114.NMI:Non-maskable_interrupts 129.33 ± 38% -100.0% 0.00 interrupts.CPU114.PMI:Performance_monitoring_interrupts 147.00 ± 37% -100.0% 0.00 interrupts.CPU115.NMI:Non-maskable_interrupts 147.00 ± 37% -100.0% 0.00 interrupts.CPU115.PMI:Performance_monitoring_interrupts 112.00 ± 36% -100.0% 0.00 interrupts.CPU116.NMI:Non-maskable_interrupts 112.00 ± 36% -100.0% 0.00 interrupts.CPU116.PMI:Performance_monitoring_interrupts 133.00 ± 39% -100.0% 0.00 interrupts.CPU117.NMI:Non-maskable_interrupts 133.00 ± 39% -100.0% 0.00 interrupts.CPU117.PMI:Performance_monitoring_interrupts 98.00 ± 8% -100.0% 
0.00 interrupts.CPU118.NMI:Non-maskable_interrupts 98.00 ± 8% -100.0% 0.00 interrupts.CPU118.PMI:Performance_monitoring_interrupts 131.17 ± 30% -100.0% 0.00 interrupts.CPU119.NMI:Non-maskable_interrupts 131.17 ± 30% -100.0% 0.00 interrupts.CPU119.PMI:Performance_monitoring_interrupts 5913 ± 32% -100.0% 0.00 interrupts.CPU12.NMI:Non-maskable_interrupts 5913 ± 32% -100.0% 0.00 interrupts.CPU12.PMI:Performance_monitoring_interrupts 116.83 ± 35% -100.0% 0.00 interrupts.CPU120.NMI:Non-maskable_interrupts 116.83 ± 35% -100.0% 0.00 interrupts.CPU120.PMI:Performance_monitoring_interrupts 113.00 ± 21% -100.0% 0.00 interrupts.CPU121.NMI:Non-maskable_interrupts 113.00 ± 21% -100.0% 0.00 interrupts.CPU121.PMI:Performance_monitoring_interrupts 98.83 ± 6% -100.0% 0.00 interrupts.CPU122.NMI:Non-maskable_interrupts 98.83 ± 6% -100.0% 0.00 interrupts.CPU122.PMI:Performance_monitoring_interrupts 142.00 ± 61% -100.0% 0.00 interrupts.CPU123.NMI:Non-maskable_interrupts 142.00 ± 61% -100.0% 0.00 interrupts.CPU123.PMI:Performance_monitoring_interrupts 119.17 ± 39% -100.0% 0.00 interrupts.CPU124.NMI:Non-maskable_interrupts 119.17 ± 39% -100.0% 0.00 interrupts.CPU124.PMI:Performance_monitoring_interrupts 133.67 ± 38% -100.0% 0.00 interrupts.CPU125.NMI:Non-maskable_interrupts 133.67 ± 38% -100.0% 0.00 interrupts.CPU125.PMI:Performance_monitoring_interrupts 173.17 ± 48% -100.0% 0.00 interrupts.CPU126.NMI:Non-maskable_interrupts 173.17 ± 48% -100.0% 0.00 interrupts.CPU126.PMI:Performance_monitoring_interrupts 137.00 ± 32% -100.0% 0.00 interrupts.CPU127.NMI:Non-maskable_interrupts 137.00 ± 32% -100.0% 0.00 interrupts.CPU127.PMI:Performance_monitoring_interrupts 117.83 ± 32% -100.0% 0.00 interrupts.CPU128.NMI:Non-maskable_interrupts 117.83 ± 32% -100.0% 0.00 interrupts.CPU128.PMI:Performance_monitoring_interrupts 122.33 ± 34% -100.0% 0.00 interrupts.CPU129.NMI:Non-maskable_interrupts 122.33 ± 34% -100.0% 0.00 interrupts.CPU129.PMI:Performance_monitoring_interrupts 5466 ± 50% -100.0% 0.00 
interrupts.CPU13.NMI:Non-maskable_interrupts 5466 ± 50% -100.0% 0.00 interrupts.CPU13.PMI:Performance_monitoring_interrupts 101.50 ± 8% -100.0% 0.00 interrupts.CPU130.NMI:Non-maskable_interrupts 101.50 ± 8% -100.0% 0.00 interrupts.CPU130.PMI:Performance_monitoring_interrupts 116.17 ± 29% -100.0% 0.00 interrupts.CPU131.NMI:Non-maskable_interrupts 116.17 ± 29% -100.0% 0.00 interrupts.CPU131.PMI:Performance_monitoring_interrupts 215.17 ± 98% -100.0% 0.00 interrupts.CPU132.NMI:Non-maskable_interrupts 215.17 ± 98% -100.0% 0.00 interrupts.CPU132.PMI:Performance_monitoring_interrupts 114.67 ± 32% -100.0% 0.00 interrupts.CPU133.NMI:Non-maskable_interrupts 114.67 ± 32% -100.0% 0.00 interrupts.CPU133.PMI:Performance_monitoring_interrupts 164.33 ± 27% -100.0% 0.00 interrupts.CPU134.NMI:Non-maskable_interrupts 164.33 ± 27% -100.0% 0.00 interrupts.CPU134.PMI:Performance_monitoring_interrupts 118.67 ± 34% -100.0% 0.00 interrupts.CPU135.NMI:Non-maskable_interrupts 118.67 ± 34% -100.0% 0.00 interrupts.CPU135.PMI:Performance_monitoring_interrupts 131.83 ± 30% -100.0% 0.00 interrupts.CPU136.NMI:Non-maskable_interrupts 131.83 ± 30% -100.0% 0.00 interrupts.CPU136.PMI:Performance_monitoring_interrupts 118.67 ± 33% -100.0% 0.00 interrupts.CPU137.NMI:Non-maskable_interrupts 118.67 ± 33% -100.0% 0.00 interrupts.CPU137.PMI:Performance_monitoring_interrupts 122.50 ± 31% -100.0% 0.00 interrupts.CPU138.NMI:Non-maskable_interrupts 122.50 ± 31% -100.0% 0.00 interrupts.CPU138.PMI:Performance_monitoring_interrupts 100.67 ± 7% -100.0% 0.00 interrupts.CPU139.NMI:Non-maskable_interrupts 100.67 ± 7% -100.0% 0.00 interrupts.CPU139.PMI:Performance_monitoring_interrupts 5232 ± 48% -100.0% 0.00 interrupts.CPU14.NMI:Non-maskable_interrupts 5232 ± 48% -100.0% 0.00 interrupts.CPU14.PMI:Performance_monitoring_interrupts 118.33 ± 36% -100.0% 0.00 interrupts.CPU140.NMI:Non-maskable_interrupts 118.33 ± 36% -100.0% 0.00 interrupts.CPU140.PMI:Performance_monitoring_interrupts 118.00 ± 31% -100.0% 0.00 
interrupts.CPU141.NMI:Non-maskable_interrupts
[per-CPU counter rows condensed: interrupts.CPUn.NMI:Non-maskable_interrupts and interrupts.CPUn.PMI:Performance_monitoring_interrupts change by -100.0% on every CPU in this span, per-CPU counts of 118-6587 dropping to 0.00, with a residual 1.00 ±223% on a few CPUs]
    425.50 ± 14%     -23.4%     325.83 ± 14%  interrupts.CPU15.TLB:TLB_shootdowns
    801.83 ± 26%     +25.3%       1004 ±  6%  interrupts.CPU87.CAL:Function_call_interrupts
    121.83 ± 39%    +102.6%     246.83 ± 23%  interrupts.CPU87.TLB:TLB_shootdowns
    173574 ± 12%    -100.0%       5.00 ± 44%  interrupts.NMI:Non-maskable_interrupts
    173574 ± 12%    -100.0%       5.00 ± 44%  interrupts.PMI:Performance_monitoring_interrupts


***************************************************************************************************
lkp-csl-2ap2: 192 threads Intel(R) Xeon(R) Platinum 9242 CPU @ 2.30GHz with 192G memory
=========================================================================================
compiler/cpufreq_governor/kconfig/mode/nr_task/rootfs/tbox_group/test/testcase/ucode:
  gcc-9/performance/x86_64-rhel-8.3/process/100%/debian-10.4-x86_64-20200603.cgz/lkp-csl-2ap2/eventfd1/will-it-scale/0x5003006

commit:
  dde38d5b80 ("kernel: don't include PF_IO_WORKERs as part of same_thread_group()")
  43b2a76b1a ("proc: don't show PF_IO_WORKER threads as threads in /proc/<pid>/task/")

dde38d5b805358c7 43b2a76b1a5abcc9833b463bef1
---------------- ---------------------------
         %stddev     %change         %stddev
             \          |                \
  3.19e+08            -21.4%  2.506e+08 ±  2%  will-it-scale.192.processes
   1661424            -21.4%    1305403 ±  2%  will-it-scale.per_process_ops
  3.19e+08            -21.4%  2.506e+08 ±  2%  will-it-scale.workload
     15.64
      -3.8      11.81 ±  2%  mpstat.cpu.all.usr%
     81.33       +5.1%      85.50        vmstat.cpu.sy
     16.00      -26.0%      11.83 ±  3%  vmstat.cpu.us
   1795189      +59.3%    2860578 ±  2%  vmstat.memory.cache
      2109      +12.9%       2381        vmstat.system.cs
    392800       -2.6%     382776        vmstat.system.in
    237380 ± 17%  +57.1%    372854 ± 31%  numa-numastat.node0.local_node
    296018 ±  4%  +47.2%    435692 ± 21%  numa-numastat.node0.numa_hit
    229104 ± 15% +108.0%    476599 ± 57%  numa-numastat.node1.local_node
    299205 ±  4%  +79.6%    537453 ± 45%  numa-numastat.node1.numa_hit
    241363 ± 15% +165.1%    639959 ± 54%  numa-numastat.node2.local_node
    298061 ±  2% +132.0%    691534 ± 45%  numa-numastat.node2.numa_hit
      5125 ±  8%  -44.1%      2863 ± 29%  numa-vmstat.node0.nr_mapped
      5612 ± 14%  -61.1%      2183 ± 22%  numa-vmstat.node1.nr_mapped
      5637 ± 11%  -60.2%      2246 ± 24%  numa-vmstat.node2.nr_mapped
     98791        -99.8%     222.00 ± 97%  numa-vmstat.node3.nr_active_anon
    184650        -58.7%      76245 ±  5%  numa-vmstat.node3.nr_file_pages
     11931 ±  5%  -78.1%       2615 ± 27%  numa-vmstat.node3.nr_mapped
    111754        -98.7%       1503 ± 67%  numa-vmstat.node3.nr_shmem
     98791        -99.8%     222.00 ± 97%  numa-vmstat.node3.nr_zone_active_anon
    412873       +268.1%    1519853 ±  5%  meminfo.Active
    412873       +268.1%    1519853 ±  5%  meminfo.Active(anon)
    172606        +13.5%     195882        meminfo.AnonHugePages
   1660362        +63.9%    2720845 ±  2%  meminfo.Cached
  14522368 ±  3%   -9.8%   13094229 ±  5%  meminfo.DirectMap2M
    112253        -65.1%      39126        meminfo.Mapped
   3904140        +44.8%    5653456 ±  2%  meminfo.Memused
    478895       +221.4%    1539378 ±  5%  meminfo.Shmem
   4795676        +18.1%    5664768 ±  2%  meminfo.max_used_kB
  28645697        -30.1%   20025861 ± 10%  sched_debug.cfs_rq:/.min_vruntime.min
    270324 ±  4% +323.7%    1145492 ±  9%  sched_debug.cfs_rq:/.min_vruntime.stddev
      1537 ±  3%  +22.4%       1880        sched_debug.cfs_rq:/.runnable_avg.max
     77.78 ±  8%  +71.3%     133.25 ±  2%  sched_debug.cfs_rq:/.runnable_avg.stddev
   -740865      +1187.0%   -9535177        sched_debug.cfs_rq:/.spread0.min
    270372 ±  4% +323.6%    1145428 ±  9%  sched_debug.cfs_rq:/.spread0.stddev
    662.17 ±  5%  +58.0%       1046 ±  8%  sched_debug.cfs_rq:/.util_est_enqueued.max
     58.49 ±  5%  +48.0%      86.57 ±  6%
sched_debug.cfs_rq:/.util_est_enqueued.stddev
      0.10 ±  7%  +47.8%       0.15 ±  5%  sched_debug.cpu.nr_running.stddev
    -14.47        +98.1%     -28.67        sched_debug.cpu.nr_uninterruptible.min
     20075 ±  8%  -43.8%      11290 ± 29%  numa-meminfo.node0.Mapped
    885155 ±  8%  +59.9%    1415792 ± 26%  numa-meminfo.node0.MemUsed
     21924 ± 14%  -60.5%       8650 ± 21%  numa-meminfo.node1.Mapped
    842633 ± 10%  +69.9%    1431232 ± 41%  numa-meminfo.node1.MemUsed
     22003 ± 11%  -59.5%       8902 ± 25%  numa-meminfo.node2.Mapped
    850282 ± 10% +109.8%    1783695 ± 38%  numa-meminfo.node2.MemUsed
    395289        -99.8%     891.00 ± 96%  numa-meminfo.node3.Active
    395289        -99.8%     891.00 ± 96%  numa-meminfo.node3.Active(anon)
    737971        -58.7%     304984 ±  5%  numa-meminfo.node3.FilePages
     47233 ±  4%  -78.2%      10293 ± 27%  numa-meminfo.node3.Mapped
   1324245 ±  5%  -22.8%    1022515 ±  9%  numa-meminfo.node3.MemUsed
    446390        -98.7%       6016 ± 67%  numa-meminfo.node3.Shmem
     77110 ±  5%  +15.6%      89165 ±  2%  slabinfo.filp.active_objs
      1210 ±  5%  +15.9%       1403 ±  2%  slabinfo.filp.active_slabs
     77525 ±  5%  +15.9%      89825 ±  2%  slabinfo.filp.num_objs
      1210 ±  5%  +15.9%       1403 ±  2%  slabinfo.filp.num_slabs
     13829 ±  4%   +7.7%      14888 ±  4%  slabinfo.pde_opener.active_objs
     13829 ±  4%   +7.7%      14888 ±  4%  slabinfo.pde_opener.num_objs
     22895        -29.9%      16059        slabinfo.proc_inode_cache.active_objs
    485.17 ±  2%  -27.0%     354.00 ±  5%  slabinfo.proc_inode_cache.active_slabs
     23321 ±  2%  -27.0%      17018 ±  5%  slabinfo.proc_inode_cache.num_objs
    485.17 ±  2%  -27.0%     354.00 ±  5%  slabinfo.proc_inode_cache.num_slabs
     28685        +15.0%      32980        slabinfo.radix_tree_node.active_objs
     28685        +15.0%      32980        slabinfo.radix_tree_node.num_objs
     14049 ±  2%   -8.9%      12795 ±  2%  softirqs.CPU107.RCU
     14091 ±  3%   -7.6%      13025 ±  4%  softirqs.CPU124.RCU
     14002 ±  3%   -7.9%      12903 ±  3%  softirqs.CPU125.RCU
     14054 ±  3%   -7.6%      12993 ±  4%  softirqs.CPU127.RCU
     13952         -8.5%      12763 ±  3%  softirqs.CPU15.RCU
     13529 ± 14%  -14.2%      11604 ±  4%  softirqs.CPU176.RCU
     12814        -10.4%      11480 ±  2%  softirqs.CPU187.RCU
     13064 ±  2%  -11.1%      11610 ±  3%  softirqs.CPU191.RCU
     14610 ±  3%   -9.0%      13294 ±  3%  softirqs.CPU32.RCU
     14907        -10.1%      13405 ±  2%
softirqs.CPU34.RCU
     14923 ±  2%   -8.7%      13619 ±  3%  softirqs.CPU44.RCU
     15292 ±  4%  -11.2%      13578 ±  3%  softirqs.CPU47.RCU
     13795 ±  2%   -7.7%      12730 ±  3%  softirqs.CPU7.RCU
     14032 ±  2%   -8.3%      12871 ±  3%  softirqs.CPU9.RCU
     44967 ±  2%  +13.3%      50936        softirqs.TIMER
    103261       +268.3%     380330 ±  5%  proc-vmstat.nr_active_anon
     96939         +7.6%     104326        proc-vmstat.nr_anon_pages
    415060        +64.0%     680578 ±  2%  proc-vmstat.nr_file_pages
    113267         -3.7%     109064        proc-vmstat.nr_inactive_anon
     27907 ±  2%  -64.5%       9906        proc-vmstat.nr_mapped
      7631         +1.3%       7728        proc-vmstat.nr_page_table_pages
    119693       +221.8%     385211 ±  5%  proc-vmstat.nr_shmem
     33546         -2.0%      32861        proc-vmstat.nr_slab_reclaimable
     75589         +3.9%      78567        proc-vmstat.nr_slab_unreclaimable
    103261       +268.3%     380330 ±  5%  proc-vmstat.nr_zone_active_anon
    113267         -3.7%     109064        proc-vmstat.nr_zone_inactive_anon
     30184 ±  7%  -76.7%       7024 ± 30%  proc-vmstat.numa_hint_faults
     28351 ± 10%  -90.8%       2603 ± 15%  proc-vmstat.numa_hint_faults_local
   1406251        +50.1%    2110395 ±  3%  proc-vmstat.numa_hit
   1146549        +61.4%    1850559 ±  4%  proc-vmstat.numa_local
    224944 ±  5%  -64.9%      79051 ± 20%  proc-vmstat.numa_pte_updates
    169142        -69.3%      51843 ±  5%  proc-vmstat.pgactivate
   1490899        +77.8%    2651515 ±  7%  proc-vmstat.pgalloc_normal
   1380861        -17.0%    1146532        proc-vmstat.pgfault
     58.10        -58.1        0.00        perf-profile.calltrace.cycles-pp.read
     46.94        -46.9        0.00        perf-profile.calltrace.cycles-pp.write
     43.70        -43.7        0.00        perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.read
     38.84        -38.8        0.00        perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.read
     36.90        -36.9        0.00        perf-profile.calltrace.cycles-pp.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe.read
     32.98        -33.0        0.00        perf-profile.calltrace.cycles-pp.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
     32.56        -32.6        0.00        perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.write
     27.74        -27.7        0.00        perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
     25.76        -25.8        0.00        perf-profile.calltrace.cycles-pp.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
     21.88        -21.9        0.00        perf-profile.calltrace.cycles-pp.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
     17.51        -17.5        0.00        perf-profile.calltrace.cycles-pp.new_sync_read.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
     13.52        -13.5        0.00        perf-profile.calltrace.cycles-pp.eventfd_read.new_sync_read.vfs_read.ksys_read.do_syscall_64
     10.65        -10.7        0.00        perf-profile.calltrace.cycles-pp.eventfd_write.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
      8.97         -9.0        0.00        perf-profile.calltrace.cycles-pp.security_file_permission.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
      8.29         -8.3        0.00        perf-profile.calltrace.cycles-pp._copy_to_iter.eventfd_read.new_sync_read.vfs_read.ksys_read
      8.15         -8.1        0.00        perf-profile.calltrace.cycles-pp.__entry_text_start.write
      8.13         -8.1        0.00        perf-profile.calltrace.cycles-pp.__entry_text_start.read
      5.79         -5.8        0.00        perf-profile.calltrace.cycles-pp._copy_from_user.eventfd_write.vfs_write.ksys_write.do_syscall_64
      5.54         -5.5        0.00        perf-profile.calltrace.cycles-pp.security_file_permission.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
     76.51        -76.5        0.00        perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
     66.71        -66.7        0.00        perf-profile.children.cycles-pp.do_syscall_64
     58.15        -58.1        0.00        perf-profile.children.cycles-pp.read
     46.96        -47.0        0.00        perf-profile.children.cycles-pp.write
     37.00        -37.0        0.00        perf-profile.children.cycles-pp.ksys_read
     33.48        -33.5        0.00        perf-profile.children.cycles-pp.vfs_read
     25.88        -25.9        0.00        perf-profile.children.cycles-pp.ksys_write
     22.27        -22.3        0.00        perf-profile.children.cycles-pp.vfs_write
     17.69        -17.7        0.00        perf-profile.children.cycles-pp.new_sync_read
     14.82        -14.8        0.00        perf-profile.children.cycles-pp.security_file_permission
     13.75        -13.8        0.00        perf-profile.children.cycles-pp.eventfd_read
     10.82        -10.8        0.00        perf-profile.children.cycles-pp.eventfd_write
     10.51        -10.5        0.00        perf-profile.children.cycles-pp.__entry_text_start
      9.03         -9.0        0.00        perf-profile.children.cycles-pp.syscall_return_via_sysret
      8.45         -8.4        0.00        perf-profile.children.cycles-pp._copy_to_iter
      8.08         -8.1        0.00        perf-profile.children.cycles-pp.common_file_perm
      7.35         -7.3        0.00        perf-profile.children.cycles-pp.fsnotify
      6.65         -6.6        0.00        perf-profile.children.cycles-pp.syscall_exit_to_user_mode
      6.09         -6.1        0.00        perf-profile.children.cycles-pp._copy_from_user
      8.94         -8.9        0.00        perf-profile.self.cycles-pp.syscall_return_via_sysret
      7.01         -7.0        0.00        perf-profile.self.cycles-pp.fsnotify
      6.31 ±  2%   -6.3        0.00        perf-profile.self.cycles-pp.common_file_perm
      0.05 ± 19% +363.2%       0.22 ±  7%  perf-stat.i.MPKI
 1.093e+11        -20.4%  8.701e+10 ±  2%  perf-stat.i.branch-instructions
      1.01         -0.0        0.99        perf-stat.i.branch-miss-rate%
 1.099e+09        -22.1%  8.558e+08 ±  2%  perf-stat.i.branch-misses
   1468914 ±  3% +815.5%   13448025 ± 56%  perf-stat.i.cache-misses
  13805086 ±  2% +534.2%   87557211 ±  5%  perf-stat.i.cache-references
      2062        +13.0%       2331        perf-stat.i.context-switches
      1.03        +29.5%       1.33 ±  2%  perf-stat.i.cpi
 5.576e+11         +3.0%  5.741e+11        perf-stat.i.cpu-cycles
    256.89         -2.0%     251.73        perf-stat.i.cpu-migrations
    483463 ±  3%  -86.3%      66209 ± 59%  perf-stat.i.cycles-between-cache-misses
      0.00 ± 31%   -0.0        0.00 ± 52%  perf-stat.i.dTLB-load-miss-rate%
    606099 ±  4%  -80.6%     117765 ± 12%  perf-stat.i.dTLB-load-misses
 1.576e+11        -20.8%  1.249e+11 ±  2%  perf-stat.i.dTLB-loads
    154523        -24.6%     116487 ±  2%  perf-stat.i.dTLB-store-misses
  1.06e+11        -20.8%  8.392e+10 ±  2%  perf-stat.i.dTLB-stores
 9.027e+08        -21.6%  7.075e+08 ±  2%  perf-stat.i.iTLB-load-misses
     91054 ± 13%  -24.0%      69165 ±  5%  perf-stat.i.iTLB-loads
 5.418e+11        -20.4%  4.312e+11 ±  2%  perf-stat.i.instructions
    607.51         +1.5%     616.70        perf-stat.i.instructions-per-iTLB-miss
      0.97        -22.6%       0.75 ±  2%  perf-stat.i.ipc
      1.53 ±  5%  -84.6%       0.24 ± 12%  perf-stat.i.major-faults
      2.90         +3.0%       2.99        perf-stat.i.metric.GHz
      1.69        -31.8%       1.15 ± 11%  perf-stat.i.metric.K/sec
      1942        -20.6%       1541 ±  2%  perf-stat.i.metric.M/sec
      4354        -17.5%       3592
perf-stat.i.minor-faults
     88.68         +6.2       94.84 ±  2%  perf-stat.i.node-load-miss-rate%
    275415      +1081.5%    3253930 ± 47%  perf-stat.i.node-load-misses
     93.22         +4.3       97.50        perf-stat.i.node-store-miss-rate%
    100085       +949.2%    1050120 ± 51%  perf-stat.i.node-store-misses
      4356        -17.5%       3592        perf-stat.i.page-faults
      0.03 ±  2% +689.2%       0.20 ±  6%  perf-stat.overall.MPKI
      1.01         -0.0        0.98        perf-stat.overall.branch-miss-rate%
      1.03        +29.4%       1.33 ±  2%  perf-stat.overall.cpi
    372002 ±  2%  -82.9%      63661 ± 58%  perf-stat.overall.cycles-between-cache-misses
      0.00 ±  5%   -0.0        0.00 ± 12%  perf-stat.overall.dTLB-load-miss-rate%
      0.00         -0.0        0.00        perf-stat.overall.dTLB-store-miss-rate%
    600.16         +1.6%     609.50        perf-stat.overall.instructions-per-iTLB-miss
      0.97        -22.7%       0.75 ±  2%  perf-stat.overall.ipc
     56.04 ±  3%  +38.8       94.84 ±  2%  perf-stat.overall.node-load-miss-rate%
     85.49        +11.9       97.37        perf-stat.overall.node-store-miss-rate%
    512044         +1.2%     517978        perf-stat.overall.path-length
 1.089e+11        -20.4%  8.672e+10 ±  2%  perf-stat.ps.branch-instructions
 1.095e+09        -22.1%  8.529e+08 ±  2%  perf-stat.ps.branch-misses
   1494957 ±  2% +796.6%   13403081 ± 56%  perf-stat.ps.cache-misses
  13905852 ±  2% +527.5%   87264866 ±  5%  perf-stat.ps.cache-references
      2053        +13.2%       2324        perf-stat.ps.context-switches
 5.557e+11         +3.0%  5.721e+11        perf-stat.ps.cpu-cycles
    255.67         -1.9%     250.91        perf-stat.ps.cpu-migrations
    617075 ±  5%  -81.0%     117434 ± 12%  perf-stat.ps.dTLB-load-misses
  1.57e+11        -20.7%  1.245e+11 ±  2%  perf-stat.ps.dTLB-loads
    154432        -24.8%     116117 ±  2%  perf-stat.ps.dTLB-store-misses
 1.056e+11        -20.8%  8.363e+10 ±  2%  perf-stat.ps.dTLB-stores
 8.996e+08        -21.6%  7.051e+08 ±  2%  perf-stat.ps.iTLB-load-misses
     91058 ± 13%  -24.1%      69143 ±  5%  perf-stat.ps.iTLB-loads
 5.399e+11        -20.4%  4.298e+11 ±  2%  perf-stat.ps.instructions
      1.52 ±  5%  -84.4%       0.24 ± 12%  perf-stat.ps.major-faults
      4365        -18.0%       3581        perf-stat.ps.minor-faults
    275022      +1079.2%    3243066 ± 47%  perf-stat.ps.node-load-misses
     99905       +947.6%    1046625 ± 51%  perf-stat.ps.node-store-misses
      4367        -18.0%       3582        perf-stat.ps.page-faults
 1.633e+14        -20.5%  1.298e+14 ±  2%  perf-stat.total.instructions
[perf-sched.sch_delay.avg.ms.* and perf-sched.sch_delay.max.ms.* rows condensed: every per-path scheduling-delay metric (__x64_sys_pause, devkmsg_read, do_nanosleep, do_syslog, do_task_dead, do_wait, exit_to_user_mode_prepare, futex_wait_queue_me, pipe_read, __cond_resched/stop_one_cpu, rcu_gp_kthread, epoll/select/poll timeouts, kcompactd, smpboot_thread_fn, worker_thread) changes -100.0%, old values of 0.01-259.68 ms dropping to 0.00]
      0.16 ± 19%  -100.0%       0.00        perf-sched.total_sch_delay.average.ms
    277.53 ±202%  -100.0%       0.00        perf-sched.total_sch_delay.max.ms
    135.13 ±  4%  -100.0%       0.00        perf-sched.total_wait_and_delay.average.ms
     18660 ±  4%  -100.0%       0.00        perf-sched.total_wait_and_delay.count.ms
      7859 ± 15%  -100.0%       0.00        perf-sched.total_wait_and_delay.max.ms
    134.97 ±  4%  -100.0%       0.00        perf-sched.total_wait_time.average.ms
      7859 ± 15%  -100.0%       0.00        perf-sched.total_wait_time.max.ms
[perf-sched.wait_and_delay.{avg,count,max}.ms.* and perf-sched.wait_time.avg.ms.* rows condensed: every per-path wait metric likewise changes -100.0% to 0.00, i.e. no perf-sched data was collected for the 43b2a76b1a run]
    533.59 ±  4%  -100.0%       0.00
perf-sched.wait_time.avg.ms.worker_thread.kthread.ret_from_fork 999.37 -100.0% 0.00 perf-sched.wait_time.max.ms.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown] 3634 ± 22% -100.0% 0.00 perf-sched.wait_time.max.ms.devkmsg_read.vfs_read.ksys_read.do_syscall_64 1000 -100.0% 0.00 perf-sched.wait_time.max.ms.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep.do_syscall_64 3631 ± 22% -100.0% 0.00 perf-sched.wait_time.max.ms.do_syslog.part.0.kmsg_read.vfs_read 1001 -100.0% 0.00 perf-sched.wait_time.max.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64 1020 -100.0% 0.00 perf-sched.wait_time.max.ms.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64 14.15 ± 15% -100.0% 0.00 perf-sched.wait_time.max.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_apic_timer_interrupt.[unknown] 7.13 ± 49% -100.0% 0.00 perf-sched.wait_time.max.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_call_function_single.[unknown] 4.68 ±126% -100.0% 0.00 perf-sched.wait_time.max.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_reschedule_ipi.[unknown] 72.55 ± 10% -100.0% 0.00 perf-sched.wait_time.max.ms.exit_to_user_mode_prepare.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe.[unknown] 3635 ± 22% -100.0% 0.00 perf-sched.wait_time.max.ms.pipe_read.new_sync_read.vfs_read.ksys_read 1925 ±141% -100.0% 0.00 perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.__alloc_pages_nodemask.alloc_pages_vma.shmem_alloc_page 3014 ± 95% -100.0% 0.00 perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.do_user_addr_fault.exc_page_fault.asm_exc_page_fault 11.75 ± 26% -100.0% 0.00 perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.dput.__fput.task_work_run 2145 ±140% -100.0% 0.00 perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.generic_perform_write.__generic_file_write_iter.generic_file_write_iter 2248 ± 24% -100.0% 0.00 
perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.stop_one_cpu.affine_move_task.__set_cpus_allowed_ptr 5.00 -100.0% 0.00 perf-sched.wait_time.max.ms.rcu_gp_kthread.kthread.ret_from_fork 3321 ± 24% -100.0% 0.00 perf-sched.wait_time.max.ms.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait 4233 ± 34% -100.0% 0.00 perf-sched.wait_time.max.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_sys_poll 505.00 -100.0% 0.00 perf-sched.wait_time.max.ms.schedule_timeout.kcompactd.kthread.ret_from_fork 3.53 ± 7% -100.0% 0.00 perf-sched.wait_time.max.ms.schedule_timeout.khugepaged.kthread.ret_from_fork 218.66 ± 41% -100.0% 0.00 perf-sched.wait_time.max.ms.schedule_timeout.rcu_gp_kthread.kthread.ret_from_fork 7032 ± 23% -100.0% 0.00 perf-sched.wait_time.max.ms.smpboot_thread_fn.kthread.ret_from_fork 11.01 ± 14% -100.0% 0.00 perf-sched.wait_time.max.ms.wait_for_partner.fifo_open.do_dentry_open.do_open.isra 7569 ± 16% -100.0% 0.00 perf-sched.wait_time.max.ms.worker_thread.kthread.ret_from_fork 260639 -7.6% 240826 interrupts.CAL:Function_call_interrupts 6814 ± 28% -100.0% 0.00 interrupts.CPU0.NMI:Non-maskable_interrupts 6814 ± 28% -100.0% 0.00 interrupts.CPU0.PMI:Performance_monitoring_interrupts 6133 ± 33% -100.0% 0.00 interrupts.CPU1.NMI:Non-maskable_interrupts 6133 ± 33% -100.0% 0.00 interrupts.CPU1.PMI:Performance_monitoring_interrupts 7497 ± 20% -100.0% 0.00 interrupts.CPU10.NMI:Non-maskable_interrupts 7497 ± 20% -100.0% 0.00 interrupts.CPU10.PMI:Performance_monitoring_interrupts 8177 -100.0% 0.00 interrupts.CPU100.NMI:Non-maskable_interrupts 8177 -100.0% 0.00 interrupts.CPU100.PMI:Performance_monitoring_interrupts 8177 -100.0% 0.00 interrupts.CPU101.NMI:Non-maskable_interrupts 8177 -100.0% 0.00 interrupts.CPU101.PMI:Performance_monitoring_interrupts 8178 -100.0% 0.00 interrupts.CPU102.NMI:Non-maskable_interrupts 8178 -100.0% 0.00 interrupts.CPU102.PMI:Performance_monitoring_interrupts 8178 -100.0% 0.00 
interrupts.CPU103.NMI:Non-maskable_interrupts 8178 -100.0% 0.00 interrupts.CPU103.PMI:Performance_monitoring_interrupts 7497 ± 20% -100.0% 0.00 interrupts.CPU104.NMI:Non-maskable_interrupts 7497 ± 20% -100.0% 0.00 interrupts.CPU104.PMI:Performance_monitoring_interrupts 7496 ± 20% -100.0% 0.00 interrupts.CPU105.NMI:Non-maskable_interrupts 7496 ± 20% -100.0% 0.00 interrupts.CPU105.PMI:Performance_monitoring_interrupts 6814 ± 28% -100.0% 0.00 interrupts.CPU106.NMI:Non-maskable_interrupts 6814 ± 28% -100.0% 0.00 interrupts.CPU106.PMI:Performance_monitoring_interrupts 6814 ± 28% -100.0% 0.00 interrupts.CPU107.NMI:Non-maskable_interrupts 6814 ± 28% -100.0% 0.00 interrupts.CPU107.PMI:Performance_monitoring_interrupts 6814 ± 28% -100.0% 0.00 interrupts.CPU108.NMI:Non-maskable_interrupts 6814 ± 28% -100.0% 0.00 interrupts.CPU108.PMI:Performance_monitoring_interrupts 7496 ± 20% -100.0% 0.00 interrupts.CPU109.NMI:Non-maskable_interrupts 7496 ± 20% -100.0% 0.00 interrupts.CPU109.PMI:Performance_monitoring_interrupts 8178 -100.0% 0.00 interrupts.CPU11.NMI:Non-maskable_interrupts 8178 -100.0% 0.00 interrupts.CPU11.PMI:Performance_monitoring_interrupts 7497 ± 20% -100.0% 0.00 interrupts.CPU110.NMI:Non-maskable_interrupts 7497 ± 20% -100.0% 0.00 interrupts.CPU110.PMI:Performance_monitoring_interrupts 7496 ± 20% -100.0% 0.00 interrupts.CPU111.NMI:Non-maskable_interrupts 7496 ± 20% -100.0% 0.00 interrupts.CPU111.PMI:Performance_monitoring_interrupts 7497 ± 20% -100.0% 0.00 interrupts.CPU112.NMI:Non-maskable_interrupts 7497 ± 20% -100.0% 0.00 interrupts.CPU112.PMI:Performance_monitoring_interrupts 7496 ± 20% -100.0% 0.00 interrupts.CPU113.NMI:Non-maskable_interrupts 7496 ± 20% -100.0% 0.00 interrupts.CPU113.PMI:Performance_monitoring_interrupts 7496 ± 20% -100.0% 0.00 interrupts.CPU114.NMI:Non-maskable_interrupts 7496 ± 20% -100.0% 0.00 interrupts.CPU114.PMI:Performance_monitoring_interrupts 8178 -100.0% 0.00 interrupts.CPU115.NMI:Non-maskable_interrupts 8178 -100.0% 0.00 
interrupts.CPU115.PMI:Performance_monitoring_interrupts 7496 ± 20% -100.0% 0.00 interrupts.CPU116.NMI:Non-maskable_interrupts 7496 ± 20% -100.0% 0.00 interrupts.CPU116.PMI:Performance_monitoring_interrupts 7496 ± 20% -100.0% 0.00 interrupts.CPU117.NMI:Non-maskable_interrupts 7496 ± 20% -100.0% 0.00 interrupts.CPU117.PMI:Performance_monitoring_interrupts 7498 ± 20% -100.0% 0.00 interrupts.CPU118.NMI:Non-maskable_interrupts 7498 ± 20% -100.0% 0.00 interrupts.CPU118.PMI:Performance_monitoring_interrupts 7496 ± 20% -100.0% 0.00 interrupts.CPU119.NMI:Non-maskable_interrupts 7496 ± 20% -100.0% 0.00 interrupts.CPU119.PMI:Performance_monitoring_interrupts 966.83 ± 5% -8.4% 885.33 interrupts.CPU12.CAL:Function_call_interrupts 7496 ± 20% -100.0% 0.00 interrupts.CPU12.NMI:Non-maskable_interrupts 7496 ± 20% -100.0% 0.00 interrupts.CPU12.PMI:Performance_monitoring_interrupts 7475 ± 20% -100.0% 0.00 interrupts.CPU120.NMI:Non-maskable_interrupts 7475 ± 20% -100.0% 0.00 interrupts.CPU120.PMI:Performance_monitoring_interrupts 7476 ± 20% -100.0% 0.00 interrupts.CPU121.NMI:Non-maskable_interrupts 7476 ± 20% -100.0% 0.00 interrupts.CPU121.PMI:Performance_monitoring_interrupts 7476 ± 20% -100.0% 0.00 interrupts.CPU122.NMI:Non-maskable_interrupts 7476 ± 20% -100.0% 0.00 interrupts.CPU122.PMI:Performance_monitoring_interrupts 6796 ± 28% -100.0% 0.00 interrupts.CPU123.NMI:Non-maskable_interrupts 6796 ± 28% -100.0% 0.00 interrupts.CPU123.PMI:Performance_monitoring_interrupts 6796 ± 28% -100.0% 1.00 ±223% interrupts.CPU124.NMI:Non-maskable_interrupts 6796 ± 28% -100.0% 1.00 ±223% interrupts.CPU124.PMI:Performance_monitoring_interrupts 7476 ± 20% -100.0% 0.00 interrupts.CPU125.NMI:Non-maskable_interrupts 7476 ± 20% -100.0% 0.00 interrupts.CPU125.PMI:Performance_monitoring_interrupts 7475 ± 20% -100.0% 0.00 interrupts.CPU126.NMI:Non-maskable_interrupts 7475 ± 20% -100.0% 0.00 interrupts.CPU126.PMI:Performance_monitoring_interrupts 7475 ± 20% -100.0% 0.00 
interrupts.CPU127.NMI:Non-maskable_interrupts 7475 ± 20% -100.0% 0.00 interrupts.CPU127.PMI:Performance_monitoring_interrupts 7475 ± 20% -100.0% 0.00 interrupts.CPU128.NMI:Non-maskable_interrupts 7475 ± 20% -100.0% 0.00 interrupts.CPU128.PMI:Performance_monitoring_interrupts 7476 ± 20% -100.0% 0.00 interrupts.CPU129.NMI:Non-maskable_interrupts 7476 ± 20% -100.0% 0.00 interrupts.CPU129.PMI:Performance_monitoring_interrupts 7495 ± 20% -100.0% 0.00 interrupts.CPU13.NMI:Non-maskable_interrupts 7495 ± 20% -100.0% 0.00 interrupts.CPU13.PMI:Performance_monitoring_interrupts 7475 ± 20% -100.0% 0.00 interrupts.CPU130.NMI:Non-maskable_interrupts 7475 ± 20% -100.0% 0.00 interrupts.CPU130.PMI:Performance_monitoring_interrupts 7476 ± 20% -100.0% 0.00 interrupts.CPU131.NMI:Non-maskable_interrupts 7476 ± 20% -100.0% 0.00 interrupts.CPU131.PMI:Performance_monitoring_interrupts 6796 ± 28% -100.0% 0.00 interrupts.CPU132.NMI:Non-maskable_interrupts 6796 ± 28% -100.0% 0.00 interrupts.CPU132.PMI:Performance_monitoring_interrupts 7478 ± 20% -100.0% 0.00 interrupts.CPU133.NMI:Non-maskable_interrupts 7478 ± 20% -100.0% 0.00 interrupts.CPU133.PMI:Performance_monitoring_interrupts 8157 -100.0% 0.00 interrupts.CPU134.NMI:Non-maskable_interrupts 8157 -100.0% 0.00 interrupts.CPU134.PMI:Performance_monitoring_interrupts 7477 ± 20% -100.0% 0.00 interrupts.CPU135.NMI:Non-maskable_interrupts 7477 ± 20% -100.0% 0.00 interrupts.CPU135.PMI:Performance_monitoring_interrupts 6796 ± 28% -100.0% 0.00 interrupts.CPU136.NMI:Non-maskable_interrupts 6796 ± 28% -100.0% 0.00 interrupts.CPU136.PMI:Performance_monitoring_interrupts 6796 ± 28% -100.0% 0.00 interrupts.CPU137.NMI:Non-maskable_interrupts 6796 ± 28% -100.0% 0.00 interrupts.CPU137.PMI:Performance_monitoring_interrupts 6796 ± 28% -100.0% 0.00 interrupts.CPU138.NMI:Non-maskable_interrupts 6796 ± 28% -100.0% 0.00 interrupts.CPU138.PMI:Performance_monitoring_interrupts 6795 ± 28% -100.0% 0.00 interrupts.CPU139.NMI:Non-maskable_interrupts 6795 ± 28% 
-100.0% 0.00 interrupts.CPU139.PMI:Performance_monitoring_interrupts 8180 -100.0% 0.00 interrupts.CPU14.NMI:Non-maskable_interrupts 8180 -100.0% 0.00 interrupts.CPU14.PMI:Performance_monitoring_interrupts 6795 ± 28% -100.0% 0.00 interrupts.CPU140.NMI:Non-maskable_interrupts 6795 ± 28% -100.0% 0.00 interrupts.CPU140.PMI:Performance_monitoring_interrupts 6796 ± 28% -100.0% 0.00 interrupts.CPU141.NMI:Non-maskable_interrupts 6796 ± 28% -100.0% 0.00 interrupts.CPU141.PMI:Performance_monitoring_interrupts 6797 ± 28% -100.0% 0.00 interrupts.CPU142.NMI:Non-maskable_interrupts 6797 ± 28% -100.0% 0.00 interrupts.CPU142.PMI:Performance_monitoring_interrupts 6796 ± 28% -100.0% 0.00 interrupts.CPU143.NMI:Non-maskable_interrupts 6796 ± 28% -100.0% 0.00 interrupts.CPU143.PMI:Performance_monitoring_interrupts 11605 ± 39% -80.7% 2241 ± 94% interrupts.CPU144.CAL:Function_call_interrupts 7800 ± 20% -100.0% 0.00 interrupts.CPU144.NMI:Non-maskable_interrupts 7800 ± 20% -100.0% 0.00 interrupts.CPU144.PMI:Performance_monitoring_interrupts 7092 ± 28% -100.0% 0.00 interrupts.CPU145.NMI:Non-maskable_interrupts 7092 ± 28% -100.0% 0.00 interrupts.CPU145.PMI:Performance_monitoring_interrupts 7093 ± 28% -100.0% 0.00 interrupts.CPU146.NMI:Non-maskable_interrupts 7093 ± 28% -100.0% 0.00 interrupts.CPU146.PMI:Performance_monitoring_interrupts 7801 ± 20% -100.0% 0.00 interrupts.CPU147.NMI:Non-maskable_interrupts 7801 ± 20% -100.0% 0.00 interrupts.CPU147.PMI:Performance_monitoring_interrupts 7092 ± 28% -100.0% 0.00 interrupts.CPU148.NMI:Non-maskable_interrupts 7092 ± 28% -100.0% 0.00 interrupts.CPU148.PMI:Performance_monitoring_interrupts 8511 -100.0% 0.00 interrupts.CPU149.NMI:Non-maskable_interrupts 8511 -100.0% 0.00 interrupts.CPU149.PMI:Performance_monitoring_interrupts 7497 ± 20% -100.0% 0.00 interrupts.CPU15.NMI:Non-maskable_interrupts 7497 ± 20% -100.0% 0.00 interrupts.CPU15.PMI:Performance_monitoring_interrupts 7802 ± 20% -100.0% 0.00 interrupts.CPU150.NMI:Non-maskable_interrupts 7802 ± 20% 
-100.0% 0.00 interrupts.CPU150.PMI:Performance_monitoring_interrupts 7801 ± 20% -100.0% 0.00 interrupts.CPU151.NMI:Non-maskable_interrupts 7801 ± 20% -100.0% 0.00 interrupts.CPU151.PMI:Performance_monitoring_interrupts 7801 ± 20% -100.0% 0.00 interrupts.CPU152.NMI:Non-maskable_interrupts 7801 ± 20% -100.0% 0.00 interrupts.CPU152.PMI:Performance_monitoring_interrupts 7802 ± 20% -100.0% 0.00 interrupts.CPU153.NMI:Non-maskable_interrupts 7802 ± 20% -100.0% 0.00 interrupts.CPU153.PMI:Performance_monitoring_interrupts 7802 ± 20% -100.0% 0.00 interrupts.CPU154.NMI:Non-maskable_interrupts 7802 ± 20% -100.0% 0.00 interrupts.CPU154.PMI:Performance_monitoring_interrupts 7802 ± 20% -100.0% 0.00 interrupts.CPU155.NMI:Non-maskable_interrupts 7802 ± 20% -100.0% 0.00 interrupts.CPU155.PMI:Performance_monitoring_interrupts 7091 ± 28% -100.0% 0.00 interrupts.CPU156.NMI:Non-maskable_interrupts 7091 ± 28% -100.0% 0.00 interrupts.CPU156.PMI:Performance_monitoring_interrupts 7092 ± 28% -100.0% 0.00 interrupts.CPU157.NMI:Non-maskable_interrupts 7092 ± 28% -100.0% 0.00 interrupts.CPU157.PMI:Performance_monitoring_interrupts 7091 ± 28% -100.0% 0.00 interrupts.CPU158.NMI:Non-maskable_interrupts 7091 ± 28% -100.0% 0.00 interrupts.CPU158.PMI:Performance_monitoring_interrupts 7092 ± 28% -100.0% 0.00 interrupts.CPU159.NMI:Non-maskable_interrupts 7092 ± 28% -100.0% 0.00 interrupts.CPU159.PMI:Performance_monitoring_interrupts 7499 ± 20% -100.0% 0.00 interrupts.CPU16.NMI:Non-maskable_interrupts 7499 ± 20% -100.0% 0.00 interrupts.CPU16.PMI:Performance_monitoring_interrupts 7091 ± 28% -100.0% 0.00 interrupts.CPU160.NMI:Non-maskable_interrupts 7091 ± 28% -100.0% 0.00 interrupts.CPU160.PMI:Performance_monitoring_interrupts 7091 ± 28% -100.0% 0.00 interrupts.CPU161.NMI:Non-maskable_interrupts 7091 ± 28% -100.0% 0.00 interrupts.CPU161.PMI:Performance_monitoring_interrupts 7091 ± 28% -100.0% 0.00 interrupts.CPU162.NMI:Non-maskable_interrupts 7091 ± 28% -100.0% 0.00 
interrupts.CPU162.PMI:Performance_monitoring_interrupts 7091 ± 28% -100.0% 0.00 interrupts.CPU163.NMI:Non-maskable_interrupts 7091 ± 28% -100.0% 0.00 interrupts.CPU163.PMI:Performance_monitoring_interrupts 7090 ± 28% -100.0% 0.00 interrupts.CPU164.NMI:Non-maskable_interrupts 7090 ± 28% -100.0% 0.00 interrupts.CPU164.PMI:Performance_monitoring_interrupts 7091 ± 28% -100.0% 0.00 interrupts.CPU165.NMI:Non-maskable_interrupts 7091 ± 28% -100.0% 0.00 interrupts.CPU165.PMI:Performance_monitoring_interrupts 7091 ± 28% -100.0% 1.00 ±223% interrupts.CPU166.NMI:Non-maskable_interrupts 7091 ± 28% -100.0% 1.00 ±223% interrupts.CPU166.PMI:Performance_monitoring_interrupts 7092 ± 28% -100.0% 0.00 interrupts.CPU167.NMI:Non-maskable_interrupts 7092 ± 28% -100.0% 0.00 interrupts.CPU167.PMI:Performance_monitoring_interrupts 15479 ± 14% -48.3% 8010 ± 56% interrupts.CPU168.CAL:Function_call_interrupts 6638 ± 28% -100.0% 0.00 interrupts.CPU168.NMI:Non-maskable_interrupts 6638 ± 28% -100.0% 0.00 interrupts.CPU168.PMI:Performance_monitoring_interrupts 2442 ± 27% +143.6% 5949 ± 23% interrupts.CPU169.CAL:Function_call_interrupts 7304 ± 20% -100.0% 0.00 interrupts.CPU169.NMI:Non-maskable_interrupts 7304 ± 20% -100.0% 0.00 interrupts.CPU169.PMI:Performance_monitoring_interrupts 7499 ± 20% -100.0% 0.00 interrupts.CPU17.NMI:Non-maskable_interrupts 7499 ± 20% -100.0% 0.00 interrupts.CPU17.PMI:Performance_monitoring_interrupts 1942 ± 63% +380.4% 9331 ± 39% interrupts.CPU170.CAL:Function_call_interrupts 7306 ± 20% -100.0% 0.00 interrupts.CPU170.NMI:Non-maskable_interrupts 7306 ± 20% -100.0% 0.00 interrupts.CPU170.PMI:Performance_monitoring_interrupts 7304 ± 20% -100.0% 1.00 ±223% interrupts.CPU171.NMI:Non-maskable_interrupts 7304 ± 20% -100.0% 1.00 ±223% interrupts.CPU171.PMI:Performance_monitoring_interrupts 7968 -100.0% 0.00 interrupts.CPU172.NMI:Non-maskable_interrupts 7968 -100.0% 0.00 interrupts.CPU172.PMI:Performance_monitoring_interrupts 7304 ± 20% -100.0% 0.00 
interrupts.CPU173.NMI:Non-maskable_interrupts 7304 ± 20% -100.0% 0.00 interrupts.CPU173.PMI:Performance_monitoring_interrupts 7303 ± 20% -100.0% 0.00 interrupts.CPU174.NMI:Non-maskable_interrupts 7303 ± 20% -100.0% 0.00 interrupts.CPU174.PMI:Performance_monitoring_interrupts 7303 ± 20% -100.0% 0.00 interrupts.CPU175.NMI:Non-maskable_interrupts 7303 ± 20% -100.0% 0.00 interrupts.CPU175.PMI:Performance_monitoring_interrupts 7304 ± 20% -100.0% 0.00 interrupts.CPU176.NMI:Non-maskable_interrupts 7304 ± 20% -100.0% 0.00 interrupts.CPU176.PMI:Performance_monitoring_interrupts 7308 ± 20% -100.0% 0.00 interrupts.CPU177.NMI:Non-maskable_interrupts 7308 ± 20% -100.0% 0.00 interrupts.CPU177.PMI:Performance_monitoring_interrupts 6640 ± 28% -100.0% 0.00 interrupts.CPU178.NMI:Non-maskable_interrupts 6640 ± 28% -100.0% 0.00 interrupts.CPU178.PMI:Performance_monitoring_interrupts 6640 ± 28% -100.0% 0.00 interrupts.CPU179.NMI:Non-maskable_interrupts 6640 ± 28% -100.0% 0.00 interrupts.CPU179.PMI:Performance_monitoring_interrupts 7499 ± 20% -100.0% 0.00 interrupts.CPU18.NMI:Non-maskable_interrupts 7499 ± 20% -100.0% 0.00 interrupts.CPU18.PMI:Performance_monitoring_interrupts 6639 ± 28% -100.0% 0.00 interrupts.CPU180.NMI:Non-maskable_interrupts 6639 ± 28% -100.0% 0.00 interrupts.CPU180.PMI:Performance_monitoring_interrupts 7303 ± 20% -100.0% 0.00 interrupts.CPU181.NMI:Non-maskable_interrupts 7303 ± 20% -100.0% 0.00 interrupts.CPU181.PMI:Performance_monitoring_interrupts 7302 ± 20% -100.0% 0.00 interrupts.CPU182.NMI:Non-maskable_interrupts 7302 ± 20% -100.0% 0.00 interrupts.CPU182.PMI:Performance_monitoring_interrupts 7303 ± 20% -100.0% 0.00 interrupts.CPU183.NMI:Non-maskable_interrupts 7303 ± 20% -100.0% 0.00 interrupts.CPU183.PMI:Performance_monitoring_interrupts 7303 ± 20% -100.0% 0.00 interrupts.CPU184.NMI:Non-maskable_interrupts 7303 ± 20% -100.0% 0.00 interrupts.CPU184.PMI:Performance_monitoring_interrupts 6639 ± 28% -100.0% 0.00 interrupts.CPU185.NMI:Non-maskable_interrupts 6639 
± 28% -100.0% 0.00 interrupts.CPU185.PMI:Performance_monitoring_interrupts 6638 ± 28% -100.0% 0.00 interrupts.CPU186.NMI:Non-maskable_interrupts 6638 ± 28% -100.0% 0.00 interrupts.CPU186.PMI:Performance_monitoring_interrupts 6639 ± 28% -100.0% 0.00 interrupts.CPU187.NMI:Non-maskable_interrupts 6639 ± 28% -100.0% 0.00 interrupts.CPU187.PMI:Performance_monitoring_interrupts 6642 ± 28% -100.0% 0.00 interrupts.CPU188.NMI:Non-maskable_interrupts 6642 ± 28% -100.0% 0.00 interrupts.CPU188.PMI:Performance_monitoring_interrupts 6638 ± 28% -100.0% 0.00 interrupts.CPU189.NMI:Non-maskable_interrupts 6638 ± 28% -100.0% 0.00 interrupts.CPU189.PMI:Performance_monitoring_interrupts 7502 ± 20% -100.0% 0.00 interrupts.CPU19.NMI:Non-maskable_interrupts 7502 ± 20% -100.0% 0.00 interrupts.CPU19.PMI:Performance_monitoring_interrupts 6638 ± 28% -100.0% 0.00 interrupts.CPU190.NMI:Non-maskable_interrupts 6638 ± 28% -100.0% 0.00 interrupts.CPU190.PMI:Performance_monitoring_interrupts 1014 -9.5% 918.17 interrupts.CPU191.CAL:Function_call_interrupts 7313 ± 20% -100.0% 0.00 interrupts.CPU191.NMI:Non-maskable_interrupts 7313 ± 20% -100.0% 0.00 interrupts.CPU191.PMI:Performance_monitoring_interrupts 354.00 ± 4% -12.3% 310.33 ± 2% interrupts.CPU191.RES:Rescheduling_interrupts 6818 ± 28% -100.0% 0.00 interrupts.CPU2.NMI:Non-maskable_interrupts 6818 ± 28% -100.0% 0.00 interrupts.CPU2.PMI:Performance_monitoring_interrupts 8180 -100.0% 0.00 interrupts.CPU20.NMI:Non-maskable_interrupts 8180 -100.0% 0.00 interrupts.CPU20.PMI:Performance_monitoring_interrupts 7499 ± 20% -100.0% 0.00 interrupts.CPU21.NMI:Non-maskable_interrupts 7499 ± 20% -100.0% 0.00 interrupts.CPU21.PMI:Performance_monitoring_interrupts 7498 ± 20% -100.0% 0.00 interrupts.CPU22.NMI:Non-maskable_interrupts 7498 ± 20% -100.0% 0.00 interrupts.CPU22.PMI:Performance_monitoring_interrupts 7498 ± 20% -100.0% 0.00 interrupts.CPU23.NMI:Non-maskable_interrupts 7498 ± 20% -100.0% 0.00 interrupts.CPU23.PMI:Performance_monitoring_interrupts 7477 ± 
20% -100.0% 0.00 interrupts.CPU24.NMI:Non-maskable_interrupts 7477 ± 20% -100.0% 0.00 interrupts.CPU24.PMI:Performance_monitoring_interrupts 7476 ± 20% -100.0% 0.00 interrupts.CPU25.NMI:Non-maskable_interrupts 7476 ± 20% -100.0% 0.00 interrupts.CPU25.PMI:Performance_monitoring_interrupts 7475 ± 20% -100.0% 0.00 interrupts.CPU26.NMI:Non-maskable_interrupts 7475 ± 20% -100.0% 0.00 interrupts.CPU26.PMI:Performance_monitoring_interrupts 7475 ± 20% -100.0% 0.00 interrupts.CPU27.NMI:Non-maskable_interrupts 7475 ± 20% -100.0% 0.00 interrupts.CPU27.PMI:Performance_monitoring_interrupts 8156 -100.0% 0.00 interrupts.CPU28.NMI:Non-maskable_interrupts 8156 -100.0% 0.00 interrupts.CPU28.PMI:Performance_monitoring_interrupts 8156 -100.0% 0.00 interrupts.CPU29.NMI:Non-maskable_interrupts 8156 -100.0% 0.00 interrupts.CPU29.PMI:Performance_monitoring_interrupts 7497 ± 20% -100.0% 0.00 interrupts.CPU3.NMI:Non-maskable_interrupts 7497 ± 20% -100.0% 0.00 interrupts.CPU3.PMI:Performance_monitoring_interrupts 8155 -100.0% 0.00 interrupts.CPU30.NMI:Non-maskable_interrupts 8155 -100.0% 0.00 interrupts.CPU30.PMI:Performance_monitoring_interrupts 8156 -100.0% 0.00 interrupts.CPU31.NMI:Non-maskable_interrupts 8156 -100.0% 0.00 interrupts.CPU31.PMI:Performance_monitoring_interrupts 8155 -100.0% 0.00 interrupts.CPU32.NMI:Non-maskable_interrupts 8155 -100.0% 0.00 interrupts.CPU32.PMI:Performance_monitoring_interrupts 8155 -100.0% 0.00 interrupts.CPU33.NMI:Non-maskable_interrupts 8155 -100.0% 0.00 interrupts.CPU33.PMI:Performance_monitoring_interrupts 8155 -100.0% 0.00 interrupts.CPU34.NMI:Non-maskable_interrupts 8155 -100.0% 0.00 interrupts.CPU34.PMI:Performance_monitoring_interrupts 8155 -100.0% 0.00 interrupts.CPU35.NMI:Non-maskable_interrupts 8155 -100.0% 0.00 interrupts.CPU35.PMI:Performance_monitoring_interrupts 8156 -100.0% 0.00 interrupts.CPU36.NMI:Non-maskable_interrupts 8156 -100.0% 0.00 interrupts.CPU36.PMI:Performance_monitoring_interrupts 7476 ± 20% -100.0% 0.00 
interrupts.CPU37.NMI:Non-maskable_interrupts 7476 ± 20% -100.0% 0.00 interrupts.CPU37.PMI:Performance_monitoring_interrupts 7475 ± 20% -100.0% 0.00 interrupts.CPU38.NMI:Non-maskable_interrupts 7475 ± 20% -100.0% 0.00 interrupts.CPU38.PMI:Performance_monitoring_interrupts 7475 ± 20% -100.0% 0.00 interrupts.CPU39.NMI:Non-maskable_interrupts 7475 ± 20% -100.0% 0.00 interrupts.CPU39.PMI:Performance_monitoring_interrupts 6815 ± 28% -100.0% 0.00 interrupts.CPU4.NMI:Non-maskable_interrupts 6815 ± 28% -100.0% 0.00 interrupts.CPU4.PMI:Performance_monitoring_interrupts 7476 ± 20% -100.0% 0.00 interrupts.CPU40.NMI:Non-maskable_interrupts 7476 ± 20% -100.0% 0.00 interrupts.CPU40.PMI:Performance_monitoring_interrupts 7476 ± 20% -100.0% 0.00 interrupts.CPU41.NMI:Non-maskable_interrupts 7476 ± 20% -100.0% 0.00 interrupts.CPU41.PMI:Performance_monitoring_interrupts 7476 ± 20% -100.0% 0.00 interrupts.CPU42.NMI:Non-maskable_interrupts 7476 ± 20% -100.0% 0.00 interrupts.CPU42.PMI:Performance_monitoring_interrupts 7475 ± 20% -100.0% 0.00 interrupts.CPU43.NMI:Non-maskable_interrupts 7475 ± 20% -100.0% 0.00 interrupts.CPU43.PMI:Performance_monitoring_interrupts 7475 ± 20% -100.0% 0.00 interrupts.CPU44.NMI:Non-maskable_interrupts 7475 ± 20% -100.0% 0.00 interrupts.CPU44.PMI:Performance_monitoring_interrupts 7474 ± 20% -100.0% 0.00 interrupts.CPU45.NMI:Non-maskable_interrupts 7474 ± 20% -100.0% 0.00 interrupts.CPU45.PMI:Performance_monitoring_interrupts 7474 ± 20% -100.0% 0.00 interrupts.CPU46.NMI:Non-maskable_interrupts 7474 ± 20% -100.0% 0.00 interrupts.CPU46.PMI:Performance_monitoring_interrupts 7474 ± 20% -100.0% 0.00 interrupts.CPU47.NMI:Non-maskable_interrupts 7474 ± 20% -100.0% 0.00 interrupts.CPU47.PMI:Performance_monitoring_interrupts 7089 ± 28% -100.0% 0.00 interrupts.CPU48.NMI:Non-maskable_interrupts 7089 ± 28% -100.0% 0.00 interrupts.CPU48.PMI:Performance_monitoring_interrupts 7091 ± 28% -100.0% 0.00 interrupts.CPU49.NMI:Non-maskable_interrupts 7091 ± 28% -100.0% 0.00 
interrupts.CPU49.PMI:Performance_monitoring_interrupts 7498 ± 20% -100.0% 0.00 interrupts.CPU5.NMI:Non-maskable_interrupts 7498 ± 20% -100.0% 0.00 interrupts.CPU5.PMI:Performance_monitoring_interrupts 7092 ± 28% -100.0% 0.00 interrupts.CPU50.NMI:Non-maskable_interrupts 7092 ± 28% -100.0% 0.00 interrupts.CPU50.PMI:Performance_monitoring_interrupts 7091 ± 28% -100.0% 0.00 interrupts.CPU51.NMI:Non-maskable_interrupts 7091 ± 28% -100.0% 0.00 interrupts.CPU51.PMI:Performance_monitoring_interrupts 7091 ± 28% -100.0% 0.00 interrupts.CPU52.NMI:Non-maskable_interrupts 7091 ± 28% -100.0% 0.00 interrupts.CPU52.PMI:Performance_monitoring_interrupts 7801 ± 20% -100.0% 0.00 interrupts.CPU53.NMI:Non-maskable_interrupts 7801 ± 20% -100.0% 0.00 interrupts.CPU53.PMI:Performance_monitoring_interrupts 7801 ± 20% -100.0% 0.00 interrupts.CPU54.NMI:Non-maskable_interrupts 7801 ± 20% -100.0% 0.00 interrupts.CPU54.PMI:Performance_monitoring_interrupts 7091 ± 28% -100.0% 0.00 interrupts.CPU55.NMI:Non-maskable_interrupts 7091 ± 28% -100.0% 0.00 interrupts.CPU55.PMI:Performance_monitoring_interrupts 7801 ± 20% -100.0% 0.00 interrupts.CPU56.NMI:Non-maskable_interrupts 7801 ± 20% -100.0% 0.00 interrupts.CPU56.PMI:Performance_monitoring_interrupts 7801 ± 20% -100.0% 0.00 interrupts.CPU57.NMI:Non-maskable_interrupts 7801 ± 20% -100.0% 0.00 interrupts.CPU57.PMI:Performance_monitoring_interrupts 7803 ± 20% -100.0% 0.00 interrupts.CPU58.NMI:Non-maskable_interrupts 7803 ± 20% -100.0% 0.00 interrupts.CPU58.PMI:Performance_monitoring_interrupts 7802 ± 20% -100.0% 0.00 interrupts.CPU59.NMI:Non-maskable_interrupts 7802 ± 20% -100.0% 0.00 interrupts.CPU59.PMI:Performance_monitoring_interrupts 6815 ± 28% -100.0% 0.00 interrupts.CPU6.NMI:Non-maskable_interrupts 6815 ± 28% -100.0% 0.00 interrupts.CPU6.PMI:Performance_monitoring_interrupts 7801 ± 20% -100.0% 0.00 interrupts.CPU60.NMI:Non-maskable_interrupts 7801 ± 20% -100.0% 0.00 interrupts.CPU60.PMI:Performance_monitoring_interrupts 7803 ± 20% -100.0% 0.00 
interrupts.CPU61.NMI:Non-maskable_interrupts 7803 ± 20% -100.0% 0.00 interrupts.CPU61.PMI:Performance_monitoring_interrupts 7091 ± 28% -100.0% 0.00 interrupts.CPU62.NMI:Non-maskable_interrupts 7091 ± 28% -100.0% 0.00 interrupts.CPU62.PMI:Performance_monitoring_interrupts 7802 ± 20% -100.0% 0.00 interrupts.CPU63.NMI:Non-maskable_interrupts 7802 ± 20% -100.0% 0.00 interrupts.CPU63.PMI:Performance_monitoring_interrupts 7802 ± 20% -100.0% 0.00 interrupts.CPU64.NMI:Non-maskable_interrupts 7802 ± 20% -100.0% 0.00 interrupts.CPU64.PMI:Performance_monitoring_interrupts 7802 ± 20% -100.0% 0.00 interrupts.CPU65.NMI:Non-maskable_interrupts 7802 ± 20% -100.0% 0.00 interrupts.CPU65.PMI:Performance_monitoring_interrupts 7093 ± 28% -100.0% 1.00 ±223% interrupts.CPU66.NMI:Non-maskable_interrupts 7093 ± 28% -100.0% 1.00 ±223% interrupts.CPU66.PMI:Performance_monitoring_interrupts 7094 ± 28% -100.0% 0.00 interrupts.CPU67.NMI:Non-maskable_interrupts 7094 ± 28% -100.0% 0.00 interrupts.CPU67.PMI:Performance_monitoring_interrupts 7804 ± 20% -100.0% 0.00 interrupts.CPU68.NMI:Non-maskable_interrupts 7804 ± 20% -100.0% 0.00 interrupts.CPU68.PMI:Performance_monitoring_interrupts 7095 ± 28% -100.0% 0.00 interrupts.CPU69.NMI:Non-maskable_interrupts 7095 ± 28% -100.0% 0.00 interrupts.CPU69.PMI:Performance_monitoring_interrupts 6815 ± 28% -100.0% 0.00 interrupts.CPU7.NMI:Non-maskable_interrupts 6815 ± 28% -100.0% 0.00 interrupts.CPU7.PMI:Performance_monitoring_interrupts 7803 ± 20% -100.0% 0.00 interrupts.CPU70.NMI:Non-maskable_interrupts 7803 ± 20% -100.0% 0.00 interrupts.CPU70.PMI:Performance_monitoring_interrupts 7803 ± 20% -100.0% 0.00 interrupts.CPU71.NMI:Non-maskable_interrupts 7803 ± 20% -100.0% 0.00 interrupts.CPU71.PMI:Performance_monitoring_interrupts 7303 ± 20% -100.0% 0.00 interrupts.CPU72.NMI:Non-maskable_interrupts 7303 ± 20% -100.0% 0.00 interrupts.CPU72.PMI:Performance_monitoring_interrupts 7304 ± 20% -100.0% 0.00 interrupts.CPU73.NMI:Non-maskable_interrupts 7304 ± 20% -100.0% 
      0.00        interrupts.CPU73.PMI:Performance_monitoring_interrupts
    367.33 ±  3%     +64.7%     605.17 ± 33%  interrupts.CPU73.RES:Rescheduling_interrupts
  [ per-CPU NMI/PMI counts for CPU74-CPU99 (and CPU8/CPU9): 7303-7971 (± up to 20%), all -100.0% to 0.00 ]
    351.17 ±  6%    -100.0%       0.00        interrupts.IWI:IRQ_work_interrupts
   1423429 ±  6%    -100.0%       4.00 ± 70%  interrupts.NMI:Non-maskable_interrupts
   1423429 ±  6%    -100.0%       4.00 ± 70%  interrupts.PMI:Performance_monitoring_interrupts


***************************************************************************************************
lkp-csl-2sp4: 96 threads Intel(R) Xeon(R) CPU @ 2.30GHz with 128G memory
=========================================================================================
compiler/cpufreq_governor/kconfig/nr_task/rootfs/runtime/tbox_group/test/testcase/ucode:
  gcc-9/performance/x86_64-rhel-8.3/1/debian-10.4-x86_64-20200603.cgz/300s/lkp-csl-2sp4/pipe/unixbench/0x4003006

commit:
  dde38d5b80 ("kernel: don't include PF_IO_WORKERs as part of same_thread_group()")
  43b2a76b1a ("proc: don't show PF_IO_WORKER threads as threads in /proc/<pid>/task/")

dde38d5b805358c7 43b2a76b1a5abcc9833b463bef1
---------------- ---------------------------
         %stddev     %change         %stddev
             \          |                \
      1520            -4.4%       1453        unixbench.score
     34.88            -3.6%      33.62        unixbench.time.user_time
 7.381e+08            -4.5%   7.05e+08        unixbench.workload
    409742 ± 22%    +656.7%    3100441 ± 51%  numa-numastat.node0.local_node
    438639 ± 21%    +613.4%    3129335 ± 49%  numa-numastat.node0.numa_hit
      1.58 ± 18%      -0.9        0.69 ± 13%  mpstat.cpu.all.irq%
      0.73            +1.9        2.62        mpstat.cpu.all.sys%
      0.12            +0.1        0.24 ±  6%  mpstat.cpu.all.usr%
   1084963          +413.4%    5570268 ±  8%  vmstat.memory.cache
      0.00        +2e+102%        2.00        vmstat.procs.r
      1392 ±  2%     +23.9%       1725        vmstat.system.cs
      3750 ±  8%  +81778.3%    3070845 ± 62%  numa-meminfo.node0.Active
      3681 ±  9%  +83310.4%    3070616 ± 62%  numa-meminfo.node0.Active(anon)
    506395 ±  5%    +605.2%    3570979 ± 52%  numa-meminfo.node0.FilePages
   1121422 ±  5%    +295.0%    4429459 ± 39%  numa-meminfo.node0.MemUsed
      8555 ± 42%  +35848.9%    3075431 ± 62%  numa-meminfo.node0.Shmem
   1249492 ±  4%    +119.9%    2747778 ± 55%  numa-meminfo.node1.MemUsed
     10392        +43057.3%    4485054 ± 10%  meminfo.Active
     10248        +43660.3%    4484769 ± 10%  meminfo.Active(anon)
    999932          +447.6%    5475487 ±  9%  meminfo.Cached
    340242         +1332.7%    4874583 ± 10%  meminfo.Committed_AS
   2370049          +203.0%    7181438 ±  6%  meminfo.Memused
      4033           +11.9%       4515        meminfo.PageTables
     19989        +22383.4%    4494403 ± 10%  meminfo.Shmem
   2736614          +162.4%    7181438 ±  6%  meminfo.max_used_kB
    920.83 ±  9%  +83320.0%     768158 ± 62%  numa-vmstat.node0.nr_active_anon
    126599 ±  5%    +605.6%     893249 ± 52%  numa-vmstat.node0.nr_file_pages
      2139 ± 42%  +35865.5%     769362 ± 62%  numa-vmstat.node0.nr_shmem
    920.83 ±  9%  +83319.9%     768158 ± 62%  numa-vmstat.node0.nr_zone_active_anon
    929661 ±  4%    +143.5%    2263281 ± 28%  numa-vmstat.node0.numa_hit
    855512 ± 10%    +160.7%    2229964 ± 30%  numa-vmstat.node0.numa_local
   1027688 ±  4%     +57.9%    1622508 ± 32%  numa-vmstat.node1.numa_hit
    870047 ±  9%     +63.6%    1423799 ± 38%  numa-vmstat.node1.numa_local
     50471 ±  3%     -12.2%      44310 ±  6%  softirqs.CPU22.SCHED
     20757 ±  7%     -12.8%      18091 ± 12%  softirqs.CPU40.RCU
     20699 ±  6%     -13.6%      17886 ± 13%  softirqs.CPU45.RCU
     26238 ±  6%     +68.4%      44189 ± 17%  softirqs.CPU47.SCHED
     50712 ±  4%      -8.7%      46317        softirqs.CPU48.SCHED
     50227 ±  4%     -13.8%      43278 ± 14%  softirqs.CPU60.SCHED
     50143 ±  4%      -9.1%      45585 ±  7%  softirqs.CPU63.SCHED
     20082 ±  8%     -11.5%      17782 ± 11%  softirqs.CPU93.RCU
     28253 ±  2%     +93.2%      54581 ±  2%  softirqs.TIMER
     30003           +33.4%      40010        slabinfo.filp.active_objs
    946.83           +32.5%       1254        slabinfo.filp.active_slabs
     30318           +32.5%      40163        slabinfo.filp.num_objs
    946.83           +32.5%       1254        slabinfo.filp.num_slabs
      8559 ±  2%     +13.2%       9689 ±  4%  slabinfo.kmalloc-256.active_objs
      8594 ±  2%     +13.8%       9780 ±  4%  slabinfo.kmalloc-256.num_objs
     14664           -30.7%      10156 ±  2%  slabinfo.proc_inode_cache.active_objs
     14695           -30.9%      10156 ±  2%  slabinfo.proc_inode_cache.num_objs
     16638          +106.9%      34430 ±  5%  slabinfo.radix_tree_node.active_objs
    297.33          +106.6%     614.33 ±  5%  slabinfo.radix_tree_node.active_slabs
     16674          +106.5%      34430 ±  5%  slabinfo.radix_tree_node.num_objs
    297.33          +106.6%     614.33 ±  5%  slabinfo.radix_tree_node.num_slabs
      2557        +43670.0%    1119564 ± 10%  proc-vmstat.nr_active_anon
     61448            +7.7%      66209        proc-vmstat.nr_anon_pages
   3212540            -3.7%    3092643        proc-vmstat.nr_dirty_background_threshold
   6432936            -3.7%    6192848        proc-vmstat.nr_dirty_threshold
    249984          +446.9%    1367246 ±  9%  proc-vmstat.nr_file_pages
  32327104            -3.7%   31126085        proc-vmstat.nr_free_pages
     63727            +7.4%      68432        proc-vmstat.nr_inactive_anon
      8433            +3.4%       8724        proc-vmstat.nr_mapped
      1013           +11.2%       1127        proc-vmstat.nr_page_table_pages
      4997        +22349.2%    1121973 ± 10%  proc-vmstat.nr_shmem
     21252            +6.5%      22635        proc-vmstat.nr_slab_reclaimable
     45823            +2.4%      46915        proc-vmstat.nr_slab_unreclaimable
      2557        +43670.0%    1119564 ± 10%  proc-vmstat.nr_zone_active_anon
     63727            +7.4%      68432        proc-vmstat.nr_zone_inactive_anon
   1070925          +362.3%    4950407 ±  9%  proc-vmstat.numa_hit
    984193          +394.2%    4863666 ±  9%  proc-vmstat.numa_local
      2909 ±  4%   +5050.4%     149824 ± 11%  proc-vmstat.pgactivate
   1134959          +488.0%    6674021 ± 10%  proc-vmstat.pgalloc_normal
   1173909            -1.0%    1161979        proc-vmstat.pgfault
   1117882          +280.6%    4254121 ± 11%  proc-vmstat.pgfree
    491793 ± 18%     -52.2%     235056        sched_debug.cfs_rq:/.load.max
    103813 ± 13%     -35.1%      67422 ±  6%  sched_debug.cfs_rq:/.load.stddev
    587.93 ± 10%     -33.4%     391.42 ± 10%  sched_debug.cfs_rq:/.load_avg.max
    116.38 ±  7%     -18.5%      94.90 ± 10%  sched_debug.cfs_rq:/.load_avg.stddev
     16646 ± 16%    +173.0%      45439 ±  6%  sched_debug.cfs_rq:/.min_vruntime.avg
     70112 ± 22%    +538.3%     447493 ± 10%  sched_debug.cfs_rq:/.min_vruntime.max
     10077 ± 21%    +725.1%      83153 ±  8%  sched_debug.cfs_rq:/.min_vruntime.stddev
      0.10 ±  8%     +29.0%       0.13 ± 10%  sched_debug.cfs_rq:/.nr_running.avg
      0.28 ±  4%     +12.3%       0.32 ±  4%  sched_debug.cfs_rq:/.nr_running.stddev
     94.80 ±  4%     +33.1%     126.21 ±  6%  sched_debug.cfs_rq:/.runnable_avg.avg
    200.10 ±  3%     +35.2%     270.44 ±  4%  sched_debug.cfs_rq:/.runnable_avg.stddev
     49937 ± 31%    +736.3%     417624 ± 11%  sched_debug.cfs_rq:/.spread0.max
     10078 ± 20%    +725.0%      83153 ±  8%  sched_debug.cfs_rq:/.spread0.stddev
     94.71 ±  4%     +33.2%     126.11 ±  6%  sched_debug.cfs_rq:/.util_avg.avg
    200.02 ±  3%     +35.1%     270.24 ±  4%  sched_debug.cfs_rq:/.util_avg.stddev
      7.31 ± 14%    +589.4%      50.37 ±  8%  sched_debug.cfs_rq:/.util_est_enqueued.avg
    193.59 ±  8%    +411.1%     989.35        sched_debug.cfs_rq:/.util_est_enqueued.max
     31.57 ±  9%    +534.2%     200.18 ±  4%  sched_debug.cfs_rq:/.util_est_enqueued.stddev
    185672 ±  7%      +8.8%     201920 ±  5%  sched_debug.cpu.clock_task.avg
    186883 ±  7%      +8.3%     202403 ±  5%  sched_debug.cpu.clock_task.max
    180574 ±  8%      +9.2%     197161 ±  5%  sched_debug.cpu.clock_task.min
      1287 ± 15%     -46.1%     693.50 ±  9%  sched_debug.cpu.clock_task.stddev
    191.36 ±  2%     +28.4%     245.65 ±  3%  sched_debug.cpu.curr->pid.avg
      1015 ±  4%     +10.6%       1122 ±  3%  sched_debug.cpu.curr->pid.stddev
      0.05 ±  5%     +35.1%       0.06 ±  2%  sched_debug.cpu.nr_running.avg
      0.20           +20.4%       0.24        sched_debug.cpu.nr_running.stddev
      4138 ±  5%     +20.9%       5005 ±  4%  sched_debug.cpu.nr_switches.avg
      6568 ± 18%     +28.1%       8415 ±  7%  sched_debug.cpu.nr_switches.stddev
 9.599e+08 ±  2%    +202.1%    2.9e+09 ±  7%  perf-stat.i.branch-instructions
      2.71 ± 42%      -2.2        0.56 ±  4%  perf-stat.i.branch-miss-rate%
   1625346 ± 14%    +632.4%   11904315 ± 99%  perf-stat.i.cache-misses
      1364 ±  2%     +24.6%       1699        perf-stat.i.context-switches
      2.22 ± 20%     -58.1%       0.93 ±  9%  perf-stat.i.cpi
 6.069e+09 ±  8%    +114.0%  1.299e+10        perf-stat.i.cpu-cycles
     98.51            -1.7%      96.83        perf-stat.i.cpu-migrations
 1.388e+09          +144.7%  3.396e+09 ±  6%  perf-stat.i.dTLB-loads
 8.378e+08          +147.5%  2.074e+09 ±  6%  perf-stat.i.dTLB-stores
   7570481 ±  2%     +57.8%   11942446 ±  4%  perf-stat.i.iTLB-load-misses
   1391240 ± 42%     +54.3%    2146373 ±  3%  perf-stat.i.iTLB-loads
 4.702e+09 ±  2%    +200.0%  1.411e+10 ±  7%  perf-stat.i.instructions
    656.51 ±  2%     +91.4%       1256 ±  3%  perf-stat.i.instructions-per-iTLB-miss
      0.75 ±  7%     +45.0%       1.09 ±  8%  perf-stat.i.ipc
      0.06 ±  8%    +114.0%       0.14        perf-stat.i.metric.GHz
     33.56          +161.6%      87.78 ±  6%  perf-stat.i.metric.M/sec
      2907            -1.0%       2877        perf-stat.i.minor-faults
     86.17 ±  3%     -12.1       74.04 ± 12%  perf-stat.i.node-load-miss-rate%
    105679 ± 40%   +2104.9%    2330159 ±105%  perf-stat.i.node-load-misses
     17322 ± 23%   +2865.0%     513616 ± 62%  perf-stat.i.node-loads
     29341 ± 27%   +3461.2%    1044902 ±107%  perf-stat.i.node-store-misses
     13011 ± 56%    +281.1%      49581 ± 14%  perf-stat.i.node-stores
      2907            -1.0%       2877        perf-stat.i.page-faults
      1.61 ± 25%      -1.1        0.56 ±  3%  perf-stat.overall.branch-miss-rate%
      1.29 ±  8%     -28.3%       0.93 ±  8%  perf-stat.overall.cpi
    621.22           +89.9%       1179 ±  3%  perf-stat.overall.instructions-per-iTLB-miss
      0.78 ±  8%     +39.3%       1.09 ±  8%  perf-stat.overall.ipc
     85.07 ±  3%     -12.2       72.85 ± 13%  perf-stat.overall.node-load-miss-rate%
      2485 ±  2%    +214.2%       7811 ±  7%  perf-stat.overall.path-length
 9.573e+08 ±  2%    +202.2%  2.893e+09 ±  7%  perf-stat.ps.branch-instructions
   1621575 ± 14%    +632.2%   11873380 ± 99%  perf-stat.ps.cache-misses
      1361 ±  2%     +24.6%       1695        perf-stat.ps.context-switches
 6.053e+09 ±  8%    +114.0%  1.295e+10        perf-stat.ps.cpu-cycles
     98.26            -1.7%      96.58        perf-stat.ps.cpu-migrations
 1.384e+09          +144.8%  3.387e+09 ±  6%  perf-stat.ps.dTLB-loads
 8.354e+08          +147.6%  2.068e+09 ±  6%  perf-stat.ps.dTLB-stores
   7549188 ±  2%     +57.8%   11911377 ±  4%  perf-stat.ps.iTLB-load-misses
   1387528 ± 42%     +54.3%    2140817 ±  3%  perf-stat.ps.iTLB-loads
 4.689e+09 ±  2%    +200.1%  1.407e+10 ±  7%  perf-stat.ps.instructions
      2899            -1.0%       2870        perf-stat.ps.minor-faults
    105437 ± 40%   +2104.3%    2324118 ±105%  perf-stat.ps.node-load-misses
     17288 ± 23%   +2863.0%     512270 ± 62%  perf-stat.ps.node-loads
     29280 ± 27%   +3459.4%    1042184 ±107%  perf-stat.ps.node-store-misses
     12991 ± 56%    +280.8%      49471 ± 14%  perf-stat.ps.node-stores
      2900            -1.0%       2870        perf-stat.ps.page-faults
 1.835e+12 ±  2%    +200.0%  5.504e+12 ±  7%  perf-stat.total.instructions
     66.43 ±  4%     -66.4        0.00        perf-profile.calltrace.cycles-pp.secondary_startup_64_no_verify
  [ all remaining perf-profile.calltrace/children/self.cycles-pp entries (idle loop, cpuidle, syscall entry, pipe read/write, hrtimer paths) likewise drop to 0.00 in the second run ]
  [ all perf-sched.sch_delay avg/max breakdowns drop -100.0% to 0.00 ]
      0.01 ± 29%    -100.0%       0.00        perf-sched.total_sch_delay.average.ms
      3.18 ± 30%    -100.0%       0.00        perf-sched.total_sch_delay.max.ms
    229.38 ±  3%    -100.0%       0.00        perf-sched.total_wait_and_delay.average.ms
      6583 ±  2%    -100.0%       0.00        perf-sched.total_wait_and_delay.count.ms
      9993          -100.0%       0.00        perf-sched.total_wait_and_delay.max.ms
    229.38 ±  3%    -100.0%       0.00        perf-sched.total_wait_time.average.ms
      9993          -100.0%       0.00        perf-sched.total_wait_time.max.ms
  [ all perf-sched.wait_and_delay and perf-sched.wait_time avg/count/max breakdowns likewise drop -100.0% to 0.00 ]
    143078 ± 44%     +75.7%     251446 ±  2%  interrupts.CAL:Function_call_interrupts
     43009 ±101%     -98.8%     502.83        interrupts.CPU15.CAL:Function_call_interrupts
  [ per-CPU NMI/PMI counts for CPU0-CPU41: 83.83-956.17 (± up to 201%), all -100.0% to 0.00 ]
    111.33 ± 28%    -100.0%       0.00
interrupts.CPU42.NMI:Non-maskable_interrupts 111.33 ± 28% -100.0% 0.00 interrupts.CPU42.PMI:Performance_monitoring_interrupts 562.33 ± 5% -10.4% 504.00 interrupts.CPU43.CAL:Function_call_interrupts 117.17 ± 15% -100.0% 0.00 interrupts.CPU43.NMI:Non-maskable_interrupts 117.17 ± 15% -100.0% 0.00 interrupts.CPU43.PMI:Performance_monitoring_interrupts 122.33 ± 15% -100.0% 0.00 interrupts.CPU44.NMI:Non-maskable_interrupts 122.33 ± 15% -100.0% 0.00 interrupts.CPU44.PMI:Performance_monitoring_interrupts 114.83 ± 14% -100.0% 0.00 interrupts.CPU45.NMI:Non-maskable_interrupts 114.83 ± 14% -100.0% 0.00 interrupts.CPU45.PMI:Performance_monitoring_interrupts 118.00 ± 15% -100.0% 0.00 interrupts.CPU46.NMI:Non-maskable_interrupts 118.00 ± 15% -100.0% 0.00 interrupts.CPU46.PMI:Performance_monitoring_interrupts 134.50 ± 28% -100.0% 0.00 interrupts.CPU47.NMI:Non-maskable_interrupts 134.50 ± 28% -100.0% 0.00 interrupts.CPU47.PMI:Performance_monitoring_interrupts 131.33 ± 27% -100.0% 0.00 interrupts.CPU48.NMI:Non-maskable_interrupts 131.33 ± 27% -100.0% 0.00 interrupts.CPU48.PMI:Performance_monitoring_interrupts 123.50 ± 29% -100.0% 0.00 interrupts.CPU49.NMI:Non-maskable_interrupts 123.50 ± 29% -100.0% 0.00 interrupts.CPU49.PMI:Performance_monitoring_interrupts 340.33 ±144% -100.0% 0.00 interrupts.CPU5.NMI:Non-maskable_interrupts 340.33 ±144% -100.0% 0.00 interrupts.CPU5.PMI:Performance_monitoring_interrupts 105.17 ± 44% -100.0% 0.00 interrupts.CPU50.NMI:Non-maskable_interrupts 105.17 ± 44% -100.0% 0.00 interrupts.CPU50.PMI:Performance_monitoring_interrupts 113.50 ± 34% -100.0% 0.00 interrupts.CPU51.NMI:Non-maskable_interrupts 113.50 ± 34% -100.0% 0.00 interrupts.CPU51.PMI:Performance_monitoring_interrupts 4.50 ± 80% +522.2% 28.00 ±149% interrupts.CPU51.RES:Rescheduling_interrupts 114.00 ± 37% -100.0% 0.00 interrupts.CPU52.NMI:Non-maskable_interrupts 114.00 ± 37% -100.0% 0.00 interrupts.CPU52.PMI:Performance_monitoring_interrupts 710.50 ±181% -100.0% 0.00 
interrupts.CPU53.NMI:Non-maskable_interrupts 710.50 ±181% -100.0% 0.00 interrupts.CPU53.PMI:Performance_monitoring_interrupts 227.00 ± 94% -100.0% 0.00 interrupts.CPU54.NMI:Non-maskable_interrupts 227.00 ± 94% -100.0% 0.00 interrupts.CPU54.PMI:Performance_monitoring_interrupts 115.33 ± 36% -100.0% 0.00 interrupts.CPU55.NMI:Non-maskable_interrupts 115.33 ± 36% -100.0% 0.00 interrupts.CPU55.PMI:Performance_monitoring_interrupts 126.00 ± 25% -100.0% 0.00 interrupts.CPU56.NMI:Non-maskable_interrupts 126.00 ± 25% -100.0% 0.00 interrupts.CPU56.PMI:Performance_monitoring_interrupts 2556 ± 96% -80.0% 510.67 ± 2% interrupts.CPU57.CAL:Function_call_interrupts 124.83 ± 24% -100.0% 0.00 interrupts.CPU57.NMI:Non-maskable_interrupts 124.83 ± 24% -100.0% 0.00 interrupts.CPU57.PMI:Performance_monitoring_interrupts 130.17 ± 32% -100.0% 0.00 interrupts.CPU58.NMI:Non-maskable_interrupts 130.17 ± 32% -100.0% 0.00 interrupts.CPU58.PMI:Performance_monitoring_interrupts 131.50 ± 30% -100.0% 0.00 interrupts.CPU59.NMI:Non-maskable_interrupts 131.50 ± 30% -100.0% 0.00 interrupts.CPU59.PMI:Performance_monitoring_interrupts 171.50 ± 89% -100.0% 0.00 interrupts.CPU6.NMI:Non-maskable_interrupts 171.50 ± 89% -100.0% 0.00 interrupts.CPU6.PMI:Performance_monitoring_interrupts 117.17 ± 26% -99.1% 1.00 ±223% interrupts.CPU60.NMI:Non-maskable_interrupts 117.17 ± 26% -99.1% 1.00 ±223% interrupts.CPU60.PMI:Performance_monitoring_interrupts 111.17 ± 32% -100.0% 0.00 interrupts.CPU61.NMI:Non-maskable_interrupts 111.17 ± 32% -100.0% 0.00 interrupts.CPU61.PMI:Performance_monitoring_interrupts 113.50 ± 33% -100.0% 0.00 interrupts.CPU62.NMI:Non-maskable_interrupts 113.50 ± 33% -100.0% 0.00 interrupts.CPU62.PMI:Performance_monitoring_interrupts 108.33 ± 35% -100.0% 0.00 interrupts.CPU63.NMI:Non-maskable_interrupts 108.33 ± 35% -100.0% 0.00 interrupts.CPU63.PMI:Performance_monitoring_interrupts 120.50 ± 27% -100.0% 0.00 interrupts.CPU64.NMI:Non-maskable_interrupts 120.50 ± 27% -100.0% 0.00 
interrupts.CPU64.PMI:Performance_monitoring_interrupts 120.83 ± 26% -100.0% 0.00 interrupts.CPU65.NMI:Non-maskable_interrupts 120.83 ± 26% -100.0% 0.00 interrupts.CPU65.PMI:Performance_monitoring_interrupts 911.17 ±193% -100.0% 0.00 interrupts.CPU66.NMI:Non-maskable_interrupts 911.17 ±193% -100.0% 0.00 interrupts.CPU66.PMI:Performance_monitoring_interrupts 335.67 ±145% -100.0% 0.00 interrupts.CPU67.NMI:Non-maskable_interrupts 335.67 ±145% -100.0% 0.00 interrupts.CPU67.PMI:Performance_monitoring_interrupts 342.33 ±142% -100.0% 0.00 interrupts.CPU68.NMI:Non-maskable_interrupts 342.33 ±142% -100.0% 0.00 interrupts.CPU68.PMI:Performance_monitoring_interrupts 359.67 ±156% -100.0% 0.00 interrupts.CPU69.NMI:Non-maskable_interrupts 359.67 ±156% -100.0% 0.00 interrupts.CPU69.PMI:Performance_monitoring_interrupts 50.00 ±112% -87.0% 6.50 ±125% interrupts.CPU69.RES:Rescheduling_interrupts 99.33 ± 32% -100.0% 0.00 interrupts.CPU7.NMI:Non-maskable_interrupts 99.33 ± 32% -100.0% 0.00 interrupts.CPU7.PMI:Performance_monitoring_interrupts 117.50 ± 36% -100.0% 0.00 interrupts.CPU70.NMI:Non-maskable_interrupts 117.50 ± 36% -100.0% 0.00 interrupts.CPU70.PMI:Performance_monitoring_interrupts 120.83 ± 35% -100.0% 0.00 interrupts.CPU71.NMI:Non-maskable_interrupts 120.83 ± 35% -100.0% 0.00 interrupts.CPU71.PMI:Performance_monitoring_interrupts 351.67 ±141% -100.0% 0.00 interrupts.CPU72.NMI:Non-maskable_interrupts 351.67 ±141% -100.0% 0.00 interrupts.CPU72.PMI:Performance_monitoring_interrupts 110.67 ± 32% -100.0% 0.00 interrupts.CPU73.NMI:Non-maskable_interrupts 110.67 ± 32% -100.0% 0.00 interrupts.CPU73.PMI:Performance_monitoring_interrupts 584.83 ±184% -100.0% 0.00 interrupts.CPU74.NMI:Non-maskable_interrupts 584.83 ±184% -100.0% 0.00 interrupts.CPU74.PMI:Performance_monitoring_interrupts 337.00 ±143% -100.0% 0.00 interrupts.CPU75.NMI:Non-maskable_interrupts 337.00 ±143% -100.0% 0.00 interrupts.CPU75.PMI:Performance_monitoring_interrupts 114.33 ± 31% -100.0% 0.00 
interrupts.CPU76.NMI:Non-maskable_interrupts 114.33 ± 31% -100.0% 0.00 interrupts.CPU76.PMI:Performance_monitoring_interrupts 117.17 ± 30% -100.0% 0.00 interrupts.CPU77.NMI:Non-maskable_interrupts 117.17 ± 30% -100.0% 0.00 interrupts.CPU77.PMI:Performance_monitoring_interrupts 110.50 ± 24% -100.0% 0.00 interrupts.CPU78.NMI:Non-maskable_interrupts 110.50 ± 24% -100.0% 0.00 interrupts.CPU78.PMI:Performance_monitoring_interrupts 115.67 ± 15% -100.0% 0.00 interrupts.CPU79.NMI:Non-maskable_interrupts 115.67 ± 15% -100.0% 0.00 interrupts.CPU79.PMI:Performance_monitoring_interrupts 78.50 ± 29% -100.0% 0.00 interrupts.CPU8.NMI:Non-maskable_interrupts 78.50 ± 29% -100.0% 0.00 interrupts.CPU8.PMI:Performance_monitoring_interrupts 117.67 ± 13% -100.0% 0.00 interrupts.CPU80.NMI:Non-maskable_interrupts 117.67 ± 13% -100.0% 0.00 interrupts.CPU80.PMI:Performance_monitoring_interrupts 108.67 ± 28% -100.0% 0.00 interrupts.CPU81.NMI:Non-maskable_interrupts 108.67 ± 28% -100.0% 0.00 interrupts.CPU81.PMI:Performance_monitoring_interrupts 109.17 ± 28% -100.0% 0.00 interrupts.CPU82.NMI:Non-maskable_interrupts 109.17 ± 28% -100.0% 0.00 interrupts.CPU82.PMI:Performance_monitoring_interrupts 117.00 ± 29% -100.0% 0.00 interrupts.CPU83.NMI:Non-maskable_interrupts 117.00 ± 29% -100.0% 0.00 interrupts.CPU83.PMI:Performance_monitoring_interrupts 111.17 ± 29% -100.0% 0.00 interrupts.CPU84.NMI:Non-maskable_interrupts 111.17 ± 29% -100.0% 0.00 interrupts.CPU84.PMI:Performance_monitoring_interrupts 107.17 ± 25% -100.0% 0.00 interrupts.CPU85.NMI:Non-maskable_interrupts 107.17 ± 25% -100.0% 0.00 interrupts.CPU85.PMI:Performance_monitoring_interrupts 1044 ± 66% -49.0% 532.50 ± 9% interrupts.CPU86.CAL:Function_call_interrupts 205.17 ±111% -100.0% 0.00 interrupts.CPU86.NMI:Non-maskable_interrupts 205.17 ±111% -100.0% 0.00 interrupts.CPU86.PMI:Performance_monitoring_interrupts 220.83 ±120% -100.0% 0.00 interrupts.CPU87.NMI:Non-maskable_interrupts 220.83 ±120% -100.0% 0.00 
interrupts.CPU87.PMI:Performance_monitoring_interrupts
    110.17 ± 28%    -100.0%       0.00        interrupts.CPU88.NMI:Non-maskable_interrupts
    110.17 ± 28%    -100.0%       0.00        interrupts.CPU88.PMI:Performance_monitoring_interrupts
    118.33 ± 42%    -100.0%       0.00        interrupts.CPU89.NMI:Non-maskable_interrupts
    118.33 ± 42%    -100.0%       0.00        interrupts.CPU89.PMI:Performance_monitoring_interrupts
     99.50 ± 33%    -100.0%       0.00        interrupts.CPU9.NMI:Non-maskable_interrupts
     99.50 ± 33%    -100.0%       0.00        interrupts.CPU9.PMI:Performance_monitoring_interrupts
    114.67 ± 23%    -100.0%       0.00        interrupts.CPU90.NMI:Non-maskable_interrupts
    114.67 ± 23%    -100.0%       0.00        interrupts.CPU90.PMI:Performance_monitoring_interrupts
    115.67 ± 31%    -100.0%       0.00        interrupts.CPU91.NMI:Non-maskable_interrupts
    115.67 ± 31%    -100.0%       0.00        interrupts.CPU91.PMI:Performance_monitoring_interrupts
    110.67 ± 29%    -100.0%       0.00        interrupts.CPU92.NMI:Non-maskable_interrupts
    110.67 ± 29%    -100.0%       0.00        interrupts.CPU92.PMI:Performance_monitoring_interrupts
    107.17 ± 28%    -100.0%       0.00        interrupts.CPU93.NMI:Non-maskable_interrupts
    107.17 ± 28%    -100.0%       0.00        interrupts.CPU93.PMI:Performance_monitoring_interrupts
    108.00 ± 29%    -100.0%       0.00        interrupts.CPU94.NMI:Non-maskable_interrupts
    108.00 ± 29%    -100.0%       0.00        interrupts.CPU94.PMI:Performance_monitoring_interrupts
    112.67 ± 26%    -100.0%       0.00        interrupts.CPU95.NMI:Non-maskable_interrupts
    112.67 ± 26%    -100.0%       0.00        interrupts.CPU95.PMI:Performance_monitoring_interrupts
     17625 ± 12%    -100.0%       1.00 ±223%  interrupts.NMI:Non-maskable_interrupts
     17625 ± 12%    -100.0%       1.00 ±223%  interrupts.PMI:Performance_monitoring_interrupts

***************************************************************************************************
lkp-skl-fpga01: 104 threads Skylake with 192G memory
=========================================================================================
compiler/cpufreq_governor/kconfig/mode/nr_task/rootfs/tbox_group/test/testcase/ucode:
  gcc-9/performance/x86_64-rhel-8.3/process/16/debian-10.4-x86_64-20200603.cgz/lkp-skl-fpga01/open1/will-it-scale/0x2006a0a

commit:
  dde38d5b80 ("kernel: don't include PF_IO_WORKERs as part of same_thread_group()")
  43b2a76b1a ("proc: don't show PF_IO_WORKER threads as threads in /proc/<pid>/task/")

dde38d5b805358c7 43b2a76b1a5abcc9833b463bef1
---------------- ---------------------------
         %stddev     %change         %stddev
             \          |                \
   3465120            -20.3%    2762637 ±  3%  will-it-scale.16.processes
     83.40             -1.8%      81.93        will-it-scale.16.processes_idle
    216569            -20.3%     172664 ±  3%  will-it-scale.per_process_ops
   3465120            -20.3%    2762637 ±  3%  will-it-scale.workload
      1.18              -0.5       0.72 ±  3%  mpstat.cpu.all.irq%
      0.70 ±  2%        +0.1       0.85        mpstat.cpu.all.soft%
     12.23              +1.4      13.61        mpstat.cpu.all.sys%
      2.08              -0.2       1.87 ±  2%  mpstat.cpu.all.usr%
  27101114            -16.9%   22524690 ±  4%  numa-numastat.node0.local_node
  27147885            -16.9%   22554727 ±  4%  numa-numastat.node0.numa_hit
    640758 ±  7%     +209.1%    1980823 ± 17%  numa-numastat.node1.local_node
    687660 ±  4%     +197.3%    2044488 ± 16%  numa-numastat.node1.numa_hit
     83.00             -1.2%      82.00        vmstat.cpu.id
   1358759           +147.7%    3365354 ±  2%  vmstat.memory.cache
     15.83 ±  2%      +13.7%      18.00        vmstat.procs.r
      1870            +10.8%       2071        vmstat.system.cs
   -426199            +54.6%    -658753        sched_debug.cfs_rq:/.spread0.avg
    662465 ± 23%      -70.0%     198997 ± 40%  sched_debug.cfs_rq:/.spread0.max
    566.19            +71.4%     970.34        sched_debug.cfs_rq:/.util_est_enqueued.max
    208.90 ±  5%      +13.4%     236.80 ±  3%  sched_debug.cfs_rq:/.util_est_enqueued.stddev
      1509 ±  4%      +33.0%       2006 ±  8%  sched_debug.cpu.clock_task.stddev
      0.16            +11.3%       0.18 ±  2%  sched_debug.cpu.nr_running.avg
      1059 ±  9%      +18.0%       1250 ±  9%  sched_debug.cpu.nr_switches.min
     55063 ±  4%    +3636.5%    2057437 ±  4%  meminfo.Active
     54372 ±  3%    +3683.3%    2057074 ±  4%  meminfo.Active(anon)
   1260885           +158.5%    3259312 ±  2%  meminfo.Cached
    465143           +443.2%    2526784 ±  3%  meminfo.Committed_AS
   2621024            +91.7%    5025386        meminfo.Memused
      4488            +11.3%       4994        meminfo.PageTables
     73406 ±  2%    +2726.8%    2075033 ±  4%  meminfo.Shmem
   3081831            +63.1%    5027589        meminfo.max_used_kB
      1579 ± 36%
+20883.1% 331323 ± 88% numa-meminfo.node0.Active 1091 ± 43% +30247.8% 331094 ± 88% numa-meminfo.node0.Active(anon) 601961 ± 2% +56.6% 942489 ± 31% numa-meminfo.node0.FilePages 1290225 ± 4% +43.7% 1853949 ± 18% numa-meminfo.node0.MemUsed 11714 ± 27% +2846.8% 345206 ± 84% numa-meminfo.node0.Shmem 53530 ± 4% +3126.6% 1727189 ± 21% numa-meminfo.node1.Active 53327 ± 4% +3138.6% 1727056 ± 21% numa-meminfo.node1.Active(anon) 658931 ± 2% +251.8% 2317897 ± 16% numa-meminfo.node1.FilePages 1330859 ± 4% +138.4% 3173127 ± 13% numa-meminfo.node1.MemUsed 61694 ± 6% +2705.6% 1730896 ± 21% numa-meminfo.node1.Shmem 272.50 ± 43% +30299.1% 82837 ± 88% numa-vmstat.node0.nr_active_anon 150489 ± 2% +56.6% 235686 ± 31% numa-vmstat.node0.nr_file_pages 2928 ± 27% +2849.2% 86366 ± 84% numa-vmstat.node0.nr_shmem 272.50 ± 43% +30299.1% 82837 ± 88% numa-vmstat.node0.nr_zone_active_anon 14301314 -15.7% 12050556 ± 4% numa-vmstat.node0.numa_hit 14251705 -16.0% 11969395 ± 4% numa-vmstat.node0.numa_local 13339 ± 4% +3136.3% 431702 ± 21% numa-vmstat.node1.nr_active_anon 164717 ± 2% +251.8% 579413 ± 16% numa-vmstat.node1.nr_file_pages 15408 ± 6% +2707.9% 432662 ± 21% numa-vmstat.node1.nr_shmem 13339 ± 4% +3136.3% 431701 ± 21% numa-vmstat.node1.nr_zone_active_anon 1100071 ± 7% +62.4% 1786534 ± 11% numa-vmstat.node1.numa_hit 871366 ± 7% +82.4% 1589557 ± 13% numa-vmstat.node1.numa_local 18879 +36.8% 25823 slabinfo.kmalloc-256.active_objs 640.00 +33.0% 851.33 slabinfo.kmalloc-256.active_slabs 20492 +33.0% 27259 slabinfo.kmalloc-256.num_objs 640.00 +33.0% 851.33 slabinfo.kmalloc-256.num_slabs 13769 -35.3% 8908 slabinfo.proc_inode_cache.active_objs 286.67 -35.5% 185.00 slabinfo.proc_inode_cache.active_slabs 13769 -35.3% 8908 slabinfo.proc_inode_cache.num_objs 286.67 -35.5% 185.00 slabinfo.proc_inode_cache.num_slabs 21880 +34.8% 29490 slabinfo.radix_tree_node.active_objs 390.33 +34.8% 526.00 slabinfo.radix_tree_node.active_slabs 21880 +34.8% 29490 slabinfo.radix_tree_node.num_objs 390.33 +34.8% 526.00 
slabinfo.radix_tree_node.num_slabs 13627 ± 3% +3678.1% 514857 ± 4% proc-vmstat.nr_active_anon 61753 +8.5% 67005 proc-vmstat.nr_anon_pages 4823803 -1.2% 4763631 proc-vmstat.nr_dirty_background_threshold 9659401 -1.2% 9538910 proc-vmstat.nr_dirty_threshold 315225 +158.7% 815419 ± 2% proc-vmstat.nr_file_pages 48525906 -1.2% 47924109 proc-vmstat.nr_free_pages 66340 +7.5% 71309 proc-vmstat.nr_inactive_anon 1121 +11.5% 1249 proc-vmstat.nr_page_table_pages 18354 ± 2% +2729.6% 519347 ± 4% proc-vmstat.nr_shmem 59465 +2.9% 61176 proc-vmstat.nr_slab_unreclaimable 13627 ± 3% +3678.1% 514857 ± 4% proc-vmstat.nr_zone_active_anon 66340 +7.5% 71309 proc-vmstat.nr_zone_inactive_anon 27830863 -11.6% 24594337 ± 2% proc-vmstat.numa_hit 27737146 -11.7% 24500607 ± 2% proc-vmstat.numa_local 19151 ± 3% +260.5% 69034 ± 4% proc-vmstat.pgactivate 54917608 -13.6% 47441222 ± 3% proc-vmstat.pgalloc_normal 899744 -2.7% 875009 proc-vmstat.pgfault 54883163 -15.8% 46222944 ± 3% proc-vmstat.pgfree 21077 ± 9% -19.7% 16930 ± 4% softirqs.CPU1.RCU 19719 ± 7% -18.9% 15996 ± 3% softirqs.CPU10.RCU 19668 ± 7% -18.4% 16050 ± 3% softirqs.CPU11.RCU 19192 ± 5% -13.4% 16620 ± 6% softirqs.CPU12.RCU 19645 ± 8% -13.9% 16919 ± 7% softirqs.CPU13.RCU 20045 ± 2% -20.0% 16042 ± 7% softirqs.CPU2.RCU 11967 ± 5% +165.4% 31760 ± 29% softirqs.CPU26.RCU 11131 ± 6% +127.3% 25296 ± 48% softirqs.CPU27.RCU 10719 ± 4% +84.7% 19803 ± 16% softirqs.CPU28.RCU 11116 ± 4% +118.0% 24231 ± 31% softirqs.CPU29.RCU 19364 ± 5% -14.8% 16493 ± 3% softirqs.CPU3.RCU 1750 ±147% +590.7% 12093 ± 65% softirqs.CPU33.NET_RX 19543 ± 3% -17.6% 16106 ± 4% softirqs.CPU4.RCU 17706 ± 14% +114.4% 37957 ± 10% softirqs.CPU51.SCHED 19687 ± 5% -17.5% 16239 ± 3% softirqs.CPU6.RCU 18009 ± 9% -13.7% 15545 ± 4% softirqs.CPU64.RCU 18189 ± 10% -15.9% 15288 ± 5% softirqs.CPU65.RCU 39506 -22.9% 30446 ± 28% softirqs.CPU68.SCHED 9173 ± 2% +38.9% 12738 ± 23% softirqs.CPU75.RCU 9293 ± 5% +65.2% 15356 ± 67% softirqs.CPU76.RCU 9344 ± 7% +38.7% 12957 ± 37% softirqs.CPU77.RCU 
     15808 ±  5%      +67.4%      26466 ± 22%  softirqs.CPU78.RCU
     41796 ±  2%      -10.0%      37623        softirqs.CPU79.SCHED
     19717 ±  3%      -18.1%      16158 ±  2%  softirqs.CPU9.RCU
   1626815            +20.6%    1961283 ±  2%  softirqs.RCU
      5.88             +4.4%       6.15 ±  4%  perf-stat.i.MPKI
 4.554e+09             +6.6%  4.854e+09        perf-stat.i.branch-instructions
      0.75              -0.1       0.68        perf-stat.i.branch-miss-rate%
     11.09              +9.4      20.48 ±  9%  perf-stat.i.cache-miss-rate%
  14539965           +105.3%   29846666 ± 10%  perf-stat.i.cache-misses
 1.314e+08            +10.9%  1.458e+08 ±  3%  perf-stat.i.cache-references
      1842            +10.8%       2041        perf-stat.i.context-switches
      2.09             +6.5%       2.23        perf-stat.i.cpi
 4.677e+10            +13.1%   5.29e+10        perf-stat.i.cpu-cycles
    111.51             +5.8%     117.97        perf-stat.i.cpu-migrations
      3243            -44.1%       1814 ± 10%  perf-stat.i.cycles-between-cache-misses
      0.11              -0.0       0.09        perf-stat.i.dTLB-load-miss-rate%
   7115170            -13.1%    6185042 ±  3%  perf-stat.i.dTLB-load-misses
      0.00 ±  8%        -0.0       0.00 ± 27%  perf-stat.i.dTLB-store-miss-rate%
    155770 ±  8%      -39.4%      94381 ± 28%  perf-stat.i.dTLB-store-misses
     74.86              -2.7      72.20        perf-stat.i.iTLB-load-miss-rate%
   6938840             -9.0%    6311089 ±  4%  perf-stat.i.iTLB-load-misses
   2312627             +4.3%    2412563        perf-stat.i.iTLB-loads
 2.232e+10             +6.3%  2.372e+10        perf-stat.i.instructions
      3270            +16.1%       3795 ±  3%  perf-stat.i.instructions-per-iTLB-miss
      0.48             -6.0%       0.45        perf-stat.i.ipc
      0.88 ± 19%      -88.1%       0.11 ± 46%  perf-stat.i.major-faults
      0.45            +13.1%       0.51        perf-stat.i.metric.GHz
    143.54             +2.1%     146.59        perf-stat.i.metric.M/sec
      2868             -2.8%       2787        perf-stat.i.minor-faults
      5.62 ± 11%       +53.7      59.33 ±  5%  perf-stat.i.node-load-miss-rate%
    137219 ±  8%    +2379.6%    3402537 ± 18%  perf-stat.i.node-load-misses
      1.45 ±  4%       +53.6      55.01 ±  4%  perf-stat.i.node-store-miss-rate%
     33610 ±  3%    +7186.8%    2449174 ±  3%  perf-stat.i.node-store-misses
   2382792            -15.9%    2002740 ±  6%  perf-stat.i.node-stores
      2869             -2.9%       2787        perf-stat.i.page-faults
      5.89             +4.4%       6.15 ±  4%  perf-stat.overall.MPKI
      0.75              -0.1       0.68        perf-stat.overall.branch-miss-rate%
     11.06              +9.4      20.47 ±  9%  perf-stat.overall.cache-miss-rate%
      2.10             +6.5%       2.23        perf-stat.overall.cpi
      3216            -44.3%       1791 ± 10%
perf-stat.overall.cycles-between-cache-misses 0.11 -0.0 0.10 perf-stat.overall.dTLB-load-miss-rate% 0.00 ± 8% -0.0 0.00 ± 27% perf-stat.overall.dTLB-store-miss-rate% 75.00 -2.7 72.31 perf-stat.overall.iTLB-load-miss-rate% 3217 +17.0% 3763 ± 3% perf-stat.overall.instructions-per-iTLB-miss 0.48 -6.0% 0.45 perf-stat.overall.ipc 5.63 ± 11% +53.9 59.54 ± 5% perf-stat.overall.node-load-miss-rate% 1.40 ± 3% +53.6 55.04 ± 4% perf-stat.overall.node-store-miss-rate% 1940249 +33.4% 2587475 ± 2% perf-stat.overall.path-length 4.539e+09 +6.6% 4.837e+09 perf-stat.ps.branch-instructions 14492324 +105.2% 29742417 ± 10% perf-stat.ps.cache-misses 1.31e+08 +10.9% 1.453e+08 ± 3% perf-stat.ps.cache-references 1836 +10.8% 2034 perf-stat.ps.context-switches 4.661e+10 +13.1% 5.272e+10 perf-stat.ps.cpu-cycles 111.20 +5.7% 117.56 perf-stat.ps.cpu-migrations 7091164 -13.1% 6163793 ± 3% perf-stat.ps.dTLB-load-misses 155264 ± 8% -39.4% 94063 ± 28% perf-stat.ps.dTLB-store-misses 6915458 -9.1% 6289393 ± 4% perf-stat.ps.iTLB-load-misses 2304780 +4.3% 2404266 perf-stat.ps.iTLB-loads 2.225e+10 +6.3% 2.364e+10 perf-stat.ps.instructions 0.87 ± 19% -88.0% 0.10 ± 47% perf-stat.ps.major-faults 2857 -2.8% 2777 perf-stat.ps.minor-faults 136761 ± 8% +2379.2% 3390553 ± 18% perf-stat.ps.node-load-misses 33604 ± 3% +7163.4% 2440858 ± 3% perf-stat.ps.node-store-misses 2374968 -16.0% 1995766 ± 6% perf-stat.ps.node-stores 2858 -2.8% 2777 perf-stat.ps.page-faults 6.723e+12 +6.2% 7.143e+12 perf-stat.total.instructions 51.10 ± 7% -51.1 0.00 perf-profile.calltrace.cycles-pp.open64 47.40 ± 7% -47.4 0.00 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.open64 44.28 ± 7% -44.3 0.00 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.open64 43.99 ± 7% -44.0 0.00 perf-profile.calltrace.cycles-pp.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe.open64 43.89 ± 7% -43.9 0.00 
perf-profile.calltrace.cycles-pp.do_sys_openat2.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe.open64 40.96 ± 7% -41.0 0.00 perf-profile.calltrace.cycles-pp.do_filp_open.do_sys_openat2.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe 40.85 ± 7% -40.9 0.00 perf-profile.calltrace.cycles-pp.path_openat.do_filp_open.do_sys_openat2.do_sys_open.do_syscall_64 31.55 ± 16% -31.5 0.00 perf-profile.calltrace.cycles-pp.secondary_startup_64_no_verify 30.58 ± 16% -30.6 0.00 perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify 30.58 ± 16% -30.6 0.00 perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify 30.58 ± 16% -30.6 0.00 perf-profile.calltrace.cycles-pp.start_secondary.secondary_startup_64_no_verify 30.26 ± 16% -30.3 0.00 perf-profile.calltrace.cycles-pp.cpuidle_enter.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify 30.13 ± 16% -30.1 0.00 perf-profile.calltrace.cycles-pp.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry.start_secondary 29.65 ± 18% -29.6 0.00 perf-profile.calltrace.cycles-pp.intel_idle.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry 27.69 ± 7% -27.7 0.00 perf-profile.calltrace.cycles-pp.do_open.path_openat.do_filp_open.do_sys_openat2.do_sys_open 16.83 ± 7% -16.8 0.00 perf-profile.calltrace.cycles-pp.__close 13.40 ± 7% -13.4 0.00 perf-profile.calltrace.cycles-pp.do_dentry_open.do_open.path_openat.do_filp_open.do_sys_openat2 13.17 ± 7% -13.2 0.00 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.__close 13.15 ± 7% -13.2 0.00 perf-profile.calltrace.cycles-pp.ima_file_check.do_open.path_openat.do_filp_open.do_sys_openat2 13.06 ± 7% -13.1 0.00 perf-profile.calltrace.cycles-pp.security_task_getsecid.ima_file_check.do_open.path_openat.do_filp_open 13.01 ± 7% -13.0 0.00 perf-profile.calltrace.cycles-pp.apparmor_task_getsecid.security_task_getsecid.ima_file_check.do_open.path_openat 12.55 ± 7% 
-12.6 0.00 perf-profile.calltrace.cycles-pp.security_file_open.do_dentry_open.do_open.path_openat.do_filp_open 12.48 ± 7% -12.5 0.00 perf-profile.calltrace.cycles-pp.apparmor_file_open.security_file_open.do_dentry_open.do_open.path_openat 11.85 ± 7% -11.9 0.00 perf-profile.calltrace.cycles-pp.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe.__close 10.62 ± 7% -10.6 0.00 perf-profile.calltrace.cycles-pp.alloc_empty_file.path_openat.do_filp_open.do_sys_openat2.do_sys_open 10.55 ± 7% -10.5 0.00 perf-profile.calltrace.cycles-pp.__alloc_file.alloc_empty_file.path_openat.do_filp_open.do_sys_openat2 9.06 ± 7% -9.1 0.00 perf-profile.calltrace.cycles-pp.exit_to_user_mode_prepare.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe.__close 8.71 ± 7% -8.7 0.00 perf-profile.calltrace.cycles-pp.task_work_run.exit_to_user_mode_prepare.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe.__close 7.82 ± 7% -7.8 0.00 perf-profile.calltrace.cycles-pp.__fput.task_work_run.exit_to_user_mode_prepare.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe 7.45 ± 7% -7.5 0.00 perf-profile.calltrace.cycles-pp.security_file_alloc.__alloc_file.alloc_empty_file.path_openat.do_filp_open 6.90 ± 7% -6.9 0.00 perf-profile.calltrace.cycles-pp.apparmor_file_alloc_security.security_file_alloc.__alloc_file.alloc_empty_file.path_openat 6.57 ± 7% -6.6 0.00 perf-profile.calltrace.cycles-pp.security_file_free.__fput.task_work_run.exit_to_user_mode_prepare.syscall_exit_to_user_mode 6.57 ± 7% -6.6 0.00 perf-profile.calltrace.cycles-pp.aa_get_task_label.apparmor_task_getsecid.security_task_getsecid.ima_file_check.do_open 6.53 ± 7% -6.5 0.00 perf-profile.calltrace.cycles-pp.apparmor_file_free_security.security_file_free.__fput.task_work_run.exit_to_user_mode_prepare 60.76 ± 7% -60.8 0.00 perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe 51.31 ± 7% -51.3 0.00 perf-profile.children.cycles-pp.open64 45.62 ± 7% -45.6 0.00 perf-profile.children.cycles-pp.do_syscall_64 44.02 ± 7% 
-44.0 0.00 perf-profile.children.cycles-pp.do_sys_open 43.93 ± 7% -43.9 0.00 perf-profile.children.cycles-pp.do_sys_openat2 40.98 ± 7% -41.0 0.00 perf-profile.children.cycles-pp.do_filp_open 40.89 ± 7% -40.9 0.00 perf-profile.children.cycles-pp.path_openat 31.55 ± 16% -31.5 0.00 perf-profile.children.cycles-pp.secondary_startup_64_no_verify 31.55 ± 16% -31.5 0.00 perf-profile.children.cycles-pp.cpu_startup_entry 31.55 ± 16% -31.5 0.00 perf-profile.children.cycles-pp.do_idle 31.22 ± 16% -31.2 0.00 perf-profile.children.cycles-pp.cpuidle_enter 31.22 ± 16% -31.2 0.00 perf-profile.children.cycles-pp.cpuidle_enter_state 30.58 ± 16% -30.6 0.00 perf-profile.children.cycles-pp.start_secondary 29.71 ± 18% -29.7 0.00 perf-profile.children.cycles-pp.intel_idle 27.72 ± 7% -27.7 0.00 perf-profile.children.cycles-pp.do_open 17.07 ± 7% -17.1 0.00 perf-profile.children.cycles-pp.__close 14.88 ± 7% -14.9 0.00 perf-profile.children.cycles-pp.syscall_exit_to_user_mode 13.42 ± 7% -13.4 0.00 perf-profile.children.cycles-pp.do_dentry_open 13.16 ± 7% -13.2 0.00 perf-profile.children.cycles-pp.ima_file_check 13.07 ± 7% -13.1 0.00 perf-profile.children.cycles-pp.security_task_getsecid 13.02 ± 7% -13.0 0.00 perf-profile.children.cycles-pp.apparmor_task_getsecid 12.56 ± 7% -12.6 0.00 perf-profile.children.cycles-pp.security_file_open 12.49 ± 7% -12.5 0.00 perf-profile.children.cycles-pp.apparmor_file_open 10.64 ± 7% -10.6 0.00 perf-profile.children.cycles-pp.alloc_empty_file 10.56 ± 7% -10.6 0.00 perf-profile.children.cycles-pp.__alloc_file 9.15 ± 7% -9.2 0.00 perf-profile.children.cycles-pp.exit_to_user_mode_prepare 8.75 ± 7% -8.7 0.00 perf-profile.children.cycles-pp.task_work_run 7.88 ± 7% -7.9 0.00 perf-profile.children.cycles-pp.__fput 7.46 ± 7% -7.5 0.00 perf-profile.children.cycles-pp.security_file_alloc 6.91 ± 7% -6.9 0.00 perf-profile.children.cycles-pp.apparmor_file_alloc_security 6.57 ± 7% -6.6 0.00 perf-profile.children.cycles-pp.aa_get_task_label 6.57 ± 7% -6.6 0.00 
perf-profile.children.cycles-pp.security_file_free
      6.53 ±  7%      -6.5        0.00        perf-profile.children.cycles-pp.apparmor_file_free_security
      4.79 ± 11%      -4.8        0.00        perf-profile.children.cycles-pp.asm_sysvec_apic_timer_interrupt
     29.71 ± 18%     -29.7        0.00        perf-profile.self.cycles-pp.intel_idle
     11.93 ±  7%     -11.9        0.00        perf-profile.self.cycles-pp.apparmor_file_open
      6.47 ±  7%      -6.5        0.00        perf-profile.self.cycles-pp.apparmor_file_alloc_security
      6.28 ±  7%      -6.3        0.00        perf-profile.self.cycles-pp.aa_get_task_label
      6.21 ±  8%      -6.2        0.00        perf-profile.self.cycles-pp.apparmor_file_free_security
      6.15 ±  7%      -6.1        0.00        perf-profile.self.cycles-pp.apparmor_task_getsecid
      5.70 ±  7%      -5.7        0.00        perf-profile.self.cycles-pp.syscall_exit_to_user_mode
      2858 ±139%    +710.2%      23161 ± 69%  interrupts.41:PCI-MSI.67633156-edge.eth0-TxRx-3
    160549 ± 20%     -42.6%      92097 ± 11%  interrupts.CAL:Function_call_interrupts
      5093 ± 32%    -100.0%       0.00        interrupts.CPU0.NMI:Non-maskable_interrupts
      5093 ± 32%    -100.0%       0.00        interrupts.CPU0.PMI:Performance_monitoring_interrupts
    306.17 ± 18%     +35.2%     413.83 ±  5%  interrupts.CPU0.RES:Rescheduling_interrupts
      5215 ± 37%    -100.0%       0.00        interrupts.CPU1.NMI:Non-maskable_interrupts
      5215 ± 37%    -100.0%       0.00        interrupts.CPU1.PMI:Performance_monitoring_interrupts
     79.33 ± 24%     +65.8%     131.50 ± 17%  interrupts.CPU1.RES:Rescheduling_interrupts
      4947 ± 48%    -100.0%       0.00        interrupts.CPU10.NMI:Non-maskable_interrupts
      4947 ± 48%    -100.0%       0.00        interrupts.CPU10.PMI:Performance_monitoring_interrupts
     78.50 ± 22%     +58.2%     124.17 ± 18%  interrupts.CPU10.RES:Rescheduling_interrupts
    107.83 ± 83%    -100.0%       0.00        interrupts.CPU102.NMI:Non-maskable_interrupts
    107.83 ± 83%    -100.0%       0.00        interrupts.CPU102.PMI:Performance_monitoring_interrupts
    342.83 ± 34%    -100.0%       0.00        interrupts.CPU103.NMI:Non-maskable_interrupts
    342.83 ± 34%    -100.0%       0.00        interrupts.CPU103.PMI:Performance_monitoring_interrupts
      5460 ± 33%    -100.0%       0.00        interrupts.CPU11.NMI:Non-maskable_interrupts
      5460 ± 33%    -100.0%       0.00        interrupts.CPU11.PMI:Performance_monitoring_interrupts
     74.17 ± 26%     +74.2%     129.17 ± 17%  interrupts.CPU11.RES:Rescheduling_interrupts
      4247 ± 20%    -100.0%       0.00        interrupts.CPU12.NMI:Non-maskable_interrupts
      4247 ± 20%    -100.0%       0.00        interrupts.CPU12.PMI:Performance_monitoring_interrupts
     77.67 ± 26%     +63.3%     126.83 ± 22%  interrupts.CPU12.RES:Rescheduling_interrupts
      6619 ± 22%    -100.0%       0.00        interrupts.CPU13.NMI:Non-maskable_interrupts
      6619 ± 22%    -100.0%       0.00        interrupts.CPU13.PMI:Performance_monitoring_interrupts
     79.50 ± 21%     +59.1%     126.50 ± 16%  interrupts.CPU13.RES:Rescheduling_interrupts
      5213 ± 37%    -100.0%       0.00        interrupts.CPU14.NMI:Non-maskable_interrupts
      5213 ± 37%    -100.0%       0.00        interrupts.CPU14.PMI:Performance_monitoring_interrupts
     80.00 ± 17%     +55.2%     124.17 ± 17%  interrupts.CPU14.RES:Rescheduling_interrupts
      5428 ± 33%    -100.0%       0.00        interrupts.CPU15.NMI:Non-maskable_interrupts
      5428 ± 33%    -100.0%       0.00        interrupts.CPU15.PMI:Performance_monitoring_interrupts
     81.00 ± 31%     +66.9%     135.17 ± 14%  interrupts.CPU15.RES:Rescheduling_interrupts
     87.83 ± 13%    -100.0%       0.00        interrupts.CPU16.NMI:Non-maskable_interrupts
     87.83 ± 13%    -100.0%       0.00        interrupts.CPU16.PMI:Performance_monitoring_interrupts
      0.50 ±100%  +13933.3%      70.17 ± 42%  interrupts.CPU18.RES:Rescheduling_interrupts
      5.50 ±162%   +2251.5%     129.33 ± 76%  interrupts.CPU19.RES:Rescheduling_interrupts
      4872 ± 42%    -100.0%       0.00        interrupts.CPU2.NMI:Non-maskable_interrupts
      4872 ± 42%    -100.0%       0.00        interrupts.CPU2.PMI:Performance_monitoring_interrupts
     80.00 ± 20%     +61.5%     129.17 ± 12%  interrupts.CPU2.RES:Rescheduling_interrupts
      2.83 ±117%   +2458.8%      72.50 ± 78%  interrupts.CPU20.RES:Rescheduling_interrupts
     67.83 ± 42%    -100.0%       0.00        interrupts.CPU23.NMI:Non-maskable_interrupts
     67.83 ± 42%    -100.0%       0.00        interrupts.CPU23.PMI:Performance_monitoring_interrupts
     75.67 ± 36%    -100.0%       0.00        interrupts.CPU24.NMI:Non-maskable_interrupts
     75.67 ± 36%    -100.0%       0.00        interrupts.CPU24.PMI:Performance_monitoring_interrupts
    160.83 ± 82%    -100.0%       0.00        interrupts.CPU26.NMI:Non-maskable_interrupts
    160.83 ± 82%    -100.0%       0.00        interrupts.CPU26.PMI:Performance_monitoring_interrupts
    111.67 ± 86%    -100.0%       0.00        interrupts.CPU28.NMI:Non-maskable_interrupts
    111.67 ± 86%    -100.0%       0.00        interrupts.CPU28.PMI:Performance_monitoring_interrupts
      4902 ± 30%    -100.0%       0.00        interrupts.CPU3.NMI:Non-maskable_interrupts
      4902 ± 30%    -100.0%       0.00        interrupts.CPU3.PMI:Performance_monitoring_interrupts
     82.00 ± 19%     +64.4%     134.83 ± 18%  interrupts.CPU3.RES:Rescheduling_interrupts
      2858 ±139%    +710.2%      23161 ± 69%  interrupts.CPU33.41:PCI-MSI.67633156-edge.eth0-TxRx-3
    245.50 ±106%    -100.0%       0.00        interrupts.CPU37.NMI:Non-maskable_interrupts
    245.50 ±106%    -100.0%       0.00        interrupts.CPU37.PMI:Performance_monitoring_interrupts
      4539 ± 45%    -100.0%       0.00        interrupts.CPU4.NMI:Non-maskable_interrupts
      4539 ± 45%    -100.0%       0.00        interrupts.CPU4.PMI:Performance_monitoring_interrupts
     77.50 ± 17%     +68.4%     130.50 ± 17%  interrupts.CPU4.RES:Rescheduling_interrupts
     82.83 ± 26%    -100.0%       0.00        interrupts.CPU41.NMI:Non-maskable_interrupts
     82.83 ± 26%    -100.0%       0.00        interrupts.CPU41.PMI:Performance_monitoring_interrupts
    179.33 ±127%    -100.0%       0.00        interrupts.CPU44.NMI:Non-maskable_interrupts
    179.33 ±127%    -100.0%       0.00        interrupts.CPU44.PMI:Performance_monitoring_interrupts
      4359 ± 42%    -100.0%       0.00        interrupts.CPU5.NMI:Non-maskable_interrupts
      4359 ± 42%    -100.0%       0.00        interrupts.CPU5.PMI:Performance_monitoring_interrupts
    172.33 ±130%    -100.0%       0.00        interrupts.CPU50.NMI:Non-maskable_interrupts
    172.33 ±130%    -100.0%       0.00        interrupts.CPU50.PMI:Performance_monitoring_interrupts
    261.17 ± 32%    -100.0%       0.00        interrupts.CPU51.NMI:Non-maskable_interrupts
    261.17 ± 32%    -100.0%       0.00        interrupts.CPU51.PMI:Performance_monitoring_interrupts
      5562 ± 30%    -100.0%       0.00        interrupts.CPU52.NMI:Non-maskable_interrupts
      5562 ± 30%    -100.0%       0.00        interrupts.CPU52.PMI:Performance_monitoring_interrupts
      5087 ± 22%    -100.0%       0.00        interrupts.CPU53.NMI:Non-maskable_interrupts
      5087 ± 22%    -100.0%       0.00        interrupts.CPU53.PMI:Performance_monitoring_interrupts
      5482 ± 37%    -100.0%       0.00        interrupts.CPU54.NMI:Non-maskable_interrupts
      5482 ± 37%    -100.0%       0.00        interrupts.CPU54.PMI:Performance_monitoring_interrupts
     78.50 ± 32%     +67.3%     131.33 ± 16%  interrupts.CPU54.RES:Rescheduling_interrupts
      5820 ± 26%    -100.0%       0.00        interrupts.CPU55.NMI:Non-maskable_interrupts
      5820 ± 26%    -100.0%       0.00        interrupts.CPU55.PMI:Performance_monitoring_interrupts
      6049 ± 20%    -100.0%       0.00        interrupts.CPU56.NMI:Non-maskable_interrupts
      6049 ± 20%    -100.0%       0.00        interrupts.CPU56.PMI:Performance_monitoring_interrupts
      5130 ± 30%    -100.0%       0.00        interrupts.CPU57.NMI:Non-maskable_interrupts
      5130 ± 30%    -100.0%       0.00        interrupts.CPU57.PMI:Performance_monitoring_interrupts
     76.17 ± 22%     +69.4%     129.00 ± 20%  interrupts.CPU57.RES:Rescheduling_interrupts
      4359 ± 44%    -100.0%       0.00        interrupts.CPU58.NMI:Non-maskable_interrupts
      4359 ± 44%    -100.0%       0.00        interrupts.CPU58.PMI:Performance_monitoring_interrupts
      5622 ± 35%    -100.0%       0.83 ±223%  interrupts.CPU59.NMI:Non-maskable_interrupts
      5622 ± 35%    -100.0%       0.83 ±223%  interrupts.CPU59.PMI:Performance_monitoring_interrupts
      5124 ± 42%    -100.0%       0.00        interrupts.CPU6.NMI:Non-maskable_interrupts
      5124 ± 42%    -100.0%       0.00        interrupts.CPU6.PMI:Performance_monitoring_interrupts
     77.67 ± 20%     +64.4%     127.67 ± 18%  interrupts.CPU6.RES:Rescheduling_interrupts
      5402 ± 38%    -100.0%       0.00        interrupts.CPU60.NMI:Non-maskable_interrupts
      5402 ± 38%    -100.0%       0.00        interrupts.CPU60.PMI:Performance_monitoring_interrupts
      5645 ± 29%    -100.0%       0.00        interrupts.CPU61.NMI:Non-maskable_interrupts
      5645 ± 29%    -100.0%       0.00        interrupts.CPU61.PMI:Performance_monitoring_interrupts
      5158 ± 41%    -100.0%       0.00        interrupts.CPU62.NMI:Non-maskable_interrupts
      5158 ± 41%    -100.0%       0.00        interrupts.CPU62.PMI:Performance_monitoring_interrupts
     76.67 ± 20%     +63.3%     125.17 ± 16%  interrupts.CPU62.RES:Rescheduling_interrupts
      4781 ± 40%    -100.0%       0.00        interrupts.CPU63.NMI:Non-maskable_interrupts
      4781 ± 40%    -100.0%       0.00        interrupts.CPU63.PMI:Performance_monitoring_interrupts
      5426 ± 34%    -100.0%       1.00 ±223%  interrupts.CPU64.NMI:Non-maskable_interrupts
      5426 ± 34%    -100.0%       1.00 ±223%  interrupts.CPU64.PMI:Performance_monitoring_interrupts
     73.50 ± 21%     +68.5%     123.83 ± 14%  interrupts.CPU64.RES:Rescheduling_interrupts
      3399 ± 58%    -100.0%       0.00        interrupts.CPU65.NMI:Non-maskable_interrupts
      3399 ± 58%    -100.0%       0.00        interrupts.CPU65.PMI:Performance_monitoring_interrupts
     73.67 ± 19%     +73.5%     127.83 ± 23%  interrupts.CPU65.RES:Rescheduling_interrupts
      4146 ± 42%    -100.0%       0.00        interrupts.CPU66.NMI:Non-maskable_interrupts
      4146 ± 42%    -100.0%       0.00        interrupts.CPU66.PMI:Performance_monitoring_interrupts
      4911 ± 43%    -100.0%       0.00        interrupts.CPU67.NMI:Non-maskable_interrupts
      4911 ± 43%    -100.0%       0.00        interrupts.CPU67.PMI:Performance_monitoring_interrupts
     78.83 ± 14%     +58.1%     124.67 ± 15%  interrupts.CPU67.RES:Rescheduling_interrupts
     74.50 ± 33%     -98.7%       1.00 ±223%  interrupts.CPU68.NMI:Non-maskable_interrupts
     74.50 ± 33%     -98.7%       1.00 ±223%  interrupts.CPU68.PMI:Performance_monitoring_interrupts
      3.83 ± 74%   +2339.1%      93.50 ± 73%  interrupts.CPU69.RES:Rescheduling_interrupts
      3980 ± 37%    -100.0%       0.00        interrupts.CPU7.NMI:Non-maskable_interrupts
      3980 ± 37%    -100.0%       0.00        interrupts.CPU7.PMI:Performance_monitoring_interrupts
     73.83 ± 32%     +82.4%     134.67 ± 19%  interrupts.CPU7.RES:Rescheduling_interrupts
     72.17 ± 39%    -100.0%       0.00        interrupts.CPU71.NMI:Non-maskable_interrupts
     72.17 ± 39%    -100.0%       0.00        interrupts.CPU71.PMI:Performance_monitoring_interrupts
     66.83 ± 38%     -98.5%       1.00 ±223%  interrupts.CPU72.NMI:Non-maskable_interrupts
     66.83 ± 38%     -98.5%       1.00 ±223%  interrupts.CPU72.PMI:Performance_monitoring_interrupts
      1.50 ± 63%   +3011.1%      46.67 ± 90%  interrupts.CPU72.RES:Rescheduling_interrupts
     67.50 ± 42%     -98.5%       1.00 ±223%  interrupts.CPU75.NMI:Non-maskable_interrupts
     67.50 ± 42%     -98.5%       1.00 ±223%  interrupts.CPU75.PMI:Performance_monitoring_interrupts
    136.00 ± 74%    -100.0%       0.00        interrupts.CPU78.NMI:Non-maskable_interrupts
    136.00 ± 74%    -100.0%       0.00        interrupts.CPU78.PMI:Performance_monitoring_interrupts
      4724 ± 45%    -100.0%       0.00        interrupts.CPU8.NMI:Non-maskable_interrupts
      4724 ± 45%    -100.0%       0.00        interrupts.CPU8.PMI:Performance_monitoring_interrupts
     75.83 ± 19%     +73.2%     131.33 ± 14%  interrupts.CPU8.RES:Rescheduling_interrupts
      4.50 ± 27%    +596.3%      31.33 ±162%  interrupts.CPU80.TLB:TLB_shootdowns
     67.83 ± 43%    -100.0%       0.00        interrupts.CPU86.NMI:Non-maskable_interrupts
     67.83 ± 43%    -100.0%       0.00        interrupts.CPU86.PMI:Performance_monitoring_interrupts
    144.50 ± 77%    -100.0%       0.00        interrupts.CPU89.NMI:Non-maskable_interrupts
    144.50 ± 77%    -100.0%       0.00        interrupts.CPU89.PMI:Performance_monitoring_interrupts
      4064 ± 33%    -100.0%       0.00        interrupts.CPU9.NMI:Non-maskable_interrupts
      4064 ± 33%    -100.0%       0.00        interrupts.CPU9.PMI:Performance_monitoring_interrupts
     78.50 ± 22%     +69.0%     132.67 ± 15%  interrupts.CPU9.RES:Rescheduling_interrupts
     73.33 ± 30%    -100.0%       0.00        interrupts.CPU91.NMI:Non-maskable_interrupts
     73.33 ± 30%    -100.0%       0.00        interrupts.CPU91.PMI:Performance_monitoring_interrupts
     81.17 ± 18%    -100.0%       0.00        interrupts.CPU94.NMI:Non-maskable_interrupts
     81.17 ± 18%    -100.0%       0.00        interrupts.CPU94.PMI:Performance_monitoring_interrupts
      1228 ±110%     -54.1%     563.50 ±  7%  interrupts.CPU96.CAL:Function_call_interrupts
    114.83 ± 81%    -100.0%       0.00        interrupts.CPU96.NMI:Non-maskable_interrupts
    114.83 ± 81%    -100.0%       0.00        interrupts.CPU96.PMI:Performance_monitoring_interrupts
     79.83 ± 18%    -100.0%       0.00        interrupts.CPU97.NMI:Non-maskable_interrupts
     79.83 ± 18%    -100.0%       0.00        interrupts.CPU97.PMI:Performance_monitoring_interrupts
    167074 ± 16%    -100.0%       4.83 ± 45%  interrupts.NMI:Non-maskable_interrupts
    167074 ± 16%    -100.0%       4.83 ± 45%  interrupts.PMI:Performance_monitoring_interrupts
      4304 ± 20%     +38.2%       5949 ± 11%  interrupts.RES:Rescheduling_interrupts
      0.01 ±  6%    -100.0%       0.00        perf-sched.sch_delay.avg.ms.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown]
      0.01 ±  5%    -100.0%       0.00        perf-sched.sch_delay.avg.ms.devkmsg_read.vfs_read.ksys_read.do_syscall_64
      0.01 ± 23%    -100.0%       0.00        perf-sched.sch_delay.avg.ms.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep.do_syscall_64
      0.01 ± 16%    -100.0%       0.00        perf-sched.sch_delay.avg.ms.do_syslog.part.0.kmsg_read.vfs_read
      0.01 ±  6%    -100.0%       0.00        perf-sched.sch_delay.avg.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64
      0.00           -100.0%       0.00        perf-sched.sch_delay.avg.ms.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64
      0.01 ± 24%    -100.0%       0.00        perf-sched.sch_delay.avg.ms.futex_wait_queue_me.futex_wait.do_futex.__x64_sys_futex
      0.00           -100.0%       0.00        perf-sched.sch_delay.avg.ms.pipe_read.new_sync_read.vfs_read.ksys_read
      0.01 ± 20%    -100.0%       0.00        perf-sched.sch_delay.avg.ms.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait
      0.01 ± 19%    -100.0%       0.00        perf-sched.sch_delay.avg.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_select
      0.00 ± 31%    -100.0%       0.00        perf-sched.sch_delay.avg.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_sys_poll
      0.01 ±  6%    -100.0%       0.00        perf-sched.sch_delay.avg.ms.schedule_timeout.kcompactd.kthread.ret_from_fork
      0.01 ± 45%    -100.0%       0.00        perf-sched.sch_delay.avg.ms.schedule_timeout.rcu_gp_kthread.kthread.ret_from_fork
      0.01 ± 45%    -100.0%       0.00        perf-sched.sch_delay.avg.ms.schedule_timeout.wait_for_completion.__flush_work.lru_add_drain_all
      0.01 ± 14%    -100.0%       0.00        perf-sched.sch_delay.avg.ms.smpboot_thread_fn.kthread.ret_from_fork
      0.00           -100.0%       0.00        perf-sched.sch_delay.avg.ms.wait_for_partner.fifo_open.do_dentry_open.do_open.isra
      0.03 ± 26%    -100.0%       0.00        perf-sched.sch_delay.avg.ms.worker_thread.kthread.ret_from_fork
      0.01 ± 29%    -100.0%       0.00        perf-sched.sch_delay.max.ms.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown]
      0.01 ±  7%    -100.0%       0.00        perf-sched.sch_delay.max.ms.devkmsg_read.vfs_read.ksys_read.do_syscall_64
      0.01 ±  8%    -100.0%       0.00        perf-sched.sch_delay.max.ms.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep.do_syscall_64
      0.01 ± 14%    -100.0%       0.00        perf-sched.sch_delay.max.ms.do_syslog.part.0.kmsg_read.vfs_read
      0.03 ± 14%    -100.0%       0.00        perf-sched.sch_delay.max.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64
      0.01 ± 15%    -100.0%       0.00        perf-sched.sch_delay.max.ms.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64
      0.01 ± 49%    -100.0%       0.00        perf-sched.sch_delay.max.ms.exit_to_user_mode_prepare.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe.[unknown]
      0.01 ± 61%    -100.0%       0.00        perf-sched.sch_delay.max.ms.futex_wait_queue_me.futex_wait.do_futex.__x64_sys_futex
      0.02 ± 16%    -100.0%       0.00        perf-sched.sch_delay.max.ms.pipe_read.new_sync_read.vfs_read.ksys_read
      0.03 ± 34%    -100.0%       0.00        perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.stop_one_cpu.affine_move_task.__set_cpus_allowed_ptr
      0.02 ± 26%    -100.0%       0.00        perf-sched.sch_delay.max.ms.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait
      0.01 ± 25%    -100.0%       0.00        perf-sched.sch_delay.max.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_select
      0.02 ± 47%    -100.0%       0.00        perf-sched.sch_delay.max.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_sys_poll
      0.01 ± 11%    -100.0%       0.00        perf-sched.sch_delay.max.ms.schedule_timeout.kcompactd.kthread.ret_from_fork
      0.68 ±218%    -100.0%       0.00        perf-sched.sch_delay.max.ms.schedule_timeout.rcu_gp_kthread.kthread.ret_from_fork
      0.01 ± 27%    -100.0%       0.00        perf-sched.sch_delay.max.ms.schedule_timeout.wait_for_completion.__flush_work.lru_add_drain_all
      1.35 ± 52%    -100.0%       0.00        perf-sched.sch_delay.max.ms.smpboot_thread_fn.kthread.ret_from_fork
      0.01 ± 30%    -100.0%       0.00        perf-sched.sch_delay.max.ms.wait_for_partner.fifo_open.do_dentry_open.do_open.isra
      3.18           -100.0%       0.00        perf-sched.sch_delay.max.ms.worker_thread.kthread.ret_from_fork
      0.01 ± 25%    -100.0%       0.00        perf-sched.total_sch_delay.average.ms
      3.32 ±  9%    -100.0%       0.00        perf-sched.total_sch_delay.max.ms
    190.86 ±  2%    -100.0%       0.00        perf-sched.total_wait_and_delay.average.ms
     10079           -100.0%       0.00        perf-sched.total_wait_and_delay.count.ms
      8926 ±  3%    -100.0%       0.00        perf-sched.total_wait_and_delay.max.ms
    190.86 ±  2%    -100.0%       0.00        perf-sched.total_wait_time.average.ms
      8926 ±  3%    -100.0%       0.00        perf-sched.total_wait_time.max.ms
    899.52           -100.0%       0.00        perf-sched.wait_and_delay.avg.ms.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown]
      2774 ±  6%    -100.0%       0.00        perf-sched.wait_and_delay.avg.ms.devkmsg_read.vfs_read.ksys_read.do_syscall_64
    826.57 ±  3%    -100.0%       0.00        perf-sched.wait_and_delay.avg.ms.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep.do_syscall_64
      2774 ±  6%    -100.0%       0.00        perf-sched.wait_and_delay.avg.ms.do_syslog.part.0.kmsg_read.vfs_read
    275.49 ±  3%    -100.0%       0.00        perf-sched.wait_and_delay.avg.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64
      0.91           -100.0%       0.00        perf-sched.wait_and_delay.avg.ms.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64
      0.14 ± 29%    -100.0%       0.00        perf-sched.wait_and_delay.avg.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_apic_timer_interrupt.[unknown]
      0.22 ± 35%    -100.0%       0.00        perf-sched.wait_and_delay.avg.ms.exit_to_user_mode_prepare.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe.[unknown]
     65.82           -100.0%       0.00        perf-sched.wait_and_delay.avg.ms.pipe_read.new_sync_read.vfs_read.ksys_read
      0.17 ± 10%    -100.0%       0.00        perf-sched.wait_and_delay.avg.ms.preempt_schedule_common.__cond_resched.dput.terminate_walk.path_openat
      7.75 ±  6%    -100.0%       0.00        perf-sched.wait_and_delay.avg.ms.preempt_schedule_common.__cond_resched.stop_one_cpu.affine_move_task.__set_cpus_allowed_ptr
    637.69 ± 13%    -100.0%       0.00        perf-sched.wait_and_delay.avg.ms.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait
    556.80 ± 11%    -100.0%       0.00        perf-sched.wait_and_delay.avg.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_sys_poll
    478.36           -100.0%       0.00        perf-sched.wait_and_delay.avg.ms.schedule_timeout.kcompactd.kthread.ret_from_fork
      3.89 ±  3%    -100.0%       0.00        perf-sched.wait_and_delay.avg.ms.schedule_timeout.rcu_gp_kthread.kthread.ret_from_fork
    674.15 ±  2%    -100.0%       0.00        perf-sched.wait_and_delay.avg.ms.smpboot_thread_fn.kthread.ret_from_fork
      0.00           -100.0%       0.00        perf-sched.wait_and_delay.avg.ms.wait_for_partner.fifo_open.do_dentry_open.do_open.isra
    570.59           -100.0%       0.00        perf-sched.wait_and_delay.avg.ms.worker_thread.kthread.ret_from_fork
     10.00           -100.0%       0.00        perf-sched.wait_and_delay.count.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown]
      3.17 ± 11%    -100.0%       0.00        perf-sched.wait_and_delay.count.devkmsg_read.vfs_read.ksys_read.do_syscall_64
     21.83 ±  4%    -100.0%       0.00        perf-sched.wait_and_delay.count.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep.do_syscall_64
      3.17 ± 11%    -100.0%       0.00        perf-sched.wait_and_delay.count.do_syslog.part.0.kmsg_read.vfs_read
    247.00           -100.0%       0.00        perf-sched.wait_and_delay.count.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64
    246.00           -100.0%       0.00        perf-sched.wait_and_delay.count.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64
     88.00 ± 12%    -100.0%       0.00        perf-sched.wait_and_delay.count.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_apic_timer_interrupt.[unknown]
     64.50 ± 14%    -100.0%       0.00        perf-sched.wait_and_delay.count.exit_to_user_mode_prepare.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe.[unknown]
      2464           -100.0%       0.00        perf-sched.wait_and_delay.count.pipe_read.new_sync_read.vfs_read.ksys_read
    315.33 ±  5%    -100.0%       0.00        perf-sched.wait_and_delay.count.preempt_schedule_common.__cond_resched.dput.terminate_walk.path_openat
      1248           -100.0%       0.00        perf-sched.wait_and_delay.count.preempt_schedule_common.__cond_resched.stop_one_cpu.affine_move_task.__set_cpus_allowed_ptr
     25.50 ± 39%    -100.0%       0.00        perf-sched.wait_and_delay.count.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait
     63.33 ±  9%    -100.0%       0.00        perf-sched.wait_and_delay.count.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_sys_poll
     39.33 ±  2%    -100.0%       0.00        perf-sched.wait_and_delay.count.schedule_timeout.kcompactd.kthread.ret_from_fork
      2571 ±  3%    -100.0%       0.00        perf-sched.wait_and_delay.count.schedule_timeout.rcu_gp_kthread.kthread.ret_from_fork
      1666 ±  2%    -100.0%       0.00        perf-sched.wait_and_delay.count.smpboot_thread_fn.kthread.ret_from_fork
     72.00           -100.0%       0.00        perf-sched.wait_and_delay.count.wait_for_partner.fifo_open.do_dentry_open.do_open.isra
    761.67 ±  2%    -100.0%       0.00        perf-sched.wait_and_delay.count.worker_thread.kthread.ret_from_fork
    999.68           -100.0%       0.00        perf-sched.wait_and_delay.max.ms.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown]
      8195 ±  9%    -100.0%       0.00        perf-sched.wait_and_delay.max.ms.devkmsg_read.vfs_read.ksys_read.do_syscall_64
      1000           -100.0%       0.00        perf-sched.wait_and_delay.max.ms.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep.do_syscall_64
      8195 ±  9%    -100.0%       0.00        perf-sched.wait_and_delay.max.ms.do_syslog.part.0.kmsg_read.vfs_read
      3834 ± 36%    -100.0%       0.00        perf-sched.wait_and_delay.max.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64
     17.26           -100.0%       0.00        perf-sched.wait_and_delay.max.ms.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64
      1.75 ± 34%    -100.0%       0.00        perf-sched.wait_and_delay.max.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_apic_timer_interrupt.[unknown]
      2.72 ± 48%    -100.0%       0.00        perf-sched.wait_and_delay.max.ms.exit_to_user_mode_prepare.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe.[unknown]
      8197 ±  9%    -100.0%       0.00        perf-sched.wait_and_delay.max.ms.pipe_read.new_sync_read.vfs_read.ksys_read
      2.35 ± 17%    -100.0%       0.00        perf-sched.wait_and_delay.max.ms.preempt_schedule_common.__cond_resched.dput.terminate_walk.path_openat
      1149 ± 28%    -100.0%       0.00        perf-sched.wait_and_delay.max.ms.preempt_schedule_common.__cond_resched.stop_one_cpu.affine_move_task.__set_cpus_allowed_ptr
      7824 ±  9%    -100.0%       0.00        perf-sched.wait_and_delay.max.ms.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait
      6265 ± 13%    -100.0%       0.00        perf-sched.wait_and_delay.max.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_sys_poll
    505.00           -100.0%       0.00        perf-sched.wait_and_delay.max.ms.schedule_timeout.kcompactd.kthread.ret_from_fork
      5.01           -100.0%       0.00        perf-sched.wait_and_delay.max.ms.schedule_timeout.rcu_gp_kthread.kthread.ret_from_fork
      6139 ± 27%    -100.0%       0.00        perf-sched.wait_and_delay.max.ms.smpboot_thread_fn.kthread.ret_from_fork
      0.01 ± 21%    -100.0%       0.00        perf-sched.wait_and_delay.max.ms.wait_for_partner.fifo_open.do_dentry_open.do_open.isra
      8810 ±  4%    -100.0%       0.00        perf-sched.wait_and_delay.max.ms.worker_thread.kthread.ret_from_fork
    899.51           -100.0%       0.00        perf-sched.wait_time.avg.ms.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown]
      2774 ±  6%    -100.0%       0.00        perf-sched.wait_time.avg.ms.devkmsg_read.vfs_read.ksys_read.do_syscall_64
    826.57 ±  3%    -100.0%       0.00        perf-sched.wait_time.avg.ms.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep.do_syscall_64
      2774 ±  6%    -100.0%       0.00        perf-sched.wait_time.avg.ms.do_syslog.part.0.kmsg_read.vfs_read
    275.49 ±  3%    -100.0%       0.00        perf-sched.wait_time.avg.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64
      0.91           -100.0%       0.00        perf-sched.wait_time.avg.ms.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64
      0.14 ± 29%    -100.0%       0.00        perf-sched.wait_time.avg.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_apic_timer_interrupt.[unknown]
      0.10 ± 96%    -100.0%       0.00        perf-sched.wait_time.avg.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_reschedule_ipi.[unknown]
      0.22 ± 35%    -100.0%       0.00        perf-sched.wait_time.avg.ms.exit_to_user_mode_prepare.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe.[unknown]
     65.82           -100.0%       0.00        perf-sched.wait_time.avg.ms.pipe_read.new_sync_read.vfs_read.ksys_read
      0.16 ± 59%    -100.0%       0.00        perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.__fput.task_work_run.exit_to_user_mode_prepare
      0.26 ± 41%    -100.0%       0.00        perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.apparmor_file_alloc_security.security_file_alloc.__alloc_file
      0.13 ± 88%    -100.0%       0.00        perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.dput.__fput.task_work_run
      0.17 ± 10%    -100.0%       0.00        perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.dput.terminate_walk.path_openat
      0.12 ± 56%    -100.0%       0.00        perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.__alloc_file.alloc_empty_file
      0.16 ± 26%    -100.0%       0.00        perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.getname_flags.do_sys_openat2
      0.14 ± 27%    -100.0%       0.00        perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.security_file_alloc.__alloc_file
      7.75 ±  6%    -100.0%       0.00        perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.stop_one_cpu.affine_move_task.__set_cpus_allowed_ptr
      0.13 ± 32%    -100.0%       0.00        perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.task_work_run.exit_to_user_mode_prepare.syscall_exit_to_user_mode
    637.68 ± 13%    -100.0%       0.00        perf-sched.wait_time.avg.ms.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait
    556.80 ± 11%    -100.0%       0.00        perf-sched.wait_time.avg.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_sys_poll
    478.35           -100.0%       0.00        perf-sched.wait_time.avg.ms.schedule_timeout.kcompactd.kthread.ret_from_fork
      0.01 ± 30%    -100.0%       0.00        perf-sched.wait_time.avg.ms.schedule_timeout.khugepaged.kthread.ret_from_fork
      3.88 ±  3%    -100.0%       0.00        perf-sched.wait_time.avg.ms.schedule_timeout.rcu_gp_kthread.kthread.ret_from_fork
    674.14 ±  2%    -100.0%       0.00        perf-sched.wait_time.avg.ms.smpboot_thread_fn.kthread.ret_from_fork
    570.56           -100.0%       0.00        perf-sched.wait_time.avg.ms.worker_thread.kthread.ret_from_fork
    999.66           -100.0%       0.00        perf-sched.wait_time.max.ms.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown]
      8195 ±  9%    -100.0%       0.00        perf-sched.wait_time.max.ms.devkmsg_read.vfs_read.ksys_read.do_syscall_64
      1000           -100.0%       0.00        perf-sched.wait_time.max.ms.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep.do_syscall_64
      8195 ±  9%    -100.0%       0.00        perf-sched.wait_time.max.ms.do_syslog.part.0.kmsg_read.vfs_read
      3834 ± 36%    -100.0%       0.00        perf-sched.wait_time.max.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64
     17.25           -100.0%       0.00        perf-sched.wait_time.max.ms.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64
      1.75 ± 34%    -100.0%       0.00        perf-sched.wait_time.max.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_apic_timer_interrupt.[unknown]
      0.16 ±126%    -100.0%       0.00        perf-sched.wait_time.max.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_reschedule_ipi.[unknown]
      2.72 ± 48%    -100.0%       0.00        perf-sched.wait_time.max.ms.exit_to_user_mode_prepare.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe.[unknown]
      8197 ±  9%    -100.0%       0.00        perf-sched.wait_time.max.ms.pipe_read.new_sync_read.vfs_read.ksys_read
      0.62 ± 82%    -100.0%       0.00        perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.__fput.task_work_run.exit_to_user_mode_prepare
      1.07 ± 78%    -100.0%       0.00        perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.apparmor_file_alloc_security.security_file_alloc.__alloc_file
      0.29 ± 74%    -100.0%       0.00        perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.dput.__fput.task_work_run
      2.35 ± 17%    -100.0%       0.00        perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.dput.terminate_walk.path_openat
      0.90 ± 57%    -100.0%       0.00        perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.__alloc_file.alloc_empty_file
      1.27 ± 25%    -100.0%       0.00        perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.getname_flags.do_sys_openat2
      1.34 ± 36%    -100.0%       0.00        perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.security_file_alloc.__alloc_file
      1149 ± 28%    -100.0%       0.00        perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.stop_one_cpu.affine_move_task.__set_cpus_allowed_ptr
      1.54 ± 30%    -100.0%       0.00        perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.task_work_run.exit_to_user_mode_prepare.syscall_exit_to_user_mode
      7824 ±  9%    -100.0%       0.00        perf-sched.wait_time.max.ms.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait
      6265 ± 13%    -100.0%       0.00        perf-sched.wait_time.max.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_sys_poll
    505.00           -100.0%       0.00        perf-sched.wait_time.max.ms.schedule_timeout.kcompactd.kthread.ret_from_fork
      0.01 ± 30%    -100.0%       0.00        perf-sched.wait_time.max.ms.schedule_timeout.khugepaged.kthread.ret_from_fork
      5.00           -100.0%       0.00        perf-sched.wait_time.max.ms.schedule_timeout.rcu_gp_kthread.kthread.ret_from_fork
      6139 ± 27%    -100.0%       0.00        perf-sched.wait_time.max.ms.smpboot_thread_fn.kthread.ret_from_fork
      8810 ±  4%    -100.0%       0.00        perf-sched.wait_time.max.ms.worker_thread.kthread.ret_from_fork


***************************************************************************************************
lkp-cpl-4sp1: 144 threads Intel(R) Xeon(R) Gold 5318H CPU @ 2.50GHz with 128G memory
=========================================================================================
compiler/cpufreq_governor/kconfig/mode/nr_task/rootfs/tbox_group/test/testcase/ucode:
  gcc-9/performance/x86_64-rhel-8.3/thread/50%/debian-10.4-x86_64-20200603.cgz/lkp-cpl-4sp1/readseek1/will-it-scale/0x700001e

commit:
  dde38d5b80 ("kernel: don't include PF_IO_WORKERs as part of same_thread_group()")
  43b2a76b1a ("proc: don't show PF_IO_WORKER threads as threads in /proc/<pid>/task/")

dde38d5b805358c7 43b2a76b1a5abcc9833b463bef1
---------------- ---------------------------
         %stddev     %change         %stddev
             \          |                \
 1.371e+08           -17.7%   1.128e+08        will-it-scale.72.threads
     49.50            -2.7%      48.16        will-it-scale.72.threads_idle
   1903942           -17.7%    1566480        will-it-scale.per_thread_ops
 1.371e+08           -17.7%   1.128e+08        will-it-scale.workload
     10990 ±219%    +364.6%      51058 ± 38%  numa-numastat.node1.other_node
    164039 ±  6%     +38.6%     227384 ±  5%  cpuidle.POLL.time
     38903 ±  3%     +12.8%      43897 ±  4%  cpuidle.POLL.usage
      0.04 ±  3%      +0.0        0.05 ±  5%  mpstat.cpu.all.soft%
      5.55            -1.2        4.40 ±  2%  mpstat.cpu.all.usr%
     74165 ±  5%     +92.3%     142628 ± 63%  numa-vmstat.node1.nr_file_pages
    102845 ± 23%     +38.5%     142481 ± 13%  numa-vmstat.node1.numa_other
      3798 ± 20%     -46.7%       2023 ± 18%  numa-vmstat.node3.nr_mapped
     49.83            -3.7%      48.00        vmstat.cpu.id
   1534792           +90.1%    2917406 ±  2%  vmstat.memory.cache
      2080            +3.7%       2157        vmstat.system.cs
    293491            -2.0%     287681        vmstat.system.in
    775324 ±  7%     +85.9%    1441205 ± 30%  numa-meminfo.node0.MemUsed
    296660 ±  5%     +92.2%     570311 ± 63%  numa-meminfo.node1.FilePages
    749780 ± 12%     +49.0%    1116991 ± 35%  numa-meminfo.node1.MemUsed
    113057 ±  8%     -20.9%      89420 ± 16%  numa-meminfo.node1.Slab
     15103 ± 22%     -47.0%       8012 ± 18%  numa-meminfo.node3.Mapped
   5707449 ±  2%     +25.3%    7153477 ±  4%  sched_debug.cfs_rq:/.min_vruntime.avg
  10375484 ±  5%     +12.4%   11659627        sched_debug.cfs_rq:/.min_vruntime.max
    739456 ± 87%     -96.8%      23413 ± 21%  sched_debug.cfs_rq:/.min_vruntime.min
   2688138 ± 29%     +69.5%    4556356 ±  7%  sched_debug.cfs_rq:/.min_vruntime.stddev
   2688150 ± 29%     +69.5%    4556367 ±  7%  sched_debug.cfs_rq:/.spread0.stddev
      5083 ± 23%     +38.5%       7041 ± 14%  sched_debug.cpu.nr_switches.stddev
     17512           -32.9%      11758        slabinfo.proc_inode_cache.active_objs
    364.67           -33.0%     244.50        slabinfo.proc_inode_cache.active_slabs
     17522           -32.9%      11761        slabinfo.proc_inode_cache.num_objs
    364.67           -33.0%     244.50        slabinfo.proc_inode_cache.num_slabs
     25459           +21.0%      30807        slabinfo.radix_tree_node.active_objs
     25459           +21.0%      30807        slabinfo.radix_tree_node.num_objs
    202987 ±  3%    +682.9%    1589160 ±  4%  meminfo.Active
    202987 ±  3%    +682.9%    1589160 ±  4%  meminfo.Active(anon)
   1411780           +97.7%    2790973 ±  2%  meminfo.Cached
   1186594          +122.4%    2638533 ±  2%  meminfo.Committed_AS
     46985           -17.9%      38555        meminfo.Mapped
   3034394           +65.3%    5014942        meminfo.Memused
      4695           +12.1%       5262        meminfo.PageTables
    229287 ±  2%    +601.5%    1608481 ±  4%  meminfo.Shmem
   3719881           +34.8%    5015953        meminfo.max_used_kB
     50653 ±  3%    +685.7%     397988 ±  4%  proc-vmstat.nr_active_anon
     75425            +9.3%      82417        proc-vmstat.nr_anon_pages
     98.67            +8.3%     106.83        proc-vmstat.nr_anon_transparent_hugepages
   3197404            -1.5%    3147915        proc-vmstat.nr_dirty_background_threshold
   6402626            -1.5%    6303528        proc-vmstat.nr_dirty_threshold
    352860           +97.9%     698442 ±  2%  proc-vmstat.nr_file_pages
  32154573            -1.5%   31658962        proc-vmstat.nr_free_pages
     81840            +6.4%      87096        proc-vmstat.nr_inactive_anon
     11931           -18.2%       9765        proc-vmstat.nr_mapped
      1184           +10.8%       1311        proc-vmstat.nr_page_table_pages
     57237 ±  2%    +603.8%     402819 ±  4%  proc-vmstat.nr_shmem
     30678            -2.4%      29936        proc-vmstat.nr_slab_reclaimable
     73560            +1.8%      74869        proc-vmstat.nr_slab_unreclaimable
     50653 ±  3%    +685.7%     397988 ±  4%  proc-vmstat.nr_zone_active_anon
     81840            +6.4%      87096        proc-vmstat.nr_zone_inactive_anon
   1230674           +75.6%    2160870 ±  2%  proc-vmstat.numa_hit
   1035788           +89.8%    1966010 ±  2%  proc-vmstat.numa_local
    214630 ±  8%     -22.9%     165526 ± 10%  proc-vmstat.numa_pte_updates
     75462 ±  2%     -29.5%      53168 ±  4%  proc-vmstat.pgactivate
   1322229          +124.5%    2968671 ±  3%  proc-vmstat.pgalloc_normal
   1261055            -8.3%    1156791        proc-vmstat.pgfault
   1216594           +58.3%    1925333 ±  4%  proc-vmstat.pgfree
     10183 ±  6%     -25.2%       7617 ± 23%  softirqs.CPU100.RCU
     25587 ± 31%     -67.6%       8292 ± 33%  softirqs.CPU105.SCHED
      8906 ±  9%     -21.8%       6960 ± 14%  softirqs.CPU108.RCU
     10147 ±  6%     -18.0%       8320 ± 16%  softirqs.CPU113.RCU
     10913 ±  3%     -30.7%       7559 ± 19%  softirqs.CPU117.RCU
     10175 ±  8%     -25.6%       7570 ± 16%  softirqs.CPU118.RCU
     10516 ±  7%     -31.8%       7175 ± 15%  softirqs.CPU119.RCU
      9603 ± 14%     -21.9%       7501 ± 16%  softirqs.CPU120.RCU
     10213 ±  3%     -21.7%       7994 ± 20%  softirqs.CPU122.RCU
      9296 ± 10%     -21.3%       7313 ± 12%  softirqs.CPU124.RCU
      9019 ± 23%     -26.9%       6595 ± 16%  softirqs.CPU138.RCU
     28399 ± 14%     -69.7%       8598 ± 38%  softirqs.CPU141.SCHED
     17529 ± 43%     -47.0%       9295 ± 58%  softirqs.CPU17.SCHED
     11047 ±  8%     -21.2%       8704 ± 17%  softirqs.CPU23.RCU
      8843 ±  8%     -26.4%       6504 ± 15%  softirqs.CPU33.RCU
     10403 ±  9%     -23.4%       7968 ± 20%  softirqs.CPU38.RCU
     11213 ±  4%     -29.7%       7881 ± 14%  softirqs.CPU39.RCU
      9800 ±  8%     -24.2%       7424 ± 12%  softirqs.CPU42.RCU
     10506 ±  6%     -19.8%       8423 ± 17%  softirqs.CPU43.RCU
     22912 ± 27%     -61.6%       8798 ± 29%  softirqs.CPU57.SCHED
      9547 ± 11%     -21.7%       7477 ± 16%  softirqs.CPU61.RCU
      9837 ±  7%     -17.9%       8079 ± 12%  softirqs.CPU63.RCU
      9716 ± 10%     -29.3%       6872 ± 12%  softirqs.CPU65.RCU
      9401 ±  5%     -30.8%       6509 ± 17%  softirqs.CPU69.RCU
      9341 ±  6%     -17.2%       7733 ± 17%  softirqs.CPU70.RCU
     11168 ±  4%     -15.8%       9403 ±  7%  softirqs.CPU72.RCU
     10626 ±  4%     -23.1%       8172 ± 18%  softirqs.CPU76.RCU
      9456 ±  7%     -26.2%       6981 ± 12%  softirqs.CPU79.RCU
      9756 ±  5%     -14.2%       8366 ±  6%  softirqs.CPU80.RCU
     10325 ±  8%     -28.6%       7377 ± 14%  softirqs.CPU81.RCU
     26869 ± 38%     +46.6%      39382 ±  3%  softirqs.CPU85.SCHED
     10056 ± 21%     -24.9%       7552 ± 16%  softirqs.CPU88.RCU
      9802 ± 11%     -19.4%       7900 ± 17%  softirqs.CPU99.RCU
   1372894 ±  2%     -14.5%    1174050        softirqs.RCU
     45009 ±  2%     +16.6%      52497        softirqs.TIMER
     48.95 ± 10%     -49.0        0.00        perf-profile.calltrace.cycles-pp.__libc_read
     40.76 ± 11%     -40.8        0.00        perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.__libc_read
     39.49 ± 11%     -39.5        0.00        perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.__libc_read
     38.72 ± 11%     -38.7        0.00        perf-profile.calltrace.cycles-pp.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe.__libc_read
     35.68 ± 20%     -35.7        0.00        perf-profile.calltrace.cycles-pp.secondary_startup_64_no_verify
     35.56 ± 20%     -35.6        0.00        perf-profile.calltrace.cycles-pp.intel_idle.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry
     35.24 ± 19%     -35.2        0.00        perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
     35.24 ± 19%     -35.2        0.00        perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
     35.24 ± 19%     -35.2        0.00        perf-profile.calltrace.cycles-pp.cpuidle_enter.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
     35.24 ± 19%     -35.2        0.00        perf-profile.calltrace.cycles-pp.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry.start_secondary
     35.24 ± 19%     -35.2        0.00        perf-profile.calltrace.cycles-pp.start_secondary.secondary_startup_64_no_verify
     32.26 ± 10%     -32.3        0.00        perf-profile.calltrace.cycles-pp.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe.__libc_read
     26.74 ± 10%     -26.7        0.00        perf-profile.calltrace.cycles-pp.new_sync_read.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
     25.48 ± 10%     -25.5        0.00        perf-profile.calltrace.cycles-pp.shmem_file_read_iter.new_sync_read.vfs_read.ksys_read.do_syscall_64
     17.62 ± 11%     -17.6        0.00        perf-profile.calltrace.cycles-pp.__libc_lseek64
     13.26 ± 10%     -13.3        0.00        perf-profile.calltrace.cycles-pp.copy_page_to_iter.shmem_file_read_iter.new_sync_read.vfs_read.ksys_read
     11.42 ± 12%     -11.4        0.00        perf-profile.calltrace.cycles-pp.copyout.copy_page_to_iter.shmem_file_read_iter.new_sync_read.vfs_read
     10.53 ± 10%     -10.5        0.00        perf-profile.calltrace.cycles-pp.copy_user_enhanced_fast_string.copyout.copy_page_to_iter.shmem_file_read_iter.new_sync_read
     10.36 ± 11%     -10.4        0.00        perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.__libc_lseek64
      8.92 ± 11%      -8.9        0.00        perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.__libc_lseek64
      8.16 ± 11%      -8.2        0.00        perf-profile.calltrace.cycles-pp.ksys_lseek.do_syscall_64.entry_SYSCALL_64_after_hwframe.__libc_lseek64
      5.70 ± 10%      -5.7        0.00        perf-profile.calltrace.cycles-pp.shmem_getpage_gfp.shmem_file_read_iter.new_sync_read.vfs_read.ksys_read
     51.25 ± 11%     -51.2        0.00        perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
     48.91 ± 11%     -48.9        0.00        perf-profile.children.cycles-pp.__libc_read
     48.46 ± 11%     -48.5        0.00        perf-profile.children.cycles-pp.do_syscall_64
     38.87 ± 11%     -38.9        0.00        perf-profile.children.cycles-pp.ksys_read
     35.68 ± 20%     -35.7        0.00        perf-profile.children.cycles-pp.secondary_startup_64_no_verify
     35.68 ± 20%     -35.7        0.00        perf-profile.children.cycles-pp.cpu_startup_entry
     35.68 ± 20%     -35.7        0.00        perf-profile.children.cycles-pp.do_idle
     35.68 ± 20%     -35.7        0.00        perf-profile.children.cycles-pp.cpuidle_enter
     35.68 ± 20%     -35.7        0.00        perf-profile.children.cycles-pp.cpuidle_enter_state
     35.68 ± 20%     -35.7        0.00        perf-profile.children.cycles-pp.intel_idle
     35.24 ± 19%     -35.2        0.00        perf-profile.children.cycles-pp.start_secondary
     32.36 ± 10%     -32.4        0.00        perf-profile.children.cycles-pp.vfs_read
     26.83 ± 10%     -26.8        0.00        perf-profile.children.cycles-pp.new_sync_read
     25.61 ± 10%     -25.6        0.00        perf-profile.children.cycles-pp.shmem_file_read_iter
     17.67 ± 11%     -17.7        0.00        perf-profile.children.cycles-pp.__libc_lseek64
     13.34 ± 10%     -13.3        0.00        perf-profile.children.cycles-pp.copy_page_to_iter
     11.26 ± 10%     -11.3        0.00        perf-profile.children.cycles-pp.copyout
     11.07 ± 10%     -11.1        0.00        perf-profile.children.cycles-pp.copy_user_enhanced_fast_string
      9.12 ± 12%      -9.1        0.00        perf-profile.children.cycles-pp.__fdget_pos
      8.29 ± 11%      -8.3        0.00        perf-profile.children.cycles-pp.ksys_lseek
      5.70 ± 10%      -5.7        0.00        perf-profile.children.cycles-pp.shmem_getpage_gfp
      5.70 ± 10%      -5.7        0.00        perf-profile.children.cycles-pp.__entry_text_start
     35.68 ± 20%     -35.7        0.00        perf-profile.self.cycles-pp.intel_idle
     10.93 ± 10%     -10.9        0.00        perf-profile.self.cycles-pp.copy_user_enhanced_fast_string
      0.03 ±  9%    +847.2%       0.28 ±  3%  perf-stat.i.MPKI
 6.503e+10           -16.2%  5.447e+10        perf-stat.i.branch-instructions
 4.299e+08           -17.0%  3.569e+08        perf-stat.i.branch-misses
     16.66 ±  3%     +14.5       31.14 ±  4%  perf-stat.i.cache-miss-rate%
   1149801 ±  4%   +1788.9%   21718470 ±  4%  perf-stat.i.cache-misses
   6892041          +913.1%   69823615 ±  2%  perf-stat.i.cache-references
      2042            +3.7%       2117        perf-stat.i.context-switches
      0.74           +26.6%       0.94 ±  2%  perf-stat.i.cpi
 2.234e+11            +6.2%  2.372e+11        perf-stat.i.cpu-cycles
    153.36            -4.1%     147.12        perf-stat.i.cpu-migrations
    221991 ±  5%     -95.0%      11045 ±  4%  perf-stat.i.cycles-between-cache-misses
     47654 ± 19%     -31.1%      32849 ± 10%  perf-stat.i.dTLB-load-misses
 8.909e+10           -16.7%  7.421e+10        perf-stat.i.dTLB-loads
     45179           -13.2%      39209        perf-stat.i.dTLB-store-misses
 5.374e+10           -16.6%  4.482e+10        perf-stat.i.dTLB-stores
 4.189e+08           -16.9%  3.483e+08        perf-stat.i.iTLB-load-misses
   3672070            -3.7%    3536754        perf-stat.i.iTLB-loads
 3.014e+11           -16.2%  2.526e+11        perf-stat.i.instructions
      1.35           -21.0%       1.07 ±  2%
perf-stat.i.ipc 1.14 ± 7% -83.5% 0.19 ± 77% perf-stat.i.major-faults 1.55 +6.2% 1.65 perf-stat.i.metric.GHz 1.06 ± 3% -55.9% 0.47 ± 4% perf-stat.i.metric.K/sec 1442 -16.5% 1205 perf-stat.i.metric.M/sec 4051 -8.7% 3700 perf-stat.i.minor-faults 232725 ± 6% +1968.3% 4813467 ± 10% perf-stat.i.node-load-misses 35538 ± 9% +997.2% 389920 ± 29% perf-stat.i.node-loads 92.68 ± 2% +6.0 98.66 perf-stat.i.node-store-miss-rate% 81811 ± 6% +1962.4% 1687291 ± 4% perf-stat.i.node-store-misses 11608 ± 23% +86.1% 21604 ± 12% perf-stat.i.node-stores 4052 -8.7% 3700 perf-stat.i.page-faults 0.02 +1100.6% 0.28 ± 4% perf-stat.overall.MPKI 16.69 ± 4% +14.4 31.09 ± 4% perf-stat.overall.cache-miss-rate% 0.74 +26.7% 0.94 ± 2% perf-stat.overall.cpi 193280 ± 5% -94.3% 10948 ± 4% perf-stat.overall.cycles-between-cache-misses 0.00 +0.0 0.00 perf-stat.overall.dTLB-store-miss-rate% 1.35 -21.0% 1.07 ± 2% perf-stat.overall.ipc 86.69 +5.6 92.32 ± 2% perf-stat.overall.node-load-miss-rate% 87.45 ± 3% +11.3 98.73 perf-stat.overall.node-store-miss-rate% 661785 +2.0% 674715 perf-stat.overall.path-length 6.479e+10 -16.2% 5.428e+10 perf-stat.ps.branch-instructions 4.284e+08 -17.0% 3.557e+08 perf-stat.ps.branch-misses 1154358 ± 4% +1774.2% 21635341 ± 4% perf-stat.ps.cache-misses 6916314 +906.4% 69608546 ± 2% perf-stat.ps.cache-references 2031 +3.9% 2109 perf-stat.ps.context-switches 2.226e+11 +6.2% 2.364e+11 perf-stat.ps.cpu-cycles 153.00 -4.1% 146.69 perf-stat.ps.cpu-migrations 50423 ± 23% -34.7% 32917 ± 10% perf-stat.ps.dTLB-load-misses 8.875e+10 -16.7% 7.395e+10 perf-stat.ps.dTLB-loads 45154 -13.4% 39118 perf-stat.ps.dTLB-store-misses 5.353e+10 -16.6% 4.466e+10 perf-stat.ps.dTLB-stores 4.174e+08 -16.8% 3.471e+08 perf-stat.ps.iTLB-load-misses 3657869 -3.6% 3524739 perf-stat.ps.iTLB-loads 3.002e+11 -16.1% 2.518e+11 perf-stat.ps.instructions 1.14 ± 7% -82.7% 0.20 ± 75% perf-stat.ps.major-faults 4041 -8.7% 3690 perf-stat.ps.minor-faults 233696 ± 6% +1952.0% 4795457 ± 10% perf-stat.ps.node-load-misses 35798 ± 
9% +985.2% 388498 ± 29% perf-stat.ps.node-loads 81864 ± 6% +1953.2% 1680889 ± 4% perf-stat.ps.node-store-misses 11722 ± 23% +84.7% 21652 ± 12% perf-stat.ps.node-stores 4042 -8.7% 3691 perf-stat.ps.page-faults 9.072e+13 -16.1% 7.61e+13 perf-stat.total.instructions 0.01 ± 13% -100.0% 0.00 perf-sched.sch_delay.avg.ms.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown] 0.01 ± 16% -100.0% 0.00 perf-sched.sch_delay.avg.ms.devkmsg_read.vfs_read.ksys_read.do_syscall_64 0.01 ± 36% -100.0% 0.00 perf-sched.sch_delay.avg.ms.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep.do_syscall_64 0.01 ± 12% -100.0% 0.00 perf-sched.sch_delay.avg.ms.do_syslog.part.0.kmsg_read.vfs_read 0.00 ± 11% -100.0% 0.00 perf-sched.sch_delay.avg.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64 0.01 -100.0% 0.00 perf-sched.sch_delay.avg.ms.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64 0.00 -100.0% 0.00 perf-sched.sch_delay.avg.ms.pipe_read.new_sync_read.vfs_read.ksys_read 0.01 ± 7% -100.0% 0.00 perf-sched.sch_delay.avg.ms.rcu_gp_kthread.kthread.ret_from_fork 0.01 ± 11% -100.0% 0.00 perf-sched.sch_delay.avg.ms.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait 0.01 ± 58% -100.0% 0.00 perf-sched.sch_delay.avg.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_select 0.00 -100.0% 0.00 perf-sched.sch_delay.avg.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_sys_poll 0.01 -100.0% 0.00 perf-sched.sch_delay.avg.ms.schedule_timeout.kcompactd.kthread.ret_from_fork 0.01 ± 29% -100.0% 0.00 perf-sched.sch_delay.avg.ms.schedule_timeout.rcu_gp_kthread.kthread.ret_from_fork 0.00 ± 7% -100.0% 0.00 perf-sched.sch_delay.avg.ms.smpboot_thread_fn.kthread.ret_from_fork 0.00 -100.0% 0.00 perf-sched.sch_delay.avg.ms.wait_for_partner.fifo_open.do_dentry_open.do_open.isra 0.01 ± 5% -100.0% 0.00 perf-sched.sch_delay.avg.ms.worker_thread.kthread.ret_from_fork 0.01 ± 42% -100.0% 0.00 
perf-sched.sch_delay.max.ms.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown] 0.01 ± 34% -100.0% 0.00 perf-sched.sch_delay.max.ms.devkmsg_read.vfs_read.ksys_read.do_syscall_64 0.03 ±122% -100.0% 0.00 perf-sched.sch_delay.max.ms.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep.do_syscall_64 0.02 ± 16% -100.0% 0.00 perf-sched.sch_delay.max.ms.do_syslog.part.0.kmsg_read.vfs_read 0.02 ± 25% -100.0% 0.00 perf-sched.sch_delay.max.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64 0.02 ± 21% -100.0% 0.00 perf-sched.sch_delay.max.ms.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64 0.01 ± 32% -100.0% 0.00 perf-sched.sch_delay.max.ms.exit_to_user_mode_prepare.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe.[unknown] 0.02 ± 12% -100.0% 0.00 perf-sched.sch_delay.max.ms.pipe_read.new_sync_read.vfs_read.ksys_read 0.04 ± 13% -100.0% 0.00 perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.stop_one_cpu.affine_move_task.__set_cpus_allowed_ptr 0.02 ± 30% -100.0% 0.00 perf-sched.sch_delay.max.ms.rcu_gp_kthread.kthread.ret_from_fork 0.01 ± 16% -100.0% 0.00 perf-sched.sch_delay.max.ms.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait 0.02 ±103% -100.0% 0.00 perf-sched.sch_delay.max.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_select 0.01 ± 25% -100.0% 0.00 perf-sched.sch_delay.max.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_sys_poll 0.01 ± 7% -100.0% 0.00 perf-sched.sch_delay.max.ms.schedule_timeout.kcompactd.kthread.ret_from_fork 1.52 ±192% -100.0% 0.00 perf-sched.sch_delay.max.ms.schedule_timeout.rcu_gp_kthread.kthread.ret_from_fork 0.02 ± 11% -100.0% 0.00 perf-sched.sch_delay.max.ms.smpboot_thread_fn.kthread.ret_from_fork 0.01 ± 18% -100.0% 0.00 perf-sched.sch_delay.max.ms.wait_for_partner.fifo_open.do_dentry_open.do_open.isra 3.70 ± 2% -100.0% 0.00 perf-sched.sch_delay.max.ms.worker_thread.kthread.ret_from_fork 0.00 ± 11% -100.0% 0.00 
perf-sched.total_sch_delay.average.ms 4.45 ± 35% -100.0% 0.00 perf-sched.total_sch_delay.max.ms 178.87 ± 3% -100.0% 0.00 perf-sched.total_wait_and_delay.average.ms 11143 -100.0% 0.00 perf-sched.total_wait_and_delay.count.ms 8294 ± 6% -100.0% 0.00 perf-sched.total_wait_and_delay.max.ms 178.86 ± 3% -100.0% 0.00 perf-sched.total_wait_time.average.ms 8294 ± 6% -100.0% 0.00 perf-sched.total_wait_time.max.ms 899.56 -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown] 791.44 ± 4% -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep.do_syscall_64 221.65 ± 2% -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64 1.00 -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64 0.03 ± 18% -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.exit_to_user_mode_prepare.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe.[unknown] 51.31 ± 2% -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.pipe_read.new_sync_read.vfs_read.ksys_read 5.72 ± 3% -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.preempt_schedule_common.__cond_resched.stop_one_cpu.affine_move_task.__set_cpus_allowed_ptr 2.74 ± 4% -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.rcu_gp_kthread.kthread.ret_from_fork 318.48 ± 18% -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_sys_poll 478.80 -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.schedule_timeout.kcompactd.kthread.ret_from_fork 5.57 ± 7% -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.schedule_timeout.rcu_gp_kthread.kthread.ret_from_fork 692.68 ± 3% -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.smpboot_thread_fn.kthread.ret_from_fork 0.00 ± 8% -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.wait_for_partner.fifo_open.do_dentry_open.do_open.isra 440.52 ± 5% -100.0% 0.00 
perf-sched.wait_and_delay.avg.ms.worker_thread.kthread.ret_from_fork 10.00 -100.0% 0.00 perf-sched.wait_and_delay.count.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown] 22.83 ± 3% -100.0% 0.00 perf-sched.wait_and_delay.count.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep.do_syscall_64 307.00 -100.0% 0.00 perf-sched.wait_and_delay.count.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64 306.00 -100.0% 0.00 perf-sched.wait_and_delay.count.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64 724.17 ± 18% -100.0% 0.00 perf-sched.wait_and_delay.count.exit_to_user_mode_prepare.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe.[unknown] 3104 -100.0% 0.00 perf-sched.wait_and_delay.count.pipe_read.new_sync_read.vfs_read.ksys_read 1677 ± 3% -100.0% 0.00 perf-sched.wait_and_delay.count.preempt_schedule_common.__cond_resched.stop_one_cpu.affine_move_task.__set_cpus_allowed_ptr 69.83 ± 8% -100.0% 0.00 perf-sched.wait_and_delay.count.rcu_gp_kthread.kthread.ret_from_fork 104.83 ± 17% -100.0% 0.00 perf-sched.wait_and_delay.count.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_sys_poll 80.00 -100.0% 0.00 perf-sched.wait_and_delay.count.schedule_timeout.kcompactd.kthread.ret_from_fork 1770 ± 7% -100.0% 0.00 perf-sched.wait_and_delay.count.schedule_timeout.rcu_gp_kthread.kthread.ret_from_fork 1969 ± 3% -100.0% 0.00 perf-sched.wait_and_delay.count.smpboot_thread_fn.kthread.ret_from_fork 72.00 -100.0% 0.00 perf-sched.wait_and_delay.count.wait_for_partner.fifo_open.do_dentry_open.do_open.isra 593.50 ± 8% -100.0% 0.00 perf-sched.wait_and_delay.count.worker_thread.kthread.ret_from_fork 999.54 -100.0% 0.00 perf-sched.wait_and_delay.max.ms.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown] 1000 -100.0% 0.00 perf-sched.wait_and_delay.max.ms.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep.do_syscall_64 3539 ± 39% -100.0% 0.00 
perf-sched.wait_and_delay.max.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64 26.06 ± 2% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64 5.75 ± 65% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.exit_to_user_mode_prepare.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe.[unknown] 6179 ± 37% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.pipe_read.new_sync_read.vfs_read.ksys_read 1093 ± 12% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.preempt_schedule_common.__cond_resched.stop_one_cpu.affine_move_task.__set_cpus_allowed_ptr 5.00 -100.0% 0.00 perf-sched.wait_and_delay.max.ms.rcu_gp_kthread.kthread.ret_from_fork 6251 ± 19% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_sys_poll 505.00 -100.0% 0.00 perf-sched.wait_and_delay.max.ms.schedule_timeout.kcompactd.kthread.ret_from_fork 290.83 ± 16% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.schedule_timeout.rcu_gp_kthread.kthread.ret_from_fork 1929 ± 44% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.smpboot_thread_fn.kthread.ret_from_fork 0.02 ± 77% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.wait_for_partner.fifo_open.do_dentry_open.do_open.isra 8252 ± 7% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.worker_thread.kthread.ret_from_fork 899.55 -100.0% 0.00 perf-sched.wait_time.avg.ms.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown] 2003 ± 44% -100.0% 0.00 perf-sched.wait_time.avg.ms.devkmsg_read.vfs_read.ksys_read.do_syscall_64 791.43 ± 4% -100.0% 0.00 perf-sched.wait_time.avg.ms.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep.do_syscall_64 2003 ± 44% -100.0% 0.00 perf-sched.wait_time.avg.ms.do_syslog.part.0.kmsg_read.vfs_read 221.64 ± 2% -100.0% 0.00 perf-sched.wait_time.avg.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64 0.99 -100.0% 0.00 perf-sched.wait_time.avg.ms.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64 0.10 ±147% 
-100.0% 0.00 perf-sched.wait_time.avg.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_apic_timer_interrupt.[unknown] 0.03 ± 18% -100.0% 0.00 perf-sched.wait_time.avg.ms.exit_to_user_mode_prepare.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe.[unknown] 51.31 ± 2% -100.0% 0.00 perf-sched.wait_time.avg.ms.pipe_read.new_sync_read.vfs_read.ksys_read 0.03 ± 6% -100.0% 0.00 perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.copy_page_to_iter.shmem_file_read_iter.new_sync_read 0.03 ± 42% -100.0% 0.00 perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.mutex_lock.__fdget_pos.ksys_lseek 0.03 ± 10% -100.0% 0.00 perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.mutex_lock.__fdget_pos.ksys_read 0.03 ± 13% -100.0% 0.00 perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.pagecache_get_page.shmem_getpage_gfp.shmem_file_read_iter 5.72 ± 3% -100.0% 0.00 perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.stop_one_cpu.affine_move_task.__set_cpus_allowed_ptr 2.74 ± 4% -100.0% 0.00 perf-sched.wait_time.avg.ms.rcu_gp_kthread.kthread.ret_from_fork 577.85 ± 44% -100.0% 0.00 perf-sched.wait_time.avg.ms.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait 318.48 ± 18% -100.0% 0.00 perf-sched.wait_time.avg.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_sys_poll 478.79 -100.0% 0.00 perf-sched.wait_time.avg.ms.schedule_timeout.kcompactd.kthread.ret_from_fork 5.57 ± 7% -100.0% 0.00 perf-sched.wait_time.avg.ms.schedule_timeout.rcu_gp_kthread.kthread.ret_from_fork 692.68 ± 3% -100.0% 0.00 perf-sched.wait_time.avg.ms.smpboot_thread_fn.kthread.ret_from_fork 440.51 ± 5% -100.0% 0.00 perf-sched.wait_time.avg.ms.worker_thread.kthread.ret_from_fork 999.53 -100.0% 0.00 perf-sched.wait_time.max.ms.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown] 6007 ± 44% -100.0% 0.00 
perf-sched.wait_time.max.ms.devkmsg_read.vfs_read.ksys_read.do_syscall_64 1000 -100.0% 0.00 perf-sched.wait_time.max.ms.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep.do_syscall_64 6007 ± 44% -100.0% 0.00 perf-sched.wait_time.max.ms.do_syslog.part.0.kmsg_read.vfs_read 3539 ± 39% -100.0% 0.00 perf-sched.wait_time.max.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64 26.05 ± 2% -100.0% 0.00 perf-sched.wait_time.max.ms.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64 1.60 ±190% -100.0% 0.00 perf-sched.wait_time.max.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_apic_timer_interrupt.[unknown] 5.75 ± 65% -100.0% 0.00 perf-sched.wait_time.max.ms.exit_to_user_mode_prepare.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe.[unknown] 6179 ± 37% -100.0% 0.00 perf-sched.wait_time.max.ms.pipe_read.new_sync_read.vfs_read.ksys_read 0.08 ± 50% -100.0% 0.00 perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.copy_page_to_iter.shmem_file_read_iter.new_sync_read 0.27 ±158% -100.0% 0.00 perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.mutex_lock.__fdget_pos.ksys_lseek 0.08 ± 52% -100.0% 0.00 perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.mutex_lock.__fdget_pos.ksys_read 0.07 ± 46% -100.0% 0.00 perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.pagecache_get_page.shmem_getpage_gfp.shmem_file_read_iter 1093 ± 12% -100.0% 0.00 perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.stop_one_cpu.affine_move_task.__set_cpus_allowed_ptr 5.00 -100.0% 0.00 perf-sched.wait_time.max.ms.rcu_gp_kthread.kthread.ret_from_fork 5790 ± 41% -100.0% 0.00 perf-sched.wait_time.max.ms.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait 6251 ± 19% -100.0% 0.00 perf-sched.wait_time.max.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_sys_poll 505.00 -100.0% 0.00 
perf-sched.wait_time.max.ms.schedule_timeout.kcompactd.kthread.ret_from_fork 290.82 ± 16% -100.0% 0.00 perf-sched.wait_time.max.ms.schedule_timeout.rcu_gp_kthread.kthread.ret_from_fork 1929 ± 44% -100.0% 0.00 perf-sched.wait_time.max.ms.smpboot_thread_fn.kthread.ret_from_fork 8252 ± 7% -100.0% 0.00 perf-sched.wait_time.max.ms.worker_thread.kthread.ret_from_fork 4910 ± 43% -100.0% 0.00 interrupts.CPU0.NMI:Non-maskable_interrupts 4910 ± 43% -100.0% 0.00 interrupts.CPU0.PMI:Performance_monitoring_interrupts 6032 ± 25% -100.0% 0.00 interrupts.CPU1.NMI:Non-maskable_interrupts 6032 ± 25% -100.0% 0.00 interrupts.CPU1.PMI:Performance_monitoring_interrupts 7336 ± 23% -100.0% 0.00 interrupts.CPU10.NMI:Non-maskable_interrupts 7336 ± 23% -100.0% 0.00 interrupts.CPU10.PMI:Performance_monitoring_interrupts 5013 ± 34% -100.0% 0.00 interrupts.CPU100.NMI:Non-maskable_interrupts 5013 ± 34% -100.0% 0.00 interrupts.CPU100.PMI:Performance_monitoring_interrupts 5933 ± 33% -100.0% 0.00 interrupts.CPU101.NMI:Non-maskable_interrupts 5933 ± 33% -100.0% 0.00 interrupts.CPU101.PMI:Performance_monitoring_interrupts 4511 ± 52% -100.0% 0.00 interrupts.CPU102.NMI:Non-maskable_interrupts 4511 ± 52% -100.0% 0.00 interrupts.CPU102.PMI:Performance_monitoring_interrupts 6415 ± 35% -100.0% 0.00 interrupts.CPU103.NMI:Non-maskable_interrupts 6415 ± 35% -100.0% 0.00 interrupts.CPU103.PMI:Performance_monitoring_interrupts 7074 ± 29% -100.0% 0.00 interrupts.CPU104.NMI:Non-maskable_interrupts 7074 ± 29% -100.0% 0.00 interrupts.CPU104.PMI:Performance_monitoring_interrupts 6750 ± 23% -100.0% 0.00 interrupts.CPU105.NMI:Non-maskable_interrupts 6750 ± 23% -100.0% 0.00 interrupts.CPU105.PMI:Performance_monitoring_interrupts 105.83 ± 47% +156.2% 271.17 ± 9% interrupts.CPU105.RES:Rescheduling_interrupts 5545 ± 38% -100.0% 0.00 interrupts.CPU106.NMI:Non-maskable_interrupts 5545 ± 38% -100.0% 0.00 interrupts.CPU106.PMI:Performance_monitoring_interrupts 6259 ± 34% -100.0% 0.00 
interrupts.CPU107.NMI:Non-maskable_interrupts 6259 ± 34% -100.0% 0.00 interrupts.CPU107.PMI:Performance_monitoring_interrupts 5591 ± 36% -100.0% 0.00 interrupts.CPU108.NMI:Non-maskable_interrupts 5591 ± 36% -100.0% 0.00 interrupts.CPU108.PMI:Performance_monitoring_interrupts 6515 ± 25% -100.0% 0.00 interrupts.CPU109.NMI:Non-maskable_interrupts 6515 ± 25% -100.0% 0.00 interrupts.CPU109.PMI:Performance_monitoring_interrupts 5580 ± 46% -100.0% 0.00 interrupts.CPU11.NMI:Non-maskable_interrupts 5580 ± 46% -100.0% 0.00 interrupts.CPU11.PMI:Performance_monitoring_interrupts 6831 ± 30% -100.0% 0.00 interrupts.CPU110.NMI:Non-maskable_interrupts 6831 ± 30% -100.0% 0.00 interrupts.CPU110.PMI:Performance_monitoring_interrupts 5730 ± 41% -100.0% 0.00 interrupts.CPU111.NMI:Non-maskable_interrupts 5730 ± 41% -100.0% 0.00 interrupts.CPU111.PMI:Performance_monitoring_interrupts 554.50 ± 54% +87.9% 1041 ± 31% interrupts.CPU111.TLB:TLB_shootdowns 6628 ± 36% -100.0% 1.00 ±223% interrupts.CPU112.NMI:Non-maskable_interrupts 6628 ± 36% -100.0% 1.00 ±223% interrupts.CPU112.PMI:Performance_monitoring_interrupts 6055 ± 40% -100.0% 0.00 interrupts.CPU113.NMI:Non-maskable_interrupts 6055 ± 40% -100.0% 0.00 interrupts.CPU113.PMI:Performance_monitoring_interrupts 7636 ± 24% -100.0% 0.00 interrupts.CPU114.NMI:Non-maskable_interrupts 7636 ± 24% -100.0% 0.00 interrupts.CPU114.PMI:Performance_monitoring_interrupts 5847 ± 43% -100.0% 0.00 interrupts.CPU115.NMI:Non-maskable_interrupts 5847 ± 43% -100.0% 0.00 interrupts.CPU115.PMI:Performance_monitoring_interrupts 7757 ± 28% -100.0% 0.00 interrupts.CPU116.NMI:Non-maskable_interrupts 7757 ± 28% -100.0% 0.00 interrupts.CPU116.PMI:Performance_monitoring_interrupts 5290 ± 41% -100.0% 0.00 interrupts.CPU117.NMI:Non-maskable_interrupts 5290 ± 41% -100.0% 0.00 interrupts.CPU117.PMI:Performance_monitoring_interrupts 1775 ± 8% -34.9% 1156 ± 31% interrupts.CPU118.CAL:Function_call_interrupts 6894 ± 30% -100.0% 0.00 interrupts.CPU118.NMI:Non-maskable_interrupts 
6894 ± 30% -100.0% 0.00 interrupts.CPU118.PMI:Performance_monitoring_interrupts 6325 ± 39% -100.0% 0.00 interrupts.CPU119.NMI:Non-maskable_interrupts 6325 ± 39% -100.0% 0.00 interrupts.CPU119.PMI:Performance_monitoring_interrupts 5953 ± 35% -100.0% 0.00 interrupts.CPU12.NMI:Non-maskable_interrupts 5953 ± 35% -100.0% 0.00 interrupts.CPU12.PMI:Performance_monitoring_interrupts 141.50 ± 45% +86.0% 263.17 ± 14% interrupts.CPU12.RES:Rescheduling_interrupts 4701 ± 50% -100.0% 0.00 interrupts.CPU120.NMI:Non-maskable_interrupts 4701 ± 50% -100.0% 0.00 interrupts.CPU120.PMI:Performance_monitoring_interrupts 4905 ± 44% -100.0% 0.00 interrupts.CPU121.NMI:Non-maskable_interrupts 4905 ± 44% -100.0% 0.00 interrupts.CPU121.PMI:Performance_monitoring_interrupts 6347 ± 42% -100.0% 0.00 interrupts.CPU122.NMI:Non-maskable_interrupts 6347 ± 42% -100.0% 0.00 interrupts.CPU122.PMI:Performance_monitoring_interrupts 5147 ± 41% -100.0% 0.00 interrupts.CPU123.NMI:Non-maskable_interrupts 5147 ± 41% -100.0% 0.00 interrupts.CPU123.PMI:Performance_monitoring_interrupts 6510 ± 40% -100.0% 0.00 interrupts.CPU124.NMI:Non-maskable_interrupts 6510 ± 40% -100.0% 0.00 interrupts.CPU124.PMI:Performance_monitoring_interrupts 6099 ± 36% -100.0% 0.00 interrupts.CPU125.NMI:Non-maskable_interrupts 6099 ± 36% -100.0% 0.00 interrupts.CPU125.PMI:Performance_monitoring_interrupts 5745 ± 29% -100.0% 0.00 interrupts.CPU126.NMI:Non-maskable_interrupts 5745 ± 29% -100.0% 0.00 interrupts.CPU126.PMI:Performance_monitoring_interrupts 5816 ± 26% -100.0% 0.00 interrupts.CPU127.NMI:Non-maskable_interrupts 5816 ± 26% -100.0% 0.00 interrupts.CPU127.PMI:Performance_monitoring_interrupts 6252 ± 40% -100.0% 0.00 interrupts.CPU128.NMI:Non-maskable_interrupts 6252 ± 40% -100.0% 0.00 interrupts.CPU128.PMI:Performance_monitoring_interrupts 6063 ± 29% -100.0% 0.00 interrupts.CPU129.NMI:Non-maskable_interrupts 6063 ± 29% -100.0% 0.00 interrupts.CPU129.PMI:Performance_monitoring_interrupts 6700 ± 28% -100.0% 0.00 
interrupts.CPU13.NMI:Non-maskable_interrupts 6700 ± 28% -100.0% 0.00 interrupts.CPU13.PMI:Performance_monitoring_interrupts 5518 ± 33% -100.0% 0.00 interrupts.CPU130.NMI:Non-maskable_interrupts 5518 ± 33% -100.0% 0.00 interrupts.CPU130.PMI:Performance_monitoring_interrupts 6671 ± 23% -100.0% 0.00 interrupts.CPU131.NMI:Non-maskable_interrupts 6671 ± 23% -100.0% 0.00 interrupts.CPU131.PMI:Performance_monitoring_interrupts 7697 ± 18% -100.0% 0.00 interrupts.CPU132.NMI:Non-maskable_interrupts 7697 ± 18% -100.0% 0.00 interrupts.CPU132.PMI:Performance_monitoring_interrupts 4834 ± 29% -100.0% 0.00 interrupts.CPU133.NMI:Non-maskable_interrupts 4834 ± 29% -100.0% 0.00 interrupts.CPU133.PMI:Performance_monitoring_interrupts 6156 ± 41% -100.0% 0.00 interrupts.CPU134.NMI:Non-maskable_interrupts 6156 ± 41% -100.0% 0.00 interrupts.CPU134.PMI:Performance_monitoring_interrupts 6949 ± 31% -100.0% 0.00 interrupts.CPU135.NMI:Non-maskable_interrupts 6949 ± 31% -100.0% 0.00 interrupts.CPU135.PMI:Performance_monitoring_interrupts 6063 ± 40% -100.0% 0.00 interrupts.CPU136.NMI:Non-maskable_interrupts 6063 ± 40% -100.0% 0.00 interrupts.CPU136.PMI:Performance_monitoring_interrupts 1150 ± 39% +60.6% 1847 ± 13% interrupts.CPU137.CAL:Function_call_interrupts 5414 ± 34% -100.0% 0.00 interrupts.CPU137.NMI:Non-maskable_interrupts 5414 ± 34% -100.0% 0.00 interrupts.CPU137.PMI:Performance_monitoring_interrupts 5769 ± 29% -100.0% 0.00 interrupts.CPU138.NMI:Non-maskable_interrupts 5769 ± 29% -100.0% 0.00 interrupts.CPU138.PMI:Performance_monitoring_interrupts 5483 ± 37% -100.0% 0.00 interrupts.CPU139.NMI:Non-maskable_interrupts 5483 ± 37% -100.0% 0.00 interrupts.CPU139.PMI:Performance_monitoring_interrupts 7846 ± 20% -100.0% 0.00 interrupts.CPU14.NMI:Non-maskable_interrupts 7846 ± 20% -100.0% 0.00 interrupts.CPU14.PMI:Performance_monitoring_interrupts 6600 ± 31% -100.0% 0.00 interrupts.CPU140.NMI:Non-maskable_interrupts 6600 ± 31% -100.0% 0.00 interrupts.CPU140.PMI:Performance_monitoring_interrupts 
7205 ± 17% -100.0% 0.00 interrupts.CPU141.NMI:Non-maskable_interrupts 7205 ± 17% -100.0% 0.00 interrupts.CPU141.PMI:Performance_monitoring_interrupts 6790 ± 30% -100.0% 0.00 interrupts.CPU142.NMI:Non-maskable_interrupts 6790 ± 30% -100.0% 0.00 interrupts.CPU142.PMI:Performance_monitoring_interrupts 5683 ± 21% -100.0% 0.00 interrupts.CPU143.NMI:Non-maskable_interrupts 5683 ± 21% -100.0% 0.00 interrupts.CPU143.PMI:Performance_monitoring_interrupts 4559 ± 21% -100.0% 0.00 interrupts.CPU15.NMI:Non-maskable_interrupts 4559 ± 21% -100.0% 0.00 interrupts.CPU15.PMI:Performance_monitoring_interrupts 6005 ± 29% -100.0% 0.00 interrupts.CPU16.NMI:Non-maskable_interrupts 6005 ± 29% -100.0% 0.00 interrupts.CPU16.PMI:Performance_monitoring_interrupts 5685 ± 36% -100.0% 1.00 ±223% interrupts.CPU17.NMI:Non-maskable_interrupts 5685 ± 36% -100.0% 1.00 ±223% interrupts.CPU17.PMI:Performance_monitoring_interrupts 164.67 ± 41% +60.8% 264.83 ± 17% interrupts.CPU17.RES:Rescheduling_interrupts 6531 ± 29% -100.0% 0.00 interrupts.CPU18.NMI:Non-maskable_interrupts 6531 ± 29% -100.0% 0.00 interrupts.CPU18.PMI:Performance_monitoring_interrupts 5120 ± 37% -100.0% 0.00 interrupts.CPU19.NMI:Non-maskable_interrupts 5120 ± 37% -100.0% 0.00 interrupts.CPU19.PMI:Performance_monitoring_interrupts 4592 ± 28% -100.0% 0.00 interrupts.CPU2.NMI:Non-maskable_interrupts 4592 ± 28% -100.0% 0.00 interrupts.CPU2.PMI:Performance_monitoring_interrupts 5979 ± 33% -100.0% 0.00 interrupts.CPU20.NMI:Non-maskable_interrupts 5979 ± 33% -100.0% 0.00 interrupts.CPU20.PMI:Performance_monitoring_interrupts 7137 ± 19% -100.0% 0.00 interrupts.CPU21.NMI:Non-maskable_interrupts 7137 ± 19% -100.0% 0.00 interrupts.CPU21.PMI:Performance_monitoring_interrupts 6329 ± 24% -100.0% 0.00 interrupts.CPU22.NMI:Non-maskable_interrupts 6329 ± 24% -100.0% 0.00 interrupts.CPU22.PMI:Performance_monitoring_interrupts 7533 ± 17% -100.0% 0.00 interrupts.CPU23.NMI:Non-maskable_interrupts 7533 ± 17% -100.0% 0.00 
interrupts.CPU23.PMI:Performance_monitoring_interrupts 6157 ± 41% -100.0% 0.00 interrupts.CPU24.NMI:Non-maskable_interrupts 6157 ± 41% -100.0% 0.00 interrupts.CPU24.PMI:Performance_monitoring_interrupts 6864 ± 29% -100.0% 0.00 interrupts.CPU25.NMI:Non-maskable_interrupts 6864 ± 29% -100.0% 0.00 interrupts.CPU25.PMI:Performance_monitoring_interrupts 6398 ± 36% -100.0% 0.00 interrupts.CPU26.NMI:Non-maskable_interrupts 6398 ± 36% -100.0% 0.00 interrupts.CPU26.PMI:Performance_monitoring_interrupts 5711 ± 39% -100.0% 0.00 interrupts.CPU27.NMI:Non-maskable_interrupts 5711 ± 39% -100.0% 0.00 interrupts.CPU27.PMI:Performance_monitoring_interrupts 5154 ± 47% -100.0% 0.00 interrupts.CPU28.NMI:Non-maskable_interrupts 5154 ± 47% -100.0% 0.00 interrupts.CPU28.PMI:Performance_monitoring_interrupts 6074 ± 32% -100.0% 0.00 interrupts.CPU29.NMI:Non-maskable_interrupts 6074 ± 32% -100.0% 0.00 interrupts.CPU29.PMI:Performance_monitoring_interrupts 5578 ± 38% -100.0% 0.00 interrupts.CPU3.NMI:Non-maskable_interrupts 5578 ± 38% -100.0% 0.00 interrupts.CPU3.PMI:Performance_monitoring_interrupts 7588 ± 27% -100.0% 0.00 interrupts.CPU30.NMI:Non-maskable_interrupts 7588 ± 27% -100.0% 0.00 interrupts.CPU30.PMI:Performance_monitoring_interrupts 5690 ± 40% -100.0% 0.00 interrupts.CPU31.NMI:Non-maskable_interrupts 5690 ± 40% -100.0% 0.00 interrupts.CPU31.PMI:Performance_monitoring_interrupts 4953 ± 54% -100.0% 0.00 interrupts.CPU32.NMI:Non-maskable_interrupts 4953 ± 54% -100.0% 0.00 interrupts.CPU32.PMI:Performance_monitoring_interrupts 5357 ± 45% -100.0% 0.00 interrupts.CPU33.NMI:Non-maskable_interrupts 5357 ± 45% -100.0% 0.00 interrupts.CPU33.PMI:Performance_monitoring_interrupts 6533 ± 29% -100.0% 0.00 interrupts.CPU34.NMI:Non-maskable_interrupts 6533 ± 29% -100.0% 0.00 interrupts.CPU34.PMI:Performance_monitoring_interrupts 5371 ± 44% -100.0% 0.00 interrupts.CPU35.NMI:Non-maskable_interrupts 5371 ± 44% -100.0% 0.00 interrupts.CPU35.PMI:Performance_monitoring_interrupts 6020 ± 38% -100.0% 
0.00 interrupts.CPU36.NMI:Non-maskable_interrupts 6020 ± 38% -100.0% 0.00 interrupts.CPU36.PMI:Performance_monitoring_interrupts 5669 ± 43% -100.0% 0.00 interrupts.CPU37.NMI:Non-maskable_interrupts 5669 ± 43% -100.0% 0.00 interrupts.CPU37.PMI:Performance_monitoring_interrupts 5105 ± 44% -100.0% 0.00 interrupts.CPU38.NMI:Non-maskable_interrupts 5105 ± 44% -100.0% 0.00 interrupts.CPU38.PMI:Performance_monitoring_interrupts 6940 ± 40% -100.0% 0.00 interrupts.CPU39.NMI:Non-maskable_interrupts 6940 ± 40% -100.0% 0.00 interrupts.CPU39.PMI:Performance_monitoring_interrupts 1006 ± 29% -52.2% 480.50 ± 60% interrupts.CPU39.TLB:TLB_shootdowns 6119 ± 40% -100.0% 0.00 interrupts.CPU4.NMI:Non-maskable_interrupts 6119 ± 40% -100.0% 0.00 interrupts.CPU4.PMI:Performance_monitoring_interrupts 5909 ± 36% -100.0% 0.00 interrupts.CPU40.NMI:Non-maskable_interrupts 5909 ± 36% -100.0% 0.00 interrupts.CPU40.PMI:Performance_monitoring_interrupts 6394 ± 34% -100.0% 0.00 interrupts.CPU41.NMI:Non-maskable_interrupts 6394 ± 34% -100.0% 0.00 interrupts.CPU41.PMI:Performance_monitoring_interrupts 4547 ± 28% -100.0% 0.00 interrupts.CPU42.NMI:Non-maskable_interrupts 4547 ± 28% -100.0% 0.00 interrupts.CPU42.PMI:Performance_monitoring_interrupts 6109 ± 37% -100.0% 0.00 interrupts.CPU43.NMI:Non-maskable_interrupts 6109 ± 37% -100.0% 0.00 interrupts.CPU43.PMI:Performance_monitoring_interrupts 5430 ± 38% -100.0% 0.00 interrupts.CPU44.NMI:Non-maskable_interrupts 5430 ± 38% -100.0% 0.00 interrupts.CPU44.PMI:Performance_monitoring_interrupts 7161 ± 31% -100.0% 0.00 interrupts.CPU45.NMI:Non-maskable_interrupts 7161 ± 31% -100.0% 0.00 interrupts.CPU45.PMI:Performance_monitoring_interrupts 5053 ± 47% -100.0% 0.00 interrupts.CPU46.NMI:Non-maskable_interrupts 5053 ± 47% -100.0% 0.00 interrupts.CPU46.PMI:Performance_monitoring_interrupts 4869 ± 33% -100.0% 0.00 interrupts.CPU47.NMI:Non-maskable_interrupts 4869 ± 33% -100.0% 0.00 interrupts.CPU47.PMI:Performance_monitoring_interrupts 6597 ± 22% -100.0% 0.00 
interrupts.CPU48.NMI:Non-maskable_interrupts
      6597 ± 22%    -100.0%       0.00        interrupts.CPU48.PMI:Performance_monitoring_interrupts
      6573 ± 34%    -100.0%       0.00        interrupts.CPU49.NMI:Non-maskable_interrupts
      6573 ± 34%    -100.0%       0.00        interrupts.CPU49.PMI:Performance_monitoring_interrupts
      5710 ± 40%    -100.0%       0.00        interrupts.CPU5.NMI:Non-maskable_interrupts
      5710 ± 40%    -100.0%       0.00        interrupts.CPU5.PMI:Performance_monitoring_interrupts
      6093 ± 35%    -100.0%       0.00        interrupts.CPU50.NMI:Non-maskable_interrupts
      6093 ± 35%    -100.0%       0.00        interrupts.CPU50.PMI:Performance_monitoring_interrupts
      6588 ± 34%    -100.0%       0.00        interrupts.CPU51.NMI:Non-maskable_interrupts
      6588 ± 34%    -100.0%       0.00        interrupts.CPU51.PMI:Performance_monitoring_interrupts
      5928 ± 32%    -100.0%       0.00        interrupts.CPU52.NMI:Non-maskable_interrupts
      5928 ± 32%    -100.0%       0.00        interrupts.CPU52.PMI:Performance_monitoring_interrupts
      5843 ± 44%    -100.0%       0.00        interrupts.CPU53.NMI:Non-maskable_interrupts
      5843 ± 44%    -100.0%       0.00        interrupts.CPU53.PMI:Performance_monitoring_interrupts
      6316 ± 39%    -100.0%       0.00        interrupts.CPU54.NMI:Non-maskable_interrupts
      6316 ± 39%    -100.0%       0.00        interrupts.CPU54.PMI:Performance_monitoring_interrupts
      6363 ± 37%    -100.0%       0.00        interrupts.CPU55.NMI:Non-maskable_interrupts
      6363 ± 37%    -100.0%       0.00        interrupts.CPU55.PMI:Performance_monitoring_interrupts
      5410 ± 36%    -100.0%       0.00        interrupts.CPU56.NMI:Non-maskable_interrupts
      5410 ± 36%    -100.0%       0.00        interrupts.CPU56.PMI:Performance_monitoring_interrupts
      5202 ± 51%    -100.0%       1.00 ±223%  interrupts.CPU57.NMI:Non-maskable_interrupts
      5202 ± 51%    -100.0%       1.00 ±223%  interrupts.CPU57.PMI:Performance_monitoring_interrupts
    122.33 ± 32%    +121.5%     271.00 ± 10%  interrupts.CPU57.RES:Rescheduling_interrupts
      6072 ± 37%    -100.0%       0.00        interrupts.CPU58.NMI:Non-maskable_interrupts
      6072 ± 37%    -100.0%       0.00        interrupts.CPU58.PMI:Performance_monitoring_interrupts
      5843 ± 48%    -100.0%       0.00        interrupts.CPU59.NMI:Non-maskable_interrupts
      5843 ± 48%    -100.0%       0.00        interrupts.CPU59.PMI:Performance_monitoring_interrupts
      5919 ± 32%    -100.0%       0.00        interrupts.CPU6.NMI:Non-maskable_interrupts
      5919 ± 32%    -100.0%       0.00        interrupts.CPU6.PMI:Performance_monitoring_interrupts
      4869 ± 48%    -100.0%       0.00        interrupts.CPU60.NMI:Non-maskable_interrupts
      4869 ± 48%    -100.0%       0.00        interrupts.CPU60.PMI:Performance_monitoring_interrupts
      7239 ± 28%    -100.0%       0.00        interrupts.CPU61.NMI:Non-maskable_interrupts
      7239 ± 28%    -100.0%       0.00        interrupts.CPU61.PMI:Performance_monitoring_interrupts
      6646 ± 24%    -100.0%       0.00        interrupts.CPU62.NMI:Non-maskable_interrupts
      6646 ± 24%    -100.0%       0.00        interrupts.CPU62.PMI:Performance_monitoring_interrupts
      5868 ± 48%    -100.0%       0.00        interrupts.CPU63.NMI:Non-maskable_interrupts
      5868 ± 48%    -100.0%       0.00        interrupts.CPU63.PMI:Performance_monitoring_interrupts
      6263 ± 35%    -100.0%       0.00        interrupts.CPU64.NMI:Non-maskable_interrupts
      6263 ± 35%    -100.0%       0.00        interrupts.CPU64.PMI:Performance_monitoring_interrupts
      5992 ± 33%    -100.0%       0.00        interrupts.CPU65.NMI:Non-maskable_interrupts
      5992 ± 33%    -100.0%       0.00        interrupts.CPU65.PMI:Performance_monitoring_interrupts
      5090 ± 36%    -100.0%       0.00        interrupts.CPU66.NMI:Non-maskable_interrupts
      5090 ± 36%    -100.0%       0.00        interrupts.CPU66.PMI:Performance_monitoring_interrupts
      7352 ± 24%    -100.0%       0.00        interrupts.CPU67.NMI:Non-maskable_interrupts
      7352 ± 24%    -100.0%       0.00        interrupts.CPU67.PMI:Performance_monitoring_interrupts
      6201 ± 34%    -100.0%       0.00        interrupts.CPU68.NMI:Non-maskable_interrupts
      6201 ± 34%    -100.0%       0.00        interrupts.CPU68.PMI:Performance_monitoring_interrupts
      5595 ± 35%    -100.0%       0.00        interrupts.CPU69.NMI:Non-maskable_interrupts
      5595 ± 35%    -100.0%       0.00        interrupts.CPU69.PMI:Performance_monitoring_interrupts
      5739 ± 30%    -100.0%       0.00        interrupts.CPU7.NMI:Non-maskable_interrupts
      5739 ± 30%    -100.0%       0.00        interrupts.CPU7.PMI:Performance_monitoring_interrupts
      5788 ± 46%    -100.0%       0.00        interrupts.CPU70.NMI:Non-maskable_interrupts
      5788 ± 46%    -100.0%       0.00        interrupts.CPU70.PMI:Performance_monitoring_interrupts
      6763 ± 31%    -100.0%       0.00        interrupts.CPU71.NMI:Non-maskable_interrupts
      6763 ± 31%    -100.0%       0.00        interrupts.CPU71.PMI:Performance_monitoring_interrupts
      6782 ± 28%    -100.0%       0.00        interrupts.CPU72.NMI:Non-maskable_interrupts
      6782 ± 28%    -100.0%       0.00        interrupts.CPU72.PMI:Performance_monitoring_interrupts
      4688 ± 46%    -100.0%       0.00        interrupts.CPU73.NMI:Non-maskable_interrupts
      4688 ± 46%    -100.0%       0.00        interrupts.CPU73.PMI:Performance_monitoring_interrupts
      6292 ± 36%    -100.0%       0.00        interrupts.CPU74.NMI:Non-maskable_interrupts
      6292 ± 36%    -100.0%       0.00        interrupts.CPU74.PMI:Performance_monitoring_interrupts
      5875 ± 38%    -100.0%       0.00        interrupts.CPU75.NMI:Non-maskable_interrupts
      5875 ± 38%    -100.0%       0.00        interrupts.CPU75.PMI:Performance_monitoring_interrupts
      6015 ± 34%    -100.0%       0.00        interrupts.CPU76.NMI:Non-maskable_interrupts
      6015 ± 34%    -100.0%       0.00        interrupts.CPU76.PMI:Performance_monitoring_interrupts
      6457 ± 36%    -100.0%       0.00        interrupts.CPU77.NMI:Non-maskable_interrupts
      6457 ± 36%    -100.0%       0.00        interrupts.CPU77.PMI:Performance_monitoring_interrupts
      4371 ± 39%    -100.0%       0.00        interrupts.CPU78.NMI:Non-maskable_interrupts
      4371 ± 39%    -100.0%       0.00        interrupts.CPU78.PMI:Performance_monitoring_interrupts
      5423 ± 50%    -100.0%       0.00        interrupts.CPU79.NMI:Non-maskable_interrupts
      5423 ± 50%    -100.0%       0.00        interrupts.CPU79.PMI:Performance_monitoring_interrupts
      5007 ± 35%    -100.0%       0.00        interrupts.CPU8.NMI:Non-maskable_interrupts
      5007 ± 35%    -100.0%       0.00        interrupts.CPU8.PMI:Performance_monitoring_interrupts
      5027 ± 39%    -100.0%       0.00        interrupts.CPU80.NMI:Non-maskable_interrupts
      5027 ± 39%    -100.0%       0.00        interrupts.CPU80.PMI:Performance_monitoring_interrupts
      6581 ± 31%    -100.0%       0.00        interrupts.CPU81.NMI:Non-maskable_interrupts
      6581 ± 31%    -100.0%       0.00        interrupts.CPU81.PMI:Performance_monitoring_interrupts
      5304 ± 37%    -100.0%       0.00        interrupts.CPU82.NMI:Non-maskable_interrupts
      5304 ± 37%    -100.0%       0.00        interrupts.CPU82.PMI:Performance_monitoring_interrupts
      7058 ± 29%    -100.0%       0.00        interrupts.CPU83.NMI:Non-maskable_interrupts
      7058 ± 29%    -100.0%       0.00        interrupts.CPU83.PMI:Performance_monitoring_interrupts
      6839 ± 29%    -100.0%       1.00 ±223%  interrupts.CPU84.NMI:Non-maskable_interrupts
      6839 ± 29%    -100.0%       1.00 ±223%  interrupts.CPU84.PMI:Performance_monitoring_interrupts
      1464 ± 19%     -40.4%     873.33 ± 19%  interrupts.CPU85.CAL:Function_call_interrupts
      5206 ± 36%    -100.0%       0.00        interrupts.CPU85.NMI:Non-maskable_interrupts
      5206 ± 36%    -100.0%       0.00        interrupts.CPU85.PMI:Performance_monitoring_interrupts
      4338 ± 33%    -100.0%       0.00        interrupts.CPU86.NMI:Non-maskable_interrupts
      4338 ± 33%    -100.0%       0.00        interrupts.CPU86.PMI:Performance_monitoring_interrupts
      6159 ± 34%    -100.0%       0.00        interrupts.CPU87.NMI:Non-maskable_interrupts
      6159 ± 34%    -100.0%       0.00        interrupts.CPU87.PMI:Performance_monitoring_interrupts
      4127 ± 39%    -100.0%       0.00        interrupts.CPU88.NMI:Non-maskable_interrupts
      4127 ± 39%    -100.0%       0.00        interrupts.CPU88.PMI:Performance_monitoring_interrupts
      3836 ± 26%    -100.0%       0.00        interrupts.CPU89.NMI:Non-maskable_interrupts
      3836 ± 26%    -100.0%       0.00        interrupts.CPU89.PMI:Performance_monitoring_interrupts
      5586 ± 23%    -100.0%       0.00        interrupts.CPU9.NMI:Non-maskable_interrupts
      5586 ± 23%    -100.0%       0.00        interrupts.CPU9.PMI:Performance_monitoring_interrupts
      4182 ± 34%    -100.0%       0.00        interrupts.CPU90.NMI:Non-maskable_interrupts
      4182 ± 34%    -100.0%       0.00        interrupts.CPU90.PMI:Performance_monitoring_interrupts
      5805 ± 33%    -100.0%       0.00        interrupts.CPU91.NMI:Non-maskable_interrupts
      5805 ± 33%    -100.0%       0.00        interrupts.CPU91.PMI:Performance_monitoring_interrupts
      4879 ± 49%    -100.0%       0.00        interrupts.CPU92.NMI:Non-maskable_interrupts
      4879 ± 49%    -100.0%       0.00        interrupts.CPU92.PMI:Performance_monitoring_interrupts
      4739 ± 41%    -100.0%       0.00        interrupts.CPU93.NMI:Non-maskable_interrupts
      4739 ± 41%    -100.0%       0.00        interrupts.CPU93.PMI:Performance_monitoring_interrupts
      5676 ± 32%    -100.0%       0.00        interrupts.CPU94.NMI:Non-maskable_interrupts
      5676 ± 32%    -100.0%       0.00        interrupts.CPU94.PMI:Performance_monitoring_interrupts
      4715 ± 36%    -100.0%       0.00        interrupts.CPU95.NMI:Non-maskable_interrupts
      4715 ± 36%    -100.0%       0.00        interrupts.CPU95.PMI:Performance_monitoring_interrupts
      6426 ± 25%    -100.0%       0.00        interrupts.CPU96.NMI:Non-maskable_interrupts
      6426 ± 25%    -100.0%       0.00        interrupts.CPU96.PMI:Performance_monitoring_interrupts
      4766 ± 40%    -100.0%       0.00        interrupts.CPU97.NMI:Non-maskable_interrupts
      4766 ± 40%    -100.0%       0.00        interrupts.CPU97.PMI:Performance_monitoring_interrupts
      4756 ± 55%    -100.0%       0.00        interrupts.CPU98.NMI:Non-maskable_interrupts
      4756 ± 55%    -100.0%       0.00        interrupts.CPU98.PMI:Performance_monitoring_interrupts
      4505 ± 42%    -100.0%       0.00        interrupts.CPU99.NMI:Non-maskable_interrupts
      4505 ± 42%    -100.0%       0.00        interrupts.CPU99.PMI:Performance_monitoring_interrupts
    220.00 ± 10%    -100.0%       0.00        interrupts.IWI:IRQ_work_interrupts
    850938 ±  9%    -100.0%       4.00 ± 70%  interrupts.NMI:Non-maskable_interrupts
    850938 ±  9%    -100.0%       4.00 ± 70%  interrupts.PMI:Performance_monitoring_interrupts
     17818 ± 19%     +27.6%      22728        interrupts.RES:Rescheduling_interrupts

***************************************************************************************************
lkp-csl-2ap2: 192 threads Intel(R) Xeon(R) Platinum 9242 CPU @ 2.30GHz with 192G memory
=========================================================================================
compiler/cpufreq_governor/kconfig/mode/nr_task/rootfs/tbox_group/test/testcase/ucode:
  gcc-9/performance/x86_64-rhel-8.3/thread/16/debian-10.4-x86_64-20200603.cgz/lkp-csl-2ap2/eventfd1/will-it-scale/0x5003006

commit:
  dde38d5b80 ("kernel: don't include PF_IO_WORKERs as part of same_thread_group()")
  43b2a76b1a ("proc: don't show PF_IO_WORKER threads as threads in /proc/<pid>/task/")

dde38d5b805358c7 43b2a76b1a5abcc9833b463bef1
---------------- ---------------------------
         %stddev     %change         %stddev
             \          |                \
  43214853           -11.2%   38388140        will-it-scale.16.threads
   2700928           -11.2%    2399258        will-it-scale.per_thread_ops
  43214853           -11.2%   38388140        will-it-scale.workload
    209671 ±  3%     +15.5%     242077 ±  8%  cpuidle.POLL.time
      1.33 ±  3%      -0.4        0.95 ±  6%  mpstat.cpu.all.irq%
      6.68            +0.8        7.51        mpstat.cpu.all.sys%
      1.44            -0.1        1.29        mpstat.cpu.all.usr%
   -360497           +76.7%    -636909        sched_debug.cfs_rq:/.spread0.min
    325.86 ±  3%     +12.2%     365.55        sched_debug.cpu.curr->pid.avg
      0.08 ±  2%     +12.7%       0.10        sched_debug.cpu.nr_running.avg
   1388429          +194.7%    4091858 ±  5%  vmstat.memory.cache
     15.33 ±  3%     +14.1%      17.50 ±  2%  vmstat.procs.r
      2171 ±  2%     +11.7%       2424        vmstat.system.cs
    267406 ± 85%    +141.4%     645584 ± 22%  numa-numastat.node2.local_node
    348818 ± 62%    +104.3%     712543 ± 19%  numa-numastat.node2.numa_hit
     91038 ± 56%   +1249.0%    1228124 ± 47%  numa-numastat.node3.local_node
    163213 ± 27%    +691.0%    1291079 ± 43%  numa-numastat.node3.numa_hit
     55302 ±  3%   +4880.3%    2754233 ±  8%  meminfo.Active
     55302 ±  3%   +4880.3%    2754233 ±  8%  meminfo.Active(anon)
    181287           +13.1%     205014        meminfo.AnonHugePages
    272534           +13.4%     309031        meminfo.AnonPages
   1257134          +214.6%    3955164 ±  5%  meminfo.Cached
    553283          +501.4%    3327574 ±  6%  meminfo.Committed_AS
    292577           +12.0%     327736        meminfo.Inactive
    292577           +12.0%     327736        meminfo.Inactive(anon)
   3295068          +105.1%    6756683 ±  3%  meminfo.Memused
      4596            +9.7%       5042 ±  3%  meminfo.PageTables
     75666 ±  2%   +3565.7%    2773697 ±  8%  meminfo.Shmem
   4560901           +48.3%    6764175 ±  3%  meminfo.max_used_kB
      1021 ± 57%  +15711.6%     161515 ±112%  numa-vmstat.node1.nr_active_anon
     73543          +221.2%     236242 ± 76%  numa-vmstat.node1.nr_file_pages
      2323 ± 46%   +6877.5%     162122 ±111%  numa-vmstat.node1.nr_shmem
      1021 ± 57%  +15711.6%     161515 ±112%  numa-vmstat.node1.nr_zone_active_anon
    316.50 ±147%  +53983.4%     171174 ± 51%  numa-vmstat.node2.nr_active_anon
    316.50 ±147%  +53983.4%     171173 ± 51%  numa-vmstat.node2.nr_zone_active_anon
     12254 ±  4%   +2792.3%     354441 ± 64%  numa-vmstat.node3.nr_active_anon
     88086 ±  4%    +387.8%     429716 ± 54%  numa-vmstat.node3.nr_file_pages
     13051 ±  7%   +2624.8%     355611 ± 64%  numa-vmstat.node3.nr_shmem
     12254 ±  4%   +2792.3%     354440 ± 64%  numa-vmstat.node3.nr_zone_active_anon
    626785 ± 17%     +80.1%    1128600 ± 35%  numa-vmstat.node3.numa_hit
    484870 ± 14%    +103.1%     984592 ± 45%  numa-vmstat.node3.numa_local
     10653 ± 11%    +128.5%      24347 ± 44%  softirqs.CPU120.RCU
     12699 ± 12%    +159.7%      32976 ± 38%  softirqs.CPU144.RCU
     11305 ± 12%    +213.3%      35413 ± 25%  softirqs.CPU168.RCU
     18.33 ± 31%  +22963.6%       4228 ±220%  softirqs.CPU171.TIMER
      8949 ± 20%     +23.6%      11062 ±  6%  softirqs.CPU19.RCU
     10900 ± 13%    +198.9%      32579 ± 39%  softirqs.CPU24.RCU
     12760 ± 13%    +227.8%      41830 ± 36%  softirqs.CPU48.RCU
     11503 ± 12%    +395.4%      56995 ± 22%  softirqs.CPU72.RCU
     38943 ±  3%     -24.2%      29535 ± 15%  softirqs.CPU72.SCHED
     11376 ± 11%     +51.4%      17228 ± 29%  softirqs.CPU73.RCU
     22224 ± 33%     +72.3%      38292 ± 18%  softirqs.CPU95.SCHED
     35412 ±  7%     +43.3%      50754 ±  3%  softirqs.TIMER
     26462 ± 15%     +35.3%      35792 ±  9%  slabinfo.ep_head.active_objs
     26462 ± 15%     +35.3%      35792 ±  9%  slabinfo.ep_head.num_objs
     75918 ±  8%     +20.5%      91486 ±  6%  slabinfo.filp.active_objs
      1193 ±  8%     +20.7%       1440 ±  6%  slabinfo.filp.active_slabs
     76405 ±  8%     +20.7%      92234 ±  6%  slabinfo.filp.num_objs
      1193 ±  8%     +20.7%       1440 ±  6%  slabinfo.filp.num_slabs
     21100 ±  4%     -25.2%      15778        slabinfo.proc_inode_cache.active_objs
    453.17 ±  3%     -24.3%     343.17 ±  4%  slabinfo.proc_inode_cache.active_slabs
     21778 ±  3%     -24.2%      16502 ±  4%  slabinfo.proc_inode_cache.num_objs
    453.17 ±  3%     -24.3%     343.17 ±  4%  slabinfo.proc_inode_cache.num_slabs
     27102           +38.6%      37559 ±  2%  slabinfo.radix_tree_node.active_objs
    483.67           +38.6%     670.17 ±  2%  slabinfo.radix_tree_node.active_slabs
     27102           +38.6%      37559 ±  2%  slabinfo.radix_tree_node.num_objs
    483.67           +38.6%     670.17 ±  2%  slabinfo.radix_tree_node.num_slabs
    794693 ±  6%     +31.0%    1041224 ±  7%  numa-meminfo.node0.MemUsed
      4087 ± 57%  +15693.8%     645624 ±112%  numa-meminfo.node1.Active
      4087 ± 57%  +15693.8%     645624 ±112%  numa-meminfo.node1.Active(anon)
    290544 ± 67%     -66.4%      97537 ±136%  numa-meminfo.node1.AnonPages.max
    294175          +221.1%     944532 ± 76%  numa-meminfo.node1.FilePages
    828264 ±  9%     +94.4%    1610164 ± 48%  numa-meminfo.node1.MemUsed
      9295 ± 46%   +6871.7%     648053 ±111%  numa-meminfo.node1.Shmem
      1267 ±147%  +53883.4%     684239 ± 51%  numa-meminfo.node2.Active
      1267 ±147%  +53883.4%     684239 ± 51%  numa-meminfo.node2.Active(anon)
    808960 ± 14%    +101.6%    1630678 ± 20%  numa-meminfo.node2.MemUsed
     48997 ±  4%   +2791.0%    1416511 ± 64%  numa-meminfo.node3.Active
     48997 ±  4%   +2791.0%    1416511 ± 64%  numa-meminfo.node3.Active(anon)
    352399 ±  4%    +387.4%    1717609 ± 54%  numa-meminfo.node3.FilePages
    864759 ± 10%    +185.9%    2472106 ± 41%  numa-meminfo.node3.MemUsed
     52258 ±  7%   +2619.5%    1421190 ± 64%  numa-meminfo.node3.Shmem
     13805 ±  3%   +4877.1%     687100 ±  8%  proc-vmstat.nr_active_anon
     68161           +13.4%      77273        proc-vmstat.nr_anon_pages
     88.00           +13.3%      99.67        proc-vmstat.nr_anon_transparent_hugepages
   4835093            -1.8%    4748853        proc-vmstat.nr_dirty_background_threshold
   9682010            -1.8%    9509318        proc-vmstat.nr_dirty_threshold
    314263          +214.2%     987332 ±  5%  proc-vmstat.nr_file_pages
  48604647            -1.8%   47740981        proc-vmstat.nr_free_pages
     73174           +12.0%      81950        proc-vmstat.nr_inactive_anon
      9737            +1.7%       9901        proc-vmstat.nr_mapped
      1143            +9.9%       1257 ±  3%  proc-vmstat.nr_page_table_pages
     18896 ±  2%   +3561.8%     691965 ±  8%  proc-vmstat.nr_shmem
     73004 ±  2%      +4.5%      76305        proc-vmstat.nr_slab_unreclaimable
     13805 ±  3%   +4877.1%     687100 ±  8%  proc-vmstat.nr_zone_active_anon
     73174           +12.0%      81950        proc-vmstat.nr_zone_inactive_anon
   1245634          +139.6%    2984321 ±  4%  proc-vmstat.numa_hit
    985877          +176.4%    2724559 ±  5%  proc-vmstat.numa_local
     18296 ±  4%    +408.0%      92953 ±  7%  proc-vmstat.pgactivate
   1337635          +215.9%    4225989 ±  7%  proc-vmstat.pgalloc_normal
   1180413            -2.9%    1145893        proc-vmstat.pgfault
   1433097           +78.9%    2563488 ±  9%  proc-vmstat.pgfree
     36.17 ± 16%     -36.2        0.00        perf-profile.calltrace.cycles-pp.secondary_startup_64_no_verify
     35.53 ±  9%     -35.5        0.00        perf-profile.calltrace.cycles-pp.__libc_read
     35.31 ± 16%     -35.3        0.00        perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
     35.31 ± 16%     -35.3        0.00        perf-profile.calltrace.cycles-pp.start_secondary.secondary_startup_64_no_verify
     35.30 ± 16%     -35.3        0.00        perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
     33.49 ± 18%     -33.5        0.00        perf-profile.calltrace.cycles-pp.cpuidle_enter.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
     32.53 ± 18%     -32.5        0.00        perf-profile.calltrace.cycles-pp.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry.start_secondary
     31.39 ±  9%     -31.4        0.00        perf-profile.calltrace.cycles-pp.__libc_write
     29.95 ± 21%     -30.0        0.00        perf-profile.calltrace.cycles-pp.intel_idle.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry
     24.63 ±  9%     -24.6        0.00        perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.__libc_read
     22.68 ±  9%     -22.7        0.00        perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.__libc_read
     21.76 ±  9%     -21.8        0.00        perf-profile.calltrace.cycles-pp.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe.__libc_read
     20.33 ±  9%     -20.3        0.00        perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.__libc_write
     18.38 ±  9%     -18.4        0.00        perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.__libc_write
     17.45 ±  9%     -17.4        0.00        perf-profile.calltrace.cycles-pp.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.__libc_write
     16.99 ±  9%     -17.0        0.00        perf-profile.calltrace.cycles-pp.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe.__libc_read
     12.74 ±  9%     -12.7        0.00        perf-profile.calltrace.cycles-pp.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.__libc_write
      9.94 ±  8%      -9.9        0.00        perf-profile.calltrace.cycles-pp.new_sync_read.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
      8.35 ±  9%      -8.4        0.00        perf-profile.calltrace.cycles-pp.eventfd_read.new_sync_read.vfs_read.ksys_read.do_syscall_64
      7.38 ±  8%      -7.4        0.00        perf-profile.calltrace.cycles-pp.eventfd_write.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
      6.14 ±  9%      -6.1        0.00        perf-profile.calltrace.cycles-pp.__entry_text_start.__libc_read
      6.14 ±  9%      -6.1        0.00        perf-profile.calltrace.cycles-pp.__entry_text_start.__libc_write
      4.91 ±  8%      -4.9        0.00        perf-profile.calltrace.cycles-pp._copy_to_iter.eventfd_read.new_sync_read.vfs_read.ksys_read
     45.19 ±  9%     -45.2        0.00        perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
     41.18 ±  9%     -41.2        0.00        perf-profile.children.cycles-pp.do_syscall_64
     36.17 ± 16%     -36.2        0.00        perf-profile.children.cycles-pp.secondary_startup_64_no_verify
     36.17 ± 16%     -36.2        0.00        perf-profile.children.cycles-pp.cpu_startup_entry
     36.17 ± 16%     -36.2        0.00        perf-profile.children.cycles-pp.do_idle
     35.71 ±  9%     -35.7        0.00        perf-profile.children.cycles-pp.__libc_read
     35.31 ± 16%     -35.3        0.00        perf-profile.children.cycles-pp.start_secondary
     34.36 ± 18%     -34.4        0.00        perf-profile.children.cycles-pp.cpuidle_enter
     34.36 ± 18%     -34.4        0.00        perf-profile.children.cycles-pp.cpuidle_enter_state
     31.50 ±  9%     -31.5        0.00        perf-profile.children.cycles-pp.__libc_write
     29.98 ± 21%     -30.0        0.00        perf-profile.children.cycles-pp.intel_idle
     21.86 ±  9%     -21.9        0.00        perf-profile.children.cycles-pp.ksys_read
     17.54 ±  9%     -17.5        0.00        perf-profile.children.cycles-pp.ksys_write
     17.13 ±  9%     -17.1        0.00        perf-profile.children.cycles-pp.vfs_read
     12.86 ±  9%     -12.9        0.00        perf-profile.children.cycles-pp.vfs_write
     10.08 ±  8%     -10.1        0.00        perf-profile.children.cycles-pp.new_sync_read
      8.52 ±  9%      -8.5        0.00        perf-profile.children.cycles-pp.eventfd_read
      7.84 ±  9%      -7.8        0.00        perf-profile.children.cycles-pp.__entry_text_start
      7.50 ±  8%      -7.5        0.00        perf-profile.children.cycles-pp.eventfd_write
      6.93 ± 13%      -6.9        0.00        perf-profile.children.cycles-pp.security_file_permission
      5.61 ± 10%      -5.6        0.00        perf-profile.children.cycles-pp.syscall_return_via_sysret
      5.26 ±  9%      -5.3        0.00        perf-profile.children.cycles-pp.__fdget_pos
      4.94 ±  8%      -4.9        0.00        perf-profile.children.cycles-pp._copy_to_iter
      4.87 ±  9%      -4.9        0.00        perf-profile.children.cycles-pp.__fget_light
     29.98 ± 21%     -30.0        0.00        perf-profile.self.cycles-pp.intel_idle
      5.61 ± 10%      -5.6        0.00        perf-profile.self.cycles-pp.syscall_return_via_sysret
      0.27 ±  4%    +187.6%       0.78 ±  4%  perf-stat.i.MPKI
      1.01            -0.1        0.94        perf-stat.i.branch-miss-rate%
 1.684e+08            -8.1%  1.548e+08        perf-stat.i.branch-misses
   5263026 ± 16%    +284.1%   20213018 ± 27%  perf-stat.i.cache-misses
  21258325 ±  4%    +191.1%   61876726 ±  4%  perf-stat.i.cache-references
      2125 ±  2%     +11.9%       2378        perf-stat.i.context-switches
      0.70           +15.1%       0.81        perf-stat.i.cpi
 5.638e+10           +14.6%  6.461e+10        perf-stat.i.cpu-cycles
    200.10            -2.2%     195.71        perf-stat.i.cpu-migrations
     11051 ± 12%     -52.8%       5212 ± 28%  perf-stat.i.cycles-between-cache-misses
 2.319e+10            -3.7%  2.233e+10        perf-stat.i.dTLB-loads
 1.578e+10            -4.3%   1.51e+10        perf-stat.i.dTLB-stores
 1.625e+08            -8.6%  1.485e+08        perf-stat.i.iTLB-load-misses
   4212709            +5.2%    4432071        perf-stat.i.iTLB-loads
    495.19            +8.9%     539.34        perf-stat.i.instructions-per-iTLB-miss
      1.42           -13.1%       1.24        perf-stat.i.ipc
      1.57 ± 25%     -82.1%       0.28 ±116%  perf-stat.i.major-faults
      0.29           +14.6%       0.34        perf-stat.i.metric.GHz
    289.95            -2.9%     281.60        perf-stat.i.metric.M/sec
      3789            -2.9%       3680        perf-stat.i.minor-faults
     91.19 ±  3%      +3.8       95.02        perf-stat.i.node-load-miss-rate%
    540603 ±  6%    +803.0%    4881488 ± 31%  perf-stat.i.node-load-misses
     54777 ± 29%    +332.8%     237066 ± 53%  perf-stat.i.node-loads
     81627 ±  9%   +1713.7%    1480480 ± 27%  perf-stat.i.node-store-misses
      7216 ± 50%    +412.7%      36993 ± 13%  perf-stat.i.node-stores
      3791            -2.9%       3680        perf-stat.i.page-faults
      0.26 ±  4%    +192.5%       0.77 ±  4%  perf-stat.overall.MPKI
      1.01            -0.1        0.93        perf-stat.overall.branch-miss-rate%
      0.70           +15.1%       0.81        perf-stat.overall.cpi
     10940 ± 12%     -66.9%       3625 ± 44%  perf-stat.overall.cycles-between-cache-misses
    494.00            +8.9%     537.89        perf-stat.overall.instructions-per-iTLB-miss
      1.42           -13.2%       1.24        perf-stat.overall.ipc
     91.80 ±  4%      +5.5       97.27        perf-stat.overall.node-store-miss-rate%
    559399           +12.0%     626315        perf-stat.overall.path-length
 1.678e+08            -8.1%  1.542e+08        perf-stat.ps.branch-misses
   5245703 ± 16%    +284.1%   20149392 ± 27%  perf-stat.ps.cache-misses
  21185953 ±  4%    +191.1%   61663943 ±  4%  perf-stat.ps.cache-references
      2117 ±  2%     +11.9%       2370        perf-stat.ps.context-switches
 5.618e+10           +14.6%  6.439e+10        perf-stat.ps.cpu-cycles
    199.47            -2.2%     195.05        perf-stat.ps.cpu-migrations
 2.311e+10            -3.7%  2.226e+10        perf-stat.ps.dTLB-loads
 1.573e+10            -4.3%  1.505e+10        perf-stat.ps.dTLB-stores
 1.619e+08            -8.6%   1.48e+08        perf-stat.ps.iTLB-load-misses
   4198384            +5.2%    4417013        perf-stat.ps.iTLB-loads
      1.56 ± 25%     -82.1%       0.28 ±116%  perf-stat.ps.major-faults
      3776            -2.9%       3668        perf-stat.ps.minor-faults
    538931 ±  6%    +802.9%    4866157 ± 31%  perf-stat.ps.node-load-misses
     54593 ± 28%    +333.0%     236386 ± 53%  perf-stat.ps.node-loads
     81359 ±  9%   +1714.0%    1475868 ± 27%  perf-stat.ps.node-store-misses
      7209 ± 49%    +411.4%      36869 ± 13%  perf-stat.ps.node-stores
      3777            -2.9%       3668        perf-stat.ps.page-faults
      0.01 ± 32%    -100.0%       0.00        perf-sched.sch_delay.avg.ms.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown]
      0.01 ±  8%    -100.0%       0.00        perf-sched.sch_delay.avg.ms.devkmsg_read.vfs_read.ksys_read.do_syscall_64
      0.01 ± 27%    -100.0%       0.00        perf-sched.sch_delay.avg.ms.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep.do_syscall_64
      0.01 ± 31%    -100.0%       0.00        perf-sched.sch_delay.avg.ms.do_syslog.part.0.kmsg_read.vfs_read
      0.01          -100.0%       0.00        perf-sched.sch_delay.avg.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64
      0.00          -100.0%       0.00        perf-sched.sch_delay.avg.ms.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64
      0.01 ± 25%    -100.0%       0.00        perf-sched.sch_delay.avg.ms.futex_wait_queue_me.futex_wait.do_futex.__x64_sys_futex
      0.00 ± 31%    -100.0%       0.00        perf-sched.sch_delay.avg.ms.pipe_read.new_sync_read.vfs_read.ksys_read
      0.01 ± 45%    -100.0%       0.00        perf-sched.sch_delay.avg.ms.rcu_gp_kthread.kthread.ret_from_fork
      0.01 ± 23%    -100.0%       0.00        perf-sched.sch_delay.avg.ms.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait
      0.01 ± 24%    -100.0%       0.00        perf-sched.sch_delay.avg.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_select
      0.00 ± 27%    -100.0%       0.00        perf-sched.sch_delay.avg.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_sys_poll
      0.01 ±  3%    -100.0%       0.00        perf-sched.sch_delay.avg.ms.schedule_timeout.kcompactd.kthread.ret_from_fork
      0.01 ± 54%    -100.0%       0.00        perf-sched.sch_delay.avg.ms.schedule_timeout.rcu_gp_kthread.kthread.ret_from_fork
      0.01 ± 66%    -100.0%       0.00        perf-sched.sch_delay.avg.ms.schedule_timeout.wait_for_completion.__flush_work.lru_add_drain_all
      0.01 ± 28%    -100.0%       0.00        perf-sched.sch_delay.avg.ms.smpboot_thread_fn.kthread.ret_from_fork
      0.00 ± 17%    -100.0%       0.00        perf-sched.sch_delay.avg.ms.wait_for_partner.fifo_open.do_dentry_open.do_open.isra
      0.01 ±  3%    -100.0%       0.00        perf-sched.sch_delay.avg.ms.worker_thread.kthread.ret_from_fork
      0.01 ± 37%    -100.0%       0.00        perf-sched.sch_delay.max.ms.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown]
      0.01 ± 24%    -100.0%       0.00        perf-sched.sch_delay.max.ms.devkmsg_read.vfs_read.ksys_read.do_syscall_64
      0.01 ± 31%    -100.0%       0.00        perf-sched.sch_delay.max.ms.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep.do_syscall_64
      0.01 ± 46%    -100.0%       0.00        perf-sched.sch_delay.max.ms.do_syslog.part.0.kmsg_read.vfs_read
      0.03 ± 22%    -100.0%       0.00        perf-sched.sch_delay.max.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64
      0.01 ± 23%    -100.0%       0.00        perf-sched.sch_delay.max.ms.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64
      0.01 ± 24%    -100.0%       0.00        perf-sched.sch_delay.max.ms.futex_wait_queue_me.futex_wait.do_futex.__x64_sys_futex
      0.02 ± 16%    -100.0%       0.00        perf-sched.sch_delay.max.ms.pipe_read.new_sync_read.vfs_read.ksys_read
      0.05 ± 25%    -100.0%       0.00        perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.stop_one_cpu.affine_move_task.__set_cpus_allowed_ptr
      0.01 ± 35%    -100.0%       0.00        perf-sched.sch_delay.max.ms.rcu_gp_kthread.kthread.ret_from_fork
      0.02 ±  7%    -100.0%       0.00        perf-sched.sch_delay.max.ms.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait
      0.01 ± 18%    -100.0%       0.00        perf-sched.sch_delay.max.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_select
      0.01 ± 22%    -100.0%       0.00        perf-sched.sch_delay.max.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_sys_poll
      0.02 ± 13%    -100.0%       0.00        perf-sched.sch_delay.max.ms.schedule_timeout.kcompactd.kthread.ret_from_fork
      0.85 ±218%    -100.0%       0.00        perf-sched.sch_delay.max.ms.schedule_timeout.rcu_gp_kthread.kthread.ret_from_fork
      0.01 ± 61%    -100.0%       0.00        perf-sched.sch_delay.max.ms.schedule_timeout.wait_for_completion.__flush_work.lru_add_drain_all
      4.55 ± 99%    -100.0%       0.00        perf-sched.sch_delay.max.ms.smpboot_thread_fn.kthread.ret_from_fork
      0.01 ± 61%    -100.0%       0.00        perf-sched.sch_delay.max.ms.wait_for_partner.fifo_open.do_dentry_open.do_open.isra
      3.03          -100.0%       0.00        perf-sched.sch_delay.max.ms.worker_thread.kthread.ret_from_fork
      0.00 ± 20%    -100.0%       0.00        perf-sched.total_sch_delay.average.ms
      6.38 ± 43%    -100.0%       0.00        perf-sched.total_sch_delay.max.ms
    213.38 ±  2%    -100.0%       0.00        perf-sched.total_wait_and_delay.average.ms
     11290          -100.0%       0.00        perf-sched.total_wait_and_delay.count.ms
      8764 ±  2%    -100.0%       0.00        perf-sched.total_wait_and_delay.max.ms
    213.37 ±  2%    -100.0%       0.00        perf-sched.total_wait_time.average.ms
      8764 ±  2%    -100.0%       0.00        perf-sched.total_wait_time.max.ms
    899.48          -100.0%       0.00        perf-sched.wait_and_delay.avg.ms.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown]
      2244 ±  8%    -100.0%       0.00        perf-sched.wait_and_delay.avg.ms.devkmsg_read.vfs_read.ksys_read.do_syscall_64
    808.95 ±  4%    -100.0%       0.00        perf-sched.wait_and_delay.avg.ms.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep.do_syscall_64
      2244 ±  8%    -100.0%       0.00        perf-sched.wait_and_delay.avg.ms.do_syslog.part.0.kmsg_read.vfs_read
    223.77          -100.0%       0.00        perf-sched.wait_and_delay.avg.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64
      0.73          -100.0%       0.00        perf-sched.wait_and_delay.avg.ms.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64
      0.02 ±  8%    -100.0%       0.00        perf-sched.wait_and_delay.avg.ms.exit_to_user_mode_prepare.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe.[unknown]
     51.96 ±  5%    -100.0%       0.00        perf-sched.wait_and_delay.avg.ms.pipe_read.new_sync_read.vfs_read.ksys_read
      4.56 ±  7%    -100.0%       0.00        perf-sched.wait_and_delay.avg.ms.preempt_schedule_common.__cond_resched.stop_one_cpu.affine_move_task.__set_cpus_allowed_ptr
      2.83 ±  5%    -100.0%       0.00        perf-sched.wait_and_delay.avg.ms.rcu_gp_kthread.kthread.ret_from_fork
    615.34 ± 12%    -100.0%       0.00        perf-sched.wait_and_delay.avg.ms.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait
    642.95 ± 12%    -100.0%       0.00        perf-sched.wait_and_delay.avg.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_sys_poll
    478.58          -100.0%       0.00        perf-sched.wait_and_delay.avg.ms.schedule_timeout.kcompactd.kthread.ret_from_fork
      5.95 ± 13%    -100.0%       0.00        perf-sched.wait_and_delay.avg.ms.schedule_timeout.rcu_gp_kthread.kthread.ret_from_fork
    695.58          -100.0%       0.00        perf-sched.wait_and_delay.avg.ms.smpboot_thread_fn.kthread.ret_from_fork
      0.00 ± 17%    -100.0%       0.00        perf-sched.wait_and_delay.avg.ms.wait_for_partner.fifo_open.do_dentry_open.do_open.isra
    433.84 ±  3%    -100.0%       0.00        perf-sched.wait_and_delay.avg.ms.worker_thread.kthread.ret_from_fork
     10.00          -100.0%       0.00        perf-sched.wait_and_delay.count.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown]
      3.83 ±  9%    -100.0%       0.00        perf-sched.wait_and_delay.count.devkmsg_read.vfs_read.ksys_read.do_syscall_64
     22.33 ±  4%    -100.0%       0.00        perf-sched.wait_and_delay.count.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep.do_syscall_64
      3.83 ±  9%    -100.0%       0.00        perf-sched.wait_and_delay.count.do_syslog.part.0.kmsg_read.vfs_read
    301.67          -100.0%       0.00        perf-sched.wait_and_delay.count.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64
    300.00          -100.0%       0.00        perf-sched.wait_and_delay.count.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64
    230.33 ± 10%    -100.0%       0.00        perf-sched.wait_and_delay.count.exit_to_user_mode_prepare.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe.[unknown]
      3072 ±  5%    -100.0%       0.00        perf-sched.wait_and_delay.count.pipe_read.new_sync_read.vfs_read.ksys_read
      2142          -100.0%       0.00        perf-sched.wait_and_delay.count.preempt_schedule_common.__cond_resched.stop_one_cpu.affine_move_task.__set_cpus_allowed_ptr
    115.33 ± 21%    -100.0%       0.00        perf-sched.wait_and_delay.count.rcu_gp_kthread.kthread.ret_from_fork
21.00 ± 49% -100.0% 0.00 perf-sched.wait_and_delay.count.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait 54.33 ± 14% -100.0% 0.00 perf-sched.wait_and_delay.count.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_sys_poll 79.33 -100.0% 0.00 perf-sched.wait_and_delay.count.schedule_timeout.kcompactd.kthread.ret_from_fork 1660 ± 15% -100.0% 0.00 perf-sched.wait_and_delay.count.schedule_timeout.rcu_gp_kthread.kthread.ret_from_fork 2528 -100.0% 0.00 perf-sched.wait_and_delay.count.smpboot_thread_fn.kthread.ret_from_fork 71.00 -100.0% 0.00 perf-sched.wait_and_delay.count.wait_for_partner.fifo_open.do_dentry_open.do_open.isra 626.50 ± 4% -100.0% 0.00 perf-sched.wait_and_delay.count.worker_thread.kthread.ret_from_fork 999.62 -100.0% 0.00 perf-sched.wait_and_delay.max.ms.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown] 6643 ± 7% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.devkmsg_read.vfs_read.ksys_read.do_syscall_64 1000 -100.0% 0.00 perf-sched.wait_and_delay.max.ms.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep.do_syscall_64 6643 ± 7% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.do_syslog.part.0.kmsg_read.vfs_read 4627 ± 19% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64 16.87 ± 8% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64 0.13 ± 2% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.exit_to_user_mode_prepare.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe.[unknown] 6646 ± 7% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.pipe_read.new_sync_read.vfs_read.ksys_read 1478 ± 39% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.preempt_schedule_common.__cond_resched.stop_one_cpu.affine_move_task.__set_cpus_allowed_ptr 4.84 ± 7% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.rcu_gp_kthread.kthread.ret_from_fork 6281 ± 8% -100.0% 0.00 
perf-sched.wait_and_delay.max.ms.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait 7307 -100.0% 0.00 perf-sched.wait_and_delay.max.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_sys_poll 504.85 -100.0% 0.00 perf-sched.wait_and_delay.max.ms.schedule_timeout.kcompactd.kthread.ret_from_fork 193.84 ± 49% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.schedule_timeout.rcu_gp_kthread.kthread.ret_from_fork 1031 -100.0% 0.00 perf-sched.wait_and_delay.max.ms.smpboot_thread_fn.kthread.ret_from_fork 0.01 ± 61% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.wait_for_partner.fifo_open.do_dentry_open.do_open.isra 8764 ± 2% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.worker_thread.kthread.ret_from_fork 899.47 -100.0% 0.00 perf-sched.wait_time.avg.ms.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown] 2244 ± 8% -100.0% 0.00 perf-sched.wait_time.avg.ms.devkmsg_read.vfs_read.ksys_read.do_syscall_64 808.94 ± 4% -100.0% 0.00 perf-sched.wait_time.avg.ms.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep.do_syscall_64 2244 ± 8% -100.0% 0.00 perf-sched.wait_time.avg.ms.do_syslog.part.0.kmsg_read.vfs_read 223.76 -100.0% 0.00 perf-sched.wait_time.avg.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64 0.73 -100.0% 0.00 perf-sched.wait_time.avg.ms.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64 0.02 ± 10% -100.0% 0.00 perf-sched.wait_time.avg.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_apic_timer_interrupt.[unknown] 0.02 ± 8% -100.0% 0.00 perf-sched.wait_time.avg.ms.exit_to_user_mode_prepare.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe.[unknown] 51.96 ± 5% -100.0% 0.00 perf-sched.wait_time.avg.ms.pipe_read.new_sync_read.vfs_read.ksys_read 4.56 ± 7% -100.0% 0.00 perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.stop_one_cpu.affine_move_task.__set_cpus_allowed_ptr 2.83 ± 5% -100.0% 0.00 perf-sched.wait_time.avg.ms.rcu_gp_kthread.kthread.ret_from_fork 
615.33 ± 12% -100.0% 0.00 perf-sched.wait_time.avg.ms.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait 642.95 ± 12% -100.0% 0.00 perf-sched.wait_time.avg.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_sys_poll 478.57 -100.0% 0.00 perf-sched.wait_time.avg.ms.schedule_timeout.kcompactd.kthread.ret_from_fork 0.01 ± 42% -100.0% 0.00 perf-sched.wait_time.avg.ms.schedule_timeout.khugepaged.kthread.ret_from_fork 5.94 ± 13% -100.0% 0.00 perf-sched.wait_time.avg.ms.schedule_timeout.rcu_gp_kthread.kthread.ret_from_fork 695.57 -100.0% 0.00 perf-sched.wait_time.avg.ms.smpboot_thread_fn.kthread.ret_from_fork 433.83 ± 3% -100.0% 0.00 perf-sched.wait_time.avg.ms.worker_thread.kthread.ret_from_fork 999.61 -100.0% 0.00 perf-sched.wait_time.max.ms.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown] 6643 ± 7% -100.0% 0.00 perf-sched.wait_time.max.ms.devkmsg_read.vfs_read.ksys_read.do_syscall_64 1000 -100.0% 0.00 perf-sched.wait_time.max.ms.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep.do_syscall_64 6643 ± 7% -100.0% 0.00 perf-sched.wait_time.max.ms.do_syslog.part.0.kmsg_read.vfs_read 4627 ± 19% -100.0% 0.00 perf-sched.wait_time.max.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64 16.86 ± 8% -100.0% 0.00 perf-sched.wait_time.max.ms.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64 0.07 ± 19% -100.0% 0.00 perf-sched.wait_time.max.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_apic_timer_interrupt.[unknown] 0.13 ± 2% -100.0% 0.00 perf-sched.wait_time.max.ms.exit_to_user_mode_prepare.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe.[unknown] 6646 ± 7% -100.0% 0.00 perf-sched.wait_time.max.ms.pipe_read.new_sync_read.vfs_read.ksys_read 1478 ± 39% -100.0% 0.00 perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.stop_one_cpu.affine_move_task.__set_cpus_allowed_ptr 4.83 ± 7% -100.0% 0.00 perf-sched.wait_time.max.ms.rcu_gp_kthread.kthread.ret_from_fork 
      6281 ±  8%     -100.0%       0.00        perf-sched.wait_time.max.ms.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait
      7307           -100.0%       0.00        perf-sched.wait_time.max.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_sys_poll
    504.83           -100.0%       0.00        perf-sched.wait_time.max.ms.schedule_timeout.kcompactd.kthread.ret_from_fork
      0.01 ± 42%     -100.0%       0.00        perf-sched.wait_time.max.ms.schedule_timeout.khugepaged.kthread.ret_from_fork
    193.83 ± 49%     -100.0%       0.00        perf-sched.wait_time.max.ms.schedule_timeout.rcu_gp_kthread.kthread.ret_from_fork
      1031           -100.0%       0.00        perf-sched.wait_time.max.ms.smpboot_thread_fn.kthread.ret_from_fork
      8764 ±  2%     -100.0%       0.00        perf-sched.wait_time.max.ms.worker_thread.kthread.ret_from_fork
[ per-CPU interrupt counters snipped for brevity: interrupts.CPUn.NMI:Non-maskable_interrupts
  and interrupts.CPUn.PMI:Performance_monitoring_interrupts for the individual CPUs all
  changed by ~-100.0%, from per-CPU counts in roughly the 74..6727 range down to 0.00
  (or to 1.00 ±223% on a handful of CPUs).  The remaining per-CPU changes in this span: ]
    117.67 ± 29%      +62.7%     191.50 ± 18%  interrupts.CPU0.TLB:TLB_shootdowns
      8.17 ± 32%     +391.8%      40.17 ±101%  interrupts.CPU72.RES:Rescheduling_interrupts
      1950 ± 69%      -50.8%     960.67 ± 12%  interrupts.CPU132.CAL:Function_call_interrupts
     23.67 ±178%      -89.4%       2.50 ± 38%  interrupts.CPU174.TLB:TLB_shootdowns
      6149 ± 34%     -100.0%       0.00        interrupts.CPU97.NMI:Non-maskable_interrupts
      6149 ± 34%     -100.0%       0.00        interrupts.CPU97.PMI:Performance_monitoring_interrupts
      6343 ± 34%     -100.0%       0.00        interrupts.CPU98.NMI:Non-maskable_interrupts
      6343 ± 34%     -100.0%       0.00        interrupts.CPU98.PMI:Performance_monitoring_interrupts
      5764 ± 48%     -100.0%       0.00        interrupts.CPU99.NMI:Non-maskable_interrupts
      5764 ± 48%     -100.0%       0.00        interrupts.CPU99.PMI:Performance_monitoring_interrupts
    193783 ± 15%     -100.0%       5.00 ± 44%  interrupts.NMI:Non-maskable_interrupts
    193783 ± 15%     -100.0%       5.00 ± 44%  interrupts.PMI:Performance_monitoring_interrupts

***************************************************************************************************
lkp-ivb-2ep1: 48 threads Intel(R) Xeon(R) CPU E5-2697 v2 @ 2.70GHz with 112G memory
=========================================================================================
compiler/cpufreq_governor/kconfig/mode/nr_task/rootfs/tbox_group/test/testcase/ucode:
  gcc-9/performance/x86_64-rhel-8.3/thread/50%/debian-10.4-x86_64-20200603.cgz/lkp-ivb-2ep1/pthread_mutex2/will-it-scale/0x42e

commit:
  dde38d5b80 ("kernel: don't include PF_IO_WORKERs as part of same_thread_group()")
  43b2a76b1a ("proc: don't show PF_IO_WORKER threads as threads in /proc/<pid>/task/")

dde38d5b805358c7 43b2a76b1a5abcc9833b463bef1
---------------- ---------------------------
         %stddev     %change         %stddev
             \          |                \
 1.011e+09            -1.1%      1e+09        will-it-scale.24.threads
     49.37            -8.3%      45.27        will-it-scale.24.threads_idle
  42139506            -1.1%   41666665        will-it-scale.per_thread_ops
      9462            -1.2%       9353        will-it-scale.time.maximum_resident_set_size
 1.011e+09            -1.1%      1e+09        will-it-scale.workload
 2.183e+08 ±158%     -98.5%    3326559 ± 57%  cpuidle.C1.time
   2039102 ±145%     -95.7%      88045 ± 20%  cpuidle.C1.usage
      0.03 ±  7%      +0.0        0.06 ±  8%  mpstat.cpu.all.soft%
      0.07            +3.6        3.63        mpstat.cpu.all.sys%
    335382 ±  3%    +168.4%     900180 ±  3%  numa-numastat.node0.local_node
    358572          +157.8%     924386 ±  4%  numa-numastat.node0.numa_hit
    357922 ±  4%    +140.6%     861062 ± 13%  numa-numastat.node1.local_node
    378211          +132.8%     880335 ± 13%  numa-numastat.node1.numa_hit
     49.00            -8.2%      45.00        vmstat.cpu.id
   1331524          +119.7%    2925854 ±  4%  vmstat.memory.cache
      1204 ±  4%     +29.6%       1560 ± 12%  vmstat.system.cs
     97341            -9.6%      88013 ± 20%  vmstat.system.in
   1215742 ±  9%     +21.4%    1476302 ±  9%  sched_debug.cfs_rq:/.min_vruntime.stddev
   1215743 ±  9%     +21.4%    1476304 ±  9%  sched_debug.cfs_rq:/.spread0.stddev
    631.82 ±  5%     +12.4%     710.17 ±  5%  sched_debug.cpu.clock_task.stddev
     16279 ± 10%     +45.1%      23620 ± 28%  sched_debug.cpu.nr_switches.max
      3630 ±  6%     +43.8%       5221 ± 19%  sched_debug.cpu.nr_switches.stddev
     55854         +2840.8%    1642581 ±  8%  meminfo.Active
     55854         +2840.8%    1642581 ±  8%  meminfo.Active(anon)
   1249460          +126.9%    2835486 ±  4%  meminfo.Cached
    569208          +286.4%    2199528 ±  6%  meminfo.Committed_AS
   2154458           +83.0%    3943062 ±  3%  meminfo.Memused
     68481         +2316.0%    1654510 ±  8%  meminfo.Shmem
   2358961           +67.2%    3943062 ±  3%  meminfo.max_used_kB
     31940 ± 23%     +71.3%      54714 ± 24%  softirqs.CPU3.RCU
     28861 ± 20%     +64.7%      47539 ± 18%  softirqs.CPU17.RCU
     26156 ± 29%     +91.5%      50084 ± 32%  softirqs.CPU20.RCU
     27321 ± 22%     +81.3%      49544 ± 36%  softirqs.CPU21.RCU
     27722 ± 24%     +73.1%      47979 ± 34%  softirqs.CPU23.RCU
     23717 ± 36%     +83.5%      43525 ± 29%  softirqs.CPU25.RCU
     22980 ± 17%     +48.4%      34105 ± 27%  softirqs.CPU26.RCU
     21891 ± 33%     +76.5%      38632 ± 32%  softirqs.CPU33.RCU
     23364 ±  9%     +78.3%      41664 ±  3%  softirqs.TIMER
      7695 ±158%  +10976.3%     852396 ±  3%  numa-meminfo.node0.Active
      7695 ±158%  +10976.3%     852396 ±  3%  numa-meminfo.node0.Active(anon)
    582895 ±  2%    +148.2%    1446798 ±  2%  numa-meminfo.node0.FilePages
   1016604 ±  5%    +100.1%    2034165 ±  3%  numa-meminfo.node0.MemUsed
     12403 ±124%   +6833.6%     859972 ±  3%  numa-meminfo.node0.Shmem
     48162 ± 25%   +1545.1%     792318 ± 16%  numa-meminfo.node1.Active
     48162 ± 25%   +1545.1%     792318 ± 16%  numa-meminfo.node1.Active(anon)
    666609 ±  2%    +108.6%    1390820 ±  9%  numa-meminfo.node1.FilePages
   1137162 ±  5%     +68.0%    1910988 ±  6%  numa-meminfo.node1.MemUsed
     56122 ± 27%   +1319.5%     796670 ± 16%  numa-meminfo.node1.Shmem
      1300 ±  3%      +6.7%       1386 ±  5%  slabinfo.PING.active_objs
      1300 ±  3%      +6.7%       1386 ±  5%  slabinfo.PING.num_objs
     14822 ±  3%     +16.2%      17223 ±  3%  slabinfo.filp.active_objs
     15079 ±  3%     +15.8%      17454 ±  3%  slabinfo.filp.num_objs
      5680 ±  5%     +13.1%       6425 ±  6%  slabinfo.kmalloc-256.num_objs
      9120 ±  2%      -9.9%       8216        slabinfo.proc_inode_cache.active_objs
      9120 ±  2%      -9.9%       8216        slabinfo.proc_inode_cache.num_objs
     17802           +34.7%      23981 ±  2%  slabinfo.radix_tree_node.active_objs
    635.33           +34.7%     855.83 ±  2%  slabinfo.radix_tree_node.active_slabs
     17802           +34.7%      23981 ±  2%  slabinfo.radix_tree_node.num_objs
    635.33           +34.7%     855.83 ±  2%  slabinfo.radix_tree_node.num_slabs
      1928 ±158%  +10951.9%     213080 ±  3%  numa-vmstat.node0.nr_active_anon
    145728 ±  2%    +148.2%     361681 ±  2%  numa-vmstat.node0.nr_file_pages
      3104 ±124%   +6824.2%     214974 ±  3%  numa-vmstat.node0.nr_shmem
      1928 ±158%  +10951.9%     213080 ±  3%  numa-vmstat.node0.nr_zone_active_anon
    606912 ±  3%     +60.2%     972395 ±  9%  numa-vmstat.node0.numa_hit
    581707 ±  3%     +62.7%     946561 ±  9%  numa-vmstat.node0.numa_local
     12042 ± 25%   +1544.7%     198064 ± 16%  numa-vmstat.node1.nr_active_anon
    166664 ±  2%    +108.6%     347689 ±  9%  numa-vmstat.node1.nr_file_pages
     14042 ± 27%   +1318.2%     199151 ± 16%  numa-vmstat.node1.nr_shmem
     12042 ± 25%   +1544.7%     198064 ± 16%  numa-vmstat.node1.nr_zone_active_anon
    786995 ±  2%     +20.5%     948188 ±  8%  numa-vmstat.node1.numa_hit
    595281 ±  3%     +27.2%     757182 ± 11%  numa-vmstat.node1.numa_local
     13967         +2845.7%     411425 ±  8%  proc-vmstat.nr_active_anon
     60843            +4.5%      63565        proc-vmstat.nr_anon_pages
   2812307            -1.6%    2767564        proc-vmstat.nr_dirty_background_threshold
   5631492            -1.6%    5541896        proc-vmstat.nr_dirty_threshold
    312380          +127.2%     709652 ±  4%  proc-vmstat.nr_file_pages
  28305732            -1.6%   27857642        proc-vmstat.nr_free_pages
     63916            +3.9%      66424        proc-vmstat.nr_inactive_anon
      1032            +9.8%       1134        proc-vmstat.nr_page_table_pages
     17135         +2318.4%     414407 ±  8%  proc-vmstat.nr_shmem
     20477            +2.8%      21044        proc-vmstat.nr_slab_reclaimable
     26196            +2.7%      26902        proc-vmstat.nr_slab_unreclaimable
     13967         +2845.7%     411425 ±  8%  proc-vmstat.nr_zone_active_anon
     63916            +3.9%      66424        proc-vmstat.nr_zone_inactive_anon
    768626          +138.5%    1832947 ±  7%  proc-vmstat.numa_hit
    725125          +146.8%    1789403 ±  8%  proc-vmstat.numa_local
     20705          +167.3%      55350 ±  8%  proc-vmstat.pgactivate
    824559          +162.5%    2164854 ± 10%  proc-vmstat.pgalloc_normal
    900993            -3.3%     871699        proc-vmstat.pgfault
    793890           +57.7%    1251577 ± 12%  proc-vmstat.pgfree
     40.43           -40.4        0.00        perf-profile.calltrace.cycles-pp.__pthread_mutex_unlock_usercnt
     30.75           -30.7        0.00        perf-profile.calltrace.cycles-pp.__pthread_mutex_lock
     25.27 ±  2%     -25.3        0.00        perf-profile.calltrace.cycles-pp.secondary_startup_64_no_verify
     25.12 ±  2%     -25.1        0.00        perf-profile.calltrace.cycles-pp.intel_idle.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry
     24.70 ±  3%     -24.7        0.00        perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
     24.70 ±  3%     -24.7        0.00        perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
     24.70 ±  3%     -24.7        0.00        perf-profile.calltrace.cycles-pp.start_secondary.secondary_startup_64_no_verify
     24.69 ±  3%     -24.7        0.00        perf-profile.calltrace.cycles-pp.cpuidle_enter.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
     24.69 ±  3%     -24.7        0.00        perf-profile.calltrace.cycles-pp.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry.start_secondary
     40.45           -40.5        0.00        perf-profile.children.cycles-pp.__pthread_mutex_unlock_usercnt
     29.72           -29.7        0.00        perf-profile.children.cycles-pp.__pthread_mutex_lock
     25.27 ±  2%     -25.3        0.00        perf-profile.children.cycles-pp.secondary_startup_64_no_verify
     25.27 ±  2%     -25.3        0.00        perf-profile.children.cycles-pp.cpu_startup_entry
     25.27 ±  2%     -25.3        0.00        perf-profile.children.cycles-pp.do_idle
     25.27 ±  2%     -25.3        0.00        perf-profile.children.cycles-pp.cpuidle_enter
     25.27 ±  2%     -25.3        0.00        perf-profile.children.cycles-pp.cpuidle_enter_state
     25.23 ±  2%     -25.2        0.00        perf-profile.children.cycles-pp.intel_idle
     24.70 ±  3%     -24.7        0.00        perf-profile.children.cycles-pp.start_secondary
     39.05           -39.0        0.00        perf-profile.self.cycles-pp.__pthread_mutex_unlock_usercnt
     28.39           -28.4
0.00 perf-profile.self.cycles-pp.__pthread_mutex_lock 25.23 ± 2% -25.2 0.00 perf-profile.self.cycles-pp.intel_idle 0.06 ± 6% +349.5% 0.26 ± 4% perf-stat.i.MPKI 1.827e+10 +3.9% 1.898e+10 perf-stat.i.branch-instructions 0.03 ± 4% +0.0 0.05 ± 7% perf-stat.i.branch-miss-rate% 3794543 +101.9% 7662013 ± 4% perf-stat.i.branch-misses 8.46 +3.2 11.63 ± 10% perf-stat.i.cache-miss-rate% 271889 ± 2% +560.3% 1795222 ± 11% perf-stat.i.cache-misses 2696131 +464.1% 15208100 ± 3% perf-stat.i.cache-references 1177 ± 4% +30.4% 1534 ± 12% perf-stat.i.context-switches 1.24 +1.4% 1.26 perf-stat.i.cpi 7.216e+10 +8.0% 7.795e+10 perf-stat.i.cpu-cycles 377929 -87.9% 45789 ± 14% perf-stat.i.cycles-between-cache-misses 0.00 ± 9% +0.0 0.04 ± 8% perf-stat.i.dTLB-load-miss-rate% 705819 ± 10% +879.9% 6916409 ± 9% perf-stat.i.dTLB-load-misses 1.625e+10 +5.0% 1.706e+10 perf-stat.i.dTLB-loads 0.00 +0.0 0.01 ± 7% perf-stat.i.dTLB-store-miss-rate% 133719 +704.5% 1075749 ± 8% perf-stat.i.dTLB-store-misses 9.158e+09 +6.1% 9.72e+09 perf-stat.i.dTLB-stores 932128 +61.7% 1507066 ± 19% perf-stat.i.iTLB-load-misses 408295 +51.5% 618581 ± 19% perf-stat.i.iTLB-loads 5.803e+10 +6.6% 6.184e+10 perf-stat.i.instructions 62728 -32.2% 42545 ± 17% perf-stat.i.instructions-per-iTLB-miss 0.80 -1.3% 0.79 perf-stat.i.ipc 0.39 ± 8% -74.3% 0.10 ± 60% perf-stat.i.major-faults 1.50 +8.0% 1.62 perf-stat.i.metric.GHz 0.64 ± 19% -76.7% 0.15 ± 2% perf-stat.i.metric.K/sec 909.94 +4.8% 953.80 perf-stat.i.metric.M/sec 2873 -3.3% 2777 perf-stat.i.minor-faults 47.43 +1.2 48.63 perf-stat.i.node-load-miss-rate% 114583 ± 3% +509.4% 698321 ± 12% perf-stat.i.node-load-misses 137610 ± 2% +436.1% 737758 ± 11% perf-stat.i.node-loads 34.91 ± 8% +10.8 45.68 perf-stat.i.node-store-miss-rate% 62663 ± 10% +1255.3% 849308 ± 13% perf-stat.i.node-store-misses 122110 ± 2% +730.3% 1013883 ± 11% perf-stat.i.node-stores 2874 -3.4% 2777 perf-stat.i.page-faults 0.05 +428.9% 0.25 ± 3% perf-stat.overall.MPKI 0.02 +0.0 0.04 ± 4% 
perf-stat.overall.branch-miss-rate% 1.24 +1.4% 1.26 perf-stat.overall.cpi 264888 ± 2% -83.4% 44079 ± 13% perf-stat.overall.cycles-between-cache-misses 0.00 ± 10% +0.0 0.04 ± 9% perf-stat.overall.dTLB-load-miss-rate% 0.00 +0.0 0.01 ± 7% perf-stat.overall.dTLB-store-miss-rate% 62253 -31.7% 42506 ± 17% perf-stat.overall.instructions-per-iTLB-miss 0.80 -1.3% 0.79 perf-stat.overall.ipc 45.41 +3.2 48.61 perf-stat.overall.node-load-miss-rate% 33.83 ± 6% +11.7 45.52 perf-stat.overall.node-store-miss-rate% 17279 +7.7% 18615 perf-stat.overall.path-length 1.821e+10 +3.9% 1.892e+10 perf-stat.ps.branch-instructions 3797105 +101.4% 7647672 ± 4% perf-stat.ps.branch-misses 271714 ± 2% +558.7% 1789851 ± 11% perf-stat.ps.cache-misses 2689343 +463.7% 15159877 ± 3% perf-stat.ps.cache-references 1173 ± 4% +30.4% 1529 ± 12% perf-stat.ps.context-switches 7.192e+10 +8.0% 7.769e+10 perf-stat.ps.cpu-cycles 703591 ± 10% +879.7% 6893346 ± 9% perf-stat.ps.dTLB-load-misses 1.619e+10 +5.0% 1.7e+10 perf-stat.ps.dTLB-loads 133302 +704.3% 1072174 ± 8% perf-stat.ps.dTLB-store-misses 9.128e+09 +6.1% 9.687e+09 perf-stat.ps.dTLB-stores 929016 +61.7% 1502046 ± 19% perf-stat.ps.iTLB-load-misses 406921 +51.5% 616502 ± 19% perf-stat.ps.iTLB-loads 5.783e+10 +6.6% 6.164e+10 perf-stat.ps.instructions 0.39 ± 8% -73.9% 0.10 ± 60% perf-stat.ps.major-faults 2864 -3.3% 2769 perf-stat.ps.minor-faults 114340 ± 3% +508.6% 695927 ± 12% perf-stat.ps.node-load-misses 137393 ± 2% +435.2% 735278 ± 11% perf-stat.ps.node-loads 62541 ± 10% +1253.4% 846469 ± 13% perf-stat.ps.node-store-misses 121956 ± 2% +728.7% 1010682 ± 11% perf-stat.ps.node-stores 2864 -3.3% 2769 perf-stat.ps.page-faults 1.748e+13 +6.5% 1.862e+13 perf-stat.total.instructions 71373 -4.0% 68522 ± 2% interrupts.CAL:Function_call_interrupts 1514 ± 33% -46.9% 803.67 ± 23% interrupts.CPU0.CAL:Function_call_interrupts 5403 ± 41% -100.0% 0.00 interrupts.CPU0.NMI:Non-maskable_interrupts 5403 ± 41% -100.0% 0.00 interrupts.CPU0.PMI:Performance_monitoring_interrupts 
4816 ± 41% -100.0% 0.00 interrupts.CPU1.NMI:Non-maskable_interrupts 4816 ± 41% -100.0% 0.00 interrupts.CPU1.PMI:Performance_monitoring_interrupts 6629 ± 33% -100.0% 0.00 interrupts.CPU10.NMI:Non-maskable_interrupts 6629 ± 33% -100.0% 0.00 interrupts.CPU10.PMI:Performance_monitoring_interrupts 4804 ± 52% -100.0% 0.00 interrupts.CPU11.NMI:Non-maskable_interrupts 4804 ± 52% -100.0% 0.00 interrupts.CPU11.PMI:Performance_monitoring_interrupts 5812 ± 44% -100.0% 0.00 interrupts.CPU12.NMI:Non-maskable_interrupts 5812 ± 44% -100.0% 0.00 interrupts.CPU12.PMI:Performance_monitoring_interrupts 6413 ± 43% -100.0% 0.00 interrupts.CPU13.NMI:Non-maskable_interrupts 6413 ± 43% -100.0% 0.00 interrupts.CPU13.PMI:Performance_monitoring_interrupts 4832 ± 34% -100.0% 0.00 interrupts.CPU14.NMI:Non-maskable_interrupts 4832 ± 34% -100.0% 0.00 interrupts.CPU14.PMI:Performance_monitoring_interrupts 3257 ± 72% -100.0% 0.00 interrupts.CPU15.NMI:Non-maskable_interrupts 3257 ± 72% -100.0% 0.00 interrupts.CPU15.PMI:Performance_monitoring_interrupts 5558 ± 43% -100.0% 1.17 ±223% interrupts.CPU16.NMI:Non-maskable_interrupts 5558 ± 43% -100.0% 1.17 ±223% interrupts.CPU16.PMI:Performance_monitoring_interrupts 5157 ± 51% -100.0% 0.00 interrupts.CPU17.NMI:Non-maskable_interrupts 5157 ± 51% -100.0% 0.00 interrupts.CPU17.PMI:Performance_monitoring_interrupts 5066 ± 47% -100.0% 0.00 interrupts.CPU18.NMI:Non-maskable_interrupts 5066 ± 47% -100.0% 0.00 interrupts.CPU18.PMI:Performance_monitoring_interrupts 5390 ± 55% -100.0% 0.00 interrupts.CPU19.NMI:Non-maskable_interrupts 5390 ± 55% -100.0% 0.00 interrupts.CPU19.PMI:Performance_monitoring_interrupts 4525 ± 59% -100.0% 0.00 interrupts.CPU2.NMI:Non-maskable_interrupts 4525 ± 59% -100.0% 0.00 interrupts.CPU2.PMI:Performance_monitoring_interrupts 3595 ± 65% -100.0% 0.00 interrupts.CPU20.NMI:Non-maskable_interrupts 3595 ± 65% -100.0% 0.00 interrupts.CPU20.PMI:Performance_monitoring_interrupts 1176 ± 3% +31.5% 1547 ± 11% 
interrupts.CPU21.CAL:Function_call_interrupts 2625 ± 33% -100.0% 0.00 interrupts.CPU21.NMI:Non-maskable_interrupts 2625 ± 33% -100.0% 0.00 interrupts.CPU21.PMI:Performance_monitoring_interrupts 3091 ± 38% -100.0% 0.00 interrupts.CPU22.NMI:Non-maskable_interrupts 3091 ± 38% -100.0% 0.00 interrupts.CPU22.PMI:Performance_monitoring_interrupts 4912 ± 40% -100.0% 0.00 interrupts.CPU23.NMI:Non-maskable_interrupts 4912 ± 40% -100.0% 0.00 interrupts.CPU23.PMI:Performance_monitoring_interrupts 353.83 ± 70% +93.5% 684.50 ± 16% interrupts.CPU23.TLB:TLB_shootdowns 4163 ± 48% -100.0% 0.00 interrupts.CPU24.NMI:Non-maskable_interrupts 4163 ± 48% -100.0% 0.00 interrupts.CPU24.PMI:Performance_monitoring_interrupts 3528 ± 34% -100.0% 0.00 interrupts.CPU25.NMI:Non-maskable_interrupts 3528 ± 34% -100.0% 0.00 interrupts.CPU25.PMI:Performance_monitoring_interrupts 4547 ± 58% -100.0% 1.00 ±223% interrupts.CPU26.NMI:Non-maskable_interrupts 4547 ± 58% -100.0% 1.00 ±223% interrupts.CPU26.PMI:Performance_monitoring_interrupts 2164 ± 30% -100.0% 0.00 interrupts.CPU27.NMI:Non-maskable_interrupts 2164 ± 30% -100.0% 0.00 interrupts.CPU27.PMI:Performance_monitoring_interrupts 2623 ± 46% -100.0% 0.00 interrupts.CPU28.NMI:Non-maskable_interrupts 2623 ± 46% -100.0% 0.00 interrupts.CPU28.PMI:Performance_monitoring_interrupts 5794 ± 46% -100.0% 0.00 interrupts.CPU29.NMI:Non-maskable_interrupts 5794 ± 46% -100.0% 0.00 interrupts.CPU29.PMI:Performance_monitoring_interrupts 6970 ± 28% -100.0% 0.00 interrupts.CPU3.NMI:Non-maskable_interrupts 6970 ± 28% -100.0% 0.00 interrupts.CPU3.PMI:Performance_monitoring_interrupts 4503 ± 58% -100.0% 0.00 interrupts.CPU30.NMI:Non-maskable_interrupts 4503 ± 58% -100.0% 0.00 interrupts.CPU30.PMI:Performance_monitoring_interrupts 3705 ± 60% -100.0% 0.00 interrupts.CPU31.NMI:Non-maskable_interrupts 3705 ± 60% -100.0% 0.00 interrupts.CPU31.PMI:Performance_monitoring_interrupts 2774 ± 68% -100.0% 0.00 interrupts.CPU32.NMI:Non-maskable_interrupts 2774 ± 68% -100.0% 0.00 
interrupts.CPU32.PMI:Performance_monitoring_interrupts 3319 ± 70% -100.0% 0.00 interrupts.CPU33.NMI:Non-maskable_interrupts 3319 ± 70% -100.0% 0.00 interrupts.CPU33.PMI:Performance_monitoring_interrupts 2910 ± 83% -100.0% 0.00 interrupts.CPU34.NMI:Non-maskable_interrupts 2910 ± 83% -100.0% 0.00 interrupts.CPU34.PMI:Performance_monitoring_interrupts 3566 ± 65% -100.0% 0.00 interrupts.CPU35.NMI:Non-maskable_interrupts 3566 ± 65% -100.0% 0.00 interrupts.CPU35.PMI:Performance_monitoring_interrupts 3549 ± 66% -100.0% 0.00 interrupts.CPU36.NMI:Non-maskable_interrupts 3549 ± 66% -100.0% 0.00 interrupts.CPU36.PMI:Performance_monitoring_interrupts 2859 ± 39% -100.0% 0.00 interrupts.CPU37.NMI:Non-maskable_interrupts 2859 ± 39% -100.0% 0.00 interrupts.CPU37.PMI:Performance_monitoring_interrupts 3614 ± 31% -100.0% 1.00 ±223% interrupts.CPU38.NMI:Non-maskable_interrupts 3614 ± 31% -100.0% 1.00 ±223% interrupts.CPU38.PMI:Performance_monitoring_interrupts 6220 ± 41% -100.0% 0.00 interrupts.CPU39.NMI:Non-maskable_interrupts 6220 ± 41% -100.0% 0.00 interrupts.CPU39.PMI:Performance_monitoring_interrupts 4899 ± 52% -100.0% 0.00 interrupts.CPU4.NMI:Non-maskable_interrupts 4899 ± 52% -100.0% 0.00 interrupts.CPU4.PMI:Performance_monitoring_interrupts 3095 ± 40% -100.0% 0.00 interrupts.CPU40.NMI:Non-maskable_interrupts 3095 ± 40% -100.0% 0.00 interrupts.CPU40.PMI:Performance_monitoring_interrupts 4956 ± 59% -100.0% 0.00 interrupts.CPU41.NMI:Non-maskable_interrupts 4956 ± 59% -100.0% 0.00 interrupts.CPU41.PMI:Performance_monitoring_interrupts 3084 ± 30% -100.0% 0.00 interrupts.CPU42.NMI:Non-maskable_interrupts 3084 ± 30% -100.0% 0.00 interrupts.CPU42.PMI:Performance_monitoring_interrupts 2112 ± 41% -37.6% 1318 ± 14% interrupts.CPU43.CAL:Function_call_interrupts 3760 ± 62% -100.0% 0.00 interrupts.CPU43.NMI:Non-maskable_interrupts 3760 ± 62% -100.0% 0.00 interrupts.CPU43.PMI:Performance_monitoring_interrupts 3737 ± 30% -100.0% 0.00 interrupts.CPU44.NMI:Non-maskable_interrupts 3737 ± 30% 
-100.0% 0.00 interrupts.CPU44.PMI:Performance_monitoring_interrupts 4646 ± 31% -100.0% 0.00 interrupts.CPU45.NMI:Non-maskable_interrupts 4646 ± 31% -100.0% 0.00 interrupts.CPU45.PMI:Performance_monitoring_interrupts 4689 ± 57% -100.0% 0.00 interrupts.CPU46.NMI:Non-maskable_interrupts 4689 ± 57% -100.0% 0.00 interrupts.CPU46.PMI:Performance_monitoring_interrupts 1412 ± 7% -21.9% 1102 ± 12% interrupts.CPU47.CAL:Function_call_interrupts 2319 ± 42% -100.0% 0.00 interrupts.CPU47.NMI:Non-maskable_interrupts 2319 ± 42% -100.0% 0.00 interrupts.CPU47.PMI:Performance_monitoring_interrupts 3399 ± 68% -100.0% 0.00 interrupts.CPU5.NMI:Non-maskable_interrupts 3399 ± 68% -100.0% 0.00 interrupts.CPU5.PMI:Performance_monitoring_interrupts 3778 ± 59% -100.0% 0.00 interrupts.CPU6.NMI:Non-maskable_interrupts 3778 ± 59% -100.0% 0.00 interrupts.CPU6.PMI:Performance_monitoring_interrupts 5151 ± 48% -100.0% 0.00 interrupts.CPU7.NMI:Non-maskable_interrupts 5151 ± 48% -100.0% 0.00 interrupts.CPU7.PMI:Performance_monitoring_interrupts 5561 ± 35% -100.0% 0.00 interrupts.CPU8.NMI:Non-maskable_interrupts 5561 ± 35% -100.0% 0.00 interrupts.CPU8.PMI:Performance_monitoring_interrupts 5821 ± 46% -100.0% 0.00 interrupts.CPU9.NMI:Non-maskable_interrupts 5821 ± 46% -100.0% 0.00 interrupts.CPU9.PMI:Performance_monitoring_interrupts 207608 ± 2% -100.0% 3.17 ±100% interrupts.NMI:Non-maskable_interrupts 207608 ± 2% -100.0% 3.17 ±100% interrupts.PMI:Performance_monitoring_interrupts 0.01 ± 7% -100.0% 0.00 perf-sched.sch_delay.avg.ms.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown] 0.01 ± 17% -100.0% 0.00 perf-sched.sch_delay.avg.ms.devkmsg_read.vfs_read.ksys_read.do_syscall_64 0.01 ± 5% -100.0% 0.00 perf-sched.sch_delay.avg.ms.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep.do_syscall_64 0.01 ± 8% -100.0% 0.00 perf-sched.sch_delay.avg.ms.do_syslog.part.0.kmsg_read.vfs_read 0.01 -100.0% 0.00 
perf-sched.sch_delay.avg.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64 0.01 ± 11% -100.0% 0.00 perf-sched.sch_delay.avg.ms.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64 0.01 ± 23% -100.0% 0.00 perf-sched.sch_delay.avg.ms.exit_to_user_mode_prepare.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe.[unknown] 0.02 ± 62% -100.0% 0.00 perf-sched.sch_delay.avg.ms.futex_wait_queue_me.futex_wait.do_futex.__x64_sys_futex 0.00 ± 14% -100.0% 0.00 perf-sched.sch_delay.avg.ms.pipe_read.new_sync_read.vfs_read.ksys_read 0.01 ± 6% -100.0% 0.00 perf-sched.sch_delay.avg.ms.rcu_gp_kthread.kthread.ret_from_fork 0.01 ± 39% -100.0% 0.00 perf-sched.sch_delay.avg.ms.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait 0.01 ± 14% -100.0% 0.00 perf-sched.sch_delay.avg.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_select 0.00 ± 19% -100.0% 0.00 perf-sched.sch_delay.avg.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_sys_poll 0.02 ±144% -100.0% 0.00 perf-sched.sch_delay.avg.ms.schedule_timeout.kcompactd.kthread.ret_from_fork 0.01 ± 39% -100.0% 0.00 perf-sched.sch_delay.avg.ms.schedule_timeout.rcu_gp_kthread.kthread.ret_from_fork 0.01 ± 28% -100.0% 0.00 perf-sched.sch_delay.avg.ms.schedule_timeout.wait_for_completion.__flush_work.lru_add_drain_all 0.01 ± 35% -100.0% 0.00 perf-sched.sch_delay.avg.ms.smpboot_thread_fn.kthread.ret_from_fork 0.01 ± 13% -100.0% 0.00 perf-sched.sch_delay.avg.ms.wait_for_partner.fifo_open.do_dentry_open.do_open.isra 0.05 ± 45% -100.0% 0.00 perf-sched.sch_delay.avg.ms.worker_thread.kthread.ret_from_fork 0.01 ± 39% -100.0% 0.00 perf-sched.sch_delay.max.ms.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown] 0.02 ± 19% -100.0% 0.00 perf-sched.sch_delay.max.ms.devkmsg_read.vfs_read.ksys_read.do_syscall_64 0.02 ± 29% -100.0% 0.00 perf-sched.sch_delay.max.ms.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep.do_syscall_64 0.02 ± 13% -100.0% 0.00 
perf-sched.sch_delay.max.ms.do_syslog.part.0.kmsg_read.vfs_read 0.03 ± 29% -100.0% 0.00 perf-sched.sch_delay.max.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64 0.04 ± 31% -100.0% 0.00 perf-sched.sch_delay.max.ms.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64 0.01 ± 14% -100.0% 0.00 perf-sched.sch_delay.max.ms.exit_to_user_mode_prepare.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe.[unknown] 0.02 ± 57% -100.0% 0.00 perf-sched.sch_delay.max.ms.futex_wait_queue_me.futex_wait.do_futex.__x64_sys_futex 0.04 ± 37% -100.0% 0.00 perf-sched.sch_delay.max.ms.pipe_read.new_sync_read.vfs_read.ksys_read 0.02 ± 7% -100.0% 0.00 perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.stop_one_cpu.affine_move_task.__set_cpus_allowed_ptr 0.01 ± 20% -100.0% 0.00 perf-sched.sch_delay.max.ms.rcu_gp_kthread.kthread.ret_from_fork 0.02 ± 47% -100.0% 0.00 perf-sched.sch_delay.max.ms.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait 0.01 ± 16% -100.0% 0.00 perf-sched.sch_delay.max.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_select 0.01 ± 73% -100.0% 0.00 perf-sched.sch_delay.max.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_sys_poll 0.51 ±217% -100.0% 0.00 perf-sched.sch_delay.max.ms.schedule_timeout.kcompactd.kthread.ret_from_fork 0.87 ±212% -100.0% 0.00 perf-sched.sch_delay.max.ms.schedule_timeout.rcu_gp_kthread.kthread.ret_from_fork 0.01 ± 28% -100.0% 0.00 perf-sched.sch_delay.max.ms.schedule_timeout.wait_for_completion.__flush_work.lru_add_drain_all 4.56 ± 70% -100.0% 0.00 perf-sched.sch_delay.max.ms.smpboot_thread_fn.kthread.ret_from_fork 0.03 ± 44% -100.0% 0.00 perf-sched.sch_delay.max.ms.wait_for_partner.fifo_open.do_dentry_open.do_open.isra 13.26 ± 46% -100.0% 0.00 perf-sched.sch_delay.max.ms.worker_thread.kthread.ret_from_fork 0.01 ± 11% -100.0% 0.00 perf-sched.total_sch_delay.average.ms 14.67 ± 31% -100.0% 0.00 perf-sched.total_sch_delay.max.ms 170.42 ± 2% 
-100.0% 0.00 perf-sched.total_wait_and_delay.average.ms 5988 ± 3% -100.0% 0.00 perf-sched.total_wait_and_delay.count.ms 8283 -100.0% 0.00 perf-sched.total_wait_and_delay.max.ms 170.41 ± 2% -100.0% 0.00 perf-sched.total_wait_time.average.ms 8283 -100.0% 0.00 perf-sched.total_wait_time.max.ms 899.72 -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown] 342.01 ± 45% -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.devkmsg_read.vfs_read.ksys_read.do_syscall_64 809.07 ± 4% -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep.do_syscall_64 342.00 ± 45% -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.do_syslog.part.0.kmsg_read.vfs_read 269.87 ± 4% -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64 1.17 -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64 0.09 ± 40% -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_apic_timer_interrupt.[unknown] 0.82 ±212% -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_reschedule_ipi.[unknown] 68.47 -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.pipe_read.new_sync_read.vfs_read.ksys_read 17.33 ± 5% -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.preempt_schedule_common.__cond_resched.stop_one_cpu.affine_move_task.__set_cpus_allowed_ptr 2.42 ± 6% -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.rcu_gp_kthread.kthread.ret_from_fork 669.52 ± 10% -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_sys_poll 478.70 -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.schedule_timeout.kcompactd.kthread.ret_from_fork 13.10 ± 16% -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.schedule_timeout.rcu_gp_kthread.kthread.ret_from_fork 651.82 -100.0% 0.00 
perf-sched.wait_and_delay.avg.ms.smpboot_thread_fn.kthread.ret_from_fork 0.01 ± 13% -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.wait_for_partner.fifo_open.do_dentry_open.do_open.isra 531.21 ± 4% -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.worker_thread.kthread.ret_from_fork 10.00 -100.0% 0.00 perf-sched.wait_and_delay.count.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown] 7.50 ± 10% -100.0% 0.00 perf-sched.wait_and_delay.count.devkmsg_read.vfs_read.ksys_read.do_syscall_64 22.33 ± 4% -100.0% 0.00 perf-sched.wait_and_delay.count.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep.do_syscall_64 7.50 ± 10% -100.0% 0.00 perf-sched.wait_and_delay.count.do_syslog.part.0.kmsg_read.vfs_read 244.17 -100.0% 0.00 perf-sched.wait_and_delay.count.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64 243.50 -100.0% 0.00 perf-sched.wait_and_delay.count.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64 118.67 ± 23% -100.0% 0.00 perf-sched.wait_and_delay.count.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_apic_timer_interrupt.[unknown] 282.83 ± 4% -100.0% 0.00 perf-sched.wait_and_delay.count.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_reschedule_ipi.[unknown] 2254 -100.0% 0.00 perf-sched.wait_and_delay.count.pipe_read.new_sync_read.vfs_read.ksys_read 576.00 -100.0% 0.00 perf-sched.wait_and_delay.count.preempt_schedule_common.__cond_resched.stop_one_cpu.affine_move_task.__set_cpus_allowed_ptr 115.00 ± 13% -100.0% 0.00 perf-sched.wait_and_delay.count.rcu_gp_kthread.kthread.ret_from_fork 51.67 ± 11% -100.0% 0.00 perf-sched.wait_and_delay.count.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_sys_poll 39.83 -100.0% 0.00 perf-sched.wait_and_delay.count.schedule_timeout.kcompactd.kthread.ret_from_fork 765.83 ± 19% -100.0% 0.00 perf-sched.wait_and_delay.count.schedule_timeout.rcu_gp_kthread.kthread.ret_from_fork 678.67 -100.0% 0.00 
perf-sched.wait_and_delay.count.smpboot_thread_fn.kthread.ret_from_fork 71.00 -100.0% 0.00 perf-sched.wait_and_delay.count.wait_for_partner.fifo_open.do_dentry_open.do_open.isra 455.33 -100.0% 0.00 perf-sched.wait_and_delay.count.worker_thread.kthread.ret_from_fork 999.70 -100.0% 0.00 perf-sched.wait_and_delay.max.ms.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown] 1835 ± 58% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.devkmsg_read.vfs_read.ksys_read.do_syscall_64 1000 -100.0% 0.00 perf-sched.wait_and_delay.max.ms.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep.do_syscall_64 1835 ± 58% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.do_syslog.part.0.kmsg_read.vfs_read 3368 ± 47% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64 25.22 -100.0% 0.00 perf-sched.wait_and_delay.max.ms.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64 1.43 ±126% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_apic_timer_interrupt.[unknown] 226.21 ±222% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_reschedule_ipi.[unknown] 1781 ± 61% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.pipe_read.new_sync_read.vfs_read.ksys_read 1281 ± 22% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.preempt_schedule_common.__cond_resched.stop_one_cpu.affine_move_task.__set_cpus_allowed_ptr 4.83 ± 7% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.rcu_gp_kthread.kthread.ret_from_fork 8283 -100.0% 0.00 perf-sched.wait_and_delay.max.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_sys_poll 504.67 -100.0% 0.00 perf-sched.wait_and_delay.max.ms.schedule_timeout.kcompactd.kthread.ret_from_fork 282.00 ± 8% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.schedule_timeout.rcu_gp_kthread.kthread.ret_from_fork 2007 ± 70% -100.0% 0.00 
perf-sched.wait_and_delay.max.ms.smpboot_thread_fn.kthread.ret_from_fork 0.03 ± 44% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.wait_for_partner.fifo_open.do_dentry_open.do_open.isra 5797 ± 26% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.worker_thread.kthread.ret_from_fork 899.71 -100.0% 0.00 perf-sched.wait_time.avg.ms.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown] 342.00 ± 45% -100.0% 0.00 perf-sched.wait_time.avg.ms.devkmsg_read.vfs_read.ksys_read.do_syscall_64 809.06 ± 4% -100.0% 0.00 perf-sched.wait_time.avg.ms.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep.do_syscall_64 341.99 ± 45% -100.0% 0.00 perf-sched.wait_time.avg.ms.do_syslog.part.0.kmsg_read.vfs_read 269.86 ± 4% -100.0% 0.00 perf-sched.wait_time.avg.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64 1.16 -100.0% 0.00 perf-sched.wait_time.avg.ms.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64 0.09 ± 40% -100.0% 0.00 perf-sched.wait_time.avg.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_apic_timer_interrupt.[unknown] 0.04 ±105% -100.0% 0.00 perf-sched.wait_time.avg.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_call_function_single.[unknown] 0.82 ±212% -100.0% 0.00 perf-sched.wait_time.avg.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_reschedule_ipi.[unknown] 281.60 ± 84% -100.0% 0.00 perf-sched.wait_time.avg.ms.futex_wait_queue_me.futex_wait.do_futex.__x64_sys_futex 68.47 -100.0% 0.00 perf-sched.wait_time.avg.ms.pipe_read.new_sync_read.vfs_read.ksys_read 17.33 ± 5% -100.0% 0.00 perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.stop_one_cpu.affine_move_task.__set_cpus_allowed_ptr 2.41 ± 6% -100.0% 0.00 perf-sched.wait_time.avg.ms.rcu_gp_kthread.kthread.ret_from_fork 247.04 ± 48% -100.0% 0.00 perf-sched.wait_time.avg.ms.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait 669.51 ± 10% -100.0% 0.00 
perf-sched.wait_time.avg.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_sys_poll 478.68 -100.0% 0.00 perf-sched.wait_time.avg.ms.schedule_timeout.kcompactd.kthread.ret_from_fork 0.02 ± 47% -100.0% 0.00 perf-sched.wait_time.avg.ms.schedule_timeout.khugepaged.kthread.ret_from_fork 13.09 ± 16% -100.0% 0.00 perf-sched.wait_time.avg.ms.schedule_timeout.rcu_gp_kthread.kthread.ret_from_fork 651.81 -100.0% 0.00 perf-sched.wait_time.avg.ms.smpboot_thread_fn.kthread.ret_from_fork 531.16 ± 4% -100.0% 0.00 perf-sched.wait_time.avg.ms.worker_thread.kthread.ret_from_fork 999.69 -100.0% 0.00 perf-sched.wait_time.max.ms.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown] 1835 ± 58% -100.0% 0.00 perf-sched.wait_time.max.ms.devkmsg_read.vfs_read.ksys_read.do_syscall_64 1000 -100.0% 0.00 perf-sched.wait_time.max.ms.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep.do_syscall_64 1835 ± 58% -100.0% 0.00 perf-sched.wait_time.max.ms.do_syslog.part.0.kmsg_read.vfs_read 3368 ± 47% -100.0% 0.00 perf-sched.wait_time.max.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64 25.21 -100.0% 0.00 perf-sched.wait_time.max.ms.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64 1.43 ±126% -100.0% 0.00 perf-sched.wait_time.max.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_apic_timer_interrupt.[unknown] 0.06 ± 92% -100.0% 0.00 perf-sched.wait_time.max.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_call_function_single.[unknown] 226.21 ±222% -100.0% 0.00 perf-sched.wait_time.max.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_reschedule_ipi.[unknown] 1185 ±119% -100.0% 0.00 perf-sched.wait_time.max.ms.futex_wait_queue_me.futex_wait.do_futex.__x64_sys_futex 1781 ± 61% -100.0% 0.00 perf-sched.wait_time.max.ms.pipe_read.new_sync_read.vfs_read.ksys_read 1281 ± 22% -100.0% 0.00 
perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.stop_one_cpu.affine_move_task.__set_cpus_allowed_ptr 4.83 ± 7% -100.0% 0.00 perf-sched.wait_time.max.ms.rcu_gp_kthread.kthread.ret_from_fork 1574 ± 75% -100.0% 0.00 perf-sched.wait_time.max.ms.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait 8283 -100.0% 0.00 perf-sched.wait_time.max.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_sys_poll 504.66 -100.0% 0.00 perf-sched.wait_time.max.ms.schedule_timeout.kcompactd.kthread.ret_from_fork 0.02 ± 47% -100.0% 0.00 perf-sched.wait_time.max.ms.schedule_timeout.khugepaged.kthread.ret_from_fork 281.99 ± 8% -100.0% 0.00 perf-sched.wait_time.max.ms.schedule_timeout.rcu_gp_kthread.kthread.ret_from_fork 2007 ± 70% -100.0% 0.00 perf-sched.wait_time.max.ms.smpboot_thread_fn.kthread.ret_from_fork 5797 ± 26% -100.0% 0.00 perf-sched.wait_time.max.ms.worker_thread.kthread.ret_from_fork *************************************************************************************************** lkp-skl-fpga01: 104 threads Skylake with 192G memory ========================================================================================= compiler/cpufreq_governor/kconfig/mode/nr_task/rootfs/tbox_group/test/testcase/ucode: gcc-9/performance/x86_64-rhel-8.3/thread/100%/debian-10.4-x86_64-20200603.cgz/lkp-skl-fpga01/eventfd1/will-it-scale/0x2006a0a commit: dde38d5b80 ("kernel: don't include PF_IO_WORKERs as part of same_thread_group()") 43b2a76b1a ("proc: don't show PF_IO_WORKER threads as threads in /proc/<pid>/task/") dde38d5b805358c7 43b2a76b1a5abcc9833b463bef1 ---------------- --------------------------- %stddev %change %stddev \ | \ 52186226 -6.7% 48700924 will-it-scale.104.threads 501790 -6.7% 468277 will-it-scale.per_thread_ops 52186226 -6.7% 48700924 will-it-scale.workload 12950 ± 9% -17.0% 10750 ± 5% cpuidle.C1.usage 3.54 ± 3% +2.8 6.38 mpstat.cpu.all.idle% 0.00 ± 65% -0.0 0.00 ±223% mpstat.cpu.all.iowait% 0.02 ± 4% +0.0 0.03 
mpstat.cpu.all.soft% 362882 ± 8% +86.4% 676571 ± 8% numa-numastat.node0.local_node 398547 +81.2% 722246 ± 6% numa-numastat.node0.numa_hit 447332 ± 7% +34.5% 601584 ± 8% numa-numastat.node1.local_node 505349 +28.5% 649596 ± 7% numa-numastat.node1.numa_hit 56.00 -4.8% 53.33 vmstat.cpu.sy 42.00 +6.3% 44.67 vmstat.cpu.us 1534901 +42.6% 2188015 ± 2% vmstat.memory.cache 1519 +40.8% 2140 vmstat.system.cs 213074 -2.2% 208376 vmstat.system.in 8630 ± 3% +15.5% 9967 ± 2% slabinfo.kmalloc-256.active_objs 8682 ± 3% +15.3% 10012 ± 2% slabinfo.kmalloc-256.num_objs 14146 -37.0% 8919 slabinfo.proc_inode_cache.active_objs 294.33 -37.1% 185.17 slabinfo.proc_inode_cache.active_slabs 14152 -37.0% 8919 slabinfo.proc_inode_cache.num_objs 294.33 -37.1% 185.17 slabinfo.proc_inode_cache.num_slabs 6642 ± 33% +6663.1% 449248 ± 56% numa-meminfo.node0.Active 6410 ± 31% +6900.9% 448819 ± 56% numa-meminfo.node0.Active(anon) 608437 ± 2% +72.7% 1050670 ± 24% numa-meminfo.node0.FilePages 21523 ± 9% -19.9% 17247 ± 10% numa-meminfo.node0.Mapped 1319901 ± 5% +46.9% 1939109 ± 15% numa-meminfo.node0.MemUsed 13167 ± 25% +3354.6% 454875 ± 56% numa-meminfo.node0.Shmem 28339 ± 9% -26.7% 20783 ± 9% numa-meminfo.node1.Mapped 1602 ± 31% +6895.8% 112130 ± 56% numa-vmstat.node0.nr_active_anon 152107 ± 2% +72.6% 262594 ± 24% numa-vmstat.node0.nr_file_pages 5506 ± 9% -20.6% 4373 ± 9% numa-vmstat.node0.nr_mapped 3290 ± 25% +3354.2% 113644 ± 56% numa-vmstat.node0.nr_shmem 1602 ± 31% +6895.8% 112130 ± 56% numa-vmstat.node0.nr_zone_active_anon 996041 ± 8% +18.0% 1175776 ± 8% numa-vmstat.node0.numa_hit 7109 ± 9% -26.0% 5260 ± 8% numa-vmstat.node1.nr_mapped 224585 ± 2% +294.4% 885847 ± 4% meminfo.Active 224214 ± 2% +294.9% 885315 ± 4% meminfo.Active(anon) 1435267 +45.6% 2089172 ± 2% meminfo.Cached 1461887 +48.7% 2173889 meminfo.Committed_AS 49778 ± 2% -23.6% 38020 meminfo.Mapped 2798807 +37.1% 3836044 meminfo.Memused 251014 ± 2% +259.9% 903353 ± 4% meminfo.Shmem 3271799 +17.3% 3836259 meminfo.max_used_kB 213284 ± 5% 
+114.5% 457389 ± 23% sched_debug.cfs_rq:/.min_vruntime.stddev 1443 ± 3% +32.9% 1918 sched_debug.cfs_rq:/.runnable_avg.max 89.92 ± 5% +95.9% 176.19 ± 7% sched_debug.cfs_rq:/.runnable_avg.stddev -410176 +373.9% -1943656 sched_debug.cfs_rq:/.spread0.min 213230 ± 5% +114.5% 457463 ± 23% sched_debug.cfs_rq:/.spread0.stddev 1085 ± 2% +20.2% 1304 ± 6% sched_debug.cfs_rq:/.util_avg.max 823.42 ± 2% -28.6% 587.83 ± 13% sched_debug.cfs_rq:/.util_avg.min 55.50 ± 6% +66.5% 92.41 ± 6% sched_debug.cfs_rq:/.util_avg.stddev 1084 ± 6% +56.8% 1699 ± 12% sched_debug.cfs_rq:/.util_est_enqueued.max 418.06 ± 42% -71.1% 121.00 ± 54% sched_debug.cfs_rq:/.util_est_enqueued.min 103.78 ± 48% +102.0% 209.61 ± 23% sched_debug.cfs_rq:/.util_est_enqueued.stddev 11.59 ± 8% +20.6% 13.98 ± 7% sched_debug.cpu.clock.stddev 0.15 ± 8% +27.9% 0.20 ± 6% sched_debug.cpu.nr_running.stddev 3885 +21.9% 4735 sched_debug.cpu.nr_switches.avg 1465 ± 2% +37.7% 2018 ± 8% sched_debug.cpu.nr_switches.min 56001 ± 2% +295.4% 221449 ± 4% proc-vmstat.nr_active_anon 76089 +5.9% 80584 proc-vmstat.nr_anon_pages 102.00 +3.9% 106.00 proc-vmstat.nr_anon_transparent_hugepages 358731 +45.6% 522413 ± 2% proc-vmstat.nr_file_pages 82744 +2.6% 84923 proc-vmstat.nr_inactive_anon 12595 ± 2% -23.5% 9631 proc-vmstat.nr_mapped 1232 +9.7% 1352 proc-vmstat.nr_page_table_pages 62667 ± 2% +260.6% 225958 ± 4% proc-vmstat.nr_shmem 24844 -3.8% 23904 proc-vmstat.nr_slab_reclaimable 48258 +1.5% 48993 proc-vmstat.nr_slab_unreclaimable 56001 ± 2% +295.4% 221449 ± 4% proc-vmstat.nr_zone_active_anon 82744 +2.6% 84923 proc-vmstat.nr_zone_inactive_anon 12085 ± 18% -44.5% 6702 ± 10% proc-vmstat.numa_hint_faults 11125 ± 21% -54.9% 5014 ± 5% proc-vmstat.numa_hint_faults_local 940262 +49.4% 1404671 proc-vmstat.numa_hit 846554 +54.9% 1310943 proc-vmstat.numa_local 252600 ± 7% -37.2% 158665 ± 9% proc-vmstat.numa_pte_updates 86213 ± 2% -64.5% 30588 ± 4% proc-vmstat.pgactivate 1013630 +58.6% 1607409 proc-vmstat.pgalloc_normal 966965 -12.2% 849207 
proc-vmstat.pgfault 902240 ± 2% +8.3% 976981 proc-vmstat.pgfree 54724 ± 3% -5.6% 51674 proc-vmstat.pgreuse 55.44 -55.4 0.00 perf-profile.calltrace.cycles-pp.__libc_read 45.23 -45.2 0.00 perf-profile.calltrace.cycles-pp.__libc_write 32.34 -32.3 0.00 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.__libc_read 25.67 -25.7 0.00 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.__libc_write 19.69 -19.7 0.00 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.__libc_read 18.44 -18.4 0.00 perf-profile.calltrace.cycles-pp.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe.__libc_read 15.22 ± 2% -15.2 0.00 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_safe_stack.__libc_read 12.94 -12.9 0.00 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.__libc_write 12.62 -12.6 0.00 perf-profile.calltrace.cycles-pp.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe.__libc_read 12.22 -12.2 0.00 perf-profile.calltrace.cycles-pp.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe.__libc_write 12.13 -12.1 0.00 perf-profile.calltrace.cycles-pp.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe.__libc_read 11.76 -11.8 0.00 perf-profile.calltrace.cycles-pp.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.__libc_write 9.45 -9.4 0.00 perf-profile.calltrace.cycles-pp.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.__libc_write 8.64 -8.6 0.00 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_safe_stack.__libc_write 7.79 -7.8 0.00 perf-profile.calltrace.cycles-pp.new_sync_read.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe 7.21 -7.2 0.00 perf-profile.calltrace.cycles-pp.syscall_return_via_sysret.__libc_read 7.18 -7.2 0.00 perf-profile.calltrace.cycles-pp.syscall_return_via_sysret.__libc_write 6.43 -6.4 0.00 perf-profile.calltrace.cycles-pp.eventfd_read.new_sync_read.vfs_read.ksys_read.do_syscall_64 6.22 -6.2 0.00 
perf-profile.calltrace.cycles-pp.__entry_text_start.__libc_write 6.13 -6.1 0.00 perf-profile.calltrace.cycles-pp.__entry_text_start.__libc_read 5.84 -5.8 0.00 perf-profile.calltrace.cycles-pp.eventfd_write.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe 58.12 -58.1 0.00 perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe 55.56 -55.6 0.00 perf-profile.children.cycles-pp.__libc_read 45.37 -45.4 0.00 perf-profile.children.cycles-pp.__libc_write 32.68 -32.7 0.00 perf-profile.children.cycles-pp.do_syscall_64 24.45 -24.4 0.00 perf-profile.children.cycles-pp.syscall_exit_to_user_mode 22.04 -22.0 0.00 perf-profile.children.cycles-pp.__entry_text_start 18.50 -18.5 0.00 perf-profile.children.cycles-pp.ksys_read 15.95 -16.0 0.00 perf-profile.children.cycles-pp.syscall_return_via_sysret 12.74 -12.7 0.00 perf-profile.children.cycles-pp.vfs_read 12.51 ± 2% -12.5 0.00 perf-profile.children.cycles-pp.entry_SYSCALL_64_safe_stack 11.81 -11.8 0.00 perf-profile.children.cycles-pp.ksys_write 9.52 -9.5 0.00 perf-profile.children.cycles-pp.vfs_write 7.85 -7.8 0.00 perf-profile.children.cycles-pp.new_sync_read 6.62 -6.6 0.00 perf-profile.children.cycles-pp.eventfd_read 5.96 -6.0 0.00 perf-profile.children.cycles-pp.eventfd_write 5.76 ± 6% -5.8 0.00 perf-profile.children.cycles-pp.__fdget_pos 5.47 ± 7% -5.5 0.00 perf-profile.children.cycles-pp.__fget_light 5.26 -5.3 0.00 perf-profile.children.cycles-pp.security_file_permission 23.55 -23.6 0.00 perf-profile.self.cycles-pp.syscall_exit_to_user_mode 20.36 -20.4 0.00 perf-profile.self.cycles-pp.__entry_text_start 15.93 -15.9 0.00 perf-profile.self.cycles-pp.syscall_return_via_sysret 0.05 ± 5% +420.8% 0.27 ± 4% perf-stat.i.MPKI 2.027e+10 -4.2% 1.943e+10 perf-stat.i.branch-instructions 1.57 -0.0 1.54 perf-stat.i.branch-miss-rate% 3.165e+08 -6.0% 2.976e+08 perf-stat.i.branch-misses 571800 ± 4% +609.6% 4057218 ± 39% perf-stat.i.cache-misses 4228813 ± 4% +488.8% 24898748 ± 4% perf-stat.i.cache-references 1484 +41.8% 
2105 perf-stat.i.context-switches 2.86 +4.8% 3.00 perf-stat.i.cpi 182.77 +16.7% 213.25 perf-stat.i.cpu-migrations 608591 ± 5% -83.2% 102479 ± 47% perf-stat.i.cycles-between-cache-misses 0.37 -0.0 0.36 perf-stat.i.dTLB-load-miss-rate% 1.042e+08 -6.5% 97414278 perf-stat.i.dTLB-load-misses 2.833e+10 -4.8% 2.696e+10 perf-stat.i.dTLB-loads 102741 +4.2% 107062 perf-stat.i.dTLB-store-misses 1.95e+10 -5.0% 1.853e+10 perf-stat.i.dTLB-stores 1.092e+08 -6.2% 1.025e+08 perf-stat.i.iTLB-load-misses 9.817e+10 -4.1% 9.41e+10 perf-stat.i.instructions 0.35 -4.5% 0.33 perf-stat.i.ipc 1.38 ± 14% -64.0% 0.50 ± 69% perf-stat.i.major-faults 1.85 -54.8% 0.84 ± 34% perf-stat.i.metric.K/sec 654.89 -4.6% 624.54 perf-stat.i.metric.M/sec 3086 -12.4% 2704 perf-stat.i.minor-faults 106471 +594.9% 739862 ± 42% perf-stat.i.node-load-misses 33307 ± 10% +238.4% 112708 ± 49% perf-stat.i.node-loads 24314 ± 4% +1012.5% 270496 ± 59% perf-stat.i.node-store-misses 8353 ± 4% +116.7% 18105 ± 5% perf-stat.i.node-stores 3088 -12.4% 2704 perf-stat.i.page-faults 0.04 ± 4% +506.7% 0.26 ± 4% perf-stat.overall.MPKI 1.56 -0.0 1.53 perf-stat.overall.branch-miss-rate% 2.87 +4.9% 3.01 perf-stat.overall.cpi 487024 ± 4% -82.4% 85743 ± 50% perf-stat.overall.cycles-between-cache-misses 0.37 -0.0 0.36 perf-stat.overall.dTLB-load-miss-rate% 0.00 +0.0 0.00 perf-stat.overall.dTLB-store-miss-rate% 898.84 +2.2% 918.36 perf-stat.overall.instructions-per-iTLB-miss 0.35 -4.6% 0.33 perf-stat.overall.ipc 73.90 ± 2% +13.3 87.16 perf-stat.overall.node-load-miss-rate% 566759 +2.6% 581357 perf-stat.overall.path-length 2.02e+10 -4.2% 1.936e+10 perf-stat.ps.branch-instructions 3.155e+08 -6.0% 2.966e+08 perf-stat.ps.branch-misses 576750 ± 4% +601.1% 4043821 ± 39% perf-stat.ps.cache-misses 4267217 ± 4% +481.5% 24815334 ± 4% perf-stat.ps.cache-references 1479 +41.8% 2098 perf-stat.ps.context-switches 182.12 +16.7% 212.55 perf-stat.ps.cpu-migrations 1.038e+08 -6.5% 97085393 perf-stat.ps.dTLB-load-misses 2.823e+10 -4.8% 2.687e+10 
perf-stat.ps.dTLB-loads 102632 +4.0% 106712 perf-stat.ps.dTLB-store-misses 1.944e+10 -5.0% 1.847e+10 perf-stat.ps.dTLB-stores 1.089e+08 -6.2% 1.021e+08 perf-stat.ps.iTLB-load-misses 9.784e+10 -4.1% 9.378e+10 perf-stat.ps.instructions 1.38 ± 14% -64.1% 0.50 ± 69% perf-stat.ps.major-faults 3086 -12.6% 2696 perf-stat.ps.minor-faults 106206 +594.3% 737408 ± 42% perf-stat.ps.node-load-misses 37580 ± 8% +199.0% 112348 ± 48% perf-stat.ps.node-loads 24248 ± 4% +1011.8% 269602 ± 59% perf-stat.ps.node-store-misses 8515 ± 4% +112.2% 18072 ± 5% perf-stat.ps.node-stores 3087 -12.7% 2696 perf-stat.ps.page-faults 2.958e+13 -4.3% 2.831e+13 perf-stat.total.instructions 30844 ± 8% +121.1% 68192 ± 68% softirqs.CPU0.RCU 26956 ± 5% +64.7% 44390 ± 3% softirqs.CPU1.RCU 26308 ± 5% +89.9% 49954 ± 10% softirqs.CPU10.RCU 25219 ± 6% +75.1% 44155 ± 9% softirqs.CPU100.RCU 25232 ± 6% +110.1% 53017 ± 25% softirqs.CPU101.RCU 25519 ± 6% +91.4% 48838 ± 19% softirqs.CPU102.RCU 25905 ± 6% +67.5% 43403 ± 8% softirqs.CPU103.RCU 26265 ± 4% +66.4% 43716 ± 7% softirqs.CPU11.RCU 26380 ± 5% +83.6% 48422 ± 10% softirqs.CPU12.RCU 26459 ± 5% +63.5% 43248 ± 2% softirqs.CPU13.RCU 25917 ± 12% +127.6% 58978 ± 47% softirqs.CPU14.RCU 29771 ± 5% +61.4% 48060 ± 4% softirqs.CPU15.RCU 30022 ± 5% +82.2% 54698 ± 14% softirqs.CPU16.RCU 30271 ± 5% +72.0% 52062 ± 15% softirqs.CPU17.RCU 30537 ± 5% +137.5% 72541 ± 31% softirqs.CPU18.RCU 30382 ± 5% +77.2% 53824 ± 8% softirqs.CPU19.RCU 27357 ± 5% +62.0% 44321 ± 3% softirqs.CPU2.RCU 30050 ± 4% +69.2% 50846 ± 12% softirqs.CPU20.RCU 29922 ± 5% +55.8% 46628 ± 5% softirqs.CPU21.RCU 30046 ± 5% +64.3% 49370 ± 6% softirqs.CPU22.RCU 30110 ± 5% +67.5% 50433 ± 7% softirqs.CPU23.RCU 30349 ± 4% +69.2% 51356 ± 10% softirqs.CPU24.RCU 30574 ± 5% +60.2% 48982 ± 2% softirqs.CPU25.RCU 27893 ± 4% +162.1% 73110 ± 49% softirqs.CPU26.RCU 28501 ± 10% +59.2% 45385 ± 8% softirqs.CPU27.RCU 26934 ± 5% +57.8% 42501 ± 4% softirqs.CPU28.RCU 26483 ± 5% +86.0% 49251 ± 15% softirqs.CPU29.RCU 26642 ± 5% +106.6% 
55031 ± 43% softirqs.CPU3.RCU 28062 ± 4% +77.9% 49926 ± 13% softirqs.CPU30.RCU 28177 ± 5% +97.6% 55671 ± 38% softirqs.CPU31.RCU 29044 ± 6% +89.3% 54979 ± 21% softirqs.CPU32.RCU 27996 ± 5% +74.6% 48888 ± 25% softirqs.CPU33.RCU 28168 ± 5% +61.3% 45437 ± 2% softirqs.CPU34.RCU 27892 ± 4% +63.0% 45464 ± 2% softirqs.CPU35.RCU 28029 ± 5% +66.4% 46637 ± 6% softirqs.CPU36.RCU 28327 ± 5% +64.6% 46629 ± 20% softirqs.CPU37.RCU 27932 ± 5% +68.7% 47112 ± 4% softirqs.CPU38.RCU 27780 ± 5% +87.7% 52147 ± 25% softirqs.CPU39.RCU 26054 ± 6% +67.2% 43574 ± 3% softirqs.CPU4.RCU 28089 ± 5% +66.4% 46740 ± 8% softirqs.CPU40.RCU 27568 ± 5% +122.4% 61316 ± 60% softirqs.CPU41.RCU 27950 ± 5% +126.0% 63167 ± 50% softirqs.CPU42.RCU 28375 ± 6% +94.4% 55157 ± 23% softirqs.CPU43.RCU 28146 ± 5% +61.7% 45510 softirqs.CPU44.RCU 25952 ± 11% +84.2% 47793 ± 19% softirqs.CPU45.RCU 26422 ± 5% +89.9% 50181 ± 17% softirqs.CPU46.RCU 26574 ± 6% +124.6% 59680 ± 60% softirqs.CPU47.RCU 26872 ± 6% +67.3% 44949 ± 6% softirqs.CPU48.RCU 27049 ± 5% +93.6% 52361 ± 16% softirqs.CPU49.RCU 26876 ± 4% +70.7% 45880 ± 17% softirqs.CPU5.RCU 26965 ± 5% +77.2% 47770 ± 23% softirqs.CPU50.RCU 26848 ± 5% +71.6% 46072 ± 12% softirqs.CPU51.RCU 32626 ± 5% +112.8% 69424 ± 70% softirqs.CPU52.RCU 30513 ± 10% +50.8% 46015 ± 7% softirqs.CPU53.RCU 31990 ± 6% +51.3% 48403 ± 5% softirqs.CPU54.RCU 30381 ± 6% +98.2% 60203 ± 47% softirqs.CPU55.RCU 28699 ± 10% +63.8% 47011 softirqs.CPU56.RCU 29714 ± 6% +70.0% 50520 ± 12% softirqs.CPU57.RCU 29846 ± 6% +152.9% 75474 ± 77% softirqs.CPU58.RCU 28944 ± 6% +64.8% 47696 ± 4% softirqs.CPU59.RCU 26158 ± 4% +174.1% 71692 ± 87% softirqs.CPU6.RCU 26584 ± 6% +107.9% 55274 ± 23% softirqs.CPU60.RCU 26575 ± 6% +78.6% 47469 ± 11% softirqs.CPU61.RCU 27088 ± 6% +86.2% 50440 ± 8% softirqs.CPU62.RCU 26756 ± 5% +69.4% 45326 ± 5% softirqs.CPU63.RCU 26898 ± 6% +85.7% 49937 ± 8% softirqs.CPU64.RCU 27013 ± 6% +64.7% 44498 ± 2% softirqs.CPU65.RCU 26158 ± 9% +133.5% 61069 ± 43% softirqs.CPU66.RCU 26752 ± 6% +69.9% 45446 ± 
6% softirqs.CPU67.RCU 26598 ± 6% +95.6% 52029 ± 17% softirqs.CPU68.RCU 26963 ± 6% +74.4% 47031 ± 8% softirqs.CPU69.RCU 26446 ± 4% +65.0% 43634 ± 4% softirqs.CPU7.RCU 26831 ± 6% +143.6% 65357 ± 26% softirqs.CPU70.RCU 26851 ± 6% +82.0% 48859 ± 8% softirqs.CPU71.RCU 26947 ± 6% +73.8% 46836 ± 12% softirqs.CPU72.RCU 26659 ± 6% +64.0% 43729 ± 5% softirqs.CPU73.RCU 26518 ± 6% +69.7% 45002 ± 5% softirqs.CPU74.RCU 26104 ± 5% +70.6% 44529 ± 10% softirqs.CPU75.RCU 25972 ± 6% +72.7% 44848 ± 9% softirqs.CPU76.RCU 26025 ± 6% +60.7% 41819 ± 2% softirqs.CPU77.RCU 32603 ± 6% +136.8% 77189 ± 48% softirqs.CPU78.RCU 32307 ± 5% +47.2% 47543 ± 5% softirqs.CPU79.RCU 26850 ± 4% +101.4% 54062 ± 22% softirqs.CPU8.RCU 29675 ± 6% +44.6% 42906 ± 10% softirqs.CPU80.RCU 28476 ± 5% +74.8% 49771 ± 12% softirqs.CPU81.RCU 28182 ± 5% +76.1% 49616 ± 13% softirqs.CPU82.RCU 28261 ± 3% +100.7% 56719 ± 43% softirqs.CPU83.RCU 27947 ± 5% +94.3% 54290 ± 20% softirqs.CPU84.RCU 27340 ± 5% +80.7% 49410 ± 21% softirqs.CPU85.RCU 27779 ± 4% +61.8% 44951 softirqs.CPU86.RCU 27559 ± 5% +64.2% 45245 ± 2% softirqs.CPU87.RCU 27840 ± 5% +65.7% 46133 ± 4% softirqs.CPU88.RCU 27854 ± 5% +66.9% 46498 ± 13% softirqs.CPU89.RCU 26383 ± 5% +74.9% 46141 ± 13% softirqs.CPU9.RCU 24979 ± 8% +74.6% 43616 ± 6% softirqs.CPU90.RCU 25158 ± 6% +89.1% 47566 ± 25% softirqs.CPU91.RCU 25079 ± 5% +76.1% 44157 ± 13% softirqs.CPU92.RCU 25724 ± 5% +119.2% 56399 ± 57% softirqs.CPU93.RCU 25350 ± 6% +135.5% 59709 ± 54% softirqs.CPU94.RCU 25702 ± 6% +103.4% 52277 ± 26% softirqs.CPU95.RCU 25483 ± 6% +65.7% 42215 ± 2% softirqs.CPU96.RCU 25024 ± 8% +88.3% 47131 ± 24% softirqs.CPU97.RCU 25152 ± 6% +88.4% 47388 ± 17% softirqs.CPU98.RCU 25216 ± 6% +124.6% 56628 ± 58% softirqs.CPU99.RCU 2874601 ± 5% +83.8% 5284950 softirqs.RCU 42724 ± 3% +38.5% 59180 ± 2% softirqs.TIMER 0.02 -100.0% 0.00 perf-sched.sch_delay.avg.ms.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown] 1.13 ±108% -100.0% 0.00 
perf-sched.sch_delay.avg.ms.devkmsg_read.vfs_read.ksys_read.do_syscall_64 0.02 ± 2% -100.0% 0.00 perf-sched.sch_delay.avg.ms.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep.do_syscall_64 3.03 ±103% -100.0% 0.00 perf-sched.sch_delay.avg.ms.do_syslog.part.0.kmsg_read.vfs_read 0.61 ± 30% -100.0% 0.00 perf-sched.sch_delay.avg.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64 0.67 ±217% -100.0% 0.00 perf-sched.sch_delay.avg.ms.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64 0.03 ± 11% -100.0% 0.00 perf-sched.sch_delay.avg.ms.futex_wait_queue_me.futex_wait.do_futex.__x64_sys_futex 0.28 ±211% -100.0% 0.00 perf-sched.sch_delay.avg.ms.pipe_read.new_sync_read.vfs_read.ksys_read 8.88 ± 2% -100.0% 0.00 perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.stop_one_cpu.sched_exec.bprm_execve 0.02 ± 2% -100.0% 0.00 perf-sched.sch_delay.avg.ms.rcu_gp_kthread.kthread.ret_from_fork 0.10 ±141% -100.0% 0.00 perf-sched.sch_delay.avg.ms.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait 0.02 ± 12% -100.0% 0.00 perf-sched.sch_delay.avg.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_select 0.00 ± 41% -100.0% 0.00 perf-sched.sch_delay.avg.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_sys_poll 0.01 ± 3% -100.0% 0.00 perf-sched.sch_delay.avg.ms.schedule_timeout.kcompactd.kthread.ret_from_fork 0.02 ± 3% -100.0% 0.00 perf-sched.sch_delay.avg.ms.schedule_timeout.rcu_gp_kthread.kthread.ret_from_fork 0.01 ± 47% -100.0% 0.00 perf-sched.sch_delay.avg.ms.smpboot_thread_fn.kthread.ret_from_fork 0.88 ± 39% -100.0% 0.00 perf-sched.sch_delay.avg.ms.wait_for_partner.fifo_open.do_dentry_open.do_open.isra 0.04 ± 17% -100.0% 0.00 perf-sched.sch_delay.avg.ms.worker_thread.kthread.ret_from_fork 0.02 ± 8% -100.0% 0.00 perf-sched.sch_delay.max.ms.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown] 5.00 ±104% -100.0% 0.00 
perf-sched.sch_delay.max.ms.devkmsg_read.vfs_read.ksys_read.do_syscall_64 0.03 ± 25% -100.0% 0.00 perf-sched.sch_delay.max.ms.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep.do_syscall_64 9.03 ± 97% -100.0% 0.00 perf-sched.sch_delay.max.ms.do_syslog.part.0.kmsg_read.vfs_read 14.88 ± 22% -100.0% 0.00 perf-sched.sch_delay.max.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64 169.89 ±223% -100.0% 0.00 perf-sched.sch_delay.max.ms.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64 6.28 ± 73% -100.0% 0.00 perf-sched.sch_delay.max.ms.exit_to_user_mode_prepare.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe.[unknown] 0.03 ± 13% -100.0% 0.00 perf-sched.sch_delay.max.ms.futex_wait_queue_me.futex_wait.do_futex.__x64_sys_futex 172.42 ±223% -100.0% 0.00 perf-sched.sch_delay.max.ms.pipe_read.new_sync_read.vfs_read.ksys_read 0.04 ± 5% -100.0% 0.00 perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.stop_one_cpu.affine_move_task.__set_cpus_allowed_ptr 27.26 ± 36% -100.0% 0.00 perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.stop_one_cpu.sched_exec.bprm_execve 0.04 ± 10% -100.0% 0.00 perf-sched.sch_delay.max.ms.rcu_gp_kthread.kthread.ret_from_fork 2.26 ±196% -100.0% 0.00 perf-sched.sch_delay.max.ms.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait 0.02 ± 19% -100.0% 0.00 perf-sched.sch_delay.max.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_select 0.06 ±148% -100.0% 0.00 perf-sched.sch_delay.max.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_sys_poll 0.02 ± 26% -100.0% 0.00 perf-sched.sch_delay.max.ms.schedule_timeout.kcompactd.kthread.ret_from_fork 0.04 ± 27% -100.0% 0.00 perf-sched.sch_delay.max.ms.schedule_timeout.rcu_gp_kthread.kthread.ret_from_fork 6.27 ±169% -100.0% 0.00 perf-sched.sch_delay.max.ms.smpboot_thread_fn.kthread.ret_from_fork 14.26 ± 21% -100.0% 0.00 
perf-sched.sch_delay.max.ms.wait_for_partner.fifo_open.do_dentry_open.do_open.isra 11.95 ± 33% -100.0% 0.00 perf-sched.sch_delay.max.ms.worker_thread.kthread.ret_from_fork 0.20 ± 22% -100.0% 0.00 perf-sched.total_sch_delay.average.ms 360.16 ±130% -100.0% 0.00 perf-sched.total_sch_delay.max.ms 141.09 ± 5% -100.0% 0.00 perf-sched.total_wait_and_delay.average.ms 12648 ± 5% -100.0% 0.00 perf-sched.total_wait_and_delay.count.ms 8349 ± 8% -100.0% 0.00 perf-sched.total_wait_and_delay.max.ms 140.88 ± 5% -100.0% 0.00 perf-sched.total_wait_time.average.ms 8349 ± 8% -100.0% 0.00 perf-sched.total_wait_time.max.ms 899.48 -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown] 1180 ± 38% -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.devkmsg_read.vfs_read.ksys_read.do_syscall_64 826.29 ± 3% -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep.do_syscall_64 1240 ± 40% -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.do_syslog.part.0.kmsg_read.vfs_read 261.32 -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64 107.17 ± 7% -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64 0.24 ± 12% -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_apic_timer_interrupt.[unknown] 0.18 ± 31% -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_call_function_single.[unknown] 0.15 ± 53% -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_reschedule_ipi.[unknown] 0.22 ± 5% -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.exit_to_user_mode_prepare.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe.[unknown] 159.37 ± 21% -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.pipe_read.new_sync_read.vfs_read.ksys_read 9.02 -100.0% 
0.00 perf-sched.wait_and_delay.avg.ms.preempt_schedule_common.__cond_resched.stop_one_cpu.affine_move_task.__set_cpus_allowed_ptr 8.88 ± 2% -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.preempt_schedule_common.__cond_resched.stop_one_cpu.sched_exec.bprm_execve 2.77 ± 2% -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.rcu_gp_kthread.kthread.ret_from_fork 456.17 ± 17% -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait 158.50 ± 10% -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_sys_poll 478.37 -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.schedule_timeout.kcompactd.kthread.ret_from_fork 7.05 ± 19% -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.schedule_timeout.rcu_gp_kthread.kthread.ret_from_fork 595.27 -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.smpboot_thread_fn.kthread.ret_from_fork 588.30 ± 7% -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.worker_thread.kthread.ret_from_fork 10.00 -100.0% 0.00 perf-sched.wait_and_delay.count.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown] 4.83 ± 7% -100.0% 0.00 perf-sched.wait_and_delay.count.devkmsg_read.vfs_read.ksys_read.do_syscall_64 21.83 ± 4% -100.0% 0.00 perf-sched.wait_and_delay.count.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep.do_syscall_64 4.67 ± 10% -100.0% 0.00 perf-sched.wait_and_delay.count.do_syslog.part.0.kmsg_read.vfs_read 235.67 -100.0% 0.00 perf-sched.wait_and_delay.count.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64 265.17 ± 2% -100.0% 0.00 perf-sched.wait_and_delay.count.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64 1580 ± 8% -100.0% 0.00 perf-sched.wait_and_delay.count.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_apic_timer_interrupt.[unknown] 212.50 ± 8% -100.0% 0.00 perf-sched.wait_and_delay.count.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_call_function_single.[unknown] 195.67 ± 
6% -100.0% 0.00 perf-sched.wait_and_delay.count.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_reschedule_ipi.[unknown] 3516 ± 8% -100.0% 0.00 perf-sched.wait_and_delay.count.exit_to_user_mode_prepare.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe.[unknown] 867.17 ± 24% -100.0% 0.00 perf-sched.wait_and_delay.count.pipe_read.new_sync_read.vfs_read.ksys_read 1248 -100.0% 0.00 perf-sched.wait_and_delay.count.preempt_schedule_common.__cond_resched.stop_one_cpu.affine_move_task.__set_cpus_allowed_ptr 212.33 ± 2% -100.0% 0.00 perf-sched.wait_and_delay.count.preempt_schedule_common.__cond_resched.stop_one_cpu.sched_exec.bprm_execve 123.17 ± 28% -100.0% 0.00 perf-sched.wait_and_delay.count.rcu_gp_kthread.kthread.ret_from_fork 29.67 ± 27% -100.0% 0.00 perf-sched.wait_and_delay.count.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait 159.67 ± 10% -100.0% 0.00 perf-sched.wait_and_delay.count.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_sys_poll 39.33 ± 2% -100.0% 0.00 perf-sched.wait_and_delay.count.schedule_timeout.kcompactd.kthread.ret_from_fork 1413 ± 15% -100.0% 0.00 perf-sched.wait_and_delay.count.schedule_timeout.rcu_gp_kthread.kthread.ret_from_fork 1813 -100.0% 0.00 perf-sched.wait_and_delay.count.smpboot_thread_fn.kthread.ret_from_fork 589.17 ± 7% -100.0% 0.00 perf-sched.wait_and_delay.count.worker_thread.kthread.ret_from_fork 999.46 -100.0% 0.00 perf-sched.wait_and_delay.max.ms.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown] 3929 ± 36% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.devkmsg_read.vfs_read.ksys_read.do_syscall_64 1000 -100.0% 0.00 perf-sched.wait_and_delay.max.ms.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep.do_syscall_64 3932 ± 36% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.do_syslog.part.0.kmsg_read.vfs_read 1348 ± 57% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64 1023 -100.0% 0.00 
perf-sched.wait_and_delay.max.ms.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64 22.52 ± 47% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_apic_timer_interrupt.[unknown] 4.80 ± 85% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_call_function_single.[unknown] 9.99 ± 68% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_reschedule_ipi.[unknown] 50.15 ± 28% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.exit_to_user_mode_prepare.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe.[unknown] 3933 ± 36% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.pipe_read.new_sync_read.vfs_read.ksys_read 2228 -100.0% 0.00 perf-sched.wait_and_delay.max.ms.preempt_schedule_common.__cond_resched.stop_one_cpu.affine_move_task.__set_cpus_allowed_ptr 27.26 ± 36% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.preempt_schedule_common.__cond_resched.stop_one_cpu.sched_exec.bprm_execve 6.86 ± 59% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.rcu_gp_kthread.kthread.ret_from_fork 3550 ± 40% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait 5831 -100.0% 0.00 perf-sched.wait_and_delay.max.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_sys_poll 505.02 -100.0% 0.00 perf-sched.wait_and_delay.max.ms.schedule_timeout.kcompactd.kthread.ret_from_fork 325.68 ± 41% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.schedule_timeout.rcu_gp_kthread.kthread.ret_from_fork 7587 ± 12% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.smpboot_thread_fn.kthread.ret_from_fork 8210 ± 7% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.worker_thread.kthread.ret_from_fork 899.47 -100.0% 0.00 perf-sched.wait_time.avg.ms.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown] 1178 ± 38% -100.0% 0.00 
perf-sched.wait_time.avg.ms.devkmsg_read.vfs_read.ksys_read.do_syscall_64 826.27 ± 3% -100.0% 0.00 perf-sched.wait_time.avg.ms.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep.do_syscall_64 1237 ± 40% -100.0% 0.00 perf-sched.wait_time.avg.ms.do_syslog.part.0.kmsg_read.vfs_read 260.71 -100.0% 0.00 perf-sched.wait_time.avg.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64 106.50 ± 8% -100.0% 0.00 perf-sched.wait_time.avg.ms.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64 0.24 ± 12% -100.0% 0.00 perf-sched.wait_time.avg.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_apic_timer_interrupt.[unknown] 0.18 ± 31% -100.0% 0.00 perf-sched.wait_time.avg.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_call_function_single.[unknown] 0.07 ± 67% -100.0% 0.00 perf-sched.wait_time.avg.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_irq_work.[unknown] 0.12 ± 59% -100.0% 0.00 perf-sched.wait_time.avg.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_reschedule_ipi.[unknown] 0.22 ± 5% -100.0% 0.00 perf-sched.wait_time.avg.ms.exit_to_user_mode_prepare.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe.[unknown] 1311 ± 59% -100.0% 0.00 perf-sched.wait_time.avg.ms.futex_wait_queue_me.futex_wait.do_futex.__x64_sys_futex 159.09 ± 21% -100.0% 0.00 perf-sched.wait_time.avg.ms.pipe_read.new_sync_read.vfs_read.ksys_read 397.64 ± 95% -100.0% 0.00 perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.generic_perform_write.__generic_file_write_iter.generic_file_write_iter 1.75 ± 6% -100.0% 0.00 perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.khugepaged.kthread.ret_from_fork 9.02 -100.0% 0.00 perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.stop_one_cpu.affine_move_task.__set_cpus_allowed_ptr 2.75 ± 2% -100.0% 0.00 perf-sched.wait_time.avg.ms.rcu_gp_kthread.kthread.ret_from_fork 456.07 ± 17% -100.0% 0.00 
perf-sched.wait_time.avg.ms.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait 158.50 ± 10% -100.0% 0.00 perf-sched.wait_time.avg.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_sys_poll 478.35 -100.0% 0.00 perf-sched.wait_time.avg.ms.schedule_timeout.kcompactd.kthread.ret_from_fork 3.66 ± 7% -100.0% 0.00 perf-sched.wait_time.avg.ms.schedule_timeout.khugepaged.kthread.ret_from_fork 7.03 ± 19% -100.0% 0.00 perf-sched.wait_time.avg.ms.schedule_timeout.rcu_gp_kthread.kthread.ret_from_fork 595.26 -100.0% 0.00 perf-sched.wait_time.avg.ms.smpboot_thread_fn.kthread.ret_from_fork 2.22 ± 20% -100.0% 0.00 perf-sched.wait_time.avg.ms.wait_for_partner.fifo_open.do_dentry_open.do_open.isra 588.26 ± 7% -100.0% 0.00 perf-sched.wait_time.avg.ms.worker_thread.kthread.ret_from_fork 999.44 -100.0% 0.00 perf-sched.wait_time.max.ms.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown] 3927 ± 36% -100.0% 0.00 perf-sched.wait_time.max.ms.devkmsg_read.vfs_read.ksys_read.do_syscall_64 1000 -100.0% 0.00 perf-sched.wait_time.max.ms.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep.do_syscall_64 3928 ± 36% -100.0% 0.00 perf-sched.wait_time.max.ms.do_syslog.part.0.kmsg_read.vfs_read 1348 ± 57% -100.0% 0.00 perf-sched.wait_time.max.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64 1023 -100.0% 0.00 perf-sched.wait_time.max.ms.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64 22.52 ± 47% -100.0% 0.00 perf-sched.wait_time.max.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_apic_timer_interrupt.[unknown] 4.80 ± 85% -100.0% 0.00 perf-sched.wait_time.max.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_call_function_single.[unknown] 0.44 ± 88% -100.0% 0.00 perf-sched.wait_time.max.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_irq_work.[unknown] 6.02 ±107% -100.0% 0.00 
perf-sched.wait_time.max.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_reschedule_ipi.[unknown]
     50.15 ± 28%   -100.0%       0.00        perf-sched.wait_time.max.ms.exit_to_user_mode_prepare.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe.[unknown]
      3494 ± 58%   -100.0%       0.00        perf-sched.wait_time.max.ms.futex_wait_queue_me.futex_wait.do_futex.__x64_sys_futex
      3933 ± 36%   -100.0%       0.00        perf-sched.wait_time.max.ms.pipe_read.new_sync_read.vfs_read.ksys_read
      3420 ± 92%   -100.0%       0.00        perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.generic_perform_write.__generic_file_write_iter.generic_file_write_iter
      3.48 ±  6%   -100.0%       0.00        perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.khugepaged.kthread.ret_from_fork
      2228         -100.0%       0.00        perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.stop_one_cpu.affine_move_task.__set_cpus_allowed_ptr
      6.84 ± 59%   -100.0%       0.00        perf-sched.wait_time.max.ms.rcu_gp_kthread.kthread.ret_from_fork
      3550 ± 40%   -100.0%       0.00        perf-sched.wait_time.max.ms.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait
      5831         -100.0%       0.00        perf-sched.wait_time.max.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_sys_poll
    505.00         -100.0%       0.00        perf-sched.wait_time.max.ms.schedule_timeout.kcompactd.kthread.ret_from_fork
      3.66 ±  7%   -100.0%       0.00        perf-sched.wait_time.max.ms.schedule_timeout.khugepaged.kthread.ret_from_fork
    325.66 ± 41%   -100.0%       0.00        perf-sched.wait_time.max.ms.schedule_timeout.rcu_gp_kthread.kthread.ret_from_fork
      7587 ± 12%   -100.0%       0.00        perf-sched.wait_time.max.ms.smpboot_thread_fn.kthread.ret_from_fork
     12.63 ±  2%   -100.0%       0.00        perf-sched.wait_time.max.ms.wait_for_partner.fifo_open.do_dentry_open.do_open.isra
      8210 ±  7%   -100.0%       0.00        perf-sched.wait_time.max.ms.worker_thread.kthread.ret_from_fork
    286194          -11.1%     254358        interrupts.CAL:Function_call_interrupts
[ per-CPU interrupts entries for CPU0-CPU25: NMI:Non-maskable_interrupts and PMI:Performance_monitoring_interrupts counts (6246-7504, ± up to 28%) all drop -100.0% to 0.00; RES:Rescheduling_interrupts falls -22.8% on CPU1 and rises +20.6% to +85.3% on CPU10-11, CPU14-15, CPU17, CPU19-21 and CPU24 ]
      2978 ± 14%    -23.9%       2265 ±  6%  interrupts.CPU26.CAL:Function_call_interrupts
[ per-CPU interrupts entries for CPU26-CPU77 (plus CPU3-CPU7, which sort in between): NMI and PMI counts (5620-7844, ± up to 33%) all drop -100.0% to 0.00 (to 1.00 ±223% on CPU68 and CPU73); CAL:Function_call_interrupts falls -18.3% to -65.3% on CPU3, CPU27-28 and CPU53-55; RES:Rescheduling_interrupts falls -39.1%/-37.6% on CPU26/CPU27, rises +7.9% to +21.8% on CPU38, CPU40 and CPU43-44, and rises sharply (+179.3% to +536.9%) on CPU64, CPU66 and CPU69-CPU76 ]
[ per-CPU interrupts entries for CPU78-CPU99 (plus CPU8-CPU9): NMI and PMI counts (5623-7843, ± up to 33%) all drop -100.0% to 0.00 (to 1.00 ±223% on CPU83-CPU84); CAL:Function_call_interrupts falls -25.1% to -74.7% on CPU78-CPU81; RES:Rescheduling_interrupts rises +25.1% on CPU8 and +281.5% on CPU84 ]
    188.00 ± 10%   -100.0%       0.00        interrupts.IWI:IRQ_work_interrupts
    709488 ± 10%   -100.0%       4.00 ± 70%  interrupts.NMI:Non-maskable_interrupts
    709488 ± 10%   -100.0%       4.00 ± 70%  interrupts.PMI:Performance_monitoring_interrupts
     51265 ±  3%    +88.4%      96585 ±  4%  interrupts.RES:Rescheduling_interrupts


***************************************************************************************************
lkp-skl-fpga01: 104 threads Skylake with 192G memory
=========================================================================================
compiler/cpufreq_governor/kconfig/mode/nr_task/rootfs/tbox_group/test/testcase/ucode:
  gcc-9/performance/x86_64-rhel-8.3/process/50%/debian-10.4-x86_64-20200603.cgz/lkp-skl-fpga01/lock1/will-it-scale/0x2006a0a

commit:
  dde38d5b80 ("kernel: don't include PF_IO_WORKERs as part of same_thread_group()")
  43b2a76b1a ("proc: don't show PF_IO_WORKER threads as threads in /proc/<pid>/task/")

dde38d5b805358c7 43b2a76b1a5abcc9833b463bef1
---------------- ---------------------------
         %stddev     %change         %stddev
             \          |                \
  65974942           -11.2%   58555959        will-it-scale.52.processes
     49.48            -3.8%      47.59        will-it-scale.52.processes_idle
   1268748           -11.2%    1126075        will-it-scale.per_process_ops
  65974942           -11.2%   58555959        will-it-scale.workload
     44.70 ±  4%     -10.1%      40.20 ±  5%  boot-time.boot
      0.04 ±  3%      +0.0        0.05 ±  5%  mpstat.cpu.all.soft%
    355521 ± 10%    +125.7%     802580 ± 13%  numa-numastat.node0.local_node
    414546 ±  3%    +103.3%     842713 ± 10%  numa-numastat.node0.numa_hit
    433483 ±  8%     +69.5%     734853 ±  8%  numa-numastat.node1.local_node
    468152 ±  3%     +68.4%     788406 ±  5%  numa-numastat.node1.numa_hit
     50.00            -4.0%      48.00        vmstat.cpu.id
   1437645           +80.4%    2592998 ±  2%  vmstat.memory.cache
      1541 ±  2%     +31.9%       2033 ±  6%  vmstat.system.cs
    210877            -1.5%     207702        vmstat.system.in
      4268 ±199%   +4480.4%     195504 ± 34%  numa-vmstat.node0.nr_active_anon
    155565 ±  6%    +121.0%     343743 ± 20%  numa-vmstat.node0.nr_file_pages
      6247 ±156%   +3062.7%     197602 ± 34%  numa-vmstat.node0.nr_shmem
      4268 ±199%   +4480.4%     195504 ± 34%  numa-vmstat.node0.nr_zone_active_anon
    986493 ±  6%     +22.0%    1203781 ±  5%  numa-vmstat.node1.numa_hit
     17266 ±197%   +4437.2%     783423 ± 34%  numa-meminfo.node0.Active
     17135 ±199%   +4471.0%     783285 ± 34%  numa-meminfo.node0.Active(anon)
    622314 ±  6%    +121.1%    1376242 ± 20%  numa-meminfo.node0.FilePages
   1371573 ±  5%     +64.9%    2261075 ± 12%  numa-meminfo.node0.MemUsed
     25042 ±156%   +3061.4%     791680 ± 34%  numa-meminfo.node0.Shmem
   1315417 ±  6%     +51.6%    1994477 ± 11%  numa-meminfo.node1.MemUsed
     14029           -35.9%       8998        slabinfo.proc_inode_cache.active_objs
    291.83 ±  2%     -36.0%     186.83        slabinfo.proc_inode_cache.active_slabs
     14029           -35.9%       8998        slabinfo.proc_inode_cache.num_objs
    291.83 ±  2%     -36.0%     186.83        slabinfo.proc_inode_cache.num_slabs
     22345           +20.1%      26842        slabinfo.radix_tree_node.active_objs
     22345           +20.1%      26842        slabinfo.radix_tree_node.num_objs
    133968 ±  3%    +863.6%    1290908 ±  4%  meminfo.Active
    133763 ±  3%    +864.9%    1290706 ±  4%  meminfo.Active(anon)
   1337910           +86.2%    2491375 ±  2%  meminfo.Cached
    976696          +124.0%    2187785 ±  2%  meminfo.Committed_AS
   2687139           +58.3%    4254779        meminfo.Memused
    155236 ±  2%    +743.0%    1308697 ±  4%  meminfo.Shmem
   3151136           +35.0%    4254804        meminfo.max_used_kB
   3694634 ± 11%     +26.2%    4661465 ±  2%  sched_debug.cfs_rq:/.min_vruntime.avg
   6939032 ± 10%     +21.0%    8395000        sched_debug.cfs_rq:/.min_vruntime.max
    238.72 ±  8%     +11.0%     265.08 ±  3%  sched_debug.cfs_rq:/.util_est_enqueued.avg
    568.46           +72.0%     977.61        sched_debug.cfs_rq:/.util_est_enqueued.max
      5948 ±  5%      +8.3%       6445        sched_debug.cpu.curr->pid.max
      0.00 ± 15%     -19.9%       0.00 ±  2%  sched_debug.cpu.next_balance.stddev
      3615 ±  7%     +26.0%       4554 ±  3%  sched_debug.cpu.nr_switches.avg
     20808 ± 10%     +89.0%      39319 ± 22%  sched_debug.cpu.nr_switches.max
      2984 ±  4%     +66.8%       4979 ± 16%  sched_debug.cpu.nr_switches.stddev
     33428 ±  3%    +864.5%     322403 ±  4%  proc-vmstat.nr_active_anon
     65499            +7.5%      70395        proc-vmstat.nr_anon_pages
    334474           +86.1%     622571 ±  2%  proc-vmstat.nr_file_pages
     70821            +5.6%      74753        proc-vmstat.nr_inactive_anon
     10465            -8.1%       9613        proc-vmstat.nr_mapped
      1546            +7.9%       1669        proc-vmstat.nr_page_table_pages
     38804 ±  2%    +742.4%     326900 ±  4%  proc-vmstat.nr_shmem
     24856            -1.8%      24416        proc-vmstat.nr_slab_reclaimable
     48754            +1.7%      49566        proc-vmstat.nr_slab_unreclaimable
     33428 ±  3%    +864.5%     322403 ±  4%  proc-vmstat.nr_zone_active_anon
     70821            +5.6%      74753        proc-vmstat.nr_zone_inactive_anon
    913561           +81.6%    1659479 ±  4%  proc-vmstat.numa_hit
    819833           +91.0%    1565754 ±  4%  proc-vmstat.numa_local
     79749 ± 14%     -52.6%      37805 ± 28%  proc-vmstat.numa_pte_updates
     50878 ±  3%     -14.8%      43362 ±  3%  proc-vmstat.pgactivate
    970905           +93.9%    1882794 ±  6%  proc-vmstat.pgalloc_normal
    945118            -7.8%     871445        proc-vmstat.pgfault
    899545           +16.9%    1051466 ± 10%  proc-vmstat.pgfree
[ perf-profile.calltrace and perf-profile.children entries all drop -100% to 0.00: the __libc_fcntl64 syscall path (70.1% ± 7% of cycles at the top, via entry_SYSCALL_64_after_hwframe, do_syscall_64, __x64_sys_fcntl, do_fcntl, fcntl_setlk, do_lock_file_wait, posix_lock_inode, locks_alloc_lock and kmem_cache_alloc) and the idle path (secondary_startup_64_no_verify / start_secondary / do_idle / cpuidle_enter_state / intel_idle, 29.25% ± 18%) ]
[ remaining perf-profile.children entries (syscall_exit_to_user_mode 18.16% ± 7%, do_lock_file_wait, posix_lock_inode, locks_alloc_lock, syscall_return_via_sysret, kmem_cache_alloc, __entry_text_start) and perf-profile.self entries (intel_idle 29.25% ± 18%, syscall_exit_to_user_mode, syscall_return_via_sysret, __entry_text_start) likewise all drop to 0.00 ]
      0.05 ± 11%    +680.5%       0.40 ±  6%  perf-stat.i.MPKI
 2.701e+10            -8.5%  2.472e+10        perf-stat.i.branch-instructions
 1.369e+08            -8.4%  1.253e+08        perf-stat.i.branch-misses
    649310 ±  2%   +1654.2%   11390355 ± 27%  perf-stat.i.cache-misses
   5660579 ± 14%    +736.8%   47366764 ±  6%  perf-stat.i.cache-references
      1510 ±  2%     +32.6%       2002 ±  6%  perf-stat.i.context-switches
      1.11           +13.7%       1.26        perf-stat.i.cpi
 1.453e+11            +4.1%  1.513e+11        perf-stat.i.cpu-cycles
    131.39            -4.0%     126.20 ±  4%  perf-stat.i.cpu-migrations
    256155 ±  3%     -92.3%      19600 ± 62%  perf-stat.i.cycles-between-cache-misses
      0.17            -0.0        0.17        perf-stat.i.dTLB-load-miss-rate%
  65823761           -10.7%   58748892        perf-stat.i.dTLB-load-misses
 3.856e+10            -9.2%    3.5e+10        perf-stat.i.dTLB-loads
 2.428e+10            -9.3%  2.204e+10        perf-stat.i.dTLB-stores
  66446485           -10.7%   59319204        perf-stat.i.iTLB-load-misses
 1.313e+11            -8.5%  1.202e+11        perf-stat.i.instructions
      1991            +3.7%       2064        perf-stat.i.instructions-per-iTLB-miss
      0.90           -12.0%       0.80        perf-stat.i.ipc
      0.95           -75.6%       0.23 ± 10%  perf-stat.i.major-faults
      1.40            +4.1%       1.45        perf-stat.i.metric.GHz
    863.89            -9.0%     786.56        perf-stat.i.metric.M/sec
      3019            -7.6%       2790        perf-stat.i.minor-faults
     90.57            -4.7       85.84        perf-stat.i.node-load-miss-rate%
    134354 ±  3%   +1331.3%    1923003 ± 25%  perf-stat.i.node-load-misses
     24756 ± 10%   +1213.4%     325157 ± 27%  perf-stat.i.node-loads
     25459 ±  3%   +2771.4%     731057 ± 33%  perf-stat.i.node-store-misses
      8577 ±  4%    +152.0%      21618 ±  4%  perf-stat.i.node-stores
      3020            -7.6%       2790        perf-stat.i.page-faults
      0.04 ± 14%    +811.5%       0.39 ±  6%  perf-stat.overall.MPKI
      1.11           +13.8%       1.26        perf-stat.overall.cpi
    222841 ±  2%     -93.3%      14922 ± 41%  perf-stat.overall.cycles-between-cache-misses
      0.17            -0.0        0.17        perf-stat.overall.dTLB-load-miss-rate%
      0.00            +0.0        0.00        perf-stat.overall.dTLB-store-miss-rate%
      1976            +2.5%       2026        perf-stat.overall.instructions-per-iTLB-miss
      0.90           -12.1%       0.79        perf-stat.overall.ipc
     74.63           +22.0       96.60        perf-stat.overall.node-store-miss-rate%
    599048            +3.2%     618321        perf-stat.overall.path-length
 2.691e+10            -8.5%  2.464e+10        perf-stat.ps.branch-instructions
 1.365e+08            -8.4%  1.249e+08        perf-stat.ps.branch-misses
    649944 ±  2%   +1646.7%   11352424 ± 27%  perf-stat.ps.cache-misses
   5658865 ± 14%    +734.2%   47207353 ±  6%  perf-stat.ps.cache-references
      1505 ±  2%     +32.6%       1996 ±  6%  perf-stat.ps.context-switches
 1.447e+11            +4.1%  1.508e+11        perf-stat.ps.cpu-cycles
    131.27            -4.2%     125.82 ±  4%  perf-stat.ps.cpu-migrations
  65589802           -10.7%   58550657        perf-stat.ps.dTLB-load-misses
 3.842e+10            -9.2%  3.488e+10        perf-stat.ps.dTLB-loads
  2.42e+10            -9.2%  2.196e+10        perf-stat.ps.dTLB-stores
  66209432           -10.7%   59119327        perf-stat.ps.iTLB-load-misses
 1.309e+11            -8.5%  1.198e+11        perf-stat.ps.instructions
      0.95 ±  2%     -75.6%       0.23 ± 10%  perf-stat.ps.major-faults
      3012            -7.7%       2781        perf-stat.ps.minor-faults
    134107 ±  3%   +1329.2%    1916630 ± 25%  perf-stat.ps.node-load-misses
     25006 ± 10%   +1196.0%     324084 ± 27%  perf-stat.ps.node-loads
     25409 ±  3%   +2767.5%     728614 ± 33%  perf-stat.ps.node-store-misses
      8634 ±  4%    +149.7%      21560 ±  4%  perf-stat.ps.node-stores
      3013            -7.7%       2781        perf-stat.ps.page-faults
 3.952e+13            -8.4%  3.621e+13        perf-stat.total.instructions
[ per-CPU softirqs.CPUn.RCU counts rise broadly across nearly all 104 CPUs, +38.9% to +548.8% (e.g. CPU46 +535.4% to 114657 ±164%, CPU50 +548.8% to 110674 ±161%); softirqs.CPU4.SCHED falls -66.0%, while CPU56.SCHED rises +130.7% and CPU83.SCHED +42.8% ]
   1956393 ± 10%    +105.5%    4019733 ± 21%  softirqs.RCU
     35299 ±  8%     +40.1%      49468 ±  5%  softirqs.TIMER
[ perf-sched.sch_delay.avg.ms and perf-sched.sch_delay.max.ms entries (0.00-6.62 ms across pause, read, futex, epoll, poll, kthread and exit paths) all drop -100.0% to 0.00 ]
      0.01 ±  7%   -100.0%       0.00        perf-sched.total_sch_delay.average.ms
      6.62 ± 35%   -100.0%       0.00        perf-sched.total_sch_delay.max.ms
    193.28 ±  4%   -100.0%       0.00        perf-sched.total_wait_and_delay.average.ms
      8810 ±  4%   -100.0%       0.00        perf-sched.total_wait_and_delay.count.ms
      8156 ±  6%   -100.0%       0.00        perf-sched.total_wait_and_delay.max.ms
    193.27 ±  4%   -100.0%       0.00        perf-sched.total_wait_time.average.ms
      8156 ±  6%   -100.0%       0.00        perf-sched.total_wait_time.max.ms
    899.53         -100.0%       0.00        perf-sched.wait_and_delay.avg.ms.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown]
      1890         -100.0%       0.00        perf-sched.wait_and_delay.avg.ms.devkmsg_read.vfs_read.ksys_read.do_syscall_64
    802.45 ±  3%   -100.0%       0.00        perf-sched.wait_and_delay.avg.ms.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep.do_syscall_64
      1890         -100.0%       0.00        perf-sched.wait_and_delay.avg.ms.do_syslog.part.0.kmsg_read.vfs_read
    272.17 ±  4%   -100.0%       0.00        perf-sched.wait_and_delay.avg.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64
      1.41         -100.0%       0.00        perf-sched.wait_and_delay.avg.ms.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64
      0.05 ± 46%   -100.0%       0.00        perf-sched.wait_and_delay.avg.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_apic_timer_interrupt.[unknown]
      5.93 ±222%   -100.0%       0.00        perf-sched.wait_and_delay.avg.ms.exit_to_user_mode_prepare.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe.[unknown]
     63.55
-100.0% 0.00 perf-sched.wait_and_delay.avg.ms.pipe_read.new_sync_read.vfs_read.ksys_read 0.03 ± 5% -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.locks_alloc_lock.fcntl_setlk 0.04 ± 34% -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.locks_alloc_lock.posix_lock_inode 8.11 ± 7% -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.preempt_schedule_common.__cond_resched.stop_one_cpu.affine_move_task.__set_cpus_allowed_ptr 2.62 ± 8% -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.rcu_gp_kthread.kthread.ret_from_fork 564.94 ± 20% -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait 398.95 ± 19% -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_sys_poll 478.70 -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.schedule_timeout.kcompactd.kthread.ret_from_fork 7.63 ± 24% -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.schedule_timeout.rcu_gp_kthread.kthread.ret_from_fork 678.55 ± 2% -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.smpboot_thread_fn.kthread.ret_from_fork 0.01 ± 6% -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.wait_for_partner.fifo_open.do_dentry_open.do_open.isra 639.28 ± 5% -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.worker_thread.kthread.ret_from_fork 10.00 -100.0% 0.00 perf-sched.wait_and_delay.count.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown] 4.00 -100.0% 0.00 perf-sched.wait_and_delay.count.devkmsg_read.vfs_read.ksys_read.do_syscall_64 22.50 ± 3% -100.0% 0.00 perf-sched.wait_and_delay.count.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep.do_syscall_64 4.00 -100.0% 0.00 perf-sched.wait_and_delay.count.do_syslog.part.0.kmsg_read.vfs_read 247.00 -100.0% 0.00 perf-sched.wait_and_delay.count.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64 246.00 -100.0% 0.00 
perf-sched.wait_and_delay.count.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64 132.33 ± 49% -100.0% 0.00 perf-sched.wait_and_delay.count.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_apic_timer_interrupt.[unknown] 281.50 ± 22% -100.0% 0.00 perf-sched.wait_and_delay.count.exit_to_user_mode_prepare.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe.[unknown] 2516 -100.0% 0.00 perf-sched.wait_and_delay.count.pipe_read.new_sync_read.vfs_read.ksys_read 173.67 ± 7% -100.0% 0.00 perf-sched.wait_and_delay.count.preempt_schedule_common.__cond_resched.kmem_cache_alloc.locks_alloc_lock.fcntl_setlk 128.83 ± 5% -100.0% 0.00 perf-sched.wait_and_delay.count.preempt_schedule_common.__cond_resched.kmem_cache_alloc.locks_alloc_lock.posix_lock_inode 1248 -100.0% 0.00 perf-sched.wait_and_delay.count.preempt_schedule_common.__cond_resched.stop_one_cpu.affine_move_task.__set_cpus_allowed_ptr 147.50 ± 29% -100.0% 0.00 perf-sched.wait_and_delay.count.rcu_gp_kthread.kthread.ret_from_fork 22.83 ± 69% -100.0% 0.00 perf-sched.wait_and_delay.count.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait 70.17 ± 21% -100.0% 0.00 perf-sched.wait_and_delay.count.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_sys_poll 39.83 -100.0% 0.00 perf-sched.wait_and_delay.count.schedule_timeout.kcompactd.kthread.ret_from_fork 1349 ± 26% -100.0% 0.00 perf-sched.wait_and_delay.count.schedule_timeout.rcu_gp_kthread.kthread.ret_from_fork 1464 -100.0% 0.00 perf-sched.wait_and_delay.count.smpboot_thread_fn.kthread.ret_from_fork 72.00 -100.0% 0.00 perf-sched.wait_and_delay.count.wait_for_partner.fifo_open.do_dentry_open.do_open.isra 548.33 ± 5% -100.0% 0.00 perf-sched.wait_and_delay.count.worker_thread.kthread.ret_from_fork 999.50 -100.0% 0.00 perf-sched.wait_and_delay.max.ms.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown] 7494 -100.0% 0.00 perf-sched.wait_and_delay.max.ms.devkmsg_read.vfs_read.ksys_read.do_syscall_64 1000 
-100.0% 0.00 perf-sched.wait_and_delay.max.ms.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep.do_syscall_64 7494 -100.0% 0.00 perf-sched.wait_and_delay.max.ms.do_syslog.part.0.kmsg_read.vfs_read 3388 ± 27% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64 33.48 ± 2% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64 1.05 ±129% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_apic_timer_interrupt.[unknown] 1250 ±223% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.exit_to_user_mode_prepare.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe.[unknown] 6424 ± 37% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.pipe_read.new_sync_read.vfs_read.ksys_read 0.23 ± 90% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.locks_alloc_lock.fcntl_setlk 0.86 ±175% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.locks_alloc_lock.posix_lock_inode 1324 ± 36% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.preempt_schedule_common.__cond_resched.stop_one_cpu.affine_move_task.__set_cpus_allowed_ptr 4.99 -100.0% 0.00 perf-sched.wait_and_delay.max.ms.rcu_gp_kthread.kthread.ret_from_fork 7080 -100.0% 0.00 perf-sched.wait_and_delay.max.ms.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait 5476 -100.0% 0.00 perf-sched.wait_and_delay.max.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_sys_poll 504.67 -100.0% 0.00 perf-sched.wait_and_delay.max.ms.schedule_timeout.kcompactd.kthread.ret_from_fork 251.67 ± 37% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.schedule_timeout.rcu_gp_kthread.kthread.ret_from_fork 2192 ± 56% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.smpboot_thread_fn.kthread.ret_from_fork 0.03 ± 58% -100.0% 0.00 
perf-sched.wait_and_delay.max.ms.wait_for_partner.fifo_open.do_dentry_open.do_open.isra 8065 ± 7% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.worker_thread.kthread.ret_from_fork 899.52 -100.0% 0.00 perf-sched.wait_time.avg.ms.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown] 1890 -100.0% 0.00 perf-sched.wait_time.avg.ms.devkmsg_read.vfs_read.ksys_read.do_syscall_64 802.45 ± 3% -100.0% 0.00 perf-sched.wait_time.avg.ms.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep.do_syscall_64 1890 -100.0% 0.00 perf-sched.wait_time.avg.ms.do_syslog.part.0.kmsg_read.vfs_read 272.16 ± 4% -100.0% 0.00 perf-sched.wait_time.avg.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64 1.40 -100.0% 0.00 perf-sched.wait_time.avg.ms.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64 0.05 ± 46% -100.0% 0.00 perf-sched.wait_time.avg.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_apic_timer_interrupt.[unknown] 0.03 ± 16% -100.0% 0.00 perf-sched.wait_time.avg.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_reschedule_ipi.[unknown] 5.93 ±222% -100.0% 0.00 perf-sched.wait_time.avg.ms.exit_to_user_mode_prepare.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe.[unknown] 32.13 ± 25% -100.0% 0.00 perf-sched.wait_time.avg.ms.futex_wait_queue_me.futex_wait.do_futex.__x64_sys_futex 63.54 -100.0% 0.00 perf-sched.wait_time.avg.ms.pipe_read.new_sync_read.vfs_read.ksys_read 2484 ±111% -100.0% 0.00 perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.generic_perform_write.__generic_file_write_iter.generic_file_write_iter 0.03 ± 5% -100.0% 0.00 perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.locks_alloc_lock.fcntl_setlk 0.04 ± 34% -100.0% 0.00 perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.locks_alloc_lock.posix_lock_inode 0.03 ± 6% -100.0% 0.00 
perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.posix_lock_inode.do_lock_file_wait.fcntl_setlk 8.11 ± 7% -100.0% 0.00 perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.stop_one_cpu.affine_move_task.__set_cpus_allowed_ptr 2.62 ± 8% -100.0% 0.00 perf-sched.wait_time.avg.ms.rcu_gp_kthread.kthread.ret_from_fork 564.93 ± 20% -100.0% 0.00 perf-sched.wait_time.avg.ms.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait 398.95 ± 19% -100.0% 0.00 perf-sched.wait_time.avg.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_sys_poll 478.69 -100.0% 0.00 perf-sched.wait_time.avg.ms.schedule_timeout.kcompactd.kthread.ret_from_fork 0.01 ± 41% -100.0% 0.00 perf-sched.wait_time.avg.ms.schedule_timeout.khugepaged.kthread.ret_from_fork 7.62 ± 24% -100.0% 0.00 perf-sched.wait_time.avg.ms.schedule_timeout.rcu_gp_kthread.kthread.ret_from_fork 678.54 ± 2% -100.0% 0.00 perf-sched.wait_time.avg.ms.smpboot_thread_fn.kthread.ret_from_fork 639.26 ± 5% -100.0% 0.00 perf-sched.wait_time.avg.ms.worker_thread.kthread.ret_from_fork 999.50 -100.0% 0.00 perf-sched.wait_time.max.ms.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown] 7494 -100.0% 0.00 perf-sched.wait_time.max.ms.devkmsg_read.vfs_read.ksys_read.do_syscall_64 1000 -100.0% 0.00 perf-sched.wait_time.max.ms.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep.do_syscall_64 7494 -100.0% 0.00 perf-sched.wait_time.max.ms.do_syslog.part.0.kmsg_read.vfs_read 3388 ± 27% -100.0% 0.00 perf-sched.wait_time.max.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64 33.47 ± 2% -100.0% 0.00 perf-sched.wait_time.max.ms.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64 1.05 ±129% -100.0% 0.00 perf-sched.wait_time.max.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_apic_timer_interrupt.[unknown] 0.05 ± 6% -100.0% 0.00 
perf-sched.wait_time.max.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_reschedule_ipi.[unknown] 1250 ±223% -100.0% 0.00 perf-sched.wait_time.max.ms.exit_to_user_mode_prepare.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe.[unknown] 64.26 ± 25% -100.0% 0.00 perf-sched.wait_time.max.ms.futex_wait_queue_me.futex_wait.do_futex.__x64_sys_futex 6424 ± 37% -100.0% 0.00 perf-sched.wait_time.max.ms.pipe_read.new_sync_read.vfs_read.ksys_read 3723 ± 99% -100.0% 0.00 perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.generic_perform_write.__generic_file_write_iter.generic_file_write_iter 0.23 ± 90% -100.0% 0.00 perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.locks_alloc_lock.fcntl_setlk 0.86 ±175% -100.0% 0.00 perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.locks_alloc_lock.posix_lock_inode 0.07 ± 49% -100.0% 0.00 perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.posix_lock_inode.do_lock_file_wait.fcntl_setlk 1324 ± 36% -100.0% 0.00 perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.stop_one_cpu.affine_move_task.__set_cpus_allowed_ptr 4.99 -100.0% 0.00 perf-sched.wait_time.max.ms.rcu_gp_kthread.kthread.ret_from_fork 7080 -100.0% 0.00 perf-sched.wait_time.max.ms.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait 5476 -100.0% 0.00 perf-sched.wait_time.max.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_sys_poll 504.66 -100.0% 0.00 perf-sched.wait_time.max.ms.schedule_timeout.kcompactd.kthread.ret_from_fork 0.01 ± 41% -100.0% 0.00 perf-sched.wait_time.max.ms.schedule_timeout.khugepaged.kthread.ret_from_fork 251.66 ± 37% -100.0% 0.00 perf-sched.wait_time.max.ms.schedule_timeout.rcu_gp_kthread.kthread.ret_from_fork 2192 ± 56% -100.0% 0.00 perf-sched.wait_time.max.ms.smpboot_thread_fn.kthread.ret_from_fork 8065 ± 7% -100.0% 0.00 
perf-sched.wait_time.max.ms.worker_thread.kthread.ret_from_fork 4276 ± 35% -100.0% 0.00 interrupts.CPU0.NMI:Non-maskable_interrupts 4276 ± 35% -100.0% 0.00 interrupts.CPU0.PMI:Performance_monitoring_interrupts 4855 ± 43% -100.0% 0.00 interrupts.CPU1.NMI:Non-maskable_interrupts 4855 ± 43% -100.0% 0.00 interrupts.CPU1.PMI:Performance_monitoring_interrupts 6146 ± 28% -100.0% 0.00 interrupts.CPU10.NMI:Non-maskable_interrupts 6146 ± 28% -100.0% 0.00 interrupts.CPU10.PMI:Performance_monitoring_interrupts 3135 ± 12% -100.0% 0.00 interrupts.CPU100.NMI:Non-maskable_interrupts 3135 ± 12% -100.0% 0.00 interrupts.CPU100.PMI:Performance_monitoring_interrupts 5181 ± 42% -100.0% 0.00 interrupts.CPU101.NMI:Non-maskable_interrupts 5181 ± 42% -100.0% 0.00 interrupts.CPU101.PMI:Performance_monitoring_interrupts 3803 ± 34% -100.0% 0.00 interrupts.CPU102.NMI:Non-maskable_interrupts 3803 ± 34% -100.0% 0.00 interrupts.CPU102.PMI:Performance_monitoring_interrupts 4290 ± 19% -100.0% 0.00 interrupts.CPU103.NMI:Non-maskable_interrupts 4290 ± 19% -100.0% 0.00 interrupts.CPU103.PMI:Performance_monitoring_interrupts 5109 ± 39% -100.0% 0.00 interrupts.CPU11.NMI:Non-maskable_interrupts 5109 ± 39% -100.0% 0.00 interrupts.CPU11.PMI:Performance_monitoring_interrupts 3328 ± 64% -100.0% 0.00 interrupts.CPU12.NMI:Non-maskable_interrupts 3328 ± 64% -100.0% 0.00 interrupts.CPU12.PMI:Performance_monitoring_interrupts 701.50 ± 22% -21.5% 550.83 ± 4% interrupts.CPU13.CAL:Function_call_interrupts 6967 ± 26% -100.0% 0.00 interrupts.CPU13.NMI:Non-maskable_interrupts 6967 ± 26% -100.0% 0.00 interrupts.CPU13.PMI:Performance_monitoring_interrupts 3669 ± 54% -100.0% 0.00 interrupts.CPU14.NMI:Non-maskable_interrupts 3669 ± 54% -100.0% 0.00 interrupts.CPU14.PMI:Performance_monitoring_interrupts 6310 ± 27% -100.0% 0.00 interrupts.CPU15.NMI:Non-maskable_interrupts 6310 ± 27% -100.0% 0.00 interrupts.CPU15.PMI:Performance_monitoring_interrupts 3101 ± 41% -100.0% 0.00 interrupts.CPU16.NMI:Non-maskable_interrupts 3101 ± 
41% -100.0% 0.00 interrupts.CPU16.PMI:Performance_monitoring_interrupts 99.33 ± 76% +114.1% 212.67 ± 38% interrupts.CPU16.RES:Rescheduling_interrupts 3094 ± 41% -100.0% 0.00 interrupts.CPU17.NMI:Non-maskable_interrupts 3094 ± 41% -100.0% 0.00 interrupts.CPU17.PMI:Performance_monitoring_interrupts 4029 ± 67% -100.0% 0.00 interrupts.CPU18.NMI:Non-maskable_interrupts 4029 ± 67% -100.0% 0.00 interrupts.CPU18.PMI:Performance_monitoring_interrupts 4074 ± 46% -100.0% 0.00 interrupts.CPU19.NMI:Non-maskable_interrupts 4074 ± 46% -100.0% 0.00 interrupts.CPU19.PMI:Performance_monitoring_interrupts 5435 ± 41% -100.0% 0.00 interrupts.CPU2.NMI:Non-maskable_interrupts 5435 ± 41% -100.0% 0.00 interrupts.CPU2.PMI:Performance_monitoring_interrupts 5057 ± 32% -100.0% 0.00 interrupts.CPU20.NMI:Non-maskable_interrupts 5057 ± 32% -100.0% 0.00 interrupts.CPU20.PMI:Performance_monitoring_interrupts 4015 ± 42% -100.0% 0.00 interrupts.CPU21.NMI:Non-maskable_interrupts 4015 ± 42% -100.0% 0.00 interrupts.CPU21.PMI:Performance_monitoring_interrupts 5160 ± 47% -100.0% 0.00 interrupts.CPU22.NMI:Non-maskable_interrupts 5160 ± 47% -100.0% 0.00 interrupts.CPU22.PMI:Performance_monitoring_interrupts 846.83 ± 18% -26.5% 622.17 ± 7% interrupts.CPU23.CAL:Function_call_interrupts 4581 ± 42% -100.0% 0.00 interrupts.CPU23.NMI:Non-maskable_interrupts 4581 ± 42% -100.0% 0.00 interrupts.CPU23.PMI:Performance_monitoring_interrupts 5973 ± 32% -100.0% 0.00 interrupts.CPU24.NMI:Non-maskable_interrupts 5973 ± 32% -100.0% 0.00 interrupts.CPU24.PMI:Performance_monitoring_interrupts 5668 ± 37% -100.0% 0.00 interrupts.CPU25.NMI:Non-maskable_interrupts 5668 ± 37% -100.0% 0.00 interrupts.CPU25.PMI:Performance_monitoring_interrupts 5983 ± 45% -100.0% 0.00 interrupts.CPU26.NMI:Non-maskable_interrupts 5983 ± 45% -100.0% 0.00 interrupts.CPU26.PMI:Performance_monitoring_interrupts 5656 ± 39% -100.0% 0.00 interrupts.CPU27.NMI:Non-maskable_interrupts 5656 ± 39% -100.0% 0.00 
interrupts.CPU27.PMI:Performance_monitoring_interrupts 5334 ± 50% -100.0% 0.00 interrupts.CPU28.NMI:Non-maskable_interrupts 5334 ± 50% -100.0% 0.00 interrupts.CPU28.PMI:Performance_monitoring_interrupts 5222 ± 36% -100.0% 0.00 interrupts.CPU29.NMI:Non-maskable_interrupts 5222 ± 36% -100.0% 0.00 interrupts.CPU29.PMI:Performance_monitoring_interrupts 6220 ± 28% -100.0% 0.00 interrupts.CPU3.NMI:Non-maskable_interrupts 6220 ± 28% -100.0% 0.00 interrupts.CPU3.PMI:Performance_monitoring_interrupts 6472 ± 31% -100.0% 0.00 interrupts.CPU30.NMI:Non-maskable_interrupts 6472 ± 31% -100.0% 0.00 interrupts.CPU30.PMI:Performance_monitoring_interrupts 5196 ± 41% -100.0% 0.83 ±223% interrupts.CPU31.NMI:Non-maskable_interrupts 5196 ± 41% -100.0% 0.83 ±223% interrupts.CPU31.PMI:Performance_monitoring_interrupts 690.00 ± 18% -18.5% 562.50 ± 4% interrupts.CPU32.CAL:Function_call_interrupts 5348 ± 49% -100.0% 0.00 interrupts.CPU32.NMI:Non-maskable_interrupts 5348 ± 49% -100.0% 0.00 interrupts.CPU32.PMI:Performance_monitoring_interrupts 4394 ± 36% -100.0% 0.00 interrupts.CPU33.NMI:Non-maskable_interrupts 4394 ± 36% -100.0% 0.00 interrupts.CPU33.PMI:Performance_monitoring_interrupts 7239 ± 20% -100.0% 0.00 interrupts.CPU34.NMI:Non-maskable_interrupts 7239 ± 20% -100.0% 0.00 interrupts.CPU34.PMI:Performance_monitoring_interrupts 5561 ± 42% -100.0% 0.00 interrupts.CPU35.NMI:Non-maskable_interrupts 5561 ± 42% -100.0% 0.00 interrupts.CPU35.PMI:Performance_monitoring_interrupts 5286 ± 35% -100.0% 0.00 interrupts.CPU36.NMI:Non-maskable_interrupts 5286 ± 35% -100.0% 0.00 interrupts.CPU36.PMI:Performance_monitoring_interrupts 6401 ± 33% -100.0% 0.00 interrupts.CPU37.NMI:Non-maskable_interrupts 6401 ± 33% -100.0% 0.00 interrupts.CPU37.PMI:Performance_monitoring_interrupts 5177 ± 41% -100.0% 0.00 interrupts.CPU38.NMI:Non-maskable_interrupts 5177 ± 41% -100.0% 0.00 interrupts.CPU38.PMI:Performance_monitoring_interrupts 5604 ± 46% -100.0% 0.00 interrupts.CPU39.NMI:Non-maskable_interrupts 5604 ± 46% 
-100.0% 0.00 interrupts.CPU39.PMI:Performance_monitoring_interrupts 5734 ± 38% -100.0% 0.00 interrupts.CPU4.NMI:Non-maskable_interrupts 5734 ± 38% -100.0% 0.00 interrupts.CPU4.PMI:Performance_monitoring_interrupts 129.67 ± 48% +103.1% 263.33 ± 17% interrupts.CPU4.RES:Rescheduling_interrupts 4673 ± 40% -100.0% 0.00 interrupts.CPU40.NMI:Non-maskable_interrupts 4673 ± 40% -100.0% 0.00 interrupts.CPU40.PMI:Performance_monitoring_interrupts 6165 ± 41% -100.0% 0.00 interrupts.CPU41.NMI:Non-maskable_interrupts 6165 ± 41% -100.0% 0.00 interrupts.CPU41.PMI:Performance_monitoring_interrupts 4070 ± 68% -100.0% 0.00 interrupts.CPU42.NMI:Non-maskable_interrupts 4070 ± 68% -100.0% 0.00 interrupts.CPU42.PMI:Performance_monitoring_interrupts 6821 ± 35% -100.0% 0.00 interrupts.CPU43.NMI:Non-maskable_interrupts 6821 ± 35% -100.0% 0.00 interrupts.CPU43.PMI:Performance_monitoring_interrupts 5695 ± 39% -100.0% 0.00 interrupts.CPU44.NMI:Non-maskable_interrupts 5695 ± 39% -100.0% 0.00 interrupts.CPU44.PMI:Performance_monitoring_interrupts 6808 ± 24% -100.0% 0.00 interrupts.CPU45.NMI:Non-maskable_interrupts 6808 ± 24% -100.0% 0.00 interrupts.CPU45.PMI:Performance_monitoring_interrupts 6491 ± 24% -100.0% 0.00 interrupts.CPU46.NMI:Non-maskable_interrupts 6491 ± 24% -100.0% 0.00 interrupts.CPU46.PMI:Performance_monitoring_interrupts 6851 ± 26% -100.0% 0.00 interrupts.CPU47.NMI:Non-maskable_interrupts 6851 ± 26% -100.0% 0.00 interrupts.CPU47.PMI:Performance_monitoring_interrupts 6355 ± 30% -100.0% 0.00 interrupts.CPU48.NMI:Non-maskable_interrupts 6355 ± 30% -100.0% 0.00 interrupts.CPU48.PMI:Performance_monitoring_interrupts 5183 ± 48% -100.0% 0.00 interrupts.CPU49.NMI:Non-maskable_interrupts 5183 ± 48% -100.0% 0.00 interrupts.CPU49.PMI:Performance_monitoring_interrupts 5847 ± 34% -100.0% 0.00 interrupts.CPU5.NMI:Non-maskable_interrupts 5847 ± 34% -100.0% 0.00 interrupts.CPU5.PMI:Performance_monitoring_interrupts 6184 ± 29% -100.0% 0.00 interrupts.CPU50.NMI:Non-maskable_interrupts 6184 ± 29% 
-100.0% 0.00 interrupts.CPU50.PMI:Performance_monitoring_interrupts 6308 ± 23% -100.0% 0.00 interrupts.CPU51.NMI:Non-maskable_interrupts 6308 ± 23% -100.0% 0.00 interrupts.CPU51.PMI:Performance_monitoring_interrupts 6860 ± 21% -100.0% 0.00 interrupts.CPU52.NMI:Non-maskable_interrupts 6860 ± 21% -100.0% 0.00 interrupts.CPU52.PMI:Performance_monitoring_interrupts 4740 ± 52% -100.0% 0.00 interrupts.CPU53.NMI:Non-maskable_interrupts 4740 ± 52% -100.0% 0.00 interrupts.CPU53.PMI:Performance_monitoring_interrupts 4564 ± 53% -100.0% 0.00 interrupts.CPU54.NMI:Non-maskable_interrupts 4564 ± 53% -100.0% 0.00 interrupts.CPU54.PMI:Performance_monitoring_interrupts 3780 ± 54% -100.0% 0.00 interrupts.CPU55.NMI:Non-maskable_interrupts 3780 ± 54% -100.0% 0.00 interrupts.CPU55.PMI:Performance_monitoring_interrupts 4332 ± 42% -100.0% 0.00 interrupts.CPU56.NMI:Non-maskable_interrupts 4332 ± 42% -100.0% 0.00 interrupts.CPU56.PMI:Performance_monitoring_interrupts 212.50 ± 23% -58.7% 87.83 ± 73% interrupts.CPU56.RES:Rescheduling_interrupts 3919 ± 54% -100.0% 0.00 interrupts.CPU57.NMI:Non-maskable_interrupts 3919 ± 54% -100.0% 0.00 interrupts.CPU57.PMI:Performance_monitoring_interrupts 3056 ± 42% -100.0% 0.00 interrupts.CPU58.NMI:Non-maskable_interrupts 3056 ± 42% -100.0% 0.00 interrupts.CPU58.PMI:Performance_monitoring_interrupts 3859 ± 46% -100.0% 0.00 interrupts.CPU59.NMI:Non-maskable_interrupts 3859 ± 46% -100.0% 0.00 interrupts.CPU59.PMI:Performance_monitoring_interrupts 39.50 ± 82% +210.1% 122.50 ± 41% interrupts.CPU59.RES:Rescheduling_interrupts 6107 ± 28% -100.0% 0.00 interrupts.CPU6.NMI:Non-maskable_interrupts 6107 ± 28% -100.0% 0.00 interrupts.CPU6.PMI:Performance_monitoring_interrupts 4474 ± 50% -100.0% 0.00 interrupts.CPU60.NMI:Non-maskable_interrupts 4474 ± 50% -100.0% 0.00 interrupts.CPU60.PMI:Performance_monitoring_interrupts 4384 ± 44% -100.0% 0.00 interrupts.CPU61.NMI:Non-maskable_interrupts 4384 ± 44% -100.0% 0.00 interrupts.CPU61.PMI:Performance_monitoring_interrupts 
3533 ± 56% -100.0% 0.00 interrupts.CPU62.NMI:Non-maskable_interrupts 3533 ± 56% -100.0% 0.00 interrupts.CPU62.PMI:Performance_monitoring_interrupts 4918 ± 37% -100.0% 0.00 interrupts.CPU63.NMI:Non-maskable_interrupts 4918 ± 37% -100.0% 0.00 interrupts.CPU63.PMI:Performance_monitoring_interrupts 6675 ± 21% -100.0% 0.00 interrupts.CPU64.NMI:Non-maskable_interrupts 6675 ± 21% -100.0% 0.00 interrupts.CPU64.PMI:Performance_monitoring_interrupts 3920 ± 54% -100.0% 0.00 interrupts.CPU65.NMI:Non-maskable_interrupts 3920 ± 54% -100.0% 0.00 interrupts.CPU65.PMI:Performance_monitoring_interrupts 6567 ± 21% -100.0% 0.00 interrupts.CPU66.NMI:Non-maskable_interrupts 6567 ± 21% -100.0% 0.00 interrupts.CPU66.PMI:Performance_monitoring_interrupts 854.83 ± 51% -30.9% 590.67 ± 6% interrupts.CPU67.CAL:Function_call_interrupts 2716 ± 53% -100.0% 0.00 interrupts.CPU67.NMI:Non-maskable_interrupts 2716 ± 53% -100.0% 0.00 interrupts.CPU67.PMI:Performance_monitoring_interrupts 6487 ± 28% -100.0% 0.00 interrupts.CPU68.NMI:Non-maskable_interrupts 6487 ± 28% -100.0% 0.00 interrupts.CPU68.PMI:Performance_monitoring_interrupts 6018 ± 29% -100.0% 0.00 interrupts.CPU69.NMI:Non-maskable_interrupts 6018 ± 29% -100.0% 0.00 interrupts.CPU69.PMI:Performance_monitoring_interrupts 5327 ± 36% -100.0% 0.00 interrupts.CPU7.NMI:Non-maskable_interrupts 5327 ± 36% -100.0% 0.00 interrupts.CPU7.PMI:Performance_monitoring_interrupts 5734 ± 41% -100.0% 0.00 interrupts.CPU70.NMI:Non-maskable_interrupts 5734 ± 41% -100.0% 0.00 interrupts.CPU70.PMI:Performance_monitoring_interrupts 756.50 ± 17% -25.6% 563.00 ± 13% interrupts.CPU71.CAL:Function_call_interrupts 3628 ± 27% -100.0% 0.00 interrupts.CPU71.NMI:Non-maskable_interrupts 3628 ± 27% -100.0% 0.00 interrupts.CPU71.PMI:Performance_monitoring_interrupts 3882 ± 51% -100.0% 0.00 interrupts.CPU72.NMI:Non-maskable_interrupts 3882 ± 51% -100.0% 0.00 interrupts.CPU72.PMI:Performance_monitoring_interrupts 4030 ± 43% -100.0% 0.00 interrupts.CPU73.NMI:Non-maskable_interrupts 
4030 ± 43% -100.0% 0.00 interrupts.CPU73.PMI:Performance_monitoring_interrupts 4364 ± 47% -100.0% 0.00 interrupts.CPU74.NMI:Non-maskable_interrupts 4364 ± 47% -100.0% 0.00 interrupts.CPU74.PMI:Performance_monitoring_interrupts 4699 ± 47% -100.0% 0.00 interrupts.CPU75.NMI:Non-maskable_interrupts 4699 ± 47% -100.0% 0.00 interrupts.CPU75.PMI:Performance_monitoring_interrupts 3676 ± 51% -100.0% 0.00 interrupts.CPU76.NMI:Non-maskable_interrupts 3676 ± 51% -100.0% 0.00 interrupts.CPU76.PMI:Performance_monitoring_interrupts 3685 ± 49% -100.0% 0.00 interrupts.CPU77.NMI:Non-maskable_interrupts 3685 ± 49% -100.0% 0.00 interrupts.CPU77.PMI:Performance_monitoring_interrupts 659.83 ± 13% -17.0% 547.33 ± 16% interrupts.CPU78.CAL:Function_call_interrupts 3885 ± 47% -100.0% 0.00 interrupts.CPU78.NMI:Non-maskable_interrupts 3885 ± 47% -100.0% 0.00 interrupts.CPU78.PMI:Performance_monitoring_interrupts 4279 ± 48% -100.0% 0.00 interrupts.CPU79.NMI:Non-maskable_interrupts 4279 ± 48% -100.0% 0.00 interrupts.CPU79.PMI:Performance_monitoring_interrupts 987.33 ± 57% -39.3% 599.00 ± 9% interrupts.CPU8.CAL:Function_call_interrupts 6416 ± 23% -100.0% 0.00 interrupts.CPU8.NMI:Non-maskable_interrupts 6416 ± 23% -100.0% 0.00 interrupts.CPU8.PMI:Performance_monitoring_interrupts 5426 ± 45% -100.0% 0.00 interrupts.CPU80.NMI:Non-maskable_interrupts 5426 ± 45% -100.0% 0.00 interrupts.CPU80.PMI:Performance_monitoring_interrupts 5155 ± 45% -100.0% 0.00 interrupts.CPU81.NMI:Non-maskable_interrupts 5155 ± 45% -100.0% 0.00 interrupts.CPU81.PMI:Performance_monitoring_interrupts 4074 ± 56% -100.0% 0.00 interrupts.CPU82.NMI:Non-maskable_interrupts 4074 ± 56% -100.0% 0.00 interrupts.CPU82.PMI:Performance_monitoring_interrupts 4064 ± 43% -100.0% 0.00 interrupts.CPU83.NMI:Non-maskable_interrupts 4064 ± 43% -100.0% 0.00 interrupts.CPU83.PMI:Performance_monitoring_interrupts 5177 ± 53% -100.0% 0.00 interrupts.CPU84.NMI:Non-maskable_interrupts 5177 ± 53% -100.0% 0.00 
interrupts.CPU84.PMI:Performance_monitoring_interrupts
      4647 ± 50%    -100.0%       0.00  interrupts.CPU85.NMI:Non-maskable_interrupts
      4647 ± 50%    -100.0%       0.00  interrupts.CPU85.PMI:Performance_monitoring_interrupts
      2146 ± 33%    -100.0%       0.00  interrupts.CPU86.NMI:Non-maskable_interrupts
      2146 ± 33%    -100.0%       0.00  interrupts.CPU86.PMI:Performance_monitoring_interrupts
      1285 ± 64%     -50.8%     633.00 ±  8%  interrupts.CPU87.CAL:Function_call_interrupts
      3644 ± 55%    -100.0%       0.00  interrupts.CPU87.NMI:Non-maskable_interrupts
      3644 ± 55%    -100.0%       0.00  interrupts.CPU87.PMI:Performance_monitoring_interrupts
      3034 ± 25%    -100.0%       0.00  interrupts.CPU88.NMI:Non-maskable_interrupts
      3034 ± 25%    -100.0%       0.00  interrupts.CPU88.PMI:Performance_monitoring_interrupts
      3280 ± 39%    -100.0%       0.00  interrupts.CPU89.NMI:Non-maskable_interrupts
      3280 ± 39%    -100.0%       0.00  interrupts.CPU89.PMI:Performance_monitoring_interrupts
      5204 ± 40%    -100.0%       0.00  interrupts.CPU9.NMI:Non-maskable_interrupts
      5204 ± 40%    -100.0%       0.00  interrupts.CPU9.PMI:Performance_monitoring_interrupts
    767.17 ± 20%     -18.0%     628.83 ± 13%  interrupts.CPU90.CAL:Function_call_interrupts
      4530 ± 55%    -100.0%       0.00  interrupts.CPU90.NMI:Non-maskable_interrupts
      4530 ± 55%    -100.0%       0.00  interrupts.CPU90.PMI:Performance_monitoring_interrupts
      4458 ± 61%    -100.0%       0.00  interrupts.CPU91.NMI:Non-maskable_interrupts
      4458 ± 61%    -100.0%       0.00  interrupts.CPU91.PMI:Performance_monitoring_interrupts
      3999 ± 55%    -100.0%       0.00  interrupts.CPU92.NMI:Non-maskable_interrupts
      3999 ± 55%    -100.0%       0.00  interrupts.CPU92.PMI:Performance_monitoring_interrupts
      3043 ± 48%    -100.0%       0.00  interrupts.CPU93.NMI:Non-maskable_interrupts
      3043 ± 48%    -100.0%       0.00  interrupts.CPU93.PMI:Performance_monitoring_interrupts
      5798 ± 42%    -100.0%       0.00  interrupts.CPU94.NMI:Non-maskable_interrupts
      5798 ± 42%    -100.0%       0.00  interrupts.CPU94.PMI:Performance_monitoring_interrupts
      4178 ± 47%    -100.0%       0.00  interrupts.CPU95.NMI:Non-maskable_interrupts
      4178 ± 47%    -100.0%       0.00  interrupts.CPU95.PMI:Performance_monitoring_interrupts
      4272 ± 32%    -100.0%       0.00  interrupts.CPU96.NMI:Non-maskable_interrupts
      4272 ± 32%    -100.0%       0.00  interrupts.CPU96.PMI:Performance_monitoring_interrupts
      3219 ± 33%    -100.0%       0.00  interrupts.CPU97.NMI:Non-maskable_interrupts
      3219 ± 33%    -100.0%       0.00  interrupts.CPU97.PMI:Performance_monitoring_interrupts
      2999 ± 42%    -100.0%       0.00  interrupts.CPU98.NMI:Non-maskable_interrupts
      2999 ± 42%    -100.0%       0.00  interrupts.CPU98.PMI:Performance_monitoring_interrupts
      2835 ± 42%    -100.0%       0.00  interrupts.CPU99.NMI:Non-maskable_interrupts
      2835 ± 42%    -100.0%       0.00  interrupts.CPU99.PMI:Performance_monitoring_interrupts
     96.00 ± 19%    -100.0%       0.00  interrupts.IWI:IRQ_work_interrupts
    505275 ± 12%    -100.0%       0.83 ±223%  interrupts.NMI:Non-maskable_interrupts
    505275 ± 12%    -100.0%       0.83 ±223%  interrupts.PMI:Performance_monitoring_interrupts

***************************************************************************************************
lkp-hsw-4ex1: 144 threads Intel(R) Xeon(R) CPU E7-8890 v3 @ 2.50GHz with 512G memory
=========================================================================================
compiler/cpufreq_governor/kconfig/mode/nr_task/rootfs/tbox_group/test/testcase/ucode:
  gcc-9/performance/x86_64-rhel-8.3/thread/50%/debian-10.4-x86_64-20200603.cgz/lkp-hsw-4ex1/sched_yield/will-it-scale/0x16

commit:
  dde38d5b80 ("kernel: don't include PF_IO_WORKERs as part of same_thread_group()")
  43b2a76b1a ("proc: don't show PF_IO_WORKER threads as threads in /proc/<pid>/task/")

dde38d5b805358c7 43b2a76b1a5abcc9833b463bef1
---------------- ---------------------------
         %stddev     %change         %stddev
             \          |                \
  79675793            +3.3%   82340904  will-it-scale.72.threads
     48.74            -2.1%      47.71  will-it-scale.72.threads_idle
   1106607            +3.3%    1143623  will-it-scale.per_thread_ops
  79675793            +3.3%   82340904  will-it-scale.workload
   2952204 ± 25%     -88.3%     344715 ± 10%  numa-numastat.node3.local_node
   2995934 ± 24%     -86.3%     411199 ±  7%  numa-numastat.node3.numa_hit
     56.89 ±141%    +257.4%     203.33 ± 31%  sched_debug.cfs_rq:/.removed.load_avg.max
      4567 ±  7%     +42.2%       6493 ±  6%  sched_debug.cpu.nr_switches.stddev
  10246002           -76.3%    2424603  vmstat.memory.cache
      1861           +34.9%       2510 ±  4%  vmstat.system.cs
    291827            -1.5%     287569  vmstat.system.in
     68.19            -6.4       61.78  mpstat.cpu.all.idle%
      0.94 ±  4%      +0.1        1.06 ±  2%  mpstat.cpu.all.irq%
      0.04 ±  5%      +0.0        0.05 ±  3%  mpstat.cpu.all.soft%
     11.65            +3.1       14.76  mpstat.cpu.all.sys%
     19.19            +3.2       22.35  mpstat.cpu.all.usr%
     20125           -31.8%      13733 ±  2%  slabinfo.proc_inode_cache.active_objs
    418.83           -31.6%     286.50 ±  2%  slabinfo.proc_inode_cache.active_slabs
     20130           -31.6%      13773 ±  2%  slabinfo.proc_inode_cache.num_objs
    418.83           -31.6%     286.50 ±  2%  slabinfo.proc_inode_cache.num_slabs
     59257           -52.2%      28303  slabinfo.radix_tree_node.active_objs
      1057           -52.3%     505.00  slabinfo.radix_tree_node.active_slabs
     59257           -52.2%      28303  slabinfo.radix_tree_node.num_objs
      1057           -52.3%     505.00  slabinfo.radix_tree_node.num_slabs
   2335532 ± 13%     -52.5%    1108266  meminfo.Active
   2335532 ± 13%     -52.5%    1108266  meminfo.Active(anon)
    360617           -10.9%     321358  meminfo.AnonPages
  10096923           -77.1%    2311092  meminfo.Cached
   9926633           -78.5%    2129694  meminfo.Committed_AS
   6940122 ±  5%     -95.1%     342279  meminfo.Inactive
   6940122 ±  5%     -95.1%     342279  meminfo.Inactive(anon)
    134615           -18.0%     110430  meminfo.KReclaimable
   3409827 ±  5%     -98.8%      39266  meminfo.Mapped
  12409511           -61.3%    4807055  meminfo.Memused
     11835 ±  2%     -54.5%       5379  meminfo.PageTables
    134615           -18.0%     110430  meminfo.SReclaimable
   8915650           -87.3%    1129819  meminfo.Shmem
  12754128           -62.3%    4809943  meminfo.max_used_kB
    175049 ±125%     -98.8%       2081 ± 18%  numa-vmstat.node0.nr_mapped
      9483 ±188%    +674.2%      73416 ± 89%  numa-vmstat.node1.nr_active_anon
     28574 ± 89%     -91.3%       2499 ± 23%  numa-vmstat.node1.nr_mapped
      9483 ±188%    +674.2%      73416 ± 89%  numa-vmstat.node1.nr_zone_active_anon
     18160 ±141%    +293.8%      71514 ± 89%  numa-vmstat.node2.nr_active_anon
     56897 ±111%     -95.5%       2556 ± 19%  numa-vmstat.node2.nr_mapped
     18160 ±141%    +293.8%      71514 ± 89%  numa-vmstat.node2.nr_zone_active_anon
    487396 ± 35%     -73.4%     129578 ±  7%  numa-vmstat.node3.nr_active_anon
   1857283 ± 24%     -89.0%     203720 ±  4%  numa-vmstat.node3.nr_file_pages
   1323446 ± 23%     -99.0%      13407 ± 43%  numa-vmstat.node3.nr_inactive_anon
    590670 ± 29%     -99.5%       2811 ± 15%  numa-vmstat.node3.nr_mapped
      1821 ± 30%     -87.3%     231.17 ± 36%  numa-vmstat.node3.nr_page_table_pages
   1785330 ± 25%     -92.6%     131845 ±  6%  numa-vmstat.node3.nr_shmem
     11383 ± 25%     -44.7%       6296 ±  9%  numa-vmstat.node3.nr_slab_reclaimable
    487396 ± 35%     -73.4%     129578 ±  7%  numa-vmstat.node3.nr_zone_active_anon
   1323447 ± 23%     -99.0%      13407 ± 43%  numa-vmstat.node3.nr_zone_inactive_anon
   2276506 ± 22%     -74.6%     578796 ±  8%  numa-vmstat.node3.numa_hit
   2188429 ± 25%     -80.3%     431844 ± 17%  numa-vmstat.node3.numa_local
    701375 ±125%     -98.8%       8160 ± 16%  numa-meminfo.node0.Mapped
     37784 ±187%    +675.4%     292990 ± 89%  numa-meminfo.node1.Active
     37784 ±187%    +675.4%     292990 ± 89%  numa-meminfo.node1.Active(anon)
    113875 ± 89%     -91.3%       9915 ± 23%  numa-meminfo.node1.Mapped
     72804 ±141%    +292.0%     285408 ± 89%  numa-meminfo.node2.Active
     72804 ±141%    +292.0%     285408 ± 89%  numa-meminfo.node2.Active(anon)
    227112 ±111%     -95.6%      10054 ± 21%  numa-meminfo.node2.Mapped
   1948333 ± 35%     -73.5%     516997 ±  7%  numa-meminfo.node3.Active
   1948333 ± 35%     -73.5%     516997 ±  7%  numa-meminfo.node3.Active(anon)
   7429241 ± 24%     -89.0%     813562 ±  4%  numa-meminfo.node3.FilePages
   5295082 ± 23%     -99.0%      53670 ± 43%  numa-meminfo.node3.Inactive
   5295082 ± 23%     -99.0%      53670 ± 43%  numa-meminfo.node3.Inactive(anon)
     45538 ± 25%     -44.7%      25185 ±  9%  numa-meminfo.node3.KReclaimable
   2364015 ± 29%     -99.5%      11162 ± 15%  numa-meminfo.node3.Mapped
   7982630 ± 23%     -82.9%    1361977 ±  4%  numa-meminfo.node3.MemUsed
      7285 ± 30%     -87.3%     925.67 ± 36%  numa-meminfo.node3.PageTables
     45538 ± 25%     -44.7%      25185 ±  9%  numa-meminfo.node3.SReclaimable
   7141431 ± 25%     -92.6%     526059 ±  6%  numa-meminfo.node3.Shmem
     92978 ± 20%     -21.1%      73344 ± 14%  numa-meminfo.node3.Slab
    582601 ± 13%     -52.5%     276870  proc-vmstat.nr_active_anon
     90280           -11.0%      80387  proc-vmstat.nr_anon_pages
    116.00            -7.5%     107.33  proc-vmstat.nr_anon_transparent_hugepages
  12833539            +1.5%   13023313  proc-vmstat.nr_dirty_background_threshold
  25698458            +1.5%   26078469  proc-vmstat.nr_dirty_threshold
   2523382           -77.1%     577576  proc-vmstat.nr_file_pages
  1.29e+08            +1.5%  1.309e+08  proc-vmstat.nr_free_pages
   1735590 ±  5%     -95.1%      85618  proc-vmstat.nr_inactive_anon
    852438 ±  5%     -98.8%       9944  proc-vmstat.nr_mapped
      2953 ±  2%     -54.4%       1345  proc-vmstat.nr_page_table_pages
   2228064           -87.3%     282257  proc-vmstat.nr_shmem
     33649           -18.0%      27606  proc-vmstat.nr_slab_reclaimable
    582601 ± 13%     -52.5%     276870  proc-vmstat.nr_zone_active_anon
   1735590 ±  5%     -95.1%      85618  proc-vmstat.nr_zone_inactive_anon
    306960 ±  3%     -97.6%       7282 ± 51%  proc-vmstat.numa_hint_faults
    184855 ±  8%     -98.0%       3650 ± 38%  proc-vmstat.numa_hint_faults_local
   4646677           -62.7%    1732933  proc-vmstat.numa_hit
      1836 ±  5%     -87.0%     239.33 ±  4%  proc-vmstat.numa_huge_pte_updates
   4413387           -66.0%    1499777  proc-vmstat.numa_local
     19698 ± 38%     -87.7%       2423 ± 98%  proc-vmstat.numa_pages_migrated
   1361074 ±  2%     -88.9%     151213 ± 11%  proc-vmstat.numa_pte_updates
   3413232           -98.9%      37257  proc-vmstat.pgactivate
   4768806           -61.6%    1831266 ±  2%  proc-vmstat.pgalloc_normal
   1800815           -36.6%    1141702  proc-vmstat.pgfault
   1267410 ±  2%     -23.1%     974963 ±  4%  proc-vmstat.pgfree
     19698 ± 38%     -87.7%       2423 ± 98%  proc-vmstat.pgmigrate_success
    701992 ±129%    -100.0%       0.00  syscalls.sys_mmap.max
      8640          -100.0%       0.00  syscalls.sys_mmap.med
      4172 ± 13%    -100.0%       0.00  syscalls.sys_mmap.min
 2.278e+08 ± 20%   -2.3e+08       0.00  syscalls.sys_mmap.noise.100%
 3.163e+08 ± 14%   -3.2e+08       0.00  syscalls.sys_mmap.noise.2%
 2.961e+08 ± 15%     -3e+08       0.00  syscalls.sys_mmap.noise.25%
 3.152e+08 ± 14%   -3.2e+08       0.00  syscalls.sys_mmap.noise.5%
 2.618e+08 ± 17%   -2.6e+08       0.00  syscalls.sys_mmap.noise.50%
 2.452e+08 ± 18%   -2.5e+08       0.00  syscalls.sys_mmap.noise.75%
    147828 ± 37%    +131.9%     342867 ± 22%  syscalls.sys_openat.max
      7274           +44.2%      10488 ±  2%  syscalls.sys_openat.med
 1.669e+08 ±  8%   -9.8e+07   69235420 ±  4%  syscalls.sys_openat.noise.100%
 2.483e+08 ±  5%     -9e+07  1.582e+08 ±  2%  syscalls.sys_openat.noise.2%
 2.283e+08 ±  6%   -1.2e+08  1.118e+08 ±  2%  syscalls.sys_openat.noise.25%
 2.474e+08 ±  5%   -9.1e+07  1.567e+08 ±  2%  syscalls.sys_openat.noise.5%
 1.976e+08 ±  7%   -1.1e+08   84937336 ±  3%  syscalls.sys_openat.noise.50%
 1.828e+08 ±  7%   -1.1e+08   75871834 ±  4%  syscalls.sys_openat.noise.75%
      4071           +43.1%       5828 ±  7%  syscalls.sys_read.med
 1.792e+12 ±  2%   -8.2e+11  9.682e+11 ±  3%  syscalls.sys_read.noise.100%
 1.792e+12 ±  2%   -8.2e+11  9.682e+11 ±  3%  syscalls.sys_read.noise.2%
 1.792e+12 ±  2%   -8.2e+11  9.682e+11 ±  3%  syscalls.sys_read.noise.25%
 1.792e+12 ±  2%   -8.2e+11  9.682e+11 ±  3%  syscalls.sys_read.noise.5%
 1.792e+12 ±  2%   -8.2e+11  9.682e+11 ±  3%  syscalls.sys_read.noise.50%
 1.792e+12 ±  2%   -8.2e+11  9.682e+11 ±  3%  syscalls.sys_read.noise.75%
 2.353e+10           -17.1%  1.951e+10 ± 19%  syscalls.sys_sched_yield.max
      4305           -83.9%     694.00  syscalls.sys_sched_yield.med
 4.869e+09 ± 18%   -2.5e+09  2.356e+09 ± 41%  syscalls.sys_sched_yield.noise.100%
 4.887e+09 ± 18%   -2.5e+09  2.357e+09 ± 41%  syscalls.sys_sched_yield.noise.2%
 4.877e+09 ± 18%   -2.5e+09  2.357e+09 ± 41%  syscalls.sys_sched_yield.noise.25%
 4.887e+09 ± 18%   -2.5e+09  2.357e+09 ± 41%  syscalls.sys_sched_yield.noise.5%
 4.873e+09 ± 18%   -2.5e+09  2.357e+09 ± 41%  syscalls.sys_sched_yield.noise.50%
 4.872e+09 ± 18%   -2.5e+09  2.356e+09 ± 41%  syscalls.sys_sched_yield.noise.75%
 1.471e+08 ±151%     -99.8%     259661 ± 36%  syscalls.sys_write.max
      7419 ± 37%     -58.9%       3046  syscalls.sys_write.med
 1.206e+11 ± 29%   -1.2e+11   55543167 ±  3%  syscalls.sys_write.noise.100%
 1.207e+11 ± 29%   -1.2e+11   77975416 ±  3%  syscalls.sys_write.noise.2%
 1.207e+11 ± 29%   -1.2e+11   69494432 ±  3%  syscalls.sys_write.noise.25%
 1.207e+11 ± 29%   -1.2e+11   77110427 ±  4%  syscalls.sys_write.noise.5%
 1.206e+11 ± 29%   -1.2e+11   60264909 ±  2%  syscalls.sys_write.noise.50%
 1.206e+11 ± 29%   -1.2e+11   56684242 ±  3%  syscalls.sys_write.noise.75%
      0.78            -0.0        0.74  perf-stat.i.branch-miss-rate%
 1.964e+08            -8.7%  1.793e+08  perf-stat.i.branch-misses
     14.29 ±  3%      +6.7       21.00 ±  7%  perf-stat.i.cache-miss-rate%
   2689774           +23.8%    3330713 ±  4%  perf-stat.i.cache-misses
      1815           +35.6%       2461 ±  4%  perf-stat.i.context-switches
      1.86            -1.6%       1.83  perf-stat.i.cpi
 2.108e+11            +2.0%   2.15e+11  perf-stat.i.cpu-cycles
    149.66            -3.1%     145.09  perf-stat.i.cpu-migrations
    318970 ±  3%     -79.7%      64857 ±  4%  perf-stat.i.cycles-between-cache-misses
      0.23            -0.0        0.22  perf-stat.i.dTLB-load-miss-rate%
 3.621e+10            +2.7%  3.719e+10  perf-stat.i.dTLB-loads
  80228533            +3.4%   82965065  perf-stat.i.dTLB-store-misses
 2.479e+10            +2.3%  2.537e+10  perf-stat.i.dTLB-stores
  86055004            +7.4%   92400555 ±  4%  perf-stat.i.iTLB-load-misses
 1.162e+11            +1.2%  1.176e+11  perf-stat.i.instructions
      1385            -6.9%       1290 ±  3%  perf-stat.i.instructions-per-iTLB-miss
      1.29 ±  7%     -88.5%       0.15 ± 34%  perf-stat.i.major-faults
      1.46            +2.0%       1.49  perf-stat.i.metric.GHz
    592.72            +1.9%     604.18  perf-stat.i.metric.M/sec
      5773           -36.6%       3661  perf-stat.i.minor-faults
     63.14 ± 16%     +35.7       98.83  perf-stat.i.node-load-miss-rate%
    768241 ± 48%    +182.1%    2166921 ±  6%  perf-stat.i.node-load-misses
   1289488 ± 27%     -98.1%      25110 ±  7%  perf-stat.i.node-loads
     33.64 ±  9%     +55.1       88.70  perf-stat.i.node-store-miss-rate%
     72802 ±  8%   +1269.2%     996837  perf-stat.i.node-store-misses
    454906 ±  9%     -72.1%     126837 ± 11%  perf-stat.i.node-stores
      5775           -36.6%       3661  perf-stat.i.page-faults
      0.81            -0.1        0.74  perf-stat.overall.branch-miss-rate%
     16.74            +3.4       20.18 ±  7%  perf-stat.overall.cache-miss-rate%
     77877           -17.0%      64673 ±  4%  perf-stat.overall.cycles-between-cache-misses
      0.23            -0.0        0.22  perf-stat.overall.dTLB-load-miss-rate%
      1351            -5.7%       1274 ±  3%  perf-stat.overall.instructions-per-iTLB-miss
     36.91 ± 48%     +61.9       98.85  perf-stat.overall.node-load-miss-rate%
     13.96 ± 13%     +74.8       88.73  perf-stat.overall.node-store-miss-rate%
    439289            -2.0%     430377  perf-stat.overall.path-length
 1.957e+08            -8.7%  1.787e+08  perf-stat.ps.branch-misses
   2697675           +23.0%    3319475 ±  4%  perf-stat.ps.cache-misses
      1804           +35.9%       2452 ±  4%  perf-stat.ps.context-switches
 2.101e+11            +2.0%  2.143e+11  perf-stat.ps.cpu-cycles
    148.68            -2.7%     144.60  perf-stat.ps.cpu-migrations
  3.61e+10            +2.7%  3.707e+10  perf-stat.ps.dTLB-loads
  79963812            +3.4%   82684664  perf-stat.ps.dTLB-store-misses
 2.471e+10            +2.3%  2.528e+10  perf-stat.ps.dTLB-stores
  85771986            +7.4%   92088759 ±  4%  perf-stat.ps.iTLB-load-misses
 1.159e+11            +1.2%  1.172e+11  perf-stat.ps.instructions
      1.28 ±  7%     -88.5%       0.15 ± 34%  perf-stat.ps.major-faults
      5829           -37.4%       3649  perf-stat.ps.minor-faults
    765799 ± 49%    +182.0%    2159603 ±  6%  perf-stat.ps.node-load-misses
   1301758 ± 27%     -98.1%      25025 ±  7%  perf-stat.ps.node-loads
     72613 ±  8%   +1268.2%     993470  perf-stat.ps.node-store-misses
    452750 ±  9%     -72.1%     126409 ± 11%  perf-stat.ps.node-stores
      5830           -37.4%       3649  perf-stat.ps.page-faults
   3.5e+13            +1.2%  3.544e+13  perf-stat.total.instructions
     69.22 ±  8%     -69.2        0.00  perf-profile.calltrace.cycles-pp.__sched_yield
     62.37 ±  8%     -62.4        0.00  perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.__sched_yield
     47.95 ±  8%     -48.0        0.00  perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.__sched_yield
     44.51 ±  8%     -44.5        0.00  perf-profile.calltrace.cycles-pp.__x64_sys_sched_yield.do_syscall_64.entry_SYSCALL_64_after_hwframe.__sched_yield
     42.76 ±  8%     -42.8        0.00  perf-profile.calltrace.cycles-pp.schedule.__x64_sys_sched_yield.do_syscall_64.entry_SYSCALL_64_after_hwframe.__sched_yield
     42.32 ±  8%     -42.3        0.00  perf-profile.calltrace.cycles-pp.__schedule.schedule.__x64_sys_sched_yield.do_syscall_64.entry_SYSCALL_64_after_hwframe
     39.91 ±  8%     -39.9        0.00  perf-profile.calltrace.cycles-pp.pick_next_task_fair.__schedule.schedule.__x64_sys_sched_yield.do_syscall_64
     38.70 ±  8%     -38.7        0.00  perf-profile.calltrace.cycles-pp.update_curr.pick_next_task_fair.__schedule.schedule.__x64_sys_sched_yield
     36.67 ±  8%     -36.7        0.00  perf-profile.calltrace.cycles-pp.perf_trace_sched_stat_runtime.update_curr.pick_next_task_fair.__schedule.schedule
     35.75 ±  8%     -35.8        0.00  perf-profile.calltrace.cycles-pp.perf_tp_event.perf_trace_sched_stat_runtime.update_curr.pick_next_task_fair.__schedule
     34.92 ±  8%     -34.9        0.00  perf-profile.calltrace.cycles-pp.perf_swevent_overflow.perf_tp_event.perf_trace_sched_stat_runtime.update_curr.pick_next_task_fair
     34.71 ±  8%     -34.7        0.00  perf-profile.calltrace.cycles-pp.__perf_event_overflow.perf_swevent_overflow.perf_tp_event.perf_trace_sched_stat_runtime.update_curr
     34.51 ±  8%     -34.5        0.00  perf-profile.calltrace.cycles-pp.perf_event_output_forward.__perf_event_overflow.perf_swevent_overflow.perf_tp_event.perf_trace_sched_stat_runtime
     32.53 ±  8%     -32.5        0.00  perf-profile.calltrace.cycles-pp.perf_prepare_sample.perf_event_output_forward.__perf_event_overflow.perf_swevent_overflow.perf_tp_event
     30.20 ±  8%     -30.2        0.00  perf-profile.calltrace.cycles-pp.perf_callchain.perf_prepare_sample.perf_event_output_forward.__perf_event_overflow.perf_swevent_overflow
     30.10 ±  8%     -30.1        0.00  perf-profile.calltrace.cycles-pp.get_perf_callchain.perf_callchain.perf_prepare_sample.perf_event_output_forward.__perf_event_overflow
     28.66 ± 19%     -28.7        0.00  perf-profile.calltrace.cycles-pp.secondary_startup_64_no_verify
     28.32 ± 19%     -28.3        0.00  perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
     28.32 ± 19%     -28.3        0.00  perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
     28.32 ± 19%     -28.3        0.00  perf-profile.calltrace.cycles-pp.cpuidle_enter.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
     28.32 ± 19%     -28.3        0.00  perf-profile.calltrace.cycles-pp.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry.start_secondary
     28.32 ± 19%     -28.3        0.00  perf-profile.calltrace.cycles-pp.start_secondary.secondary_startup_64_no_verify
     28.32 ± 19%     -28.3        0.00  perf-profile.calltrace.cycles-pp.intel_idle.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry
     16.07 ±  8%     -16.1        0.00  perf-profile.calltrace.cycles-pp.perf_callchain_kernel.get_perf_callchain.perf_callchain.perf_prepare_sample.perf_event_output_forward
     14.92 ±  8%     -14.9        0.00  perf-profile.calltrace.cycles-pp.asm_exc_page_fault.__get_user_nocheck_8.perf_callchain_user.get_perf_callchain.perf_callchain
     13.77 ±  7%     -13.8        0.00  perf-profile.calltrace.cycles-pp.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe.__sched_yield
     13.15 ±  8%     -13.2        0.00  perf-profile.calltrace.cycles-pp.perf_callchain_user.get_perf_callchain.perf_callchain.perf_prepare_sample.perf_event_output_forward
      9.38 ±  8%      -9.4        0.00  perf-profile.calltrace.cycles-pp.__get_user_nocheck_8.perf_callchain_user.get_perf_callchain.perf_callchain.perf_prepare_sample
      8.74 ±  8%      -8.7        0.00  perf-profile.calltrace.cycles-pp.__unwind_start.perf_callchain_kernel.get_perf_callchain.perf_callchain.perf_prepare_sample
      7.73 ±  8%      -7.7        0.00  perf-profile.calltrace.cycles-pp.unwind_next_frame.__unwind_start.perf_callchain_kernel.get_perf_callchain.perf_callchain
      4.87 ±  9%      -4.9        0.00  perf-profile.calltrace.cycles-pp.unwind_next_frame.perf_callchain_kernel.get_perf_callchain.perf_callchain.perf_prepare_sample
     69.78 ±  8%     -69.8        0.00  perf-profile.children.cycles-pp.__sched_yield
     63.44 ±  8%     -63.4        0.00  perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
     48.99 ±  8%     -49.0        0.00  perf-profile.children.cycles-pp.do_syscall_64
     44.55 ±  8%     -44.6        0.00  perf-profile.children.cycles-pp.__x64_sys_sched_yield
     42.78 ±  8%     -42.8        0.00  perf-profile.children.cycles-pp.schedule
     42.38 ±  8%     -42.4        0.00  perf-profile.children.cycles-pp.__schedule
     40.05 ±  8%     -40.0        0.00  perf-profile.children.cycles-pp.pick_next_task_fair
     38.96 ±  8%     -39.0        0.00  perf-profile.children.cycles-pp.update_curr
     36.89 ±  8%     -36.9        0.00  perf-profile.children.cycles-pp.perf_trace_sched_stat_runtime
     35.95 ±  8%     -35.9        0.00  perf-profile.children.cycles-pp.perf_tp_event
     35.10 ±  8%     -35.1        0.00  perf-profile.children.cycles-pp.perf_swevent_overflow
     34.89 ±  8%     -34.9        0.00  perf-profile.children.cycles-pp.__perf_event_overflow
     34.70 ±  8%     -34.7        0.00  perf-profile.children.cycles-pp.perf_event_output_forward
     32.74 ±  8%     -32.7        0.00  perf-profile.children.cycles-pp.perf_prepare_sample
     30.38 ±  8%     -30.4        0.00  perf-profile.children.cycles-pp.perf_callchain
     30.27 ±  8%     -30.3        0.00  perf-profile.children.cycles-pp.get_perf_callchain
     28.66 ± 19%     -28.7        0.00  perf-profile.children.cycles-pp.secondary_startup_64_no_verify
     28.66 ± 19%     -28.7        0.00  perf-profile.children.cycles-pp.cpu_startup_entry
     28.66 ± 19%     -28.7        0.00  perf-profile.children.cycles-pp.do_idle
     28.66 ± 19%     -28.7        0.00  perf-profile.children.cycles-pp.cpuidle_enter
     28.66 ± 19%     -28.7        0.00  perf-profile.children.cycles-pp.cpuidle_enter_state
     28.66 ± 19%     -28.7        0.00  perf-profile.children.cycles-pp.intel_idle
     28.32 ± 19%     -28.3        0.00  perf-profile.children.cycles-pp.start_secondary
     16.31 ±  8%     -16.3        0.00  perf-profile.children.cycles-pp.perf_callchain_kernel
     13.81 ±  7%     -13.8        0.00  perf-profile.children.cycles-pp.syscall_exit_to_user_mode
     13.21 ±  8%     -13.2        0.00  perf-profile.children.cycles-pp.perf_callchain_user
     12.98 ±  8%     -13.0        0.00  perf-profile.children.cycles-pp.unwind_next_frame
     12.55 ±  8%     -12.5        0.00  perf-profile.children.cycles-pp.__get_user_nocheck_8
      9.99 ±  8%     -10.0        0.00  perf-profile.children.cycles-pp.asm_exc_page_fault
      8.87 ±  8%      -8.9        0.00  perf-profile.children.cycles-pp.__unwind_start
     28.66 ± 19%     -28.7        0.00  perf-profile.self.cycles-pp.intel_idle
     10.90 ±  6%     -10.9        0.00  perf-profile.self.cycles-pp.syscall_exit_to_user_mode
      5.29 ±  8%      -5.3        0.00  perf-profile.self.cycles-pp.__get_user_nocheck_8
      5.14 ±  8%      -5.1        0.00  perf-profile.self.cycles-pp.unwind_next_frame
      0.01 ± 20%    -100.0%       0.00  perf-sched.sch_delay.avg.ms.__traceiter_sched_switch.__traceiter_sched_switch.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe
      0.01 ± 32%    -100.0%       0.00  perf-sched.sch_delay.avg.ms.__traceiter_sched_switch.__traceiter_sched_switch.devkmsg_read.vfs_read.ksys_read
      0.01 ± 11%    -100.0%       0.00  perf-sched.sch_delay.avg.ms.__traceiter_sched_switch.__traceiter_sched_switch.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep
      0.01 ± 26%    -100.0%       0.00  perf-sched.sch_delay.avg.ms.__traceiter_sched_switch.__traceiter_sched_switch.do_syslog.part.0
      0.01 ±  5%    -100.0%       0.00
perf-sched.sch_delay.avg.ms.__traceiter_sched_switch.__traceiter_sched_switch.do_task_dead.do_exit.do_group_exit
      0.01          -100.0%       0.00  perf-sched.sch_delay.avg.ms.__traceiter_sched_switch.__traceiter_sched_switch.do_wait.kernel_wait4.__do_sys_wait4
      0.02 ± 11%    -100.0%       0.00  perf-sched.sch_delay.avg.ms.__traceiter_sched_switch.__traceiter_sched_switch.futex_wait_queue_me.futex_wait.do_futex
      0.06 ±211%    -100.0%       0.00  perf-sched.sch_delay.avg.ms.__traceiter_sched_switch.__traceiter_sched_switch.pipe_read.new_sync_read.vfs_read
      0.01 ± 14%    -100.0%       0.00  perf-sched.sch_delay.avg.ms.__traceiter_sched_switch.__traceiter_sched_switch.rcu_gp_kthread.kthread.ret_from_fork
      0.01 ± 17%    -100.0%       0.00  perf-sched.sch_delay.avg.ms.__traceiter_sched_switch.__traceiter_sched_switch.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait
      0.00 ± 19%    -100.0%       0.00  perf-sched.sch_delay.avg.ms.__traceiter_sched_switch.__traceiter_sched_switch.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop
      0.01 ±  3%    -100.0%       0.00  perf-sched.sch_delay.avg.ms.__traceiter_sched_switch.__traceiter_sched_switch.schedule_timeout.kcompactd.kthread
      0.01 ± 29%    -100.0%       0.00  perf-sched.sch_delay.avg.ms.__traceiter_sched_switch.__traceiter_sched_switch.schedule_timeout.rcu_gp_kthread.kthread
      0.86 ±158%    -100.0%       0.00  perf-sched.sch_delay.avg.ms.__traceiter_sched_switch.__traceiter_sched_switch.smpboot_thread_fn.kthread.ret_from_fork
      0.01 ±  9%    -100.0%       0.00  perf-sched.sch_delay.avg.ms.__traceiter_sched_switch.__traceiter_sched_switch.wait_for_partner.fifo_open.do_dentry_open
      2.13 ±134%    -100.0%       0.00  perf-sched.sch_delay.avg.ms.__traceiter_sched_switch.__traceiter_sched_switch.worker_thread.kthread.ret_from_fork
      0.02 ± 35%    -100.0%       0.00  perf-sched.sch_delay.max.ms.__traceiter_sched_switch.__traceiter_sched_switch.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe
      0.02 ± 31%    -100.0%       0.00  perf-sched.sch_delay.max.ms.__traceiter_sched_switch.__traceiter_sched_switch.devkmsg_read.vfs_read.ksys_read
      0.02 ± 16%    -100.0%       0.00  perf-sched.sch_delay.max.ms.__traceiter_sched_switch.__traceiter_sched_switch.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep
      0.02 ± 15%    -100.0%       0.00  perf-sched.sch_delay.max.ms.__traceiter_sched_switch.__traceiter_sched_switch.do_syslog.part.0
      0.03 ± 45%    -100.0%       0.00  perf-sched.sch_delay.max.ms.__traceiter_sched_switch.__traceiter_sched_switch.do_task_dead.do_exit.do_group_exit
      0.02 ± 21%    -100.0%       0.00  perf-sched.sch_delay.max.ms.__traceiter_sched_switch.__traceiter_sched_switch.do_wait.kernel_wait4.__do_sys_wait4
      0.02 ± 14%    -100.0%       0.00  perf-sched.sch_delay.max.ms.__traceiter_sched_switch.__traceiter_sched_switch.futex_wait_queue_me.futex_wait.do_futex
    168.17 ±223%    -100.0%       0.00  perf-sched.sch_delay.max.ms.__traceiter_sched_switch.__traceiter_sched_switch.pipe_read.new_sync_read.vfs_read
      0.21 ± 68%    -100.0%       0.00  perf-sched.sch_delay.max.ms.__traceiter_sched_switch.__traceiter_sched_switch.preempt_schedule_common.__cond_resched.stop_one_cpu
      0.01 ± 22%    -100.0%       0.00  perf-sched.sch_delay.max.ms.__traceiter_sched_switch.__traceiter_sched_switch.rcu_gp_kthread.kthread.ret_from_fork
      0.03 ± 16%    -100.0%       0.00  perf-sched.sch_delay.max.ms.__traceiter_sched_switch.__traceiter_sched_switch.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait
      0.03 ± 65%    -100.0%       0.00  perf-sched.sch_delay.max.ms.__traceiter_sched_switch.__traceiter_sched_switch.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop
      0.03 ± 22%    -100.0%       0.00  perf-sched.sch_delay.max.ms.__traceiter_sched_switch.__traceiter_sched_switch.schedule_timeout.kcompactd.kthread
      2.37 ±219%    -100.0%       0.00  perf-sched.sch_delay.max.ms.__traceiter_sched_switch.__traceiter_sched_switch.schedule_timeout.rcu_gp_kthread.kthread
      1101 ±160%    -100.0%       0.00  perf-sched.sch_delay.max.ms.__traceiter_sched_switch.__traceiter_sched_switch.smpboot_thread_fn.kthread.ret_from_fork
      0.01 ± 17%    -100.0%       0.00  perf-sched.sch_delay.max.ms.__traceiter_sched_switch.__traceiter_sched_switch.wait_for_partner.fifo_open.do_dentry_open
    955.88 ±118%    -100.0%       0.00  perf-sched.sch_delay.max.ms.__traceiter_sched_switch.__traceiter_sched_switch.worker_thread.kthread.ret_from_fork
      0.28 ± 82%    -100.0%       0.00  perf-sched.total_sch_delay.average.ms
      2050 ± 73%    -100.0%       0.00  perf-sched.total_sch_delay.max.ms
    191.40 ±  4%    -100.0%       0.00  perf-sched.total_wait_and_delay.average.ms
      9706          -100.0%       0.00  perf-sched.total_wait_and_delay.count.ms
      8453 ± 11%    -100.0%       0.00  perf-sched.total_wait_and_delay.max.ms
    191.11 ±  3%    -100.0%       0.00  perf-sched.total_wait_time.average.ms
      8034 ±  9%    -100.0%       0.00  perf-sched.total_wait_time.max.ms
    882.67 ±  4%    -100.0%       0.00  perf-sched.wait_and_delay.avg.ms.__traceiter_sched_switch.__traceiter_sched_switch.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe
     33.71 ± 68%    -100.0%       0.00  perf-sched.wait_and_delay.avg.ms.__traceiter_sched_switch.__traceiter_sched_switch.__x64_sys_sched_yield.do_syscall_64.entry_SYSCALL_64_after_hwframe
    274.93 ± 16%    -100.0%       0.00  perf-sched.wait_and_delay.avg.ms.__traceiter_sched_switch.__traceiter_sched_switch.devkmsg_read.vfs_read.ksys_read
    821.02 ±  4%    -100.0%       0.00  perf-sched.wait_and_delay.avg.ms.__traceiter_sched_switch.__traceiter_sched_switch.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep
    274.94 ± 16%    -100.0%       0.00  perf-sched.wait_and_delay.avg.ms.__traceiter_sched_switch.__traceiter_sched_switch.do_syslog.part.0
    227.25          -100.0%       0.00  perf-sched.wait_and_delay.avg.ms.__traceiter_sched_switch.__traceiter_sched_switch.do_task_dead.do_exit.do_group_exit
      1.51          -100.0%       0.00  perf-sched.wait_and_delay.avg.ms.__traceiter_sched_switch.__traceiter_sched_switch.do_wait.kernel_wait4.__do_sys_wait4
     50.10          -100.0%       0.00  perf-sched.wait_and_delay.avg.ms.__traceiter_sched_switch.__traceiter_sched_switch.pipe_read.new_sync_read.vfs_read
      7.96 ±  4%    -100.0%       0.00  perf-sched.wait_and_delay.avg.ms.__traceiter_sched_switch.__traceiter_sched_switch.preempt_schedule_common.__cond_resched.stop_one_cpu
    206.33 ±  7%    -100.0%       0.00  perf-sched.wait_and_delay.avg.ms.__traceiter_sched_switch.__traceiter_sched_switch.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop
    478.72          -100.0%       0.00  perf-sched.wait_and_delay.avg.ms.__traceiter_sched_switch.__traceiter_sched_switch.schedule_timeout.kcompactd.kthread
      4.52          -100.0%       0.00  perf-sched.wait_and_delay.avg.ms.__traceiter_sched_switch.__traceiter_sched_switch.schedule_timeout.rcu_gp_kthread.kthread
    925.54 ±  4%    -100.0%       0.00  perf-sched.wait_and_delay.avg.ms.__traceiter_sched_switch.__traceiter_sched_switch.smpboot_thread_fn.kthread.ret_from_fork
      0.01 ±  9%    -100.0%       0.00  perf-sched.wait_and_delay.avg.ms.__traceiter_sched_switch.__traceiter_sched_switch.wait_for_partner.fifo_open.do_dentry_open
    449.99 ±  9%    -100.0%       0.00  perf-sched.wait_and_delay.avg.ms.__traceiter_sched_switch.__traceiter_sched_switch.worker_thread.kthread.ret_from_fork
     10.00          -100.0%       0.00  perf-sched.wait_and_delay.count.__traceiter_sched_switch.__traceiter_sched_switch.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe
    237.17 ±  2%    -100.0%       0.00  perf-sched.wait_and_delay.count.__traceiter_sched_switch.__traceiter_sched_switch.__x64_sys_sched_yield.do_syscall_64.entry_SYSCALL_64_after_hwframe
      6.17 ± 19%    -100.0%       0.00  perf-sched.wait_and_delay.count.__traceiter_sched_switch.__traceiter_sched_switch.devkmsg_read.vfs_read.ksys_read
     22.00 ±  4%    -100.0%       0.00  perf-sched.wait_and_delay.count.__traceiter_sched_switch.__traceiter_sched_switch.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep
      6.17 ± 19%    -100.0%       0.00  perf-sched.wait_and_delay.count.__traceiter_sched_switch.__traceiter_sched_switch.do_syslog.part.0
    314.00          -100.0%       0.00  perf-sched.wait_and_delay.count.__traceiter_sched_switch.__traceiter_sched_switch.do_task_dead.do_exit.do_group_exit
    311.00          -100.0%       0.00  perf-sched.wait_and_delay.count.__traceiter_sched_switch.__traceiter_sched_switch.do_wait.kernel_wait4.__do_sys_wait4
      3117          -100.0%       0.00  perf-sched.wait_and_delay.count.__traceiter_sched_switch.__traceiter_sched_switch.pipe_read.new_sync_read.vfs_read
      1121          -100.0%       0.00  perf-sched.wait_and_delay.count.__traceiter_sched_switch.__traceiter_sched_switch.preempt_schedule_common.__cond_resched.stop_one_cpu
    127.50 ±  7%    -100.0%       0.00  perf-sched.wait_and_delay.count.__traceiter_sched_switch.__traceiter_sched_switch.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop
     79.83 ±  2%    -100.0%       0.00  perf-sched.wait_and_delay.count.__traceiter_sched_switch.__traceiter_sched_switch.schedule_timeout.kcompactd.kthread
      2222          -100.0%       0.00  perf-sched.wait_and_delay.count.__traceiter_sched_switch.__traceiter_sched_switch.schedule_timeout.rcu_gp_kthread.kthread
      1301          -100.0%       0.00  perf-sched.wait_and_delay.count.__traceiter_sched_switch.__traceiter_sched_switch.smpboot_thread_fn.kthread.ret_from_fork
     73.33          -100.0%       0.00  perf-sched.wait_and_delay.count.__traceiter_sched_switch.__traceiter_sched_switch.wait_for_partner.fifo_open.do_dentry_open
    647.17 ±  9%    -100.0%       0.00  perf-sched.wait_and_delay.count.__traceiter_sched_switch.__traceiter_sched_switch.worker_thread.kthread.ret_from_fork
    999.29          -100.0%       0.00  perf-sched.wait_and_delay.max.ms.__traceiter_sched_switch.__traceiter_sched_switch.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe
      3461 ± 67%    -100.0%       0.00  perf-sched.wait_and_delay.max.ms.__traceiter_sched_switch.__traceiter_sched_switch.__x64_sys_sched_yield.do_syscall_64.entry_SYSCALL_64_after_hwframe
      1575 ±  2%    -100.0%       0.00  perf-sched.wait_and_delay.max.ms.__traceiter_sched_switch.__traceiter_sched_switch.devkmsg_read.vfs_read.ksys_read
      1000          -100.0%       0.00  perf-sched.wait_and_delay.max.ms.__traceiter_sched_switch.__traceiter_sched_switch.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep
      1575 ±  2%    -100.0%       0.00  perf-sched.wait_and_delay.max.ms.__traceiter_sched_switch.__traceiter_sched_switch.do_syslog.part.0
      1855 ± 33%    -100.0%       0.00  perf-sched.wait_and_delay.max.ms.__traceiter_sched_switch.__traceiter_sched_switch.do_task_dead.do_exit.do_group_exit
     37.00          -100.0%       0.00  perf-sched.wait_and_delay.max.ms.__traceiter_sched_switch.__traceiter_sched_switch.do_wait.kernel_wait4.__do_sys_wait4
      1821 ± 29%    -100.0%       0.00  perf-sched.wait_and_delay.max.ms.__traceiter_sched_switch.__traceiter_sched_switch.pipe_read.new_sync_read.vfs_read
      1000          -100.0%       0.00  perf-sched.wait_and_delay.max.ms.__traceiter_sched_switch.__traceiter_sched_switch.preempt_schedule_common.__cond_resched.stop_one_cpu
      3551          -100.0%       0.00  perf-sched.wait_and_delay.max.ms.__traceiter_sched_switch.__traceiter_sched_switch.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop
    505.01          -100.0%       0.00  perf-sched.wait_and_delay.max.ms.__traceiter_sched_switch.__traceiter_sched_switch.schedule_timeout.kcompactd.kthread
    164.00 ± 29%    -100.0%       0.00  perf-sched.wait_and_delay.max.ms.__traceiter_sched_switch.__traceiter_sched_switch.schedule_timeout.rcu_gp_kthread.kthread
      8271 ± 12%    -100.0%       0.00  perf-sched.wait_and_delay.max.ms.__traceiter_sched_switch.__traceiter_sched_switch.smpboot_thread_fn.kthread.ret_from_fork
      0.01 ± 17%    -100.0%       0.00  perf-sched.wait_and_delay.max.ms.__traceiter_sched_switch.__traceiter_sched_switch.wait_for_partner.fifo_open.do_dentry_open
      7029 ± 19%    -100.0%       0.00  perf-sched.wait_and_delay.max.ms.__traceiter_sched_switch.__traceiter_sched_switch.worker_thread.kthread.ret_from_fork
    882.66 ±  4%    -100.0%       0.00  perf-sched.wait_time.avg.ms.__traceiter_sched_switch.__traceiter_sched_switch.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe
     33.71 ± 68%    -100.0%       0.00  perf-sched.wait_time.avg.ms.__traceiter_sched_switch.__traceiter_sched_switch.__x64_sys_sched_yield.do_syscall_64.entry_SYSCALL_64_after_hwframe
    274.92 ± 16%    -100.0%       0.00  perf-sched.wait_time.avg.ms.__traceiter_sched_switch.__traceiter_sched_switch.devkmsg_read.vfs_read.ksys_read
    821.01 ±  4%    -100.0%       0.00  perf-sched.wait_time.avg.ms.__traceiter_sched_switch.__traceiter_sched_switch.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep
    274.93 ± 16%    -100.0%       0.00  perf-sched.wait_time.avg.ms.__traceiter_sched_switch.__traceiter_sched_switch.do_syslog.part.0
    227.24          -100.0%       0.00  perf-sched.wait_time.avg.ms.__traceiter_sched_switch.__traceiter_sched_switch.do_task_dead.do_exit.do_group_exit
      1.50          -100.0%       0.00  perf-sched.wait_time.avg.ms.__traceiter_sched_switch.__traceiter_sched_switch.do_wait.kernel_wait4.__do_sys_wait4
     74.24 ±223%    -100.0%       0.00  perf-sched.wait_time.avg.ms.__traceiter_sched_switch.__traceiter_sched_switch.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_apic_timer_interrupt
      0.03 ± 45%    -100.0%       0.00  perf-sched.wait_time.avg.ms.__traceiter_sched_switch.__traceiter_sched_switch.exit_to_user_mode_prepare.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe
     20.26 ± 35%    -100.0%       0.00  perf-sched.wait_time.avg.ms.__traceiter_sched_switch.__traceiter_sched_switch.futex_wait_queue_me.futex_wait.do_futex
     50.04          -100.0%       0.00  perf-sched.wait_time.avg.ms.__traceiter_sched_switch.__traceiter_sched_switch.pipe_read.new_sync_read.vfs_read
      0.07 ± 58%    -100.0%       0.00  perf-sched.wait_time.avg.ms.__traceiter_sched_switch.__traceiter_sched_switch.preempt_schedule_common.__cond_resched.__alloc_pages_nodemask
      0.21 ±148%    -100.0%       0.00  perf-sched.wait_time.avg.ms.__traceiter_sched_switch.__traceiter_sched_switch.preempt_schedule_common.__cond_resched.generic_perform_write
      7.96 ±  4%    -100.0%       0.00  perf-sched.wait_time.avg.ms.__traceiter_sched_switch.__traceiter_sched_switch.preempt_schedule_common.__cond_resched.stop_one_cpu
      1.39 ±  7%    -100.0%       0.00  perf-sched.wait_time.avg.ms.__traceiter_sched_switch.__traceiter_sched_switch.rcu_gp_kthread.kthread.ret_from_fork
    260.43 ± 31%    -100.0%       0.00  perf-sched.wait_time.avg.ms.__traceiter_sched_switch.__traceiter_sched_switch.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait
    206.33 ±  7%    -100.0%       0.00  perf-sched.wait_time.avg.ms.__traceiter_sched_switch.__traceiter_sched_switch.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop
    478.71          -100.0%       0.00  perf-sched.wait_time.avg.ms.__traceiter_sched_switch.__traceiter_sched_switch.schedule_timeout.kcompactd.kthread
      0.02 ±157%    -100.0%       0.00  perf-sched.wait_time.avg.ms.__traceiter_sched_switch.__traceiter_sched_switch.schedule_timeout.khugepaged.kthread
      4.51          -100.0%       0.00  perf-sched.wait_time.avg.ms.__traceiter_sched_switch.__traceiter_sched_switch.schedule_timeout.rcu_gp_kthread.kthread
    924.69 ±  4%    -100.0%       0.00  perf-sched.wait_time.avg.ms.__traceiter_sched_switch.__traceiter_sched_switch.smpboot_thread_fn.kthread.ret_from_fork
    447.87 ±  9%    -100.0%       0.00  perf-sched.wait_time.avg.ms.__traceiter_sched_switch.__traceiter_sched_switch.worker_thread.kthread.ret_from_fork
    999.28          -100.0%       0.00  perf-sched.wait_time.max.ms.__traceiter_sched_switch.__traceiter_sched_switch.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe
      3461 ± 67%    -100.0%       0.00  perf-sched.wait_time.max.ms.__traceiter_sched_switch.__traceiter_sched_switch.__x64_sys_sched_yield.do_syscall_64.entry_SYSCALL_64_after_hwframe
      1575 ±  2%    -100.0%       0.00  perf-sched.wait_time.max.ms.__traceiter_sched_switch.__traceiter_sched_switch.devkmsg_read.vfs_read.ksys_read
      1000          -100.0%       0.00  perf-sched.wait_time.max.ms.__traceiter_sched_switch.__traceiter_sched_switch.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep
      1575 ±  2%    -100.0%       0.00  perf-sched.wait_time.max.ms.__traceiter_sched_switch.__traceiter_sched_switch.do_syslog.part.0
      1855 ± 33%    -100.0%       0.00  perf-sched.wait_time.max.ms.__traceiter_sched_switch.__traceiter_sched_switch.do_task_dead.do_exit.do_group_exit
     36.99          -100.0%       0.00  perf-sched.wait_time.max.ms.__traceiter_sched_switch.__traceiter_sched_switch.do_wait.kernel_wait4.__do_sys_wait4
    367.67 ±223%    -100.0%       0.00  perf-sched.wait_time.max.ms.__traceiter_sched_switch.__traceiter_sched_switch.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_apic_timer_interrupt
      0.06 ± 31%    -100.0%       0.00  perf-sched.wait_time.max.ms.__traceiter_sched_switch.__traceiter_sched_switch.exit_to_user_mode_prepare.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe
     39.61 ± 30%    -100.0%       0.00  perf-sched.wait_time.max.ms.__traceiter_sched_switch.__traceiter_sched_switch.futex_wait_queue_me.futex_wait.do_futex
      1653 ±  9%    -100.0%       0.00  perf-sched.wait_time.max.ms.__traceiter_sched_switch.__traceiter_sched_switch.pipe_read.new_sync_read.vfs_read
      0.18 ± 74%    -100.0%       0.00  perf-sched.wait_time.max.ms.__traceiter_sched_switch.__traceiter_sched_switch.preempt_schedule_common.__cond_resched.__alloc_pages_nodemask
      2.82 ±191%    -100.0%       0.00  perf-sched.wait_time.max.ms.__traceiter_sched_switch.__traceiter_sched_switch.preempt_schedule_common.__cond_resched.generic_perform_write
      1000          -100.0%       0.00  perf-sched.wait_time.max.ms.__traceiter_sched_switch.__traceiter_sched_switch.preempt_schedule_common.__cond_resched.stop_one_cpu
      4.15 ±  9%    -100.0%       0.00  perf-sched.wait_time.max.ms.__traceiter_sched_switch.__traceiter_sched_switch.rcu_gp_kthread.kthread.ret_from_fork
      1413 ± 34%    -100.0%       0.00  perf-sched.wait_time.max.ms.__traceiter_sched_switch.__traceiter_sched_switch.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait
      3551          -100.0%       0.00  perf-sched.wait_time.max.ms.__traceiter_sched_switch.__traceiter_sched_switch.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop
    504.99          -100.0%       0.00  perf-sched.wait_time.max.ms.__traceiter_sched_switch.__traceiter_sched_switch.schedule_timeout.kcompactd.kthread
      0.02 ±157%    -100.0%       0.00  perf-sched.wait_time.max.ms.__traceiter_sched_switch.__traceiter_sched_switch.schedule_timeout.khugepaged.kthread
    163.98 ± 29%    -100.0%       0.00  perf-sched.wait_time.max.ms.__traceiter_sched_switch.__traceiter_sched_switch.schedule_timeout.rcu_gp_kthread.kthread
      7699 ± 13%    -100.0%       0.00  perf-sched.wait_time.max.ms.__traceiter_sched_switch.__traceiter_sched_switch.smpboot_thread_fn.kthread.ret_from_fork
      7029 ± 19%    -100.0%       0.00  perf-sched.wait_time.max.ms.__traceiter_sched_switch.__traceiter_sched_switch.worker_thread.kthread.ret_from_fork
     16402 ± 18%    +146.7%      40467 ± 27%  softirqs.CPU0.RCU
     15834 ± 21%    +198.0%      47187 ± 28%  softirqs.CPU1.RCU
     20070 ± 17%    +178.0%      55790 ± 26%  softirqs.CPU10.RCU
     16463 ± 14%    +232.5%      54736 ± 24%  softirqs.CPU100.RCU
     17915 ± 17%    +196.0%      53030 ± 24%  softirqs.CPU101.RCU
     17252 ± 20%    +202.0%      52096 ± 23%  softirqs.CPU102.RCU
     19689 ± 12%    +157.8%      50765 ± 22%  softirqs.CPU103.RCU
     16458 ± 23%    +168.2%      44143 ± 25%  softirqs.CPU104.RCU
     18119 ± 23%    +171.5%      49201 ± 23%  softirqs.CPU105.RCU
     17443 ± 20%    +612.4%     124271 ±128%  softirqs.CPU106.RCU
     19283 ± 12%    +134.5%      45223 ± 14%  softirqs.CPU107.RCU
     16751 ± 45%    +101.1%      33684 ± 19%  softirqs.CPU107.SCHED
     15825 ± 13%    +168.5%      42498 ± 29%  softirqs.CPU108.RCU
     15977 ± 14%    +174.7%      43896 ± 20%  softirqs.CPU109.RCU
     19331 ± 19%    +170.9%      52360 ± 27%  softirqs.CPU11.RCU
     16597 ±  9%    +151.8%      41799 ± 19%  softirqs.CPU110.RCU
     15667 ± 14%    +185.9%      44797 ± 24%  softirqs.CPU111.RCU
     18154 ± 11%    +181.2%      51041 ± 28%  softirqs.CPU112.RCU
     21011 ± 10%    +169.9%      56704 ± 23%  softirqs.CPU113.RCU
     20566 ±  8%    +128.5%      46992 ± 32%  softirqs.CPU114.RCU
     19229 ± 13%    +173.9%      52662 ± 25%  softirqs.CPU115.RCU
     19442 ± 14%    +128.2%      44358 ± 31%  softirqs.CPU116.RCU
     20990 ±  9%    +101.4%      42273 ± 23%  softirqs.CPU117.RCU
     18854 ± 14%    +140.6%      45354 ± 27%  softirqs.CPU118.RCU
     19036 ± 17%    +154.8%      48509 ± 24%  softirqs.CPU119.RCU
     23099 ±  4%    +194.2%      67966 ± 14%  softirqs.CPU12.RCU
     18940 ± 21%    +116.9%      41088 ± 15%  softirqs.CPU120.RCU
     21136 ± 60%     +70.6%      36057 ± 12%  softirqs.CPU120.SCHED
     18000 ± 15%    +143.2%      43783 ± 18%  softirqs.CPU121.RCU
     16267 ± 17%    +213.2%      50946 ± 28%  softirqs.CPU123.RCU
     16210 ± 18%    +186.2%      46389 ± 27%  softirqs.CPU124.RCU
     18507 ± 14%    +185.1%      52762 ± 19%  softirqs.CPU125.RCU
     14311 ±  9%    +157.3%      36819 ± 17%  softirqs.CPU126.RCU
     12795 ± 21%    +146.6%
softirqs.CPU127.RCU 14804 ± 16% +183.8% 42013 ± 22% softirqs.CPU128.RCU 19510 ± 11% +190.7% 56724 ± 14% softirqs.CPU129.RCU 18981 ± 23% +221.8% 61075 ± 21% softirqs.CPU13.RCU 16997 ± 14% +609.8% 120657 ±122% softirqs.CPU130.RCU 17880 ± 13% +225.6% 58216 ± 16% softirqs.CPU131.RCU 16348 ± 18% +251.3% 57433 ± 22% softirqs.CPU132.RCU 17923 ± 10% +208.2% 55247 ± 19% softirqs.CPU133.RCU 15612 ± 18% +256.9% 55713 ± 20% softirqs.CPU134.RCU 18558 ± 13% +206.2% 56827 ± 20% softirqs.CPU135.RCU 17202 ± 14% +211.0% 53497 ± 28% softirqs.CPU136.RCU 19097 ± 11% +214.2% 60000 ± 19% softirqs.CPU137.RCU 16759 ± 29% +278.6% 63448 ± 17% softirqs.CPU138.RCU 18959 ± 16% +165.3% 50306 ± 25% softirqs.CPU139.RCU 19505 ± 19% +149.6% 48693 ± 21% softirqs.CPU14.RCU 16135 ± 15% +168.2% 43272 ± 19% softirqs.CPU140.RCU 17794 ± 15% +184.8% 50687 ± 21% softirqs.CPU141.RCU 16472 ± 17% +196.7% 48869 ± 28% softirqs.CPU142.RCU 14839 ± 14% +174.5% 40736 ± 12% softirqs.CPU143.RCU 16384 ± 25% +211.4% 51022 ± 28% softirqs.CPU15.RCU 15547 ± 13% +173.7% 42548 ± 19% softirqs.CPU16.RCU 13862 ± 21% +208.6% 42779 ± 23% softirqs.CPU17.RCU 18183 ± 20% +583.2% 124235 ±119% softirqs.CPU18.RCU 17802 ± 9% +141.4% 42976 ± 15% softirqs.CPU19.RCU 20787 ± 13% +120.3% 45789 ± 33% softirqs.CPU2.RCU 18366 ± 10% +140.8% 44227 ± 23% softirqs.CPU20.RCU 15948 ± 12% +218.4% 50775 ± 16% softirqs.CPU21.RCU 19221 ± 11% +222.5% 61995 ± 16% softirqs.CPU22.RCU 16334 ± 17% +227.9% 53557 ± 23% softirqs.CPU23.RCU 21251 ± 9% +188.7% 61345 ± 16% softirqs.CPU24.RCU 16451 ± 14% +201.1% 49535 ± 21% softirqs.CPU25.RCU 20500 ± 12% +213.9% 64349 ± 6% softirqs.CPU26.RCU 17928 ± 20% +183.5% 50819 ± 27% softirqs.CPU27.RCU 20846 ± 7% +163.3% 54888 ± 21% softirqs.CPU28.RCU 18914 ± 12% +163.4% 49811 ± 29% softirqs.CPU29.RCU 18556 ± 18% +165.5% 49259 ± 35% softirqs.CPU3.RCU 20496 ± 15% +181.7% 57744 ± 24% softirqs.CPU30.RCU 17054 ± 17% +664.3% 130348 ±118% softirqs.CPU31.RCU 15549 ± 13% +212.6% 48611 ± 19% softirqs.CPU32.RCU 15335 ± 16% +193.8% 45049 ± 
19% softirqs.CPU33.RCU 16661 ± 10% +158.2% 43026 ± 23% softirqs.CPU34.RCU 15384 ± 13% +221.6% 49479 ± 12% softirqs.CPU35.RCU 19525 ± 14% +531.4% 123281 ±123% softirqs.CPU36.RCU 19019 ± 14% +153.6% 48227 ± 28% softirqs.CPU37.RCU 18423 ± 12% +166.7% 49141 ± 30% softirqs.CPU38.RCU 18633 ± 12% +152.4% 47028 ± 32% softirqs.CPU39.RCU 21585 ± 11% +184.9% 61505 ± 19% softirqs.CPU4.RCU 20189 ± 11% +148.1% 50090 ± 30% softirqs.CPU40.RCU 17534 ± 14% +166.0% 46641 ± 28% softirqs.CPU41.RCU 17539 ± 12% +218.6% 55878 ± 20% softirqs.CPU42.RCU 18750 ± 11% +177.1% 51963 ± 18% softirqs.CPU43.RCU 18486 ± 13% +223.7% 59837 ± 19% softirqs.CPU44.RCU 17581 ± 7% +211.5% 54762 ± 26% softirqs.CPU45.RCU 18952 ± 12% +595.6% 131826 ±112% softirqs.CPU46.RCU 18765 ± 15% +177.3% 52040 ± 34% softirqs.CPU47.RCU 15465 ± 12% +273.8% 57805 ± 11% softirqs.CPU48.RCU 17804 ± 13% +215.6% 56199 ± 13% softirqs.CPU49.RCU 18414 ± 18% +148.9% 45836 ± 33% softirqs.CPU5.RCU 15662 ± 19% +198.0% 46680 ± 24% softirqs.CPU50.RCU 17973 ± 12% +164.9% 47605 ± 17% softirqs.CPU51.RCU 18884 ± 10% +165.9% 50206 ± 25% softirqs.CPU52.RCU 17031 ± 13% +167.6% 45569 ± 25% softirqs.CPU53.RCU 16886 ± 18% +627.9% 122912 ±120% softirqs.CPU54.RCU 18278 ± 14% +211.8% 56989 ± 17% softirqs.CPU55.RCU 18145 ± 8% +217.4% 57594 ± 14% softirqs.CPU56.RCU 17511 ± 21% +172.7% 47747 ± 36% softirqs.CPU57.RCU 16802 ± 15% +225.6% 54711 ± 18% softirqs.CPU58.RCU 16774 ± 14% +179.1% 46814 ± 27% softirqs.CPU59.RCU 17951 ± 18% +216.0% 56731 ± 24% softirqs.CPU6.RCU 17108 ± 18% +506.2% 103706 ±125% softirqs.CPU60.RCU 14780 ± 11% +690.3% 116811 ±135% softirqs.CPU61.RCU 17382 ± 17% +162.5% 45635 ± 32% softirqs.CPU62.RCU 15487 ± 14% +599.2% 108291 ±128% softirqs.CPU63.RCU 16035 ± 23% +194.1% 47163 ± 26% softirqs.CPU64.RCU 14855 ± 15% +184.8% 42300 ± 29% softirqs.CPU65.RCU 16111 ± 18% +144.1% 39334 ± 13% softirqs.CPU66.RCU 17195 ± 82% +81.4% 31195 ± 25% softirqs.CPU66.SCHED 16405 ± 34% +230.1% 54155 ± 18% softirqs.CPU67.RCU 17042 ± 16% +647.0% 127307 ±120% 
softirqs.CPU68.RCU 15816 ± 19% +218.2% 50337 ± 29% softirqs.CPU69.RCU 17797 ± 18% +229.0% 58546 ± 14% softirqs.CPU7.RCU 16084 ± 24% +236.4% 54100 ± 18% softirqs.CPU70.RCU 18164 ± 11% +226.2% 59244 ± 9% softirqs.CPU71.RCU 18018 ± 5% +205.3% 55012 ± 15% softirqs.CPU72.RCU 18890 ± 7% +185.9% 54006 ± 19% softirqs.CPU73.RCU 14625 ± 14% +257.0% 52219 ± 27% softirqs.CPU74.RCU 15595 ± 12% +238.0% 52709 ± 20% softirqs.CPU75.RCU 14102 ± 18% +177.9% 39194 ± 27% softirqs.CPU76.RCU 16796 ± 13% +224.1% 54433 ± 18% softirqs.CPU77.RCU 15631 ± 13% +189.7% 45281 ± 19% softirqs.CPU78.RCU 16722 ± 12% +151.5% 42050 ± 10% softirqs.CPU79.RCU 20349 ± 9% +231.0% 67348 ± 9% softirqs.CPU8.RCU 15655 ± 14% +144.7% 38303 ± 8% softirqs.CPU80.RCU 16539 ± 18% +200.5% 49702 ± 19% softirqs.CPU81.RCU 15683 ± 14% +170.8% 42474 ± 22% softirqs.CPU82.RCU 16865 ± 20% +206.4% 51681 ± 23% softirqs.CPU83.RCU 14396 ± 6% +178.2% 40052 ± 8% softirqs.CPU84.RCU 17387 ± 14% +164.1% 45915 ± 26% softirqs.CPU85.RCU 15672 ± 17% +224.7% 50892 ± 29% softirqs.CPU86.RCU 19287 ± 19% +176.6% 53347 ± 28% softirqs.CPU87.RCU 16346 ± 17% +211.2% 50878 ± 17% softirqs.CPU88.RCU 18634 ± 10% +171.6% 50611 ± 14% softirqs.CPU89.RCU 19253 ± 16% +164.0% 50822 ± 38% softirqs.CPU9.RCU 16349 ± 9% +205.6% 49964 ± 26% softirqs.CPU90.RCU 16658 ± 13% +244.9% 57451 ± 7% softirqs.CPU91.RCU 15832 ± 12% +252.0% 55724 ± 6% softirqs.CPU92.RCU 20185 ± 59% -70.1% 6025 ± 25% softirqs.CPU92.SCHED 16782 ± 16% +193.7% 49299 ± 21% softirqs.CPU93.RCU 13832 ± 13% +182.4% 39065 ± 26% softirqs.CPU94.RCU 17192 ± 10% +164.6% 45483 ± 25% softirqs.CPU95.RCU 15533 ± 22% +208.0% 47836 ± 28% softirqs.CPU96.RCU 19214 ± 14% +198.1% 57287 ± 21% softirqs.CPU97.RCU 15814 ± 19% +183.6% 44845 ± 20% softirqs.CPU98.RCU 18369 ± 10% +207.8% 56534 ± 20% softirqs.CPU99.RCU 2516265 ± 2% +218.1% 8003915 ± 9% softirqs.RCU 25843 +66.6% 43053 ± 2% softirqs.TIMER 3312 ± 52% -100.0% 0.00 interrupts.CPU0.NMI:Non-maskable_interrupts 3312 ± 52% -100.0% 0.00 
interrupts.CPU0.PMI:Performance_monitoring_interrupts 2926 ± 20% -100.0% 0.00 interrupts.CPU1.NMI:Non-maskable_interrupts 2926 ± 20% -100.0% 0.00 interrupts.CPU1.PMI:Performance_monitoring_interrupts 5321 ± 45% -100.0% 0.00 interrupts.CPU10.NMI:Non-maskable_interrupts 5321 ± 45% -100.0% 0.00 interrupts.CPU10.PMI:Performance_monitoring_interrupts 3999 ± 55% -100.0% 1.00 ±223% interrupts.CPU100.NMI:Non-maskable_interrupts 3999 ± 55% -100.0% 1.00 ±223% interrupts.CPU100.PMI:Performance_monitoring_interrupts 4173 ± 52% -100.0% 0.00 interrupts.CPU101.NMI:Non-maskable_interrupts 4173 ± 52% -100.0% 0.00 interrupts.CPU101.PMI:Performance_monitoring_interrupts 2583 ± 35% -100.0% 0.00 interrupts.CPU102.NMI:Non-maskable_interrupts 2583 ± 35% -100.0% 0.00 interrupts.CPU102.PMI:Performance_monitoring_interrupts 4341 ± 65% -100.0% 0.00 interrupts.CPU103.NMI:Non-maskable_interrupts 4341 ± 65% -100.0% 0.00 interrupts.CPU103.PMI:Performance_monitoring_interrupts 4223 ± 66% -100.0% 0.00 interrupts.CPU104.NMI:Non-maskable_interrupts 4223 ± 66% -100.0% 0.00 interrupts.CPU104.PMI:Performance_monitoring_interrupts 4018 ± 39% -100.0% 0.00 interrupts.CPU105.NMI:Non-maskable_interrupts 4018 ± 39% -100.0% 0.00 interrupts.CPU105.PMI:Performance_monitoring_interrupts 4266 ± 63% -100.0% 0.00 interrupts.CPU106.NMI:Non-maskable_interrupts 4266 ± 63% -100.0% 0.00 interrupts.CPU106.PMI:Performance_monitoring_interrupts 3842 ± 58% -100.0% 0.00 interrupts.CPU107.NMI:Non-maskable_interrupts 3842 ± 58% -100.0% 0.00 interrupts.CPU107.PMI:Performance_monitoring_interrupts 3789 ± 35% -100.0% 0.00 interrupts.CPU108.NMI:Non-maskable_interrupts 3789 ± 35% -100.0% 0.00 interrupts.CPU108.PMI:Performance_monitoring_interrupts 2838 ± 26% -100.0% 0.00 interrupts.CPU109.NMI:Non-maskable_interrupts 2838 ± 26% -100.0% 0.00 interrupts.CPU109.PMI:Performance_monitoring_interrupts 5445 ± 51% -100.0% 0.00 interrupts.CPU11.NMI:Non-maskable_interrupts 5445 ± 51% -100.0% 0.00 
interrupts.CPU11.PMI:Performance_monitoring_interrupts 3525 ± 36% -100.0% 0.00 interrupts.CPU110.NMI:Non-maskable_interrupts 3525 ± 36% -100.0% 0.00 interrupts.CPU110.PMI:Performance_monitoring_interrupts 3953 ± 60% -100.0% 0.00 interrupts.CPU111.NMI:Non-maskable_interrupts 3953 ± 60% -100.0% 0.00 interrupts.CPU111.PMI:Performance_monitoring_interrupts 5058 ± 49% -100.0% 0.00 interrupts.CPU112.NMI:Non-maskable_interrupts 5058 ± 49% -100.0% 0.00 interrupts.CPU112.PMI:Performance_monitoring_interrupts 4609 ± 39% -100.0% 0.00 interrupts.CPU113.NMI:Non-maskable_interrupts 4609 ± 39% -100.0% 0.00 interrupts.CPU113.PMI:Performance_monitoring_interrupts 4525 ± 43% -100.0% 0.00 interrupts.CPU114.NMI:Non-maskable_interrupts 4525 ± 43% -100.0% 0.00 interrupts.CPU114.PMI:Performance_monitoring_interrupts 3237 ± 29% -100.0% 0.00 interrupts.CPU115.NMI:Non-maskable_interrupts 3237 ± 29% -100.0% 0.00 interrupts.CPU115.PMI:Performance_monitoring_interrupts 52.00 ± 67% -69.2% 16.00 ±100% interrupts.CPU115.RES:Rescheduling_interrupts 3683 ± 60% -100.0% 0.00 interrupts.CPU116.NMI:Non-maskable_interrupts 3683 ± 60% -100.0% 0.00 interrupts.CPU116.PMI:Performance_monitoring_interrupts 5148 ± 42% -100.0% 0.00 interrupts.CPU117.NMI:Non-maskable_interrupts 5148 ± 42% -100.0% 0.00 interrupts.CPU117.PMI:Performance_monitoring_interrupts 5757 ± 41% -100.0% 0.00 interrupts.CPU118.NMI:Non-maskable_interrupts 5757 ± 41% -100.0% 0.00 interrupts.CPU118.PMI:Performance_monitoring_interrupts 72.17 ± 60% -73.4% 19.17 ± 81% interrupts.CPU118.RES:Rescheduling_interrupts 5366 ± 52% -100.0% 0.00 interrupts.CPU119.NMI:Non-maskable_interrupts 5366 ± 52% -100.0% 0.00 interrupts.CPU119.PMI:Performance_monitoring_interrupts 6093 ± 33% -100.0% 0.00 interrupts.CPU12.NMI:Non-maskable_interrupts 6093 ± 33% -100.0% 0.00 interrupts.CPU12.PMI:Performance_monitoring_interrupts 5966 ± 39% -100.0% 0.00 interrupts.CPU120.NMI:Non-maskable_interrupts 5966 ± 39% -100.0% 0.00 
interrupts.CPU120.PMI:Performance_monitoring_interrupts 2344 ± 42% -100.0% 0.00 interrupts.CPU121.NMI:Non-maskable_interrupts 2344 ± 42% -100.0% 0.00 interrupts.CPU121.PMI:Performance_monitoring_interrupts 2720 ± 34% -100.0% 0.00 interrupts.CPU122.NMI:Non-maskable_interrupts 2720 ± 34% -100.0% 0.00 interrupts.CPU122.PMI:Performance_monitoring_interrupts 2301 ± 38% -100.0% 0.00 interrupts.CPU123.NMI:Non-maskable_interrupts 2301 ± 38% -100.0% 0.00 interrupts.CPU123.PMI:Performance_monitoring_interrupts 2391 ± 27% -100.0% 0.00 interrupts.CPU124.NMI:Non-maskable_interrupts 2391 ± 27% -100.0% 0.00 interrupts.CPU124.PMI:Performance_monitoring_interrupts 3276 ± 29% -100.0% 0.00 interrupts.CPU125.NMI:Non-maskable_interrupts 3276 ± 29% -100.0% 0.00 interrupts.CPU125.PMI:Performance_monitoring_interrupts 4621 ± 52% -100.0% 0.00 interrupts.CPU126.NMI:Non-maskable_interrupts 4621 ± 52% -100.0% 0.00 interrupts.CPU126.PMI:Performance_monitoring_interrupts 3735 ± 64% -100.0% 0.00 interrupts.CPU127.NMI:Non-maskable_interrupts 3735 ± 64% -100.0% 0.00 interrupts.CPU127.PMI:Performance_monitoring_interrupts 77.67 ±109% -89.3% 8.33 ± 76% interrupts.CPU127.RES:Rescheduling_interrupts 3500 ± 73% -100.0% 0.00 interrupts.CPU128.NMI:Non-maskable_interrupts 3500 ± 73% -100.0% 0.00 interrupts.CPU128.PMI:Performance_monitoring_interrupts 3670 ± 61% -100.0% 0.00 interrupts.CPU129.NMI:Non-maskable_interrupts 3670 ± 61% -100.0% 0.00 interrupts.CPU129.PMI:Performance_monitoring_interrupts 4563 ± 56% -100.0% 0.00 interrupts.CPU13.NMI:Non-maskable_interrupts 4563 ± 56% -100.0% 0.00 interrupts.CPU13.PMI:Performance_monitoring_interrupts 6265 ± 41% -100.0% 0.00 interrupts.CPU130.NMI:Non-maskable_interrupts 6265 ± 41% -100.0% 0.00 interrupts.CPU130.PMI:Performance_monitoring_interrupts 4520 ± 36% -100.0% 0.00 interrupts.CPU131.NMI:Non-maskable_interrupts 4520 ± 36% -100.0% 0.00 interrupts.CPU131.PMI:Performance_monitoring_interrupts 4381 ± 48% -100.0% 0.00 interrupts.CPU132.NMI:Non-maskable_interrupts 
4381 ± 48% -100.0% 0.00 interrupts.CPU132.PMI:Performance_monitoring_interrupts 4564 ± 48% -100.0% 0.00 interrupts.CPU133.NMI:Non-maskable_interrupts 4564 ± 48% -100.0% 0.00 interrupts.CPU133.PMI:Performance_monitoring_interrupts 5056 ± 49% -100.0% 0.00 interrupts.CPU134.NMI:Non-maskable_interrupts 5056 ± 49% -100.0% 0.00 interrupts.CPU134.PMI:Performance_monitoring_interrupts 5139 ± 29% -100.0% 0.00 interrupts.CPU135.NMI:Non-maskable_interrupts 5139 ± 29% -100.0% 0.00 interrupts.CPU135.PMI:Performance_monitoring_interrupts 4884 ± 50% -100.0% 0.00 interrupts.CPU136.NMI:Non-maskable_interrupts 4884 ± 50% -100.0% 0.00 interrupts.CPU136.PMI:Performance_monitoring_interrupts 5210 ± 40% -100.0% 0.00 interrupts.CPU137.NMI:Non-maskable_interrupts 5210 ± 40% -100.0% 0.00 interrupts.CPU137.PMI:Performance_monitoring_interrupts 4586 ± 57% -100.0% 0.00 interrupts.CPU138.NMI:Non-maskable_interrupts 4586 ± 57% -100.0% 0.00 interrupts.CPU138.PMI:Performance_monitoring_interrupts 4974 ± 48% -100.0% 0.00 interrupts.CPU139.NMI:Non-maskable_interrupts 4974 ± 48% -100.0% 0.00 interrupts.CPU139.PMI:Performance_monitoring_interrupts 5041 ± 47% -100.0% 0.00 interrupts.CPU14.NMI:Non-maskable_interrupts 5041 ± 47% -100.0% 0.00 interrupts.CPU14.PMI:Performance_monitoring_interrupts 3906 ± 53% -100.0% 0.00 interrupts.CPU140.NMI:Non-maskable_interrupts 3906 ± 53% -100.0% 0.00 interrupts.CPU140.PMI:Performance_monitoring_interrupts 3811 ± 57% -100.0% 0.00 interrupts.CPU141.NMI:Non-maskable_interrupts 3811 ± 57% -100.0% 0.00 interrupts.CPU141.PMI:Performance_monitoring_interrupts 3457 ± 31% -100.0% 0.00 interrupts.CPU142.NMI:Non-maskable_interrupts 3457 ± 31% -100.0% 0.00 interrupts.CPU142.PMI:Performance_monitoring_interrupts 3062 ± 49% -100.0% 0.00 interrupts.CPU143.NMI:Non-maskable_interrupts 3062 ± 49% -100.0% 0.00 interrupts.CPU143.PMI:Performance_monitoring_interrupts 94.83 ± 50% -77.0% 21.83 ±118% interrupts.CPU143.RES:Rescheduling_interrupts 4927 ± 44% -100.0% 0.00 
interrupts.CPU15.NMI:Non-maskable_interrupts 4927 ± 44% -100.0% 0.00 interrupts.CPU15.PMI:Performance_monitoring_interrupts 5899 ± 38% -100.0% 0.00 interrupts.CPU16.NMI:Non-maskable_interrupts 5899 ± 38% -100.0% 0.00 interrupts.CPU16.PMI:Performance_monitoring_interrupts 3144 ± 28% -100.0% 0.00 interrupts.CPU17.NMI:Non-maskable_interrupts 3144 ± 28% -100.0% 0.00 interrupts.CPU17.PMI:Performance_monitoring_interrupts 4787 ± 46% -100.0% 0.00 interrupts.CPU18.NMI:Non-maskable_interrupts 4787 ± 46% -100.0% 0.00 interrupts.CPU18.PMI:Performance_monitoring_interrupts 3679 ± 37% -100.0% 0.00 interrupts.CPU19.NMI:Non-maskable_interrupts 3679 ± 37% -100.0% 0.00 interrupts.CPU19.PMI:Performance_monitoring_interrupts 6025 ± 35% -100.0% 0.00 interrupts.CPU2.NMI:Non-maskable_interrupts 6025 ± 35% -100.0% 0.00 interrupts.CPU2.PMI:Performance_monitoring_interrupts 4572 ± 56% -100.0% 0.00 interrupts.CPU20.NMI:Non-maskable_interrupts 4572 ± 56% -100.0% 0.00 interrupts.CPU20.PMI:Performance_monitoring_interrupts 875.50 ± 49% -73.8% 229.00 ± 78% interrupts.CPU20.TLB:TLB_shootdowns 2461 ± 30% -100.0% 0.00 interrupts.CPU21.NMI:Non-maskable_interrupts 2461 ± 30% -100.0% 0.00 interrupts.CPU21.PMI:Performance_monitoring_interrupts 5090 ± 43% -100.0% 0.00 interrupts.CPU22.NMI:Non-maskable_interrupts 5090 ± 43% -100.0% 0.00 interrupts.CPU22.PMI:Performance_monitoring_interrupts 2360 ± 41% -100.0% 0.00 interrupts.CPU23.NMI:Non-maskable_interrupts 2360 ± 41% -100.0% 0.00 interrupts.CPU23.PMI:Performance_monitoring_interrupts 6158 ± 26% -100.0% 0.00 interrupts.CPU24.NMI:Non-maskable_interrupts 6158 ± 26% -100.0% 0.00 interrupts.CPU24.PMI:Performance_monitoring_interrupts 2405 ± 24% -100.0% 0.00 interrupts.CPU25.NMI:Non-maskable_interrupts 2405 ± 24% -100.0% 0.00 interrupts.CPU25.PMI:Performance_monitoring_interrupts 5228 ± 34% -100.0% 0.00 interrupts.CPU26.NMI:Non-maskable_interrupts 5228 ± 34% -100.0% 0.00 interrupts.CPU26.PMI:Performance_monitoring_interrupts 3886 ± 53% -100.0% 0.00 
interrupts.CPU27.NMI:Non-maskable_interrupts 3886 ± 53% -100.0% 0.00 interrupts.CPU27.PMI:Performance_monitoring_interrupts 4800 ± 45% -100.0% 0.00 interrupts.CPU28.NMI:Non-maskable_interrupts 4800 ± 45% -100.0% 0.00 interrupts.CPU28.PMI:Performance_monitoring_interrupts 6130 ± 46% -100.0% 0.00 interrupts.CPU29.NMI:Non-maskable_interrupts 6130 ± 46% -100.0% 0.00 interrupts.CPU29.PMI:Performance_monitoring_interrupts 4383 ± 60% -100.0% 0.00 interrupts.CPU3.NMI:Non-maskable_interrupts 4383 ± 60% -100.0% 0.00 interrupts.CPU3.PMI:Performance_monitoring_interrupts 4984 ± 48% -100.0% 0.00 interrupts.CPU30.NMI:Non-maskable_interrupts 4984 ± 48% -100.0% 0.00 interrupts.CPU30.PMI:Performance_monitoring_interrupts 3877 ± 52% -100.0% 0.00 interrupts.CPU31.NMI:Non-maskable_interrupts 3877 ± 52% -100.0% 0.00 interrupts.CPU31.PMI:Performance_monitoring_interrupts 4015 ± 32% -100.0% 0.00 interrupts.CPU32.NMI:Non-maskable_interrupts 4015 ± 32% -100.0% 0.00 interrupts.CPU32.PMI:Performance_monitoring_interrupts 3525 ± 16% -100.0% 0.00 interrupts.CPU33.NMI:Non-maskable_interrupts 3525 ± 16% -100.0% 0.00 interrupts.CPU33.PMI:Performance_monitoring_interrupts 4705 ± 43% -100.0% 0.00 interrupts.CPU34.NMI:Non-maskable_interrupts 4705 ± 43% -100.0% 0.00 interrupts.CPU34.PMI:Performance_monitoring_interrupts 4700 ± 51% -100.0% 0.00 interrupts.CPU35.NMI:Non-maskable_interrupts 4700 ± 51% -100.0% 0.00 interrupts.CPU35.PMI:Performance_monitoring_interrupts 4586 ± 57% -100.0% 0.00 interrupts.CPU36.NMI:Non-maskable_interrupts 4586 ± 57% -100.0% 0.00 interrupts.CPU36.PMI:Performance_monitoring_interrupts 5631 ± 36% -100.0% 0.00 interrupts.CPU37.NMI:Non-maskable_interrupts 5631 ± 36% -100.0% 0.00 interrupts.CPU37.PMI:Performance_monitoring_interrupts 3902 ± 47% -100.0% 0.00 interrupts.CPU38.NMI:Non-maskable_interrupts 3902 ± 47% -100.0% 0.00 interrupts.CPU38.PMI:Performance_monitoring_interrupts 4126 ± 49% -100.0% 0.00 interrupts.CPU39.NMI:Non-maskable_interrupts 4126 ± 49% -100.0% 0.00 
interrupts.CPU39.PMI:Performance_monitoring_interrupts 7459 ± 12% -100.0% 0.00 interrupts.CPU4.NMI:Non-maskable_interrupts 7459 ± 12% -100.0% 0.00 interrupts.CPU4.PMI:Performance_monitoring_interrupts 3440 ± 70% -100.0% 1.00 ±223% interrupts.CPU40.NMI:Non-maskable_interrupts 3440 ± 70% -100.0% 1.00 ±223% interrupts.CPU40.PMI:Performance_monitoring_interrupts 69.33 ± 60% -80.8% 13.33 ± 83% interrupts.CPU40.RES:Rescheduling_interrupts 4158 ± 44% -100.0% 0.00 interrupts.CPU41.NMI:Non-maskable_interrupts 4158 ± 44% -100.0% 0.00 interrupts.CPU41.PMI:Performance_monitoring_interrupts 3729 ± 64% -100.0% 0.00 interrupts.CPU42.NMI:Non-maskable_interrupts 3729 ± 64% -100.0% 0.00 interrupts.CPU42.PMI:Performance_monitoring_interrupts 54.83 ± 52% -74.2% 14.17 ± 88% interrupts.CPU42.RES:Rescheduling_interrupts 3589 ± 36% -100.0% 0.00 interrupts.CPU43.NMI:Non-maskable_interrupts 3589 ± 36% -100.0% 0.00 interrupts.CPU43.PMI:Performance_monitoring_interrupts 4511 ± 52% -100.0% 0.00 interrupts.CPU44.NMI:Non-maskable_interrupts 4511 ± 52% -100.0% 0.00 interrupts.CPU44.PMI:Performance_monitoring_interrupts 80.83 ± 47% -87.4% 10.17 ± 81% interrupts.CPU44.RES:Rescheduling_interrupts 2897 ± 51% -100.0% 0.00 interrupts.CPU45.NMI:Non-maskable_interrupts 2897 ± 51% -100.0% 0.00 interrupts.CPU45.PMI:Performance_monitoring_interrupts 3879 ± 53% -100.0% 0.00 interrupts.CPU46.NMI:Non-maskable_interrupts 3879 ± 53% -100.0% 0.00 interrupts.CPU46.PMI:Performance_monitoring_interrupts 3966 ± 49% -100.0% 0.00 interrupts.CPU47.NMI:Non-maskable_interrupts 3966 ± 49% -100.0% 0.00 interrupts.CPU47.PMI:Performance_monitoring_interrupts 4141 ± 62% -100.0% 0.00 interrupts.CPU48.NMI:Non-maskable_interrupts 4141 ± 62% -100.0% 0.00 interrupts.CPU48.PMI:Performance_monitoring_interrupts 4590 ± 35% -100.0% 0.00 interrupts.CPU49.NMI:Non-maskable_interrupts 4590 ± 35% -100.0% 0.00 interrupts.CPU49.PMI:Performance_monitoring_interrupts 4363 ± 64% -100.0% 0.00 interrupts.CPU5.NMI:Non-maskable_interrupts 4363 ± 64% 
-100.0% 0.00 interrupts.CPU5.PMI:Performance_monitoring_interrupts 5560 ± 45% -100.0% 0.00 interrupts.CPU50.NMI:Non-maskable_interrupts 5560 ± 45% -100.0% 0.00 interrupts.CPU50.PMI:Performance_monitoring_interrupts 5974 ± 34% -100.0% 0.00 interrupts.CPU51.NMI:Non-maskable_interrupts 5974 ± 34% -100.0% 0.00 interrupts.CPU51.PMI:Performance_monitoring_interrupts 6513 ± 27% -100.0% 0.00 interrupts.CPU52.NMI:Non-maskable_interrupts 6513 ± 27% -100.0% 0.00 interrupts.CPU52.PMI:Performance_monitoring_interrupts 74.50 ± 61% -55.0% 33.50 ±119% interrupts.CPU52.RES:Rescheduling_interrupts 4063 ± 41% -100.0% 0.00 interrupts.CPU53.NMI:Non-maskable_interrupts 4063 ± 41% -100.0% 0.00 interrupts.CPU53.PMI:Performance_monitoring_interrupts 5161 ± 41% -100.0% 0.00 interrupts.CPU54.NMI:Non-maskable_interrupts 5161 ± 41% -100.0% 0.00 interrupts.CPU54.PMI:Performance_monitoring_interrupts 6614 ± 27% -100.0% 0.00 interrupts.CPU55.NMI:Non-maskable_interrupts 6614 ± 27% -100.0% 0.00 interrupts.CPU55.PMI:Performance_monitoring_interrupts 6638 ± 27% -100.0% 0.00 interrupts.CPU56.NMI:Non-maskable_interrupts 6638 ± 27% -100.0% 0.00 interrupts.CPU56.PMI:Performance_monitoring_interrupts 4840 ± 47% -100.0% 0.00 interrupts.CPU57.NMI:Non-maskable_interrupts 4840 ± 47% -100.0% 0.00 interrupts.CPU57.PMI:Performance_monitoring_interrupts 4352 ± 62% -100.0% 0.00 interrupts.CPU58.NMI:Non-maskable_interrupts 4352 ± 62% -100.0% 0.00 interrupts.CPU58.PMI:Performance_monitoring_interrupts 3277 ± 64% -100.0% 0.00 interrupts.CPU59.NMI:Non-maskable_interrupts 3277 ± 64% -100.0% 0.00 interrupts.CPU59.PMI:Performance_monitoring_interrupts 4906 ± 33% -100.0% 0.00 interrupts.CPU6.NMI:Non-maskable_interrupts 4906 ± 33% -100.0% 0.00 interrupts.CPU6.PMI:Performance_monitoring_interrupts 3907 ± 53% -100.0% 0.00 interrupts.CPU60.NMI:Non-maskable_interrupts 3907 ± 53% -100.0% 0.00 interrupts.CPU60.PMI:Performance_monitoring_interrupts 2777 ± 28% -100.0% 0.00 interrupts.CPU61.NMI:Non-maskable_interrupts 2777 ± 28% 
-100.0% 0.00 interrupts.CPU61.PMI:Performance_monitoring_interrupts 4597 ± 56% -100.0% 0.00 interrupts.CPU62.NMI:Non-maskable_interrupts 4597 ± 56% -100.0% 0.00 interrupts.CPU62.PMI:Performance_monitoring_interrupts 79.33 ± 44% -87.6% 9.83 ± 85% interrupts.CPU62.RES:Rescheduling_interrupts 2846 ± 63% -100.0% 0.00 interrupts.CPU63.NMI:Non-maskable_interrupts 2846 ± 63% -100.0% 0.00 interrupts.CPU63.PMI:Performance_monitoring_interrupts 3713 ± 57% -100.0% 0.00 interrupts.CPU64.NMI:Non-maskable_interrupts 3713 ± 57% -100.0% 0.00 interrupts.CPU64.PMI:Performance_monitoring_interrupts 3500 ± 60% -100.0% 0.00 interrupts.CPU65.NMI:Non-maskable_interrupts 3500 ± 60% -100.0% 0.00 interrupts.CPU65.PMI:Performance_monitoring_interrupts 4631 ± 42% -100.0% 0.00 interrupts.CPU66.NMI:Non-maskable_interrupts 4631 ± 42% -100.0% 0.00 interrupts.CPU66.PMI:Performance_monitoring_interrupts 3063 ± 40% -100.0% 0.00 interrupts.CPU67.NMI:Non-maskable_interrupts 3063 ± 40% -100.0% 0.00 interrupts.CPU67.PMI:Performance_monitoring_interrupts 4632 ± 55% -100.0% 0.00 interrupts.CPU68.NMI:Non-maskable_interrupts 4632 ± 55% -100.0% 0.00 interrupts.CPU68.PMI:Performance_monitoring_interrupts 4402 ± 33% -100.0% 0.00 interrupts.CPU69.NMI:Non-maskable_interrupts 4402 ± 33% -100.0% 0.00 interrupts.CPU69.PMI:Performance_monitoring_interrupts 3438 ± 38% -100.0% 0.00 interrupts.CPU7.NMI:Non-maskable_interrupts 3438 ± 38% -100.0% 0.00 interrupts.CPU7.PMI:Performance_monitoring_interrupts 4654 ± 48% -100.0% 0.00 interrupts.CPU70.NMI:Non-maskable_interrupts 4654 ± 48% -100.0% 0.00 interrupts.CPU70.PMI:Performance_monitoring_interrupts 6027 ± 32% -100.0% 0.00 interrupts.CPU71.NMI:Non-maskable_interrupts 6027 ± 32% -100.0% 0.00 interrupts.CPU71.PMI:Performance_monitoring_interrupts 5898 ± 38% -100.0% 0.00 interrupts.CPU72.NMI:Non-maskable_interrupts 5898 ± 38% -100.0% 0.00 interrupts.CPU72.PMI:Performance_monitoring_interrupts 5698 ± 32% -100.0% 0.00 interrupts.CPU73.NMI:Non-maskable_interrupts 5698 ± 32% 
-100.0% 0.00 interrupts.CPU73.PMI:Performance_monitoring_interrupts 3009 ± 78% -100.0% 0.00 interrupts.CPU74.NMI:Non-maskable_interrupts 3009 ± 78% -100.0% 0.00 interrupts.CPU74.PMI:Performance_monitoring_interrupts 4320 ± 50% -100.0% 0.00 interrupts.CPU75.NMI:Non-maskable_interrupts 4320 ± 50% -100.0% 0.00 interrupts.CPU75.PMI:Performance_monitoring_interrupts 2476 ± 37% -100.0% 0.00 interrupts.CPU76.NMI:Non-maskable_interrupts 2476 ± 37% -100.0% 0.00 interrupts.CPU76.PMI:Performance_monitoring_interrupts 5025 ± 62% -100.0% 0.00 interrupts.CPU77.NMI:Non-maskable_interrupts 5025 ± 62% -100.0% 0.00 interrupts.CPU77.PMI:Performance_monitoring_interrupts 4699 ± 52% -100.0% 0.00 interrupts.CPU78.NMI:Non-maskable_interrupts 4699 ± 52% -100.0% 0.00 interrupts.CPU78.PMI:Performance_monitoring_interrupts 4360 ± 36% -100.0% 0.00 interrupts.CPU79.NMI:Non-maskable_interrupts 4360 ± 36% -100.0% 0.00 interrupts.CPU79.PMI:Performance_monitoring_interrupts 6023 ± 34% -100.0% 0.00 interrupts.CPU8.NMI:Non-maskable_interrupts 6023 ± 34% -100.0% 0.00 interrupts.CPU8.PMI:Performance_monitoring_interrupts 2771 ± 24% -100.0% 0.00 interrupts.CPU80.NMI:Non-maskable_interrupts 2771 ± 24% -100.0% 0.00 interrupts.CPU80.PMI:Performance_monitoring_interrupts 4778 ± 61% -100.0% 0.00 interrupts.CPU81.NMI:Non-maskable_interrupts 4778 ± 61% -100.0% 0.00 interrupts.CPU81.PMI:Performance_monitoring_interrupts 3329 ± 67% -100.0% 0.00 interrupts.CPU82.NMI:Non-maskable_interrupts 3329 ± 67% -100.0% 0.00 interrupts.CPU82.PMI:Performance_monitoring_interrupts 4239 ± 55% -100.0% 0.00 interrupts.CPU83.NMI:Non-maskable_interrupts 4239 ± 55% -100.0% 0.00 interrupts.CPU83.PMI:Performance_monitoring_interrupts 2495 ± 61% -100.0% 0.00 interrupts.CPU84.NMI:Non-maskable_interrupts 2495 ± 61% -100.0% 0.00 interrupts.CPU84.PMI:Performance_monitoring_interrupts 4589 ± 45% -100.0% 0.00 interrupts.CPU85.NMI:Non-maskable_interrupts 4589 ± 45% -100.0% 0.00 interrupts.CPU85.PMI:Performance_monitoring_interrupts 2819 ± 
26%   -100.0%   0.00   interrupts.CPU86.NMI:Non-maskable_interrupts
2819 ± 26%   -100.0%   0.00   interrupts.CPU86.PMI:Performance_monitoring_interrupts
3227 ± 29%   -100.0%   0.00   interrupts.CPU87.NMI:Non-maskable_interrupts
3227 ± 29%   -100.0%   0.00   interrupts.CPU87.PMI:Performance_monitoring_interrupts
3685 ± 66%   -100.0%   0.00   interrupts.CPU88.NMI:Non-maskable_interrupts
3685 ± 66%   -100.0%   0.00   interrupts.CPU88.PMI:Performance_monitoring_interrupts
4793 ± 34%   -100.0%   0.00   interrupts.CPU89.NMI:Non-maskable_interrupts
4793 ± 34%   -100.0%   0.00   interrupts.CPU89.PMI:Performance_monitoring_interrupts
5179 ± 45%   -100.0%   0.00   interrupts.CPU9.NMI:Non-maskable_interrupts
5179 ± 45%   -100.0%   0.00   interrupts.CPU9.PMI:Performance_monitoring_interrupts
4387 ± 50%   -100.0%   0.00   interrupts.CPU90.NMI:Non-maskable_interrupts
4387 ± 50%   -100.0%   0.00   interrupts.CPU90.PMI:Performance_monitoring_interrupts
4942 ± 38%   -100.0%   1.00 ±223%   interrupts.CPU91.NMI:Non-maskable_interrupts
4942 ± 38%   -100.0%   1.00 ±223%   interrupts.CPU91.PMI:Performance_monitoring_interrupts
4550 ± 58%   -100.0%   0.00   interrupts.CPU92.NMI:Non-maskable_interrupts
4550 ± 58%   -100.0%   0.00   interrupts.CPU92.PMI:Performance_monitoring_interrupts
602.83 ± 67%   +97.3%   1189 ± 13%   interrupts.CPU92.TLB:TLB_shootdowns
7200 ± 21%   -100.0%   0.00   interrupts.CPU93.NMI:Non-maskable_interrupts
7200 ± 21%   -100.0%   0.00   interrupts.CPU93.PMI:Performance_monitoring_interrupts
3667 ± 28%   -100.0%   0.00   interrupts.CPU94.NMI:Non-maskable_interrupts
3667 ± 28%   -100.0%   0.00   interrupts.CPU94.PMI:Performance_monitoring_interrupts
4750 ± 35%   -100.0%   0.00   interrupts.CPU95.NMI:Non-maskable_interrupts
4750 ± 35%   -100.0%   0.00   interrupts.CPU95.PMI:Performance_monitoring_interrupts
3023 ± 78%   -100.0%   0.00   interrupts.CPU96.NMI:Non-maskable_interrupts
3023 ± 78%   -100.0%   0.00   interrupts.CPU96.PMI:Performance_monitoring_interrupts
5742 ± 31%   -100.0%   0.00   interrupts.CPU97.NMI:Non-maskable_interrupts
5742 ± 31%   -100.0%   0.00   interrupts.CPU97.PMI:Performance_monitoring_interrupts
2808 ± 29%   -100.0%   0.00   interrupts.CPU98.NMI:Non-maskable_interrupts
2808 ± 29%   -100.0%   0.00   interrupts.CPU98.PMI:Performance_monitoring_interrupts
3906 ± 53%   -100.0%   0.00   interrupts.CPU99.NMI:Non-maskable_interrupts
3906 ± 53%   -100.0%   0.00   interrupts.CPU99.PMI:Performance_monitoring_interrupts
6695   -100.0%   0.00   interrupts.IWI:IRQ_work_interrupts
623215 ± 6%   -100.0%   3.00 ±100%   interrupts.NMI:Non-maskable_interrupts
623215 ± 6%   -100.0%   3.00 ±100%   interrupts.PMI:Performance_monitoring_interrupts

***************************************************************************************************
lkp-knm01: 288 threads Intel(R) Xeon Phi(TM) CPU 7295 @ 1.50GHz with 80G memory
=========================================================================================
compiler/cpufreq_governor/directio/disk/fstype/kconfig/media/rootfs/tbox_group/test/testcase/ucode:
  gcc-9/performance/bufferedio/1HDD/ext4/x86_64-rhel-8.3/hdd/debian-10.4-x86_64-20200603.cgz/lkp-knm01/MWRL/fxmark/0x11

commit:
  dde38d5b80 ("kernel: don't include PF_IO_WORKERs as part of same_thread_group()")
  43b2a76b1a ("proc: don't show PF_IO_WORKER threads as threads in /proc/<pid>/task/")

dde38d5b805358c7 43b2a76b1a5abcc9833b463bef1
---------------- ---------------------------
       %stddev     %change        %stddev
           \          |               \
3.31   -9.6%   2.99 ± 2%   fxmark.hdd_ext4_MWRL_18_bufferedio.idle_sec
0.61   -9.6%   0.55 ± 2%   fxmark.hdd_ext4_MWRL_18_bufferedio.idle_util
0.01 ± 48%   -80.0%   0.00 ±128%   fxmark.hdd_ext4_MWRL_18_bufferedio.iowait_util
65.15 ± 5%   -11.2%   57.87 ± 3%   fxmark.hdd_ext4_MWRL_18_bufferedio.user_sec
11.97 ± 5%   -11.2%   10.63 ± 3%   fxmark.hdd_ext4_MWRL_18_bufferedio.user_util
0.83   +15.4%   0.96 ± 2%   fxmark.hdd_ext4_MWRL_1_bufferedio.irq_sec
2.75   +13.1%   3.11 ± 2%   fxmark.hdd_ext4_MWRL_1_bufferedio.irq_util
0.37 ± 5%   +18.6%   0.44 ± 9%   fxmark.hdd_ext4_MWRL_1_bufferedio.softirq_sec
1.22 ± 5%   +16.2%   1.41 ± 9%   fxmark.hdd_ext4_MWRL_1_bufferedio.softirq_util
557579   -68.2%   177372   fxmark.hdd_ext4_MWRL_1_bufferedio.works
18585   -68.2%   5912   fxmark.hdd_ext4_MWRL_1_bufferedio.works/sec
0.15 ± 15%   -100.0%   0.00   fxmark.hdd_ext4_MWRL_2_bufferedio.idle_sec
0.25 ± 15%   -100.0%   0.00   fxmark.hdd_ext4_MWRL_2_bufferedio.idle_util
1.62 ± 2%   +16.7%   1.89 ± 2%   fxmark.hdd_ext4_MWRL_2_bufferedio.irq_sec
2.68 ± 2%   +15.7%   3.10 ± 2%   fxmark.hdd_ext4_MWRL_2_bufferedio.irq_util
0.52 ± 3%   +35.2%   0.70 ± 9%   fxmark.hdd_ext4_MWRL_2_bufferedio.softirq_sec
0.85 ± 3%   +34.1%   1.15 ± 9%   fxmark.hdd_ext4_MWRL_2_bufferedio.softirq_util
1252608   -50.3%   622803   fxmark.hdd_ext4_MWRL_2_bufferedio.works
41753   -50.3%   20759   fxmark.hdd_ext4_MWRL_2_bufferedio.works/sec
3.57 ± 2%   +13.8%   4.07   fxmark.hdd_ext4_MWRL_45_bufferedio.softirq_sec
0.26 ± 2%   +13.7%   0.30   fxmark.hdd_ext4_MWRL_45_bufferedio.softirq_util
0.54   -65.5%   0.19 ± 18%   fxmark.hdd_ext4_MWRL_4_bufferedio.idle_sec
0.45   -65.6%   0.15 ± 18%   fxmark.hdd_ext4_MWRL_4_bufferedio.idle_util
0.72 ± 6%   +20.9%   0.87   fxmark.hdd_ext4_MWRL_4_bufferedio.softirq_sec
0.59 ± 6%   +20.8%   0.72   fxmark.hdd_ext4_MWRL_4_bufferedio.softirq_util
2598830   -25.8%   1927202   fxmark.hdd_ext4_MWRL_4_bufferedio.works
86628   -25.8%   64234   fxmark.hdd_ext4_MWRL_4_bufferedio.works/sec
4.02 ± 3%   +9.8%   4.41   fxmark.hdd_ext4_MWRL_54_bufferedio.softirq_sec
0.25 ± 3%   +9.8%   0.27   fxmark.hdd_ext4_MWRL_54_bufferedio.softirq_util
52.87   -17.9%   43.41   fxmark.hdd_ext4_MWRL_63_bufferedio.irq_sec
2.77   -17.9%   2.28   fxmark.hdd_ext4_MWRL_63_bufferedio.irq_util
1.53   -25.1%   1.15 ± 3%   fxmark.hdd_ext4_MWRL_9_bufferedio.idle_sec
0.56   -25.1%   0.42 ± 3%   fxmark.hdd_ext4_MWRL_9_bufferedio.idle_util
1.22 ± 2%   +17.8%   1.44 ± 9%   fxmark.hdd_ext4_MWRL_9_bufferedio.softirq_sec
0.45 ± 2%   +17.8%   0.53 ± 9%   fxmark.hdd_ext4_MWRL_9_bufferedio.softirq_util
5972878   -11.4%   5289953   fxmark.hdd_ext4_MWRL_9_bufferedio.works
199096   -11.4%   176339   fxmark.hdd_ext4_MWRL_9_bufferedio.works/sec
543.61   +3.9%   564.84   fxmark.time.elapsed_time
543.61   +3.9%   564.84   fxmark.time.elapsed_time.max
61399 ± 2%   -33.3%   40923   fxmark.time.involuntary_context_switches
67.00   -15.4%   56.67   fxmark.time.percent_of_cpu_this_job_got
329.81   -11.4%   292.13   fxmark.time.system_time
37.00 ± 4%   -18.3%   30.22   fxmark.time.user_time
1746182 ±209%   -94.7%   92230 ± 8%   cpuidle.POLL.time
30955 ±191%   -89.2%   3340 ± 17%   cpuidle.POLL.usage
26.92   -14.9%   22.91   iostat.cpu.idle
2.89   -70.9%   0.84   iostat.cpu.iowait
57.66   +8.3%   62.48   iostat.cpu.system
7.92 ± 4%   +9.4%   8.67   iostat.cpu.user
33.70   -4.4   29.31   mpstat.cpu.all.idle%
2.76   -2.0   0.81   mpstat.cpu.all.iowait%
0.48 ± 2%   +0.1   0.60 ± 5%   mpstat.cpu.all.soft%
7.61 ± 4%   +0.8   8.44   mpstat.cpu.all.usr%
26.17   -14.6%   22.33 ± 2%   vmstat.cpu.id
61.33   +8.4%   66.50   vmstat.cpu.sy
24643   -3.4%   23798   vmstat.io.bo
1380564   +29.9%   1793355   vmstat.memory.cache
3477   +6.7%   3709   vmstat.system.cs
12276 ± 8%   +31.4%   16136 ± 7%   softirqs.CPU0.TIMER
23091 ± 4%   -10.6%   20641 ± 4%   softirqs.CPU2.SCHED
33922   +11.7%   37882 ± 9%   softirqs.CPU56.RCU
33741   +13.3%   38216 ± 6%   softirqs.CPU61.RCU
16923   +22.9%   20797 ± 20%   softirqs.CPU66.RCU
16861   +27.0%   21416 ± 22%   softirqs.CPU70.RCU
1573 ± 7%   +347.7%   7044   numa-vmstat.node0.nr_active_anon
64182   +23.2%   79044   numa-vmstat.node0.nr_anon_pages
94.33 ± 2%   +27.2%   120.00 ± 2%   numa-vmstat.node0.nr_anon_transparent_hugepages
253943   +40.3%   356398   numa-vmstat.node0.nr_file_pages
118100   +95.4%   230802   numa-vmstat.node0.nr_inactive_anon
7372   -12.7%   6439   numa-vmstat.node0.nr_mapped
1166   +19.1%   1388   numa-vmstat.node0.nr_page_table_pages
55619   +185.8%   158952   numa-vmstat.node0.nr_shmem
1573 ± 7%   +347.7%   7044   numa-vmstat.node0.nr_zone_active_anon
118099   +95.4%   230802   numa-vmstat.node0.nr_zone_inactive_anon
17862   +57.5%   28124   slabinfo.filp.active_objs
32426   +13.2%   36719   slabinfo.filp.num_objs
7278   +48.0%   10771   slabinfo.kmalloc-2k.active_objs
496.50   +43.8%   713.83   slabinfo.kmalloc-2k.active_slabs
7949   +43.7%   11424   slabinfo.kmalloc-2k.num_objs
496.50   +43.8%   713.83   slabinfo.kmalloc-2k.num_slabs
2132   +10.1%   2347   slabinfo.kmalloc-4k.active_objs
592.17   +38.1%   818.00   slabinfo.kmalloc-8k.active_objs
639.83   +35.5%   866.83   slabinfo.kmalloc-8k.num_objs
14914   +10.1%   16414   slabinfo.lsm_file_cache.active_objs
3929 ± 6%   -10.0%   3535 ± 7%   slabinfo.proc_inode_cache.active_objs
6801 ± 6%   +320.7%   28617   meminfo.Active
6176 ± 7%   +353.6%   28014   meminfo.Active(anon)
194593 ± 2%   +27.0%   247153 ± 2%   meminfo.AnonHugePages
256741   +23.1%   316148   meminfo.AnonPages
1286234   +32.3%   1701776   meminfo.Cached
553280   +92.3%   1064201   meminfo.Committed_AS
708501   +63.7%   1159512   meminfo.Inactive
472373   +95.9%   925409   meminfo.Inactive(anon)
2955048   +63.3%   4825732   meminfo.Memused
4677   +19.0%   5564   meminfo.PageTables
222311   +186.9%   637866   meminfo.Shmem
5069197   +10.2%   5587843   meminfo.max_used_kB
6818 ± 6%   +318.5%   28532   numa-meminfo.node0.Active
6198 ± 7%   +350.6%   27933   numa-meminfo.node0.Active(anon)
194543 ± 2%   +27.0%   247090 ± 2%   numa-meminfo.node0.AnonHugePages
256752   +23.1%   316135   numa-meminfo.node0.AnonPages
361990   -10.0%   325888   numa-meminfo.node0.AnonPages.max
1012880   +40.9%   1427235   numa-meminfo.node0.FilePages
709812   +63.7%   1161758   numa-meminfo.node0.Inactive
472322   +95.9%   925395   numa-meminfo.node0.Inactive(anon)
29021   -13.3%   25161   numa-meminfo.node0.Mapped
2354348   +79.6%   4227560   numa-meminfo.node0.MemUsed
4669   +19.0%   5556   numa-meminfo.node0.PageTables
222277   +186.9%   637780   numa-meminfo.node0.Shmem
6879039 ± 2%   +9.0%   7497053   sched_debug.cfs_rq:/.min_vruntime.avg
0.12 ± 6%   +45.7%   0.18 ± 20%   sched_debug.cfs_rq:/.nr_running.stddev
805.87 ± 12%   +33.3%   1073 ± 15%   sched_debug.cfs_rq:/.runnable_avg.avg
1534 ± 6%   +21.2%   1860 ± 8%   sched_debug.cfs_rq:/.runnable_avg.max
160.45 ± 7%   +67.8%   269.19 ± 3%   sched_debug.cfs_rq:/.runnable_avg.stddev
-647349   -102.1%   13297 ±763%   sched_debug.cfs_rq:/.spread0.avg
-1159342   -39.9%   -697308   sched_debug.cfs_rq:/.spread0.min
92.18 ± 4%   +37.0%   126.31 ± 11%   sched_debug.cfs_rq:/.util_avg.stddev
458.56 ± 7%   +48.5%   681.06 ± 7%   sched_debug.cfs_rq:/.util_est_enqueued.avg
787.20 ± 6%   +96.4%   1546 ± 4%   sched_debug.cfs_rq:/.util_est_enqueued.max
335.43 ± 3%   +41.0%   472.93 ± 10%   sched_debug.cfs_rq:/.util_est_enqueued.min
109.13 ± 8%   +146.0%   268.46 ± 6%   sched_debug.cfs_rq:/.util_est_enqueued.stddev
37.18   -19.4%   29.95 ± 4%   sched_debug.cpu.clock.stddev
6756 ± 6%   -47.6%   3538 ± 21%   sched_debug.cpu.curr->pid.min
859.74 ± 10%   +144.3%   2100 ± 16%   sched_debug.cpu.curr->pid.stddev
0.85 ± 10%   +39.0%   1.19 ± 15%   sched_debug.cpu.nr_running.avg
0.25 ± 7%   +40.7%   0.35 ± 11%   sched_debug.cpu.nr_running.stddev
78577 ± 4%   +17.3%   92183 ± 3%   sched_debug.cpu.nr_switches.max
6926 ± 11%   +72.1%   11919 ± 12%   sched_debug.cpu.nr_switches.stddev
92.27   -92.3   0.00   perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe
91.40   -91.4   0.00   perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe
91.17   -91.2   0.00   perf-profile.calltrace.cycles-pp.__x64_sys_rename.do_syscall_64.entry_SYSCALL_64_after_hwframe
89.29   -89.3   0.00   perf-profile.calltrace.cycles-pp.do_renameat2.__x64_sys_rename.do_syscall_64.entry_SYSCALL_64_after_hwframe
70.59   -70.6   0.00   perf-profile.calltrace.cycles-pp.vfs_rename.do_renameat2.__x64_sys_rename.do_syscall_64.entry_SYSCALL_64_after_hwframe
53.42   -53.4   0.00   perf-profile.calltrace.cycles-pp.d_move.vfs_rename.do_renameat2.__x64_sys_rename.do_syscall_64
52.04   -52.0   0.00   perf-profile.calltrace.cycles-pp._raw_spin_lock.d_move.vfs_rename.do_renameat2.__x64_sys_rename
51.68   -51.7   0.00   perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.d_move.vfs_rename.do_renameat2
14.29 ± 4%   -14.3   0.00   perf-profile.calltrace.cycles-pp.ext4_rename.vfs_rename.do_renameat2.__x64_sys_rename.do_syscall_64
7.90   -7.9   0.00   perf-profile.calltrace.cycles-pp.__lookup_hash.do_renameat2.__x64_sys_rename.do_syscall_64.entry_SYSCALL_64_after_hwframe
7.84   -7.8   0.00   perf-profile.calltrace.cycles-pp.filename_parentat.do_renameat2.__x64_sys_rename.do_syscall_64.entry_SYSCALL_64_after_hwframe
7.55   -7.6   0.00   perf-profile.calltrace.cycles-pp.path_parentat.filename_parentat.do_renameat2.__x64_sys_rename.do_syscall_64
5.38 ± 4%   -5.4   0.00   perf-profile.calltrace.cycles-pp.ext4_add_entry.ext4_rename.vfs_rename.do_renameat2.__x64_sys_rename
92.66   -92.7   0.00   perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
91.79   -91.8   0.00   perf-profile.children.cycles-pp.do_syscall_64
91.19   -91.2   0.00   perf-profile.children.cycles-pp.__x64_sys_rename
89.37   -89.4   0.00   perf-profile.children.cycles-pp.do_renameat2
70.63   -70.6   0.00   perf-profile.children.cycles-pp.vfs_rename
53.43   -53.4   0.00   perf-profile.children.cycles-pp.d_move
52.63   -52.6   0.00   perf-profile.children.cycles-pp._raw_spin_lock
51.76   -51.8   0.00   perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
14.34 ± 4%   -14.3   0.00   perf-profile.children.cycles-pp.ext4_rename
7.91   -7.9   0.00   perf-profile.children.cycles-pp.__lookup_hash
7.85   -7.9   0.00   perf-profile.children.cycles-pp.filename_parentat
7.58   -7.6   0.00   perf-profile.children.cycles-pp.path_parentat
5.41 ± 4%   -5.4   0.00   perf-profile.children.cycles-pp.ext4_add_entry
50.41   -50.4   0.00   perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
8827154 ± 9%   +16.9%   10315717 ± 7%   perf-stat.i.branch-instructions
1246764 ± 7%   +20.9%   1506801 ± 8%   perf-stat.i.cache-references
3192   +6.5%   3400   perf-stat.i.context-switches
37071   -1.7%   36451   perf-stat.i.cpu-clock
85.26   +9.5%   93.34   perf-stat.i.cpu-migrations
552740 ± 13%   +9.9%   607327 ± 8%   perf-stat.i.iTLB-load-misses
42086031 ± 8%   +17.6%   49507367 ± 7%   perf-stat.i.iTLB-loads
42270588 ± 9%   +17.6%   49703307 ± 7%   perf-stat.i.instructions
2.30 ± 3%   -22.2%   1.79 ± 3%   perf-stat.i.major-faults
0.18 ± 8%   +18.0%   0.21 ± 7%   perf-stat.i.metric.M/sec
3218   -5.3%   3047   perf-stat.i.minor-faults
3221   -5.3%   3048   perf-stat.i.page-faults
37071   -1.7%   36451   perf-stat.i.task-clock
14.66 ± 5%   -1.9   12.79   perf-stat.overall.cache-miss-rate%
9320033 ± 9%   +17.9%   10988050 ± 8%   perf-stat.ps.branch-instructions
1318516 ± 8%   +21.8%   1606443 ± 10%   perf-stat.ps.cache-references
3187   +6.6%   3396   perf-stat.ps.context-switches
37486   -1.9%   36788   perf-stat.ps.cpu-clock
85.63   +9.1%   93.45   perf-stat.ps.cpu-migrations
584438 ± 14%   +10.8%   647467 ± 10%   perf-stat.ps.iTLB-load-misses
44435533 ± 9%   +18.7%   52733794 ± 8%   perf-stat.ps.iTLB-loads
44630533 ± 9%   +18.6%   52943930 ± 8%   perf-stat.ps.instructions
2.31 ± 3%   -23.1%   1.78 ± 3%   perf-stat.ps.major-faults
3227   -5.8%   3040   perf-stat.ps.minor-faults
3229   -5.8%   3042   perf-stat.ps.page-faults
37486   -1.9%   36788   perf-stat.ps.task-clock
2.427e+10 ± 9%   +23.2%   2.989e+10 ± 8%   perf-stat.total.instructions
1531 ± 5%   +358.4%   7020   proc-vmstat.nr_active_anon
155.50   -2.9%   151.00   proc-vmstat.nr_active_file
64137   +23.3%   79069   proc-vmstat.nr_anon_pages
94.67 ± 2%   +27.5%   120.67   proc-vmstat.nr_anon_transparent_hugepages
1973611   -2.4%   1926635   proc-vmstat.nr_dirty_background_threshold
3952049   -2.4%   3857983   proc-vmstat.nr_dirty_threshold
381691   +27.1%   485017   proc-vmstat.nr_file_pages
19833977   -2.4%   19364398   proc-vmstat.nr_free_pages
118322   +96.0%   231929   proc-vmstat.nr_inactive_anon
59790   -1.4%   58934   proc-vmstat.nr_inactive_file
10728   -7.5%   9920   proc-vmstat.nr_mapped
1168   +19.1%   1390   proc-vmstat.nr_page_table_pages
55841   +186.6%   160028   proc-vmstat.nr_shmem
53654   +7.8%   57833   proc-vmstat.nr_slab_unreclaimable
1531 ± 5%   +358.4%   7020   proc-vmstat.nr_zone_active_anon
155.50   -2.9%   151.00   proc-vmstat.nr_zone_active_file
118322   +96.0%   231929   proc-vmstat.nr_zone_inactive_anon
59790   -1.4%   58934   proc-vmstat.nr_zone_inactive_file
4831 ± 67%   +196.3%   14317 ± 8%   proc-vmstat.numa_hint_faults
4831 ± 67%   +196.3%   14317 ± 8%   proc-vmstat.numa_hint_faults_local
5458711   +4.4%   5699099   proc-vmstat.numa_hit
5458710   +4.4%   5699099   proc-vmstat.numa_local
75731 ± 53%   +164.3%   200156 ± 6%   proc-vmstat.numa_pte_updates
60367 ± 5%   -64.8%   21237   proc-vmstat.pgactivate
6251779   +3.8%   6489305   proc-vmstat.pgalloc_normal
65565 ± 4%   +312.4%   270416   proc-vmstat.pgdeactivate
1869987   -1.2%   1847404   proc-vmstat.pgfault
6180967   -7.4%   5720646   proc-vmstat.pgfree
158081   +3.4%   163470   proc-vmstat.pgreuse
80315   -23.0%   61839   proc-vmstat.slabs_scanned
5285278   +3.3%   5459456   proc-vmstat.unevictable_pgs_scanned
518359 ± 23%   -11.6%   457978   interrupts.CAL:Function_call_interrupts
156.00   +19.6%   186.50   interrupts.CPU0.9:IR-IO-APIC.9-fasteoi.acpi
6442 ± 28%   -100.0%   0.00   interrupts.CPU0.NMI:Non-maskable_interrupts
6442 ± 28%   -100.0%   0.00   interrupts.CPU0.PMI:Performance_monitoring_interrupts
5795 ± 33%   -100.0%   1.17 ±223%   interrupts.CPU1.NMI:Non-maskable_interrupts
5795 ± 33%   -100.0%   1.17 ±223%   interrupts.CPU1.PMI:Performance_monitoring_interrupts
7093 ± 20%   -100.0%   0.00   interrupts.CPU10.NMI:Non-maskable_interrupts
7093 ± 20%   -100.0%   0.00   interrupts.CPU10.PMI:Performance_monitoring_interrupts
5159 ± 35%   -100.0%   0.00   interrupts.CPU11.NMI:Non-maskable_interrupts
5159 ± 35%   -100.0%   0.00   interrupts.CPU11.PMI:Performance_monitoring_interrupts
7095 ± 20%   -100.0%   0.00   interrupts.CPU12.NMI:Non-maskable_interrupts
7095 ± 20%   -100.0%   0.00   interrupts.CPU12.PMI:Performance_monitoring_interrupts
6453 ± 28%   -100.0%   0.00   interrupts.CPU13.NMI:Non-maskable_interrupts
6453 ± 28%   -100.0%   0.00   interrupts.CPU13.PMI:Performance_monitoring_interrupts
27052 ± 17%   +16.8%   31608 ± 4%   interrupts.CPU138.LOC:Local_timer_interrupts
3872   -100.0%   0.00   interrupts.CPU14.NMI:Non-maskable_interrupts
3872   -100.0%   0.00   interrupts.CPU14.PMI:Performance_monitoring_interrupts
4517 ± 31%   -100.0%   0.00   interrupts.CPU15.NMI:Non-maskable_interrupts
4517 ± 31%   -100.0%   0.00   interrupts.CPU15.PMI:Performance_monitoring_interrupts
5160 ± 35%   -100.0%   0.00   interrupts.CPU16.NMI:Non-maskable_interrupts
5160 ± 35%   -100.0%   0.00   interrupts.CPU16.PMI:Performance_monitoring_interrupts
3870   -100.0%   0.00   interrupts.CPU17.NMI:Non-maskable_interrupts
3870   -100.0%   0.00   interrupts.CPU17.PMI:Performance_monitoring_interrupts
4517 ± 31%   -100.0%   0.00   interrupts.CPU18.NMI:Non-maskable_interrupts
4517 ± 31%   -100.0%   0.00   interrupts.CPU18.PMI:Performance_monitoring_interrupts
5161 ± 35%   -100.0%   0.00   interrupts.CPU19.NMI:Non-maskable_interrupts
5161 ± 35%   -100.0%   0.00   interrupts.CPU19.PMI:Performance_monitoring_interrupts
33529 ± 17%   +15.5%   38733 ± 2%   interrupts.CPU190.LOC:Local_timer_interrupts
34101 ± 18%   +16.2%   39618 ± 2%   interrupts.CPU193.LOC:Local_timer_interrupts
34071 ± 18%   +18.1%   40241 ± 4%   interrupts.CPU194.LOC:Local_timer_interrupts
5161 ± 35%   -100.0%   0.00   interrupts.CPU2.NMI:Non-maskable_interrupts
5161 ± 35%   -100.0%   0.00   interrupts.CPU2.PMI:Performance_monitoring_interrupts
574.67 ± 3%   +82.1%   1046 ± 68%   interrupts.CPU2.RES:Rescheduling_interrupts
3871   -100.0%   0.00   interrupts.CPU20.NMI:Non-maskable_interrupts
3871   -100.0%   0.00   interrupts.CPU20.PMI:Performance_monitoring_interrupts
35364 ± 17%   +16.4%   41179   interrupts.CPU204.LOC:Local_timer_interrupts
35086 ± 17%   +16.2%   40782 ± 2%   interrupts.CPU205.LOC:Local_timer_interrupts
35487 ± 18%   +16.3%   41263 ± 3%   interrupts.CPU208.LOC:Local_timer_interrupts
5161 ± 35%   -100.0%   0.00   interrupts.CPU21.NMI:Non-maskable_interrupts
5161 ± 35%   -100.0%   0.00   interrupts.CPU21.PMI:Performance_monitoring_interrupts
36275 ± 18%   +18.4%   42945 ± 5%   interrupts.CPU213.LOC:Local_timer_interrupts
36888 ± 15%   +13.8%   41960 ± 3%   interrupts.CPU216.LOC:Local_timer_interrupts
4517 ± 31%   -100.0%   0.00   interrupts.CPU22.NMI:Non-maskable_interrupts
4517 ± 31%   -100.0%   0.00   interrupts.CPU22.PMI:Performance_monitoring_interrupts
38076 ± 15%   +13.3%   43132 ± 2%   interrupts.CPU222.LOC:Local_timer_interrupts
38350 ± 14%   +14.0%   43735 ± 3%   interrupts.CPU223.LOC:Local_timer_interrupts
38722 ± 14%   +14.9%   44496 ± 3%   interrupts.CPU229.LOC:Local_timer_interrupts
4517 ± 31%   -100.0%   0.00   interrupts.CPU23.NMI:Non-maskable_interrupts
4517 ± 31%   -100.0%   0.00   interrupts.CPU23.PMI:Performance_monitoring_interrupts
38052 ± 16%   +16.8%   44439 ± 3%   interrupts.CPU230.LOC:Local_timer_interrupts
38887 ± 16%   +15.0%   44719 ± 2%   interrupts.CPU232.LOC:Local_timer_interrupts
39267 ± 15%   +15.4%   45322   interrupts.CPU238.LOC:Local_timer_interrupts
39534 ± 14%   +14.5%   45255 ± 2%   interrupts.CPU239.LOC:Local_timer_interrupts
3872   -100.0%   0.00   interrupts.CPU24.NMI:Non-maskable_interrupts
3872   -100.0%   0.00   interrupts.CPU24.PMI:Performance_monitoring_interrupts
39503 ± 15%   +14.8%   45347 ± 2%   interrupts.CPU242.LOC:Local_timer_interrupts
39749 ± 15%   +15.1%   45767 ± 3%   interrupts.CPU243.LOC:Local_timer_interrupts
39952 ± 15%   +15.0%   45937 ± 3%   interrupts.CPU244.LOC:Local_timer_interrupts
39838 ± 16%   +15.1%   45848 ± 2%   interrupts.CPU245.LOC:Local_timer_interrupts
40225 ± 15%   +14.3%   45971 ± 3%   interrupts.CPU248.LOC:Local_timer_interrupts
4515 ± 31%   -100.0%   0.00   interrupts.CPU25.NMI:Non-maskable_interrupts
4515 ± 31%   -100.0%   0.00   interrupts.CPU25.PMI:Performance_monitoring_interrupts
40786 ± 15%   +14.5%   46718 ± 2%   interrupts.CPU253.LOC:Local_timer_interrupts
40461 ± 16%   +16.2%   47020 ± 2%   interrupts.CPU255.LOC:Local_timer_interrupts
40566 ± 16%   +15.2%   46751 ± 3%   interrupts.CPU256.LOC:Local_timer_interrupts
40841 ± 16%   +15.6%   47195 ± 2%   interrupts.CPU257.LOC:Local_timer_interrupts
40885 ± 16%   +15.8%   47339 ± 3%   interrupts.CPU258.LOC:Local_timer_interrupts
40801 ± 16%   +16.4%   47509 ± 2%   interrupts.CPU259.LOC:Local_timer_interrupts
6448 ± 28%   -100.0%   0.00   interrupts.CPU26.NMI:Non-maskable_interrupts
6448 ± 28%   -100.0%   0.00   interrupts.CPU26.PMI:Performance_monitoring_interrupts
40982 ± 15%   +15.7%   47400 ± 2%   interrupts.CPU260.LOC:Local_timer_interrupts
40986 ± 16%   +16.0%   47562 ± 2%   interrupts.CPU262.LOC:Local_timer_interrupts
41302 ± 16%   +15.3%   47631 ± 2%   interrupts.CPU263.LOC:Local_timer_interrupts
41123 ± 17%   +16.3%   47840 ± 2%   interrupts.CPU264.LOC:Local_timer_interrupts
41245 ± 17%   +16.2%   47922 ± 2%   interrupts.CPU265.LOC:Local_timer_interrupts
41037 ± 17%   +17.0%   48025 ± 2%   interrupts.CPU266.LOC:Local_timer_interrupts
40990 ± 17%   +17.5%   48160 ± 2%   interrupts.CPU267.LOC:Local_timer_interrupts
41250 ± 18%   +17.7%   48563 ± 2%   interrupts.CPU268.LOC:Local_timer_interrupts
5160 ± 35%   -100.0%   0.00   interrupts.CPU27.NMI:Non-maskable_interrupts
5160 ± 35%   -100.0%   0.00   interrupts.CPU27.PMI:Performance_monitoring_interrupts
41395 ± 17%   +17.3%   48563 ± 2%   interrupts.CPU270.LOC:Local_timer_interrupts
41644 ± 17%   +17.0%   48730 ± 2%   interrupts.CPU272.LOC:Local_timer_interrupts
41789 ± 17%   +17.6%   49142 ± 2%   interrupts.CPU273.LOC:Local_timer_interrupts
41604 ± 17%   +18.1%   49129   interrupts.CPU274.LOC:Local_timer_interrupts
41786 ± 17%   +17.7%   49189 ± 2%   interrupts.CPU276.LOC:Local_timer_interrupts
41879 ± 16%   +17.6%   49271 ± 2%   interrupts.CPU277.LOC:Local_timer_interrupts
41677 ± 16%   +18.0%   49191 ± 3%   interrupts.CPU278.LOC:Local_timer_interrupts
41988 ± 17%   +17.8%   49478 ± 2%   interrupts.CPU279.LOC:Local_timer_interrupts
4519 ± 31%   -100.0%   0.00   interrupts.CPU28.NMI:Non-maskable_interrupts
4519 ± 31%   -100.0%   0.00   interrupts.CPU28.PMI:Performance_monitoring_interrupts
41976 ± 17%   +18.6%   49781   interrupts.CPU280.LOC:Local_timer_interrupts
42368 ± 17%   +17.1%   49613 ± 2%   interrupts.CPU281.LOC:Local_timer_interrupts
42548 ± 17%   +17.1%   49806   interrupts.CPU283.LOC:Local_timer_interrupts
42638 ± 17%   +17.2%   49965 ± 2%   interrupts.CPU284.LOC:Local_timer_interrupts
42838 ± 17%   +17.0%   50135 ± 2%   interrupts.CPU285.LOC:Local_timer_interrupts
42844 ± 17%   +17.1%   50149 ± 2%   interrupts.CPU286.LOC:Local_timer_interrupts
42981 ± 17%   +17.8%   50635   interrupts.CPU287.LOC:Local_timer_interrupts
4518 ± 31%   -100.0%   0.00   interrupts.CPU29.NMI:Non-maskable_interrupts
4518 ± 31%   -100.0%   0.00   interrupts.CPU29.PMI:Performance_monitoring_interrupts
5161 ± 35%   -100.0%   0.00   interrupts.CPU3.NMI:Non-maskable_interrupts
5161 ± 35%   -100.0%   0.00   interrupts.CPU3.PMI:Performance_monitoring_interrupts
5161 ± 35%   -100.0%   0.00   interrupts.CPU30.NMI:Non-maskable_interrupts
5161 ± 35%   -100.0%   0.00   interrupts.CPU30.PMI:Performance_monitoring_interrupts
5161 ± 35%   -100.0%   0.00   interrupts.CPU31.NMI:Non-maskable_interrupts
5161 ± 35%   -100.0%   0.00   interrupts.CPU31.PMI:Performance_monitoring_interrupts
4515 ± 31%   -100.0%   0.00   interrupts.CPU32.NMI:Non-maskable_interrupts
4515 ± 31%   -100.0%   0.00   interrupts.CPU32.PMI:Performance_monitoring_interrupts
5809 ± 33%   -100.0%   0.00   interrupts.CPU33.NMI:Non-maskable_interrupts
5809 ± 33%   -100.0%   0.00   interrupts.CPU33.PMI:Performance_monitoring_interrupts
5161 ± 35%   -100.0%   0.00   interrupts.CPU34.NMI:Non-maskable_interrupts
5161 ± 35%   -100.0%   0.00   interrupts.CPU34.PMI:Performance_monitoring_interrupts
5161 ± 35%   -100.0%   0.00   interrupts.CPU35.NMI:Non-maskable_interrupts
5161 ± 35%   -100.0%   0.00   interrupts.CPU35.PMI:Performance_monitoring_interrupts
3871   -100.0%   0.00   interrupts.CPU36.NMI:Non-maskable_interrupts
3871   -100.0%   0.00   interrupts.CPU36.PMI:Performance_monitoring_interrupts
1882 ± 20%   -18.7%   1529 ± 4%   interrupts.CPU37.CAL:Function_call_interrupts
4515 ± 31%   -100.0%   0.00   interrupts.CPU37.NMI:Non-maskable_interrupts
4515 ± 31%   -100.0%   0.00   interrupts.CPU37.PMI:Performance_monitoring_interrupts
4516 ± 31%   -100.0%   0.00   interrupts.CPU38.NMI:Non-maskable_interrupts
4516 ± 31%   -100.0%   0.00   interrupts.CPU38.PMI:Performance_monitoring_interrupts
5161 ± 35%   -100.0%   0.00   interrupts.CPU39.NMI:Non-maskable_interrupts
5161 ± 35%   -100.0%   0.00   interrupts.CPU39.PMI:Performance_monitoring_interrupts
5806 ± 33%   -100.0%   0.00   interrupts.CPU4.NMI:Non-maskable_interrupts
5806 ± 33%   -100.0%   0.00   interrupts.CPU4.PMI:Performance_monitoring_interrupts
4517 ± 31%   -100.0%   0.00   interrupts.CPU40.NMI:Non-maskable_interrupts
4517 ± 31%   -100.0%   0.00   interrupts.CPU40.PMI:Performance_monitoring_interrupts
1937 ± 19%   -19.5%   1559 ± 7%   interrupts.CPU41.CAL:Function_call_interrupts
6451 ± 28%   -100.0%   0.00   interrupts.CPU41.NMI:Non-maskable_interrupts
6451 ± 28%   -100.0%   0.00   interrupts.CPU41.PMI:Performance_monitoring_interrupts
6444 ± 28%   -100.0%   0.00   interrupts.CPU42.NMI:Non-maskable_interrupts
6444 ± 28%   -100.0%   0.00   interrupts.CPU42.PMI:Performance_monitoring_interrupts
5804 ± 33%   -100.0%   0.00   interrupts.CPU43.NMI:Non-maskable_interrupts
5804 ± 33%   -100.0%   0.00   interrupts.CPU43.PMI:Performance_monitoring_interrupts
5803 ± 33%   -100.0%   0.00   interrupts.CPU44.NMI:Non-maskable_interrupts
5803 ± 33%   -100.0%   0.00   interrupts.CPU44.PMI:Performance_monitoring_interrupts
6447 ± 28%   -100.0%   0.00   interrupts.CPU45.NMI:Non-maskable_interrupts
6447 ± 28%   -100.0%   0.00   interrupts.CPU45.PMI:Performance_monitoring_interrupts
5162 ± 35%   -100.0%   0.00   interrupts.CPU46.NMI:Non-maskable_interrupts
5162 ± 35%   -100.0%   0.00   interrupts.CPU46.PMI:Performance_monitoring_interrupts
5162 ± 35%   -100.0%   0.00   interrupts.CPU47.NMI:Non-maskable_interrupts
5162 ± 35%   -100.0%   0.00   interrupts.CPU47.PMI:Performance_monitoring_interrupts
4513 ± 31%   -100.0%   0.00   interrupts.CPU48.NMI:Non-maskable_interrupts
4513 ± 31%   -100.0%   0.00   interrupts.CPU48.PMI:Performance_monitoring_interrupts
4515 ± 31%   -100.0%   0.00   interrupts.CPU49.NMI:Non-maskable_interrupts
4515 ± 31%   -100.0%   0.00   interrupts.CPU49.PMI:Performance_monitoring_interrupts
5161 ± 35%   -100.0%   0.00   interrupts.CPU5.NMI:Non-maskable_interrupts
5161 ± 35%   -100.0%   0.00   interrupts.CPU5.PMI:Performance_monitoring_interrupts
5160 ± 35%   -100.0%   0.00   interrupts.CPU50.NMI:Non-maskable_interrupts
5160 ± 35%   -100.0%   0.00   interrupts.CPU50.PMI:Performance_monitoring_interrupts
5805 ± 33%   -100.0%   0.00   interrupts.CPU51.NMI:Non-maskable_interrupts
5805 ± 33%   -100.0%   0.00   interrupts.CPU51.PMI:Performance_monitoring_interrupts
5159 ± 35%   -100.0%   0.00   interrupts.CPU52.NMI:Non-maskable_interrupts
5159 ± 35%   -100.0%   0.00   interrupts.CPU52.PMI:Performance_monitoring_interrupts
3870   -100.0%   0.00   interrupts.CPU53.NMI:Non-maskable_interrupts
3870   -100.0%   0.00   interrupts.CPU53.PMI:Performance_monitoring_interrupts
5160 ± 35%   -100.0%   0.00   interrupts.CPU54.NMI:Non-maskable_interrupts
5160 ± 35%   -100.0%   0.00   interrupts.CPU54.PMI:Performance_monitoring_interrupts
4515 ± 31%   -100.0%   0.00   interrupts.CPU55.NMI:Non-maskable_interrupts
4515 ± 31%   -100.0%   0.00   interrupts.CPU55.PMI:Performance_monitoring_interrupts
5158 ± 35%   -100.0%   1.17 ±223%   interrupts.CPU56.NMI:Non-maskable_interrupts
5158 ± 35%   -100.0%   1.17 ±223%   interrupts.CPU56.PMI:Performance_monitoring_interrupts
4514 ± 31%   -100.0%   1.17 ±223%   interrupts.CPU57.NMI:Non-maskable_interrupts
4514 ± 31%   -100.0%   1.17 ±223%   interrupts.CPU57.PMI:Performance_monitoring_interrupts
6451 ± 28%   -100.0%   0.00   interrupts.CPU58.NMI:Non-maskable_interrupts
6451 ± 28%   -100.0%   0.00   interrupts.CPU58.PMI:Performance_monitoring_interrupts
5806 ± 33%   -100.0%   0.00   interrupts.CPU59.NMI:Non-maskable_interrupts
5806 ± 33%   -100.0%   0.00   interrupts.CPU59.PMI:Performance_monitoring_interrupts
4517 ± 31%   -100.0%   0.00   interrupts.CPU6.NMI:Non-maskable_interrupts
4517 ± 31%   -100.0%   0.00   interrupts.CPU6.PMI:Performance_monitoring_interrupts
5160 ± 35%   -100.0%   0.00   interrupts.CPU60.NMI:Non-maskable_interrupts
5160 ± 35%   -100.0%   0.00   interrupts.CPU60.PMI:Performance_monitoring_interrupts
4516 ± 31%   -100.0%   0.00   interrupts.CPU61.NMI:Non-maskable_interrupts
4516 ± 31%   -100.0%   0.00   interrupts.CPU61.PMI:Performance_monitoring_interrupts
4506 ± 31%   -100.0%   0.00   interrupts.CPU62.NMI:Non-maskable_interrupts
4506 ± 31%   -100.0%   0.00   interrupts.CPU62.PMI:Performance_monitoring_interrupts
57.50 ± 6%   +278.8%   217.83 ±154%   interrupts.CPU62.RES:Rescheduling_interrupts
3872   -100.0%   0.00   interrupts.CPU7.NMI:Non-maskable_interrupts
3872   -100.0%   0.00   interrupts.CPU7.PMI:Performance_monitoring_interrupts
5806 ± 33%   -100.0%   0.00   interrupts.CPU8.NMI:Non-maskable_interrupts
5806 ± 33%   -100.0%   0.00   interrupts.CPU8.PMI:Performance_monitoring_interrupts
5803 ± 33%   -100.0%   0.00   interrupts.CPU9.NMI:Non-maskable_interrupts
5803 ± 33%   -100.0%   0.00   interrupts.CPU9.PMI:Performance_monitoring_interrupts
155.83 ± 3%   -99.9%   0.17 ±223%   interrupts.IWI:IRQ_work_interrupts
323180 ± 6%   -100.0%   7.00   interrupts.NMI:Non-maskable_interrupts
323180 ± 6%   -100.0%   7.00   interrupts.PMI:Performance_monitoring_interrupts
2867   +104.9%   5875 ± 13%   interrupts.TLB:TLB_shootdowns
0.87 ± 86%   -100.0%   0.00   perf-sched.sch_delay.avg.ms.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown]
4.57 ±103%   -100.0%   0.00   perf-sched.sch_delay.avg.ms.devkmsg_read.vfs_read.ksys_read.do_syscall_64
0.75 ±149%   -100.0%   0.00   perf-sched.sch_delay.avg.ms.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep.do_syscall_64
7.77 ± 70%   -100.0%   0.00   perf-sched.sch_delay.avg.ms.do_syslog.part.0.kmsg_read.vfs_read
0.68 ± 20%   -100.0%   0.00   perf-sched.sch_delay.avg.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64
0.09 ± 92%   -100.0%   0.00   perf-sched.sch_delay.avg.ms.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64
0.04 ± 29%   -100.0%   0.00   perf-sched.sch_delay.avg.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_apic_timer_interrupt.[unknown]
0.31 ± 30%   -100.0%   0.00   perf-sched.sch_delay.avg.ms.exit_to_user_mode_prepare.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe.[unknown]
0.11 ± 36%   -100.0%   0.00   perf-sched.sch_delay.avg.ms.futex_wait_queue_me.futex_wait.do_futex.__x64_sys_futex
0.24 ± 46%   -100.0%   0.00   perf-sched.sch_delay.avg.ms.io_schedule.bit_wait_io.__wait_on_bit.out_of_line_wait_on_bit
0.07 ± 26%   -100.0%   0.00   perf-sched.sch_delay.avg.ms.jbd2_journal_commit_transaction.kjournald2.kthread.ret_from_fork
0.07 ± 13%   -100.0%   0.00   perf-sched.sch_delay.avg.ms.kjournald2.kthread.ret_from_fork
0.08 ±123%   -100.0%   0.00   perf-sched.sch_delay.avg.ms.pipe_read.new_sync_read.vfs_read.ksys_read
0.76 ±116%   -100.0%   0.00   perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.do_user_addr_fault.exc_page_fault.asm_exc_page_fault
0.00 ±156%   -100.0%   0.00   perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.stop_one_cpu.affine_move_task.__set_cpus_allowed_ptr
13.19 ± 5%   -100.0%   0.00   perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.stop_one_cpu.sched_exec.bprm_execve
2.10 ± 98%   -100.0%   0.00   perf-sched.sch_delay.avg.ms.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait
0.14 ± 32%   -100.0%   0.00   perf-sched.sch_delay.avg.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_select
0.02 ± 17%   -100.0%   0.00   perf-sched.sch_delay.avg.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_sys_poll
0.05 ± 4%   -100.0%   0.00   perf-sched.sch_delay.avg.ms.schedule_timeout.ext4_lazyinit_thread.part.0.kthread
0.12 ±130%   -100.0%   0.00   perf-sched.sch_delay.avg.ms.schedule_timeout.io_schedule_timeout.wait_for_completion_io.submit_bio_wait
0.26 ±158%   -100.0%   0.00   perf-sched.sch_delay.avg.ms.schedule_timeout.kcompactd.kthread.ret_from_fork
0.07 ± 22%   -100.0%   0.00   perf-sched.sch_delay.avg.ms.schedule_timeout.rcu_gp_kthread.kthread.ret_from_fork
0.09 ±101%   -100.0%   0.00   perf-sched.sch_delay.avg.ms.smpboot_thread_fn.kthread.ret_from_fork
7.49 ± 6%   -100.0%   0.00   perf-sched.sch_delay.avg.ms.wait_for_partner.fifo_open.do_dentry_open.do_open.isra
0.17 ± 91%   -100.0%   0.00   perf-sched.sch_delay.avg.ms.worker_thread.kthread.ret_from_fork
16.43 ± 90%   -100.0%   0.00   perf-sched.sch_delay.max.ms.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown]
30.65 ± 89%   -100.0%   0.00   perf-sched.sch_delay.max.ms.devkmsg_read.vfs_read.ksys_read.do_syscall_64
9.90 ±158%   -100.0%   0.00   perf-sched.sch_delay.max.ms.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep.do_syscall_64
33.54 ± 64%   -100.0%   0.00   perf-sched.sch_delay.max.ms.do_syslog.part.0.kmsg_read.vfs_read
24.31 ± 42%   -100.0%   0.00   perf-sched.sch_delay.max.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64
11.46 ±181%   -100.0%   0.00   perf-sched.sch_delay.max.ms.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64
14.10 ± 20%   -100.0%   0.00   perf-sched.sch_delay.max.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_apic_timer_interrupt.[unknown]
33.13 ± 55%   -100.0%   0.00   perf-sched.sch_delay.max.ms.exit_to_user_mode_prepare.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe.[unknown]
0.24 ± 86%   -100.0%   0.00   perf-sched.sch_delay.max.ms.futex_wait_queue_me.futex_wait.do_futex.__x64_sys_futex
1.29 ± 47%   -100.0%   0.00   perf-sched.sch_delay.max.ms.io_schedule.bit_wait_io.__wait_on_bit.out_of_line_wait_on_bit
0.09 ± 38%   -100.0%   0.00   perf-sched.sch_delay.max.ms.jbd2_journal_commit_transaction.kjournald2.kthread.ret_from_fork
0.07 ± 26%   -100.0%   0.00   perf-sched.sch_delay.max.ms.kjournald2.kthread.ret_from_fork
185.04 ±203%   -100.0%   0.00   perf-sched.sch_delay.max.ms.pipe_read.new_sync_read.vfs_read.ksys_read
0.02 ± 10%   -100.0%   0.00   perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.__ext4_find_entry.ext4_find_entry.ext4_rename
0.08 ±138%   -100.0%   0.00   perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.__ext4_find_entry.ext4_lookup.__lookup_hash
0.08 ±162%   -100.0%   0.00   perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.__ext4_handle_dirty_metadata.ext4_do_update_inode.ext4_mark_iloc_dirty
0.09 ±156%   -100.0%   0.00   perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.__ext4_handle_dirty_metadata.ext4_handle_dirty_dirblock.add_dirent_to_buf
0.03 ± 85%   -100.0%   0.00   perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.__ext4_handle_dirty_metadata.ext4_handle_dirty_dirblock.ext4_delete_entry
0.10 ±159%   -100.0%   0.00   perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.__getblk_gfp.ext4_getblk.ext4_bread
4.70 ± 89%   -100.0%   0.00   perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.do_user_addr_fault.exc_page_fault.asm_exc_page_fault
0.02 ± 22%   -100.0%   0.00   perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.dput.do_renameat2.__x64_sys_rename
0.51 ± 18%   -100.0%   0.00   perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.dput.vfs_rename.do_renameat2
0.08 ±175%   -100.0%   0.00   perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.__d_alloc.d_alloc
1.11 ±204%   -100.0%   0.00   perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.mnt_want_write.do_renameat2.__x64_sys_rename
1.74 ±193%   -100.0%   0.00   perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.stop_one_cpu.affine_move_task.__set_cpus_allowed_ptr
37.05 ± 49%   -100.0%   0.00   perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.stop_one_cpu.sched_exec.bprm_execve
27.23 ± 90%   -100.0%   0.00   perf-sched.sch_delay.max.ms.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait
0.22 ± 23%   -100.0%   0.00   perf-sched.sch_delay.max.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_select
0.21 ± 79%   -100.0%   0.00   perf-sched.sch_delay.max.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_sys_poll
0.08 ± 55%   -100.0%   0.00   perf-sched.sch_delay.max.ms.schedule_timeout.ext4_lazyinit_thread.part.0.kthread
2.17 ±214%   -100.0%   0.00   perf-sched.sch_delay.max.ms.schedule_timeout.io_schedule_timeout.wait_for_completion_io.submit_bio_wait
8.27 ±202%   -100.0%   0.00   perf-sched.sch_delay.max.ms.schedule_timeout.kcompactd.kthread.ret_from_fork
31.74 ± 70%   -100.0%   0.00   perf-sched.sch_delay.max.ms.schedule_timeout.rcu_gp_kthread.kthread.ret_from_fork
37.32 ± 44%   -100.0%   0.00   perf-sched.sch_delay.max.ms.smpboot_thread_fn.kthread.ret_from_fork
26.31 ± 54%   -100.0%   0.00   perf-sched.sch_delay.max.ms.wait_for_partner.fifo_open.do_dentry_open.do_open.isra
31.93 ± 58%   -100.0%   0.00   perf-sched.sch_delay.max.ms.worker_thread.kthread.ret_from_fork
0.11 ± 27%   -100.0%   0.00   perf-sched.total_sch_delay.average.ms
222.59 ±161%   -100.0%   0.00   perf-sched.total_sch_delay.max.ms
49.93 ± 3%   -100.0%   0.00   perf-sched.total_wait_and_delay.average.ms
43735 ± 3%   -100.0%   0.00   perf-sched.total_wait_and_delay.count.ms
5404 ± 6%   -100.0%   0.00   perf-sched.total_wait_and_delay.max.ms
49.82 ± 3%   -100.0%   0.00   perf-sched.total_wait_time.average.ms
5404 ± 6%   -100.0%   0.00   perf-sched.total_wait_time.max.ms
850.35 ± 4%   -100.0%   0.00   perf-sched.wait_and_delay.avg.ms.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown]
881.62 ± 49%   -100.0%   0.00   perf-sched.wait_and_delay.avg.ms.devkmsg_read.vfs_read.ksys_read.do_syscall_64
691.40 ± 7%   -100.0%   0.00   perf-sched.wait_and_delay.avg.ms.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep.do_syscall_64
934.47 ± 49%   -100.0%   0.00   perf-sched.wait_and_delay.avg.ms.do_syslog.part.0.kmsg_read.vfs_read
257.01   -100.0%   0.00   perf-sched.wait_and_delay.avg.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64
70.32 ± 9%   -100.0%   0.00   perf-sched.wait_and_delay.avg.ms.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64
1.18 ± 33%   -100.0%   0.00   perf-sched.wait_and_delay.avg.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_apic_timer_interrupt.[unknown]
1.05 ± 34%   -100.0%   0.00   perf-sched.wait_and_delay.avg.ms.exit_to_user_mode_prepare.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe.[unknown]
2481   -100.0%   0.00   perf-sched.wait_and_delay.avg.ms.jbd2_journal_commit_transaction.kjournald2.kthread.ret_from_fork
38.17 ± 12%   -100.0%   0.00   perf-sched.wait_and_delay.avg.ms.pipe_read.new_sync_read.vfs_read.ksys_read
0.33 ± 23%   -100.0%   0.00   perf-sched.wait_and_delay.avg.ms.preempt_schedule_common.__cond_resched.__ext4_find_entry.ext4_find_entry.ext4_rename
0.35 ± 10%   -100.0%   0.00   perf-sched.wait_and_delay.avg.ms.preempt_schedule_common.__cond_resched.__ext4_find_entry.ext4_lookup.__lookup_hash
0.38 ± 19%   -100.0%   0.00   perf-sched.wait_and_delay.avg.ms.preempt_schedule_common.__cond_resched.__ext4_handle_dirty_metadata.ext4_do_update_inode.ext4_mark_iloc_dirty
0.32 ± 11%   -100.0%   0.00   perf-sched.wait_and_delay.avg.ms.preempt_schedule_common.__cond_resched.__ext4_handle_dirty_metadata.ext4_handle_dirty_dirblock.add_dirent_to_buf
0.35 ± 17%   -100.0%   0.00   perf-sched.wait_and_delay.avg.ms.preempt_schedule_common.__cond_resched.__ext4_handle_dirty_metadata.ext4_handle_dirty_dirblock.ext4_delete_entry
0.34 ± 15%   -100.0%   0.00   perf-sched.wait_and_delay.avg.ms.preempt_schedule_common.__cond_resched.__getblk_gfp.ext4_getblk.ext4_bread
0.37 ± 10%   -100.0%   0.00   perf-sched.wait_and_delay.avg.ms.preempt_schedule_common.__cond_resched.__getblk_gfp.ext4_getblk.ext4_bread_batch
0.36 ± 14%   -100.0%   0.00   perf-sched.wait_and_delay.avg.ms.preempt_schedule_common.__cond_resched.down_write.vfs_rename.do_renameat2
0.31 ± 13%   -100.0%   0.00   perf-sched.wait_and_delay.avg.ms.preempt_schedule_common.__cond_resched.dput.do_renameat2.__x64_sys_rename
0.38 ± 5%   -100.0%   0.00   perf-sched.wait_and_delay.avg.ms.preempt_schedule_common.__cond_resched.dput.vfs_rename.do_renameat2
0.35 ± 14%   -100.0%   0.00   perf-sched.wait_and_delay.avg.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.__d_alloc.d_alloc
0.35 ± 19%   -100.0%   0.00   perf-sched.wait_and_delay.avg.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.getname_flags.__x64_sys_rename
0.36 ± 13%   -100.0%   0.00   perf-sched.wait_and_delay.avg.ms.preempt_schedule_common.__cond_resched.mnt_want_write.do_renameat2.__x64_sys_rename
11.97   -100.0%   0.00   perf-sched.wait_and_delay.avg.ms.preempt_schedule_common.__cond_resched.stop_one_cpu.affine_move_task.__set_cpus_allowed_ptr
437.06 ± 34%   -100.0%   0.00   perf-sched.wait_and_delay.avg.ms.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait
459.21 ± 13%   -100.0%   0.00   perf-sched.wait_and_delay.avg.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_sys_poll
1545 ± 77%   -100.0%   0.00   perf-sched.wait_and_delay.avg.ms.schedule_timeout.io_schedule_timeout.wait_for_completion_io.submit_bio_wait
479.30   -100.0%   0.00   perf-sched.wait_and_delay.avg.ms.schedule_timeout.kcompactd.kthread.ret_from_fork
4.83   -100.0%   0.00   perf-sched.wait_and_delay.avg.ms.schedule_timeout.rcu_gp_kthread.kthread.ret_from_fork
88.11 ± 4%   -100.0%   0.00   perf-sched.wait_and_delay.avg.ms.smpboot_thread_fn.kthread.ret_from_fork
691.86   -100.0%   0.00   perf-sched.wait_and_delay.avg.ms.worker_thread.kthread.ret_from_fork
20.00   -100.0%   0.00   perf-sched.wait_and_delay.count.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown]
8.17 ± 16%   -100.0%
0.00 perf-sched.wait_and_delay.count.devkmsg_read.vfs_read.ksys_read.do_syscall_64 12.67 ± 8% -100.0% 0.00 perf-sched.wait_and_delay.count.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep.do_syscall_64 7.83 ± 17% -100.0% 0.00 perf-sched.wait_and_delay.count.do_syslog.part.0.kmsg_read.vfs_read 226.00 -100.0% 0.00 perf-sched.wait_and_delay.count.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64 260.67 -100.0% 0.00 perf-sched.wait_and_delay.count.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64 1750 ± 11% -100.0% 0.00 perf-sched.wait_and_delay.count.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_apic_timer_interrupt.[unknown] 2803 ± 26% -100.0% 0.00 perf-sched.wait_and_delay.count.exit_to_user_mode_prepare.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe.[unknown] 2.00 -100.0% 0.00 perf-sched.wait_and_delay.count.jbd2_journal_commit_transaction.kjournald2.kthread.ret_from_fork 3953 ± 11% -100.0% 0.00 perf-sched.wait_and_delay.count.pipe_read.new_sync_read.vfs_read.ksys_read 314.17 ± 8% -100.0% 0.00 perf-sched.wait_and_delay.count.preempt_schedule_common.__cond_resched.__ext4_find_entry.ext4_find_entry.ext4_rename 340.33 ± 4% -100.0% 0.00 perf-sched.wait_and_delay.count.preempt_schedule_common.__cond_resched.__ext4_find_entry.ext4_lookup.__lookup_hash 764.67 ± 7% -100.0% 0.00 perf-sched.wait_and_delay.count.preempt_schedule_common.__cond_resched.__ext4_handle_dirty_metadata.ext4_do_update_inode.ext4_mark_iloc_dirty 696.00 ± 2% -100.0% 0.00 perf-sched.wait_and_delay.count.preempt_schedule_common.__cond_resched.__ext4_handle_dirty_metadata.ext4_handle_dirty_dirblock.add_dirent_to_buf 549.50 ± 9% -100.0% 0.00 perf-sched.wait_and_delay.count.preempt_schedule_common.__cond_resched.__ext4_handle_dirty_metadata.ext4_handle_dirty_dirblock.ext4_delete_entry 361.83 ± 8% -100.0% 0.00 perf-sched.wait_and_delay.count.preempt_schedule_common.__cond_resched.__getblk_gfp.ext4_getblk.ext4_bread 367.33 ± 13% -100.0% 0.00 
perf-sched.wait_and_delay.count.preempt_schedule_common.__cond_resched.__getblk_gfp.ext4_getblk.ext4_bread_batch 276.67 ± 7% -100.0% 0.00 perf-sched.wait_and_delay.count.preempt_schedule_common.__cond_resched.down_write.vfs_rename.do_renameat2 403.50 ± 5% -100.0% 0.00 perf-sched.wait_and_delay.count.preempt_schedule_common.__cond_resched.dput.do_renameat2.__x64_sys_rename 9149 ± 3% -100.0% 0.00 perf-sched.wait_and_delay.count.preempt_schedule_common.__cond_resched.dput.vfs_rename.do_renameat2 521.33 ± 4% -100.0% 0.00 perf-sched.wait_and_delay.count.preempt_schedule_common.__cond_resched.kmem_cache_alloc.__d_alloc.d_alloc 299.33 ± 5% -100.0% 0.00 perf-sched.wait_and_delay.count.preempt_schedule_common.__cond_resched.kmem_cache_alloc.getname_flags.__x64_sys_rename 1306 ± 5% -100.0% 0.00 perf-sched.wait_and_delay.count.preempt_schedule_common.__cond_resched.mnt_want_write.do_renameat2.__x64_sys_rename 694.00 -100.0% 0.00 perf-sched.wait_and_delay.count.preempt_schedule_common.__cond_resched.stop_one_cpu.affine_move_task.__set_cpus_allowed_ptr 19.83 ± 41% -100.0% 0.00 perf-sched.wait_and_delay.count.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait 50.17 ± 14% -100.0% 0.00 perf-sched.wait_and_delay.count.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_sys_poll 14.50 ± 90% -100.0% 0.00 perf-sched.wait_and_delay.count.schedule_timeout.io_schedule_timeout.wait_for_completion_io.submit_bio_wait 40.00 -100.0% 0.00 perf-sched.wait_and_delay.count.schedule_timeout.kcompactd.kthread.ret_from_fork 2096 -100.0% 0.00 perf-sched.wait_and_delay.count.schedule_timeout.rcu_gp_kthread.kthread.ret_from_fork 13868 ± 5% -100.0% 0.00 perf-sched.wait_and_delay.count.smpboot_thread_fn.kthread.ret_from_fork 850.17 -100.0% 0.00 perf-sched.wait_and_delay.count.worker_thread.kthread.ret_from_fork 1031 ± 2% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown] 3263 ± 20% -100.0% 0.00 
perf-sched.wait_and_delay.max.ms.devkmsg_read.vfs_read.ksys_read.do_syscall_64 1019 ± 3% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep.do_syscall_64 3273 ± 21% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.do_syslog.part.0.kmsg_read.vfs_read 1025 ± 3% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64 1022 -100.0% 0.00 perf-sched.wait_and_delay.max.ms.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64 850.61 ± 42% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_apic_timer_interrupt.[unknown] 680.70 ± 67% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.exit_to_user_mode_prepare.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe.[unknown] 4963 -100.0% 0.00 perf-sched.wait_and_delay.max.ms.jbd2_journal_commit_transaction.kjournald2.kthread.ret_from_fork 3260 ± 21% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.pipe_read.new_sync_read.vfs_read.ksys_read 16.37 ± 85% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.preempt_schedule_common.__cond_resched.__ext4_find_entry.ext4_find_entry.ext4_rename 14.74 ± 10% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.preempt_schedule_common.__cond_resched.__ext4_find_entry.ext4_lookup.__lookup_hash 28.10 ± 73% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.preempt_schedule_common.__cond_resched.__ext4_handle_dirty_metadata.ext4_do_update_inode.ext4_mark_iloc_dirty 15.53 ± 19% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.preempt_schedule_common.__cond_resched.__ext4_handle_dirty_metadata.ext4_handle_dirty_dirblock.add_dirent_to_buf 19.31 ± 71% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.preempt_schedule_common.__cond_resched.__ext4_handle_dirty_metadata.ext4_handle_dirty_dirblock.ext4_delete_entry 12.13 ± 23% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.preempt_schedule_common.__cond_resched.__getblk_gfp.ext4_getblk.ext4_bread 16.39 ± 22% -100.0% 0.00 
perf-sched.wait_and_delay.max.ms.preempt_schedule_common.__cond_resched.__getblk_gfp.ext4_getblk.ext4_bread_batch 14.78 ± 38% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.preempt_schedule_common.__cond_resched.down_write.vfs_rename.do_renameat2 14.29 ± 38% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.preempt_schedule_common.__cond_resched.dput.do_renameat2.__x64_sys_rename 63.52 ± 25% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.preempt_schedule_common.__cond_resched.dput.vfs_rename.do_renameat2 19.21 ± 62% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.__d_alloc.d_alloc 11.77 ± 42% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.getname_flags.__x64_sys_rename 29.99 ± 46% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.preempt_schedule_common.__cond_resched.mnt_want_write.do_renameat2.__x64_sys_rename 1004 -100.0% 0.00 perf-sched.wait_and_delay.max.ms.preempt_schedule_common.__cond_resched.stop_one_cpu.affine_move_task.__set_cpus_allowed_ptr 2950 ± 23% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait 4645 -100.0% 0.00 perf-sched.wait_and_delay.max.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_sys_poll 2922 ± 87% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.schedule_timeout.io_schedule_timeout.wait_for_completion_io.submit_bio_wait 521.12 ± 6% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.schedule_timeout.kcompactd.kthread.ret_from_fork 80.85 ± 42% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.schedule_timeout.rcu_gp_kthread.kthread.ret_from_fork 1314 ± 20% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.smpboot_thread_fn.kthread.ret_from_fork 4623 ± 34% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.worker_thread.kthread.ret_from_fork 849.49 ± 4% -100.0% 0.00 perf-sched.wait_time.avg.ms.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown] 877.05 ± 
49% -100.0% 0.00 perf-sched.wait_time.avg.ms.devkmsg_read.vfs_read.ksys_read.do_syscall_64 690.66 ± 7% -100.0% 0.00 perf-sched.wait_time.avg.ms.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep.do_syscall_64 926.71 ± 49% -100.0% 0.00 perf-sched.wait_time.avg.ms.do_syslog.part.0.kmsg_read.vfs_read 256.33 -100.0% 0.00 perf-sched.wait_time.avg.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64 70.23 ± 9% -100.0% 0.00 perf-sched.wait_time.avg.ms.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64 1.14 ± 34% -100.0% 0.00 perf-sched.wait_time.avg.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_apic_timer_interrupt.[unknown] 0.28 ± 74% -100.0% 0.00 perf-sched.wait_time.avg.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_call_function_single.[unknown] 0.27 ± 45% -100.0% 0.00 perf-sched.wait_time.avg.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_reschedule_ipi.[unknown] 0.74 ± 51% -100.0% 0.00 perf-sched.wait_time.avg.ms.exit_to_user_mode_prepare.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe.[unknown] 857.30 ± 77% -100.0% 0.00 perf-sched.wait_time.avg.ms.futex_wait_queue_me.futex_wait.do_futex.__x64_sys_futex 1.19 ± 58% -100.0% 0.00 perf-sched.wait_time.avg.ms.io_schedule.bit_wait_io.__wait_on_bit.out_of_line_wait_on_bit 2481 -100.0% 0.00 perf-sched.wait_time.avg.ms.jbd2_journal_commit_transaction.kjournald2.kthread.ret_from_fork 31.02 ± 9% -100.0% 0.00 perf-sched.wait_time.avg.ms.kjournald2.kthread.ret_from_fork 38.09 ± 12% -100.0% 0.00 perf-sched.wait_time.avg.ms.pipe_read.new_sync_read.vfs_read.ksys_read 142.25 ±142% -100.0% 0.00 perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.__alloc_pages_nodemask.alloc_pages_vma.shmem_alloc_page 114.93 ±202% -100.0% 0.00 perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.__do_fault.do_fault.__handle_mm_fault 0.32 ± 23% -100.0% 0.00 
perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.__ext4_find_entry.ext4_find_entry.ext4_rename 0.35 ± 10% -100.0% 0.00 perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.__ext4_find_entry.ext4_lookup.__lookup_hash 0.38 ± 19% -100.0% 0.00 perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.__ext4_handle_dirty_metadata.ext4_do_update_inode.ext4_mark_iloc_dirty 0.32 ± 11% -100.0% 0.00 perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.__ext4_handle_dirty_metadata.ext4_handle_dirty_dirblock.add_dirent_to_buf 0.35 ± 17% -100.0% 0.00 perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.__ext4_handle_dirty_metadata.ext4_handle_dirty_dirblock.ext4_delete_entry 0.30 ± 28% -100.0% 0.00 perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.__ext4_journal_get_write_access.add_dirent_to_buf.ext4_add_entry 0.30 ± 32% -100.0% 0.00 perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.__ext4_journal_get_write_access.ext4_delete_entry.ext4_rename 0.23 ± 21% -100.0% 0.00 perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.__ext4_journal_get_write_access.ext4_reserve_inode_write.__ext4_mark_inode_dirty 0.44 ± 33% -100.0% 0.00 perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.__ext4_mark_inode_dirty.add_dirent_to_buf.ext4_add_entry 0.42 ± 58% -100.0% 0.00 perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.__ext4_mark_inode_dirty.ext4_rename.vfs_rename 0.37 ± 32% -100.0% 0.00 perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.__getblk_gfp.__ext4_get_inode_loc.ext4_get_inode_loc 0.34 ± 15% -100.0% 0.00 perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.__getblk_gfp.ext4_getblk.ext4_bread 0.37 ± 10% -100.0% 0.00 perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.__getblk_gfp.ext4_getblk.ext4_bread_batch 0.22 ± 58% -100.0% 0.00 
perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.__kmalloc_node.memcg_alloc_page_obj_cgroups.allocate_slab 0.36 ± 12% -100.0% 0.00 perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.apparmor_path_rename.security_path_rename.do_renameat2 207.18 ±148% -100.0% 0.00 perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.do_user_addr_fault.exc_page_fault.asm_exc_page_fault 0.38 ± 38% -100.0% 0.00 perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.down_write.lock_rename.do_renameat2 0.36 ± 14% -100.0% 0.00 perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.down_write.vfs_rename.do_renameat2 9.59 ± 5% -100.0% 0.00 perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.dput.__fput.task_work_run 0.31 ± 13% -100.0% 0.00 perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.dput.do_renameat2.__x64_sys_rename 0.38 ± 5% -100.0% 0.00 perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.dput.vfs_rename.do_renameat2 0.30 ± 18% -100.0% 0.00 perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.ext4_journal_check_start.__ext4_journal_start_sb.ext4_rename 202.40 ± 74% -100.0% 0.00 perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.generic_perform_write.__generic_file_write_iter.generic_file_write_iter 0.35 ± 14% -100.0% 0.00 perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.__d_alloc.d_alloc 0.35 ± 19% -100.0% 0.00 perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.getname_flags.__x64_sys_rename 0.30 ± 30% -100.0% 0.00 perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.jbd2__journal_start.__ext4_journal_start_sb 0.36 ± 13% -100.0% 0.00 perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.mnt_want_write.do_renameat2.__x64_sys_rename 11.96 -100.0% 0.00 
perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.stop_one_cpu.affine_move_task.__set_cpus_allowed_ptr 434.96 ± 34% -100.0% 0.00 perf-sched.wait_time.avg.ms.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait 459.19 ± 13% -100.0% 0.00 perf-sched.wait_time.avg.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_sys_poll 4.99 ± 3% -100.0% 0.00 perf-sched.wait_time.avg.ms.schedule_timeout.ext4_lazyinit_thread.part.0.kthread 1545 ± 77% -100.0% 0.00 perf-sched.wait_time.avg.ms.schedule_timeout.io_schedule_timeout.wait_for_completion_io.submit_bio_wait 479.04 -100.0% 0.00 perf-sched.wait_time.avg.ms.schedule_timeout.kcompactd.kthread.ret_from_fork 2.26 ± 69% -100.0% 0.00 perf-sched.wait_time.avg.ms.schedule_timeout.khugepaged.kthread.ret_from_fork 4.76 -100.0% 0.00 perf-sched.wait_time.avg.ms.schedule_timeout.rcu_gp_kthread.kthread.ret_from_fork 88.01 ± 4% -100.0% 0.00 perf-sched.wait_time.avg.ms.smpboot_thread_fn.kthread.ret_from_fork 0.95 ± 44% -100.0% 0.00 perf-sched.wait_time.avg.ms.wait_for_partner.fifo_open.do_dentry_open.do_open.isra 0.45 ± 21% -100.0% 0.00 perf-sched.wait_time.avg.ms.wait_transaction_locked.add_transaction_credits.start_this_handle.jbd2__journal_start 691.69 -100.0% 0.00 perf-sched.wait_time.avg.ms.worker_thread.kthread.ret_from_fork 1015 -100.0% 0.00 perf-sched.wait_time.max.ms.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown] 3259 ± 21% -100.0% 0.00 perf-sched.wait_time.max.ms.devkmsg_read.vfs_read.ksys_read.do_syscall_64 1009 -100.0% 0.00 perf-sched.wait_time.max.ms.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep.do_syscall_64 3264 ± 21% -100.0% 0.00 perf-sched.wait_time.max.ms.do_syslog.part.0.kmsg_read.vfs_read 1012 -100.0% 0.00 perf-sched.wait_time.max.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64 1022 -100.0% 0.00 perf-sched.wait_time.max.ms.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64 850.57 ± 42% -100.0% 0.00 
perf-sched.wait_time.max.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_apic_timer_interrupt.[unknown] 5.56 ± 45% -100.0% 0.00 perf-sched.wait_time.max.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_call_function_single.[unknown] 2.17 ± 70% -100.0% 0.00 perf-sched.wait_time.max.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_reschedule_ipi.[unknown] 675.81 ± 68% -100.0% 0.00 perf-sched.wait_time.max.ms.exit_to_user_mode_prepare.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe.[unknown] 2729 ± 51% -100.0% 0.00 perf-sched.wait_time.max.ms.futex_wait_queue_me.futex_wait.do_futex.__x64_sys_futex 4.57 ± 88% -100.0% 0.00 perf-sched.wait_time.max.ms.io_schedule.bit_wait_io.__wait_on_bit.out_of_line_wait_on_bit 4963 -100.0% 0.00 perf-sched.wait_time.max.ms.jbd2_journal_commit_transaction.kjournald2.kthread.ret_from_fork 33.21 ± 9% -100.0% 0.00 perf-sched.wait_time.max.ms.kjournald2.kthread.ret_from_fork 3260 ± 21% -100.0% 0.00 perf-sched.wait_time.max.ms.pipe_read.new_sync_read.vfs_read.ksys_read 1435 ±144% -100.0% 0.00 perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.__alloc_pages_nodemask.alloc_pages_vma.shmem_alloc_page 531.51 ±217% -100.0% 0.00 perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.__do_fault.do_fault.__handle_mm_fault 16.37 ± 85% -100.0% 0.00 perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.__ext4_find_entry.ext4_find_entry.ext4_rename 14.74 ± 10% -100.0% 0.00 perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.__ext4_find_entry.ext4_lookup.__lookup_hash 28.10 ± 73% -100.0% 0.00 perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.__ext4_handle_dirty_metadata.ext4_do_update_inode.ext4_mark_iloc_dirty 15.53 ± 19% -100.0% 0.00 perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.__ext4_handle_dirty_metadata.ext4_handle_dirty_dirblock.add_dirent_to_buf 19.31 ± 71% -100.0% 0.00 
perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.__ext4_handle_dirty_metadata.ext4_handle_dirty_dirblock.ext4_delete_entry 7.01 ± 74% -100.0% 0.00 perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.__ext4_journal_get_write_access.add_dirent_to_buf.ext4_add_entry 3.77 ±106% -100.0% 0.00 perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.__ext4_journal_get_write_access.ext4_delete_entry.ext4_rename 3.00 ± 78% -100.0% 0.00 perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.__ext4_journal_get_write_access.ext4_reserve_inode_write.__ext4_mark_inode_dirty 12.01 ± 62% -100.0% 0.00 perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.__ext4_mark_inode_dirty.add_dirent_to_buf.ext4_add_entry 14.73 ±112% -100.0% 0.00 perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.__ext4_mark_inode_dirty.ext4_rename.vfs_rename 12.05 ± 53% -100.0% 0.00 perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.__getblk_gfp.__ext4_get_inode_loc.ext4_get_inode_loc 12.13 ± 23% -100.0% 0.00 perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.__getblk_gfp.ext4_getblk.ext4_bread 16.39 ± 22% -100.0% 0.00 perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.__getblk_gfp.ext4_getblk.ext4_bread_batch 0.50 ± 57% -100.0% 0.00 perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.__kmalloc_node.memcg_alloc_page_obj_cgroups.allocate_slab 13.35 ± 43% -100.0% 0.00 perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.apparmor_path_rename.security_path_rename.do_renameat2 1677 ±139% -100.0% 0.00 perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.do_user_addr_fault.exc_page_fault.asm_exc_page_fault 6.69 ± 42% -100.0% 0.00 perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.down_write.lock_rename.do_renameat2 14.78 ± 38% -100.0% 0.00 
perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.down_write.vfs_rename.do_renameat2 16.77 ± 18% -100.0% 0.00 perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.dput.__fput.task_work_run 14.29 ± 38% -100.0% 0.00 perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.dput.do_renameat2.__x64_sys_rename 63.52 ± 25% -100.0% 0.00 perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.dput.vfs_rename.do_renameat2 6.30 ± 49% -100.0% 0.00 perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.ext4_journal_check_start.__ext4_journal_start_sb.ext4_rename 3290 ± 71% -100.0% 0.00 perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.generic_perform_write.__generic_file_write_iter.generic_file_write_iter 19.21 ± 62% -100.0% 0.00 perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.__d_alloc.d_alloc 11.77 ± 42% -100.0% 0.00 perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.getname_flags.__x64_sys_rename 4.44 ± 56% -100.0% 0.00 perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.jbd2__journal_start.__ext4_journal_start_sb 29.99 ± 46% -100.0% 0.00 perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.mnt_want_write.do_renameat2.__x64_sys_rename 1002 -100.0% 0.00 perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.stop_one_cpu.affine_move_task.__set_cpus_allowed_ptr 2938 ± 23% -100.0% 0.00 perf-sched.wait_time.max.ms.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait 4645 -100.0% 0.00 perf-sched.wait_time.max.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_sys_poll 7.64 ± 79% -100.0% 0.00 perf-sched.wait_time.max.ms.schedule_timeout.ext4_lazyinit_thread.part.0.kthread 2919 ± 87% -100.0% 0.00 perf-sched.wait_time.max.ms.schedule_timeout.io_schedule_timeout.wait_for_completion_io.submit_bio_wait 512.90 ± 3% -100.0% 0.00 
perf-sched.wait_time.max.ms.schedule_timeout.kcompactd.kthread.ret_from_fork 2.26 ± 69% -100.0% 0.00 perf-sched.wait_time.max.ms.schedule_timeout.khugepaged.kthread.ret_from_fork 52.15 ± 32% -100.0% 0.00 perf-sched.wait_time.max.ms.schedule_timeout.rcu_gp_kthread.kthread.ret_from_fork 1314 ± 20% -100.0% 0.00 perf-sched.wait_time.max.ms.smpboot_thread_fn.kthread.ret_from_fork 13.81 ± 5% -100.0% 0.00 perf-sched.wait_time.max.ms.wait_for_partner.fifo_open.do_dentry_open.do_open.isra 8.40 ± 95% -100.0% 0.00 perf-sched.wait_time.max.ms.wait_transaction_locked.add_transaction_credits.start_this_handle.jbd2__journal_start 4615 ± 35% -100.0% 0.00 perf-sched.wait_time.max.ms.worker_thread.kthread.ret_from_fork *************************************************************************************************** lkp-knm01: 288 threads Intel(R) Xeon Phi(TM) CPU 7295 @ 1.50GHz with 80G memory ========================================================================================= compiler/cpufreq_governor/directio/disk/fstype/kconfig/media/rootfs/tbox_group/test/testcase/ucode: gcc-9/performance/bufferedio/1HDD/btrfs/x86_64-rhel-8.3/hdd/debian-10.4-x86_64-20200603.cgz/lkp-knm01/MWRL/fxmark/0x11 commit: dde38d5b80 ("kernel: don't include PF_IO_WORKERs as part of same_thread_group()") 43b2a76b1a ("proc: don't show PF_IO_WORKER threads as threads in /proc/<pid>/task/") dde38d5b805358c7 43b2a76b1a5abcc9833b463bef1 ---------------- --------------------------- %stddev %change %stddev \ | \ 341.11 +13.8% 388.07 fxmark.hdd_btrfs_MWRL_18_bufferedio.idle_sec 64.50 +13.5% 73.23 fxmark.hdd_btrfs_MWRL_18_bufferedio.idle_util 1.58 +17.3% 1.86 fxmark.hdd_btrfs_MWRL_18_bufferedio.softirq_sec 0.30 +17.0% 0.35 fxmark.hdd_btrfs_MWRL_18_bufferedio.softirq_util 171.29 ± 2% -30.8% 118.49 fxmark.hdd_btrfs_MWRL_18_bufferedio.sys_sec 32.38 -31.0% 22.36 fxmark.hdd_btrfs_MWRL_18_bufferedio.sys_util 4.39 ± 2% +131.0% 10.15 fxmark.hdd_btrfs_MWRL_18_bufferedio.user_sec 0.83 ± 2% +130.5% 1.91 
fxmark.hdd_btrfs_MWRL_18_bufferedio.user_util 521758 -25.9% 386576 fxmark.hdd_btrfs_MWRL_18_bufferedio.works 17391 -25.9% 12885 fxmark.hdd_btrfs_MWRL_18_bufferedio.works/sec 0.83 +16.0% 0.97 fxmark.hdd_btrfs_MWRL_1_bufferedio.irq_sec 2.75 +13.4% 3.12 fxmark.hdd_btrfs_MWRL_1_bufferedio.irq_util 0.35 ± 5% +23.4% 0.43 ± 6% fxmark.hdd_btrfs_MWRL_1_bufferedio.softirq_sec 1.15 ± 5% +20.7% 1.39 ± 6% fxmark.hdd_btrfs_MWRL_1_bufferedio.softirq_util 3.48 ± 9% +22.5% 4.26 ± 3% fxmark.hdd_btrfs_MWRL_1_bufferedio.user_sec 11.48 ± 9% +19.8% 13.75 ± 3% fxmark.hdd_btrfs_MWRL_1_bufferedio.user_util 467596 -68.3% 148058 fxmark.hdd_btrfs_MWRL_1_bufferedio.works 15586 -68.3% 4934 fxmark.hdd_btrfs_MWRL_1_bufferedio.works/sec 2.11 ± 2% +14.9% 2.42 fxmark.hdd_btrfs_MWRL_27_bufferedio.softirq_sec 0.27 ± 2% +14.3% 0.30 fxmark.hdd_btrfs_MWRL_27_bufferedio.softirq_util 4.36 +146.2% 10.72 fxmark.hdd_btrfs_MWRL_27_bufferedio.user_sec 0.55 +144.9% 1.35 fxmark.hdd_btrfs_MWRL_27_bufferedio.user_util 452422 -13.6% 390843 fxmark.hdd_btrfs_MWRL_27_bufferedio.works 15080 -13.6% 13027 fxmark.hdd_btrfs_MWRL_27_bufferedio.works/sec 1.10 ± 12% -99.7% 0.00 ±141% fxmark.hdd_btrfs_MWRL_2_bufferedio.idle_sec 1.96 ± 12% -99.7% 0.01 ±141% fxmark.hdd_btrfs_MWRL_2_bufferedio.idle_util 2.61 -20.0% 2.09 fxmark.hdd_btrfs_MWRL_2_bufferedio.irq_sec 4.65 ± 2% -26.0% 3.44 fxmark.hdd_btrfs_MWRL_2_bufferedio.irq_util 1.14 ± 4% -10.9% 1.01 ± 2% fxmark.hdd_btrfs_MWRL_2_bufferedio.softirq_util 5.20 ± 3% +39.5% 7.25 fxmark.hdd_btrfs_MWRL_2_bufferedio.user_sec 9.24 ± 2% +29.1% 11.93 fxmark.hdd_btrfs_MWRL_2_bufferedio.user_util 817200 -56.9% 352501 fxmark.hdd_btrfs_MWRL_2_bufferedio.works 27239 -56.9% 11749 fxmark.hdd_btrfs_MWRL_2_bufferedio.works/sec 2.67 +13.8% 3.03 ± 2% fxmark.hdd_btrfs_MWRL_36_bufferedio.softirq_sec 0.25 +13.5% 0.28 ± 2% fxmark.hdd_btrfs_MWRL_36_bufferedio.softirq_util 113.94 +17.9% 134.39 fxmark.hdd_btrfs_MWRL_36_bufferedio.sys_sec 10.70 +17.7% 12.60 fxmark.hdd_btrfs_MWRL_36_bufferedio.sys_util 4.49 
+148.2% 11.13 fxmark.hdd_btrfs_MWRL_36_bufferedio.user_sec 0.42 +147.7% 1.04 fxmark.hdd_btrfs_MWRL_36_bufferedio.user_util 430144 -11.3% 381581 fxmark.hdd_btrfs_MWRL_36_bufferedio.works 14337 -11.3% 12718 fxmark.hdd_btrfs_MWRL_36_bufferedio.works/sec 3.31 ± 2% +11.7% 3.70 fxmark.hdd_btrfs_MWRL_45_bufferedio.softirq_sec 0.25 ± 2% +12.0% 0.28 fxmark.hdd_btrfs_MWRL_45_bufferedio.softirq_util 120.80 ± 2% +16.0% 140.15 fxmark.hdd_btrfs_MWRL_45_bufferedio.sys_sec 9.01 ± 2% +16.3% 10.48 fxmark.hdd_btrfs_MWRL_45_bufferedio.sys_util 5.75 ± 22% +102.5% 11.65 fxmark.hdd_btrfs_MWRL_45_bufferedio.user_sec 0.43 ± 22% +103.0% 0.87 fxmark.hdd_btrfs_MWRL_45_bufferedio.user_util 427816 -9.1% 388760 fxmark.hdd_btrfs_MWRL_45_bufferedio.works 14259 -9.1% 12956 fxmark.hdd_btrfs_MWRL_45_bufferedio.works/sec 20.11 +40.0% 28.17 fxmark.hdd_btrfs_MWRL_4_bufferedio.idle_sec 18.76 +28.6% 24.13 fxmark.hdd_btrfs_MWRL_4_bufferedio.idle_util 3.72 +25.8% 4.68 fxmark.hdd_btrfs_MWRL_4_bufferedio.irq_sec 3.47 +15.5% 4.01 fxmark.hdd_btrfs_MWRL_4_bufferedio.irq_util 72.44 -11.2% 64.35 fxmark.hdd_btrfs_MWRL_4_bufferedio.sys_util 4.77 +65.3% 7.89 ± 2% fxmark.hdd_btrfs_MWRL_4_bufferedio.user_sec 4.45 +51.9% 6.76 ± 2% fxmark.hdd_btrfs_MWRL_4_bufferedio.user_util 689117 -49.1% 350705 fxmark.hdd_btrfs_MWRL_4_bufferedio.works 22970 -49.1% 11689 fxmark.hdd_btrfs_MWRL_4_bufferedio.works/sec 87.98 +66.4% 146.38 fxmark.hdd_btrfs_MWRL_54_bufferedio.sys_sec 5.49 +66.9% 9.17 fxmark.hdd_btrfs_MWRL_54_bufferedio.sys_util 322535 ± 2% +22.6% 395407 fxmark.hdd_btrfs_MWRL_54_bufferedio.works 10749 ± 2% +22.6% 13178 fxmark.hdd_btrfs_MWRL_54_bufferedio.works/sec 4.56 +12.2% 5.11 fxmark.hdd_btrfs_MWRL_63_bufferedio.softirq_sec 0.24 +13.5% 0.28 fxmark.hdd_btrfs_MWRL_63_bufferedio.softirq_util 98.44 +43.1% 140.92 fxmark.hdd_btrfs_MWRL_63_bufferedio.sys_sec 5.24 +44.9% 7.60 fxmark.hdd_btrfs_MWRL_63_bufferedio.sys_util 5.37 +131.8% 12.45 fxmark.hdd_btrfs_MWRL_63_bufferedio.user_sec 0.29 +134.7% 0.67 
fxmark.hdd_btrfs_MWRL_63_bufferedio.user_util
      0.24            +10.5%       0.27        fxmark.hdd_btrfs_MWRL_72_bufferedio.softirq_util
     96.78 ±  2%      +44.1%     139.45        fxmark.hdd_btrfs_MWRL_72_bufferedio.sys_sec
      4.52 ±  2%      +45.7%       6.58        fxmark.hdd_btrfs_MWRL_72_bufferedio.sys_util
     12.55 ±  3%      +40.4%      17.62 ±  2%  fxmark.hdd_btrfs_MWRL_72_bufferedio.user_sec
      0.59 ±  3%      +41.9%       0.83 ±  2%  fxmark.hdd_btrfs_MWRL_72_bufferedio.user_util
    120.75            +14.1%     137.83        fxmark.hdd_btrfs_MWRL_9_bufferedio.idle_sec
     47.21            +11.6%      52.68        fxmark.hdd_btrfs_MWRL_9_bufferedio.idle_util
      5.68            +27.4%       7.23        fxmark.hdd_btrfs_MWRL_9_bufferedio.irq_sec
      2.22            +24.5%       2.76        fxmark.hdd_btrfs_MWRL_9_bufferedio.irq_util
      1.08 ±  2%      +15.1%       1.25 ±  2%  fxmark.hdd_btrfs_MWRL_9_bufferedio.softirq_sec
      0.42 ±  2%      +12.5%       0.48 ±  2%  fxmark.hdd_btrfs_MWRL_9_bufferedio.softirq_util
    123.58            -14.2%     106.06        fxmark.hdd_btrfs_MWRL_9_bufferedio.sys_sec
     48.31            -16.1%      40.54        fxmark.hdd_btrfs_MWRL_9_bufferedio.sys_util
      4.55            +97.8%       9.00        fxmark.hdd_btrfs_MWRL_9_bufferedio.user_sec
      1.78            +93.4%       3.44        fxmark.hdd_btrfs_MWRL_9_bufferedio.user_util
    613250            -31.7%     419044        fxmark.hdd_btrfs_MWRL_9_bufferedio.works
     20441            -31.7%      13966        fxmark.hdd_btrfs_MWRL_9_bufferedio.works/sec
    407.27             +3.5%     421.41        fxmark.time.elapsed_time
    407.27             +3.5%     421.41        fxmark.time.elapsed_time.max
     37460 ±  3%      -51.1%      18305        fxmark.time.involuntary_context_switches
     32.00            -40.6%      19.00        fxmark.time.percent_of_cpu_this_job_got
    118.20            -39.2%      71.85        fxmark.time.system_time
     13.17 ±  2%      -21.1%      10.39 ±  2%  fxmark.time.user_time
   1851153            -34.8%    1207108        fxmark.time.voluntary_context_switches
     20711 ±  3%       -4.6%      19755 ±  3%  boot-time.idle
     60.21             -3.2%      58.28        iostat.cpu.idle
      3.11 ±  4%      +40.7%       4.38 ±  2%  iostat.cpu.user
   1695055            +10.4%    1871439        numa-numastat.node0.local_node
   1694968            +10.4%    1871394        numa-numastat.node0.numa_hit
  54971191            -10.2%   49338510        cpuidle.C1.usage
   5762779            -21.4%    4527167        cpuidle.POLL.time
    177449 ±  4%      -50.6%      87648        cpuidle.POLL.usage
      0.34 ± 40%       -0.2        0.13 ± 15%  mpstat.cpu.all.iowait%
      0.40             +0.1        0.46 ±  2%  mpstat.cpu.all.soft%
      3.07 ±  3%       +1.3        4.36 ±  2%  mpstat.cpu.all.usr%
     60.17             -1.9%      59.00        vmstat.cpu.id
    201.67             -2.2%     197.17        vmstat.io.bo
   1451006            +20.4%    1746416        vmstat.memory.cache
      2.50 ± 20%      +60.0%       4.00        vmstat.procs.r
    146513            -18.6%     119239        vmstat.system.cs
     90165             +1.9%      91840        vmstat.system.in
     18982            +30.6%      24791        slabinfo.filp.active_objs
      7674            +40.8%      10803        slabinfo.kmalloc-2k.active_objs
    522.67            +37.5%     718.83        slabinfo.kmalloc-2k.active_slabs
      8369            +37.5%      11511        slabinfo.kmalloc-2k.num_objs
    522.67            +37.5%     718.83        slabinfo.kmalloc-2k.num_slabs
    644.50            +30.2%     839.17        slabinfo.kmalloc-8k.active_objs
    689.83            +28.6%     887.17        slabinfo.kmalloc-8k.num_objs
     12294 ±  3%      +11.8%      13743 ±  3%  slabinfo.kmalloc-96.active_objs
     13357 ±  3%      +10.1%      14711 ±  3%  slabinfo.kmalloc-96.num_objs
      3949 ±  5%      -10.2%       3545 ±  3%  slabinfo.proc_inode_cache.active_objs
      1873 ±102%     +409.4%       9542        numa-vmstat.node0.nr_active_anon
     65134            +20.3%      78370        numa-vmstat.node0.nr_anon_pages
     95.50            +22.5%     117.00        numa-vmstat.node0.nr_anon_transparent_hugepages
    213602            +34.6%     287422        numa-vmstat.node0.nr_file_pages
    138461            +57.3%     217831        numa-vmstat.node0.nr_inactive_anon
      9545 ±  2%      -32.6%       6431        numa-vmstat.node0.nr_mapped
      1234            +15.5%       1425        numa-vmstat.node0.nr_page_table_pages
     75295            +98.1%     149160        numa-vmstat.node0.nr_shmem
      1873 ±102%     +409.4%       9542        numa-vmstat.node0.nr_zone_active_anon
    138461            +57.3%     217831        numa-vmstat.node0.nr_zone_inactive_anon
    117.83 ± 59%      +65.1%     194.50 ± 27%  numa-vmstat.node1.nr_mlock
      7650 ±103%     +398.2%      38117        meminfo.Active
      7378 ±106%     +413.0%      37847        meminfo.Active(anon)
    196370            +22.6%     240828 ±  2%  meminfo.AnonHugePages
    260375            +20.4%     313431        meminfo.AnonPages
   1365794            +21.8%    1664196        meminfo.Cached
    637375            +61.1%    1026535        meminfo.Committed_AS
    554371            +57.9%     875273        meminfo.Inactive
    554115            +57.9%     875006        meminfo.Inactive(anon)
     50761            -23.2%      38975        meminfo.Mapped
   2946427            +54.7%    4557941        meminfo.Memused
      1112 ± 58%      +64.1%       1825 ± 27%  meminfo.Mlocked
      4939            +15.5%       5702        meminfo.PageTables
    301480            +99.0%     600043        meminfo.Shmem
      7605 ±100%     +404.7%      38383        numa-meminfo.node0.Active
      7333 ±104%     +419.7%      38114        numa-meminfo.node0.Active(anon)
    196373            +22.6%     240737 ±  2%  numa-meminfo.node0.AnonHugePages
    260398            +20.4%     313422        numa-meminfo.node0.AnonPages
    356921             -9.8%     321984        numa-meminfo.node0.AnonPages.max
    854589            +34.8%    1151720        numa-meminfo.node0.FilePages
    554323            +57.6%     873623        numa-meminfo.node0.Inactive
    554064            +57.6%     873357        numa-meminfo.node0.Inactive(anon)
     37684            -33.3%      25141        numa-meminfo.node0.Mapped
   2345591            +68.6%    3955112        numa-meminfo.node0.MemUsed
    635.83 ± 59%      +64.0%       1042 ± 27%  numa-meminfo.node0.Mlocked
      4936            +15.5%       5703        numa-meminfo.node0.PageTables
    301365            +98.7%     598670        numa-meminfo.node0.Shmem
   1722812 ± 11%      +24.7%    2147783 ±  6%  perf-stat.i.cache-references
    146188            -19.4%     117875        perf-stat.i.context-switches
      2.59 ±  4%      -23.3%       1.99 ±  3%  perf-stat.i.major-faults
      3543             -7.3%       3284        perf-stat.i.minor-faults
      3546             -7.3%       3286        perf-stat.i.page-faults
     28.68             +6.4%      30.52        perf-stat.overall.MPKI
      6.78             -0.6        6.23 ±  2%  perf-stat.overall.branch-miss-rate%
     15.23             -2.2       12.98        perf-stat.overall.cache-miss-rate%
     11.74 ±  3%      -12.2%      10.31        perf-stat.overall.cpi
      1.33 ±  3%       -0.1        1.22        perf-stat.overall.iTLB-load-miss-rate%
     74.80 ±  3%       +8.4%      81.11        perf-stat.overall.instructions-per-iTLB-miss
      0.09 ±  4%      +13.7%       0.10        perf-stat.overall.ipc
   1851759 ± 11%      +23.8%    2292355 ±  6%  perf-stat.ps.cache-references
    145099            -19.5%     116870        perf-stat.ps.context-switches
      2.60 ±  3%      -22.8%       2.01 ±  3%  perf-stat.ps.major-faults
      3536             -7.3%       3279        perf-stat.ps.minor-faults
      3539             -7.3%       3281        perf-stat.ps.page-faults
     66983            -28.1%      48178        softirqs.CPU0.SCHED
      9905 ±  9%      +26.6%      12540 ± 11%  softirqs.CPU0.TIMER
    117459            +10.4%     129693 ±  3%  softirqs.CPU1.RCU
     61639 ±  4%      -29.0%      43784        softirqs.CPU1.SCHED
     66299            +10.6%      73296        softirqs.CPU13.RCU
     66125            +10.2%      72863 ±  4%  softirqs.CPU15.RCU
     62256            +15.3%      71804 ±  8%  softirqs.CPU17.RCU
     57763            +14.0%      65865 ±  8%  softirqs.CPU19.RCU
     90070            +10.8%      99788 ±  2%  softirqs.CPU2.RCU
     52983            -17.3%      43799        softirqs.CPU2.SCHED
     57080            +12.4%      64162 ±  5%  softirqs.CPU25.RCU
     48824            +13.7%      55501 ±  9%  softirqs.CPU27.RCU
     48932            +11.0%      54322 ±  4%  softirqs.CPU29.RCU
     53084            -18.1%      43473        softirqs.CPU3.SCHED
     44756            +14.6%      51300 ± 11%  softirqs.CPU35.RCU
     75416 ±  3%      +18.9%      89638 ±  8%  softirqs.CPU7.RCU
     67221             +9.2%      73381        softirqs.CPU9.RCU
    653870 ±  8%      +18.1%     772471 ± 11%  sched_debug.cfs_rq:/.min_vruntime.max
     47737 ± 10%      +65.2%      78866 ± 11%  sched_debug.cfs_rq:/.min_vruntime.stddev
    451.06 ±  7%      +67.3%     754.57 ±  7%  sched_debug.cfs_rq:/.runnable_avg.avg
      1122 ±  3%      +37.3%       1541 ±  5%  sched_debug.cfs_rq:/.runnable_avg.max
    369.88 ±  7%      +64.1%     607.00 ± 10%  sched_debug.cfs_rq:/.runnable_avg.min
    146.33 ± 10%      +54.0%     225.34 ±  6%  sched_debug.cfs_rq:/.runnable_avg.stddev
    104818 ± 26%     +120.9%     231506 ± 26%  sched_debug.cfs_rq:/.spread0.max
     47741 ± 10%      +65.2%      78868 ± 11%  sched_debug.cfs_rq:/.spread0.stddev
    877.90 ±  3%      +16.4%       1021        sched_debug.cfs_rq:/.util_avg.max
    234.05 ± 11%      -20.8%     185.29 ±  7%  sched_debug.cfs_rq:/.util_avg.min
    124.41 ±  6%      +58.2%     196.86 ±  6%  sched_debug.cfs_rq:/.util_avg.stddev
     47.47 ± 33%     +709.0%     384.03 ±  3%  sched_debug.cfs_rq:/.util_est_enqueued.avg
    545.21 ± 16%     +114.9%       1171        sched_debug.cfs_rq:/.util_est_enqueued.max
     11.17 ± 43%    +2298.9%     267.88 ±  5%  sched_debug.cfs_rq:/.util_est_enqueued.min
     95.72 ± 22%     +120.2%     210.78 ±  8%  sched_debug.cfs_rq:/.util_est_enqueued.stddev
     27.34 ±  2%      +38.7%      37.93 ±  3%  sched_debug.cpu.clock.stddev
      0.00 ±  3%    +6353.2%       0.00 ± 69%  sched_debug.cpu.next_balance.stddev
      0.51 ± 15%      +69.1%       0.86 ± 18%  sched_debug.cpu.nr_running.avg
      0.31 ± 17%     +108.7%       0.65 ± 30%  sched_debug.cpu.nr_running.min
   1206838            -33.8%     798427        sched_debug.cpu.nr_switches.avg
   1198391            -35.9%     768489        sched_debug.cpu.nr_switches.min
      6128 ±  8%     +855.5%      58555        sched_debug.cpu.nr_switches.stddev
      1880 ±103%     +409.3%       9579        proc-vmstat.nr_active_anon
     65067            +20.5%      78397        proc-vmstat.nr_anon_pages
     95.67            +22.8%     117.50 ±  2%  proc-vmstat.nr_anon_transparent_hugepages
   1967884             -2.1%    1927481        proc-vmstat.nr_dirty_background_threshold
   3940580             -2.1%    3859676        proc-vmstat.nr_dirty_threshold
    341806            +21.9%     416729        proc-vmstat.nr_file_pages
  19836429             -2.0%   19431815        proc-vmstat.nr_free_pages
    138813            +58.0%     219346        proc-vmstat.nr_inactive_anon
     12867            -23.0%       9901        proc-vmstat.nr_mapped
    278.83 ± 59%      +64.5%     458.67 ± 27%  proc-vmstat.nr_mlock
      1234            +15.7%       1428        proc-vmstat.nr_page_table_pages
     75721            +99.0%     150682        proc-vmstat.nr_shmem
     55917             +6.8%      59705        proc-vmstat.nr_slab_unreclaimable
      1880 ±103%     +409.3%       9579        proc-vmstat.nr_zone_active_anon
    138813            +58.0%     219346        proc-vmstat.nr_zone_inactive_anon
     10346 ± 28%      +38.1%      14288 ±  2%  proc-vmstat.numa_hint_faults
     10346 ± 28%      +38.1%      14288 ±  2%  proc-vmstat.numa_hint_faults_local
   1717663            +10.4%    1896824        proc-vmstat.numa_hit
   1717662            +10.4%    1896824        proc-vmstat.numa_local
    143075 ± 34%      +32.1%     188940 ±  3%  proc-vmstat.numa_pte_updates
     86110 ±  4%      -75.1%      21483        proc-vmstat.pgactivate
   1850719             +9.9%    2033498        proc-vmstat.pgalloc_normal
     97021 ±  3%     +146.7%     239317        proc-vmstat.pgdeactivate
   1550251             -3.0%    1503251        proc-vmstat.pgfault
   1749386            -26.0%    1293723        proc-vmstat.pgfree
    139950             +1.8%     142511        proc-vmstat.pgreuse
     79974            -21.4%      62836        proc-vmstat.slabs_scanned
     53.57            -53.6        0.00        perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe
     53.46            -53.5        0.00        perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe
     52.97            -53.0        0.00        perf-profile.calltrace.cycles-pp.__x64_sys_rename.do_syscall_64.entry_SYSCALL_64_after_hwframe
     52.68            -52.7        0.00        perf-profile.calltrace.cycles-pp.do_renameat2.__x64_sys_rename.do_syscall_64.entry_SYSCALL_64_after_hwframe
     47.11            -47.1        0.00        perf-profile.calltrace.cycles-pp.vfs_rename.do_renameat2.__x64_sys_rename.do_syscall_64.entry_SYSCALL_64_after_hwframe
     46.06            -46.1        0.00        perf-profile.calltrace.cycles-pp.btrfs_rename.vfs_rename.do_renameat2.__x64_sys_rename.do_syscall_64
     34.64 ±  2%      -34.6        0.00        perf-profile.calltrace.cycles-pp.secondary_startup_64_no_verify
     34.08 ±  2%      -34.1        0.00        perf-profile.calltrace.cycles-pp.start_secondary.secondary_startup_64_no_verify
     34.08 ±  2%      -34.1        0.00        perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
     33.96 ±  2%      -34.0        0.00        perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
     19.24 ±  3%      -19.2        0.00        perf-profile.calltrace.cycles-pp.cpuidle_enter.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
     18.73 ±  3%      -18.7        0.00        perf-profile.calltrace.cycles-pp.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry.start_secondary
     14.29 ±  3%      -14.3        0.00        perf-profile.calltrace.cycles-pp.__btrfs_unlink_inode.btrfs_rename.vfs_rename.do_renameat2.__x64_sys_rename
     13.35 ±  2%      -13.4        0.00        perf-profile.calltrace.cycles-pp.btrfs_insert_inode_ref.btrfs_rename.vfs_rename.do_renameat2.__x64_sys_rename
     12.01 ±  4%      -12.0        0.00        perf-profile.calltrace.cycles-pp.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry
     11.73 ±  2%      -11.7        0.00        perf-profile.calltrace.cycles-pp.btrfs_insert_empty_items.btrfs_insert_inode_ref.btrfs_rename.vfs_rename.do_renameat2
     11.70 ±  2%      -11.7        0.00        perf-profile.calltrace.cycles-pp.btrfs_search_slot.btrfs_insert_empty_items.btrfs_insert_inode_ref.btrfs_rename.vfs_rename
     11.43 ±  4%      -11.4        0.00        perf-profile.calltrace.cycles-pp.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter.do_idle
     10.63 ±  8%      -10.6        0.00        perf-profile.calltrace.cycles-pp.perf_prepare_sample.perf_event_output_forward.__perf_event_overflow.perf_swevent_overflow.perf_tp_event
      9.67             -9.7        0.00        perf-profile.calltrace.cycles-pp.btrfs_add_link.btrfs_rename.vfs_rename.do_renameat2.__x64_sys_rename
      9.37 ±  2%       -9.4        0.00        perf-profile.calltrace.cycles-pp.btrfs_insert_dir_item.btrfs_add_link.btrfs_rename.vfs_rename.do_renameat2
      9.17 ± 10%       -9.2        0.00        perf-profile.calltrace.cycles-pp.perf_callchain.perf_prepare_sample.perf_event_output_forward.__perf_event_overflow.perf_swevent_overflow
      9.10 ± 10%       -9.1        0.00        perf-profile.calltrace.cycles-pp.get_perf_callchain.perf_callchain.perf_prepare_sample.perf_event_output_forward.__perf_event_overflow
      8.72 ±  2%       -8.7        0.00        perf-profile.calltrace.cycles-pp.schedule.rwsem_down_write_slowpath.__btrfs_tree_lock.btrfs_lock_root_node.btrfs_search_slot
      8.72 ±  4%       -8.7        0.00        perf-profile.calltrace.cycles-pp.rwsem_down_write_slowpath.__btrfs_tree_lock.btrfs_lock_root_node.btrfs_search_slot.btrfs_insert_empty_items
      8.63 ±  2%       -8.6        0.00        perf-profile.calltrace.cycles-pp.__schedule.schedule.rwsem_down_write_slowpath.__btrfs_tree_lock.btrfs_lock_root_node
      8.51 ±  2%       -8.5        0.00        perf-profile.calltrace.cycles-pp.insert_with_overflow.btrfs_insert_dir_item.btrfs_add_link.btrfs_rename.vfs_rename
      8.47 ±  2%       -8.5        0.00        perf-profile.calltrace.cycles-pp.btrfs_insert_empty_items.insert_with_overflow.btrfs_insert_dir_item.btrfs_add_link.btrfs_rename
      8.39 ±  4%       -8.4        0.00        perf-profile.calltrace.cycles-pp.schedule.rwsem_down_read_slowpath.__btrfs_tree_read_lock.btrfs_read_lock_root_node.btrfs_search_slot
      8.22 ±  2%       -8.2        0.00        perf-profile.calltrace.cycles-pp.__schedule.schedule.rwsem_down_read_slowpath.__btrfs_tree_read_lock.btrfs_read_lock_root_node
      7.32 ±  2%       -7.3        0.00        perf-profile.calltrace.cycles-pp.btrfs_search_slot.btrfs_insert_empty_items.insert_with_overflow.btrfs_insert_dir_item.btrfs_add_link
      7.23 ±  6%       -7.2        0.00        perf-profile.calltrace.cycles-pp.__sysvec_apic_timer_interrupt.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter
      6.89 ±  6%       -6.9        0.00        perf-profile.calltrace.cycles-pp.hrtimer_interrupt.__sysvec_apic_timer_interrupt.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state
      6.27 ±  3%       -6.3        0.00        perf-profile.calltrace.cycles-pp.schedule_idle.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
      6.15 ±  3%       -6.1        0.00        perf-profile.calltrace.cycles-pp.__schedule.schedule_idle.do_idle.cpu_startup_entry.start_secondary
      5.92 ±  4%       -5.9        0.00        perf-profile.calltrace.cycles-pp.btrfs_lock_root_node.btrfs_search_slot.btrfs_insert_empty_items.btrfs_insert_inode_ref.btrfs_rename
      5.90 ±  7%       -5.9        0.00        perf-profile.calltrace.cycles-pp.perf_callchain_kernel.get_perf_callchain.perf_callchain.perf_prepare_sample.perf_event_output_forward
      5.89 ±  4%       -5.9        0.00        perf-profile.calltrace.cycles-pp.__btrfs_tree_lock.btrfs_lock_root_node.btrfs_search_slot.btrfs_insert_empty_items.btrfs_insert_inode_ref
      5.89 ±  9%       -5.9        0.00        perf-profile.calltrace.cycles-pp.__perf_event_overflow.perf_swevent_overflow.perf_tp_event.perf_trace_sched_switch.__schedule
      5.79 ±  9%       -5.8        0.00        perf-profile.calltrace.cycles-pp.perf_event_output_forward.__perf_event_overflow.perf_swevent_overflow.perf_tp_event.perf_trace_sched_switch
      5.64 ± 12%       -5.6        0.00        perf-profile.calltrace.cycles-pp.update_curr.dequeue_entity.dequeue_task_fair.__schedule.schedule
      5.47 ±  3%       -5.5        0.00        perf-profile.calltrace.cycles-pp.btrfs_lookup_dir_item.__btrfs_unlink_inode.btrfs_rename.vfs_rename.do_renameat2
      5.29 ±  3%       -5.3        0.00        perf-profile.calltrace.cycles-pp.btrfs_search_slot.btrfs_lookup_dir_item.__btrfs_unlink_inode.btrfs_rename.vfs_rename
      5.15 ±  2%       -5.1        0.00        perf-profile.calltrace.cycles-pp.intel_idle.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry
     58.28            -58.3        0.00        perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
     58.06            -58.1        0.00        perf-profile.children.cycles-pp.do_syscall_64
     52.98            -53.0        0.00        perf-profile.children.cycles-pp.__x64_sys_rename
     52.68            -52.7        0.00        perf-profile.children.cycles-pp.do_renameat2
     47.12            -47.1        0.00        perf-profile.children.cycles-pp.vfs_rename
     46.06            -46.1        0.00        perf-profile.children.cycles-pp.btrfs_rename
     34.64 ±  2%      -34.6        0.00        perf-profile.children.cycles-pp.secondary_startup_64_no_verify
     34.64 ±  2%      -34.6        0.00        perf-profile.children.cycles-pp.cpu_startup_entry
     34.62 ±  2%      -34.6        0.00        perf-profile.children.cycles-pp.do_idle
     34.08 ±  2%      -34.1        0.00        perf-profile.children.cycles-pp.start_secondary
     33.57            -33.6        0.00        perf-profile.children.cycles-pp.btrfs_search_slot
     24.96 ±  2%      -25.0        0.00        perf-profile.children.cycles-pp.__schedule
     20.88 ±  4%      -20.9        0.00        perf-profile.children.cycles-pp.perf_tp_event
     20.20 ±  2%      -20.2        0.00        perf-profile.children.cycles-pp.btrfs_insert_empty_items
     20.19 ±  4%      -20.2        0.00        perf-profile.children.cycles-pp.perf_swevent_overflow
     20.08 ±  4%      -20.1        0.00        perf-profile.children.cycles-pp.__perf_event_overflow
     19.87 ±  4%      -19.9        0.00        perf-profile.children.cycles-pp.perf_event_output_forward
     19.53 ±  3%      -19.5        0.00        perf-profile.children.cycles-pp.cpuidle_enter
     19.42 ±  3%      -19.4        0.00        perf-profile.children.cycles-pp.cpuidle_enter_state
     18.80 ±  2%      -18.8        0.00        perf-profile.children.cycles-pp.schedule
     17.27 ±  3%      -17.3        0.00        perf-profile.children.cycles-pp.perf_prepare_sample
     15.84 ±  4%      -15.8        0.00        perf-profile.children.cycles-pp.perf_callchain
     15.74 ±  4%      -15.7        0.00        perf-profile.children.cycles-pp.get_perf_callchain
     14.30 ±  3%      -14.3        0.00        perf-profile.children.cycles-pp.__btrfs_unlink_inode
     14.15 ±  3%      -14.2        0.00        perf-profile.children.cycles-pp.asm_sysvec_apic_timer_interrupt
     13.40 ±  3%      -13.4        0.00        perf-profile.children.cycles-pp.sysvec_apic_timer_interrupt
     13.40 ±  3%      -13.4        0.00        perf-profile.children.cycles-pp.__btrfs_tree_lock
     13.35 ±  2%      -13.4        0.00        perf-profile.children.cycles-pp.btrfs_insert_inode_ref
     13.07 ±  3%      -13.1        0.00        perf-profile.children.cycles-pp.rwsem_down_write_slowpath
     12.98 ±  3%      -13.0        0.00        perf-profile.children.cycles-pp.btrfs_lock_root_node
     12.48 ±  4%      -12.5        0.00        perf-profile.children.cycles-pp.perf_callchain_kernel
     11.25            -11.2        0.00        perf-profile.children.cycles-pp.__btrfs_tree_read_lock
     10.91            -10.9        0.00        perf-profile.children.cycles-pp.rwsem_down_read_slowpath
     10.49            -10.5        0.00        perf-profile.children.cycles-pp.btrfs_read_lock_root_node
      9.68             -9.7        0.00        perf-profile.children.cycles-pp.btrfs_add_link
      9.38 ±  2%       -9.4        0.00        perf-profile.children.cycles-pp.btrfs_insert_dir_item
      8.94 ±  4%       -8.9        0.00        perf-profile.children.cycles-pp.perf_trace_sched_switch
      8.91 ±  5%       -8.9        0.00        perf-profile.children.cycles-pp.unwind_next_frame
      8.79 ±  5%       -8.8        0.00        perf-profile.children.cycles-pp.__sysvec_apic_timer_interrupt
      8.70 ±  3%       -8.7        0.00        perf-profile.children.cycles-pp.btrfs_lookup_dir_item
      8.51 ±  2%       -8.5        0.00        perf-profile.children.cycles-pp.insert_with_overflow
      8.48 ±  3%       -8.5        0.00        perf-profile.children.cycles-pp.dequeue_task_fair
      8.46 ±  3%       -8.5        0.00        perf-profile.children.cycles-pp.rwsem_wake
      8.41 ±  5%       -8.4        0.00        perf-profile.children.cycles-pp.hrtimer_interrupt
      8.22 ±  3%       -8.2        0.00        perf-profile.children.cycles-pp.dequeue_entity
      8.03 ±  4%       -8.0        0.00        perf-profile.children.cycles-pp.update_curr
      7.80 ±  3%       -7.8        0.00        perf-profile.children.cycles-pp.wake_up_q
      7.79 ±  3%       -7.8        0.00        perf-profile.children.cycles-pp.try_to_wake_up
      7.21 ±  3%       -7.2        0.00        perf-profile.children.cycles-pp.perf_trace_sched_stat_runtime
      6.38 ±  3%       -6.4        0.00        perf-profile.children.cycles-pp.schedule_idle
      5.61 ±  3%       -5.6        0.00        perf-profile.children.cycles-pp.__hrtimer_run_queues
      5.61 ±  5%       -5.6        0.00        perf-profile.children.cycles-pp.perf_trace_sched_wakeup_template
      5.20 ±  2%       -5.2        0.00        perf-profile.children.cycles-pp.intel_idle
      5.18 ±  2%       -5.2        0.00        perf-profile.self.cycles-pp.intel_idle
   3115420            +27.3%    3966161        interrupts.CAL:Function_call_interrupts
    406791 ±  4%      -29.5%     286945 ±  3%  interrupts.CPU0.CAL:Function_call_interrupts
    510.00 ± 31%     -100.0%       0.00        interrupts.CPU0.NMI:Non-maskable_interrupts
    510.00 ± 31%     -100.0%       0.00        interrupts.CPU0.PMI:Performance_monitoring_interrupts
     51541 ± 11%     +148.4%     128006 ± 12%  interrupts.CPU0.RES:Rescheduling_interrupts
    409532 ±  5%      -24.6%     308740 ±  2%  interrupts.CPU1.CAL:Function_call_interrupts
    773.50 ± 36%     -100.0%       0.00        interrupts.CPU1.NMI:Non-maskable_interrupts
    773.50 ± 36%     -100.0%       0.00        interrupts.CPU1.PMI:Performance_monitoring_interrupts
     56239 ± 11%     +177.5%     156052 ±  9%  interrupts.CPU1.RES:Rescheduling_interrupts
     44621 ±  3%      +34.2%      59866 ±  7%  interrupts.CPU10.CAL:Function_call_interrupts
    451.83 ± 28%     -100.0%       0.00        interrupts.CPU10.NMI:Non-maskable_interrupts
    451.83 ± 28%     -100.0%       0.00        interrupts.CPU10.PMI:Performance_monitoring_interrupts
    324.33 ± 22%     +310.1%       1330 ± 14%  interrupts.CPU10.RES:Rescheduling_interrupts
     44671 ±  3%      +74.0%      77726 ±  5%  interrupts.CPU11.CAL:Function_call_interrupts
    463.17 ± 31%     -100.0%       0.00        interrupts.CPU11.NMI:Non-maskable_interrupts
    463.17 ± 31%     -100.0%       0.00        interrupts.CPU11.PMI:Performance_monitoring_interrupts
    342.17 ± 14%     +500.7%       2055 ± 10%  interrupts.CPU11.RES:Rescheduling_interrupts
     45500 ±  4%      +19.2%      54223 ±  5%  interrupts.CPU12.CAL:Function_call_interrupts
    587.67 ± 36%     -100.0%       0.00        interrupts.CPU12.NMI:Non-maskable_interrupts
    587.67 ± 36%     -100.0%       0.00        interrupts.CPU12.PMI:Performance_monitoring_interrupts
    369.67 ± 19%     +178.5%       1029 ± 12%  interrupts.CPU12.RES:Rescheduling_interrupts
     45446 ±  3%      +80.5%      82048 ±  4%  interrupts.CPU13.CAL:Function_call_interrupts
    443.83 ± 31%     -100.0%       0.00        interrupts.CPU13.NMI:Non-maskable_interrupts
    443.83 ± 31%     -100.0%       0.00        interrupts.CPU13.PMI:Performance_monitoring_interrupts
    378.17 ± 13%     +437.6%       2033 ±  7%  interrupts.CPU13.RES:Rescheduling_interrupts
     44350 ±  2%      +36.1%      60355 ± 17%  interrupts.CPU14.CAL:Function_call_interrupts
    509.00 ± 33%     -100.0%       0.00        interrupts.CPU14.NMI:Non-maskable_interrupts
    509.00 ± 33%     -100.0%       0.00        interrupts.CPU14.PMI:Performance_monitoring_interrupts
    380.83 ± 17%     +215.5%       1201 ± 29%  interrupts.CPU14.RES:Rescheduling_interrupts
     44422 ±  2%      +73.5%      77094 ±  3%  interrupts.CPU15.CAL:Function_call_interrupts
    485.00 ± 34%     -100.0%       0.00        interrupts.CPU15.NMI:Non-maskable_interrupts
    485.00 ± 34%     -100.0%       0.00        interrupts.CPU15.PMI:Performance_monitoring_interrupts
    356.33 ± 16%     +450.0%       1959 ±  6%  interrupts.CPU15.RES:Rescheduling_interrupts
     44684 ±  5%      +29.9%      58062 ± 10%  interrupts.CPU16.CAL:Function_call_interrupts
    773.67 ± 90%     -100.0%       0.00        interrupts.CPU16.NMI:Non-maskable_interrupts
    773.67 ± 90%     -100.0%       0.00        interrupts.CPU16.PMI:Performance_monitoring_interrupts
    396.17 ± 26%     +182.2%       1118 ± 17%  interrupts.CPU16.RES:Rescheduling_interrupts
     45054 ±  3%     +101.7%      90876 ± 13%  interrupts.CPU17.CAL:Function_call_interrupts
    713.67 ± 33%     -100.0%       0.00        interrupts.CPU17.NMI:Non-maskable_interrupts
    713.67 ± 33%     -100.0%       0.00        interrupts.CPU17.PMI:Performance_monitoring_interrupts
    359.33 ± 15%     +566.8%       2396 ± 12%  interrupts.CPU17.RES:Rescheduling_interrupts
    677.00 ± 17%      -99.8%       1.17 ±223%  interrupts.CPU18.NMI:Non-maskable_interrupts
    677.00 ± 17%      -99.8%       1.17 ±223%  interrupts.CPU18.PMI:Performance_monitoring_interrupts
    181.33 ± 16%     +141.2%     437.33 ± 41%  interrupts.CPU18.RES:Rescheduling_interrupts
     33914 ±  2%      +88.3%      63866 ± 22%  interrupts.CPU19.CAL:Function_call_interrupts
    624.67 ± 31%     -100.0%       0.00        interrupts.CPU19.NMI:Non-maskable_interrupts
    624.67 ± 31%     -100.0%       0.00        interrupts.CPU19.PMI:Performance_monitoring_interrupts
    176.50 ± 11%     +533.1%       1117 ± 22%  interrupts.CPU19.RES:Rescheduling_interrupts
    190993 ±  2%      +36.0%     259836 ±  4%  interrupts.CPU2.CAL:Function_call_interrupts
    429.17 ±  9%      -99.7%       1.17 ±223%  interrupts.CPU2.NMI:Non-maskable_interrupts
    429.17 ±  9%      -99.7%       1.17 ±223%  interrupts.CPU2.PMI:Performance_monitoring_interrupts
     19836 ±  5%     +163.3%      52229 ±  8%  interrupts.CPU2.RES:Rescheduling_interrupts
    447.67 ± 32%     -100.0%       0.00        interrupts.CPU20.NMI:Non-maskable_interrupts
    447.67 ± 32%     -100.0%       0.00        interrupts.CPU20.PMI:Performance_monitoring_interrupts
    167.83 ± 17%     +116.1%     362.67 ± 24%  interrupts.CPU20.RES:Rescheduling_interrupts
     35342 ±  7%      +50.0%      52996 ± 10%  interrupts.CPU21.CAL:Function_call_interrupts
    366.50 ±  4%     -100.0%       0.00        interrupts.CPU21.NMI:Non-maskable_interrupts
    366.50 ±  4%     -100.0%       0.00        interrupts.CPU21.PMI:Performance_monitoring_interrupts
    190.33 ± 27%     +375.8%     905.67 ± 22%  interrupts.CPU21.RES:Rescheduling_interrupts
    638.83 ± 29%     -100.0%       0.00        interrupts.CPU22.NMI:Non-maskable_interrupts
    638.83 ± 29%     -100.0%       0.00        interrupts.CPU22.PMI:Performance_monitoring_interrupts
     35476 ±  5%      +47.6%      52375 ± 10%  interrupts.CPU23.CAL:Function_call_interrupts
    455.17 ± 36%     -100.0%       0.00        interrupts.CPU23.NMI:Non-maskable_interrupts
    455.17 ± 36%     -100.0%       0.00        interrupts.CPU23.PMI:Performance_monitoring_interrupts
    214.83 ± 25%     +302.4%     864.50 ± 21%  interrupts.CPU23.RES:Rescheduling_interrupts
     33941            +15.6%      39249 ±  7%  interrupts.CPU24.CAL:Function_call_interrupts
    471.00 ± 43%     -100.0%       0.00        interrupts.CPU24.NMI:Non-maskable_interrupts
    471.00 ± 43%     -100.0%       0.00        interrupts.CPU24.PMI:Performance_monitoring_interrupts
    151.83 ± 11%     +190.8%     441.50 ± 24%  interrupts.CPU24.RES:Rescheduling_interrupts
     34730 ±  3%      +60.8%      55862 ± 12%  interrupts.CPU25.CAL:Function_call_interrupts
    460.00 ± 35%     -100.0%       0.00        interrupts.CPU25.NMI:Non-maskable_interrupts
    460.00 ± 35%     -100.0%       0.00        interrupts.CPU25.PMI:Performance_monitoring_interrupts
    185.50 ± 26%     +375.0%     881.17 ± 16%  interrupts.CPU25.RES:Rescheduling_interrupts
    540.17 ± 46%     -100.0%       0.00        interrupts.CPU26.NMI:Non-maskable_interrupts
    540.17 ± 46%     -100.0%       0.00        interrupts.CPU26.PMI:Performance_monitoring_interrupts
    127.50 ± 31%      +63.9%     209.00 ± 20%  interrupts.CPU26.RES:Rescheduling_interrupts
     24340            +90.8%      46440 ± 27%  interrupts.CPU27.CAL:Function_call_interrupts
    580.33 ± 32%     -100.0%       0.00        interrupts.CPU27.NMI:Non-maskable_interrupts
    580.33 ± 32%     -100.0%       0.00        interrupts.CPU27.PMI:Performance_monitoring_interrupts
    119.17 ± 16%     +411.5%     609.50 ± 32%  interrupts.CPU27.RES:Rescheduling_interrupts
     24293 ±  3%      +12.0%      27211 ±  5%  interrupts.CPU28.CAL:Function_call_interrupts
    571.00 ± 29%     -100.0%       0.00        interrupts.CPU28.NMI:Non-maskable_interrupts
    571.00 ± 29%     -100.0%       0.00        interrupts.CPU28.PMI:Performance_monitoring_interrupts
    109.67 ± 33%      +87.2%     205.33 ± 19%  interrupts.CPU28.RES:Rescheduling_interrupts
     24118            +82.7%      44054 ± 11%  interrupts.CPU29.CAL:Function_call_interrupts
    453.33 ± 26%     -100.0%       0.00        interrupts.CPU29.NMI:Non-maskable_interrupts
    453.33 ± 26%     -100.0%       0.00        interrupts.CPU29.PMI:Performance_monitoring_interrupts
    130.67 ± 21%     +369.0%     612.83 ± 21%  interrupts.CPU29.RES:Rescheduling_interrupts
    181587            +48.8%     270117        interrupts.CPU3.CAL:Function_call_interrupts
    574.17 ± 36%     -100.0%       0.00        interrupts.CPU3.NMI:Non-maskable_interrupts
    574.17 ± 36%     -100.0%       0.00        interrupts.CPU3.PMI:Performance_monitoring_interrupts
     16735 ±  4%     +227.1%      54738 ±  8%  interrupts.CPU3.RES:Rescheduling_interrupts
    660.33 ± 26%     -100.0%       0.00        interrupts.CPU30.NMI:Non-maskable_interrupts
    660.33 ± 26%     -100.0%       0.00        interrupts.CPU30.PMI:Performance_monitoring_interrupts
    111.50 ± 13%      +67.4%     186.67 ± 34%  interrupts.CPU30.RES:Rescheduling_interrupts
     25057 ±  4%      +54.2%      38644 ±  8%  interrupts.CPU31.CAL:Function_call_interrupts
    969.17 ± 63%     -100.0%       0.00        interrupts.CPU31.NMI:Non-maskable_interrupts
    969.17 ± 63%     -100.0%       0.00        interrupts.CPU31.PMI:Performance_monitoring_interrupts
    143.50 ± 32%     +252.1%     505.33 ± 20%  interrupts.CPU31.RES:Rescheduling_interrupts
     24213 ±  2%      +15.3%      27925 ±  8%  interrupts.CPU32.CAL:Function_call_interrupts
    505.33 ± 35%     -100.0%       0.00        interrupts.CPU32.NMI:Non-maskable_interrupts
    505.33 ± 35%     -100.0%       0.00        interrupts.CPU32.PMI:Performance_monitoring_interrupts
     25341 ±  5%      +70.5%      43205 ± 47%  interrupts.CPU33.CAL:Function_call_interrupts
    499.50 ± 35%     -100.0%       0.00        interrupts.CPU33.NMI:Non-maskable_interrupts
    499.50 ± 35%     -100.0%       0.00        interrupts.CPU33.PMI:Performance_monitoring_interrupts
    140.17 ± 27%     +287.2%     542.67 ± 72%  interrupts.CPU33.RES:Rescheduling_interrupts
    531.67 ± 34%     -100.0%       0.00        interrupts.CPU34.NMI:Non-maskable_interrupts
    531.67 ± 34%     -100.0%       0.00        interrupts.CPU34.PMI:Performance_monitoring_interrupts
    107.50 ±  6%     +151.5%     270.33 ± 57%  interrupts.CPU34.RES:Rescheduling_interrupts
     25120 ±  5%      +73.5%      43580 ± 29%  interrupts.CPU35.CAL:Function_call_interrupts
    617.00 ± 55%     -100.0%       0.00        interrupts.CPU35.NMI:Non-maskable_interrupts
    617.00 ± 55%     -100.0%       0.00        interrupts.CPU35.PMI:Performance_monitoring_interrupts
    160.17 ± 38%     +254.1%     567.17 ± 36%  interrupts.CPU35.RES:Rescheduling_interrupts
     17282            +21.9%      21064 ± 13%  interrupts.CPU36.CAL:Function_call_interrupts
    375.00 ±  3%     -100.0%       0.00        interrupts.CPU36.NMI:Non-maskable_interrupts
    375.00 ±  3%     -100.0%       0.00        interrupts.CPU36.PMI:Performance_monitoring_interrupts
     63.33 ±  4%      +94.2%     123.00 ± 65%  interrupts.CPU36.RES:Rescheduling_interrupts
     18986 ± 14%      +67.7%      31838 ± 28%  interrupts.CPU37.CAL:Function_call_interrupts
    567.33 ± 44%     -100.0%       0.00        interrupts.CPU37.NMI:Non-maskable_interrupts
    567.33 ± 44%     -100.0%       0.00        interrupts.CPU37.PMI:Performance_monitoring_interrupts
    119.33 ± 60%     +200.1%     358.17 ± 47%  interrupts.CPU37.RES:Rescheduling_interrupts
     17215            +34.5%      23163 ± 31%  interrupts.CPU38.CAL:Function_call_interrupts
    618.33 ± 39%     -100.0%       0.00        interrupts.CPU38.NMI:Non-maskable_interrupts
    618.33 ± 39%     -100.0%       0.00        interrupts.CPU38.PMI:Performance_monitoring_interrupts
     78.67 ± 24%     +135.2%     185.00 ± 93%  interrupts.CPU38.RES:Rescheduling_interrupts
     17638 ±  4%      +41.8%      25004 ± 19%  interrupts.CPU39.CAL:Function_call_interrupts
    641.83 ± 44%     -100.0%       0.00        interrupts.CPU39.NMI:Non-maskable_interrupts
    641.83 ± 44%     -100.0%       0.00        interrupts.CPU39.PMI:Performance_monitoring_interrupts
     66699 ±  3%      +69.7%     113177 ±  8%  interrupts.CPU4.CAL:Function_call_interrupts
    548.17 ± 39%     -100.0%       0.00        interrupts.CPU4.NMI:Non-maskable_interrupts
    548.17 ± 39%     -100.0%       0.00        interrupts.CPU4.PMI:Performance_monitoring_interrupts
      1304 ± 12%     +401.7%       6542 ± 13%  interrupts.CPU4.RES:Rescheduling_interrupts
     17366            +48.4%      25763 ± 30%  interrupts.CPU40.CAL:Function_call_interrupts
    508.83 ± 33%     -100.0%       0.00        interrupts.CPU40.NMI:Non-maskable_interrupts
    508.83 ± 33%     -100.0%       0.00        interrupts.CPU40.PMI:Performance_monitoring_interrupts
     17318 ±  3%      +68.9%      29247 ± 25%  interrupts.CPU41.CAL:Function_call_interrupts
    394.50 ± 18%     -100.0%       0.00        interrupts.CPU41.NMI:Non-maskable_interrupts
    394.50 ± 18%     -100.0%       0.00        interrupts.CPU41.PMI:Performance_monitoring_interrupts
     70.83 ± 17%     +349.9%     318.67 ± 51%  interrupts.CPU41.RES:Rescheduling_interrupts
     17444 ±  5%      +45.5%      25388 ± 43%  interrupts.CPU42.CAL:Function_call_interrupts
    620.33 ± 29%     -100.0%       0.00        interrupts.CPU42.NMI:Non-maskable_interrupts
    620.33 ± 29%     -100.0%       0.00        interrupts.CPU42.PMI:Performance_monitoring_interrupts
     17568 ±  2%      +94.9%      34234 ± 40%  interrupts.CPU43.CAL:Function_call_interrupts
    428.33 ± 31%     -100.0%       0.00        interrupts.CPU43.NMI:Non-maskable_interrupts
    428.33 ± 31%     -100.0%       0.00        interrupts.CPU43.PMI:Performance_monitoring_interrupts
     89.33 ± 20%     +328.7%     383.00 ± 73%  interrupts.CPU43.RES:Rescheduling_interrupts
     17183            +26.4%      21726 ± 18%  interrupts.CPU44.CAL:Function_call_interrupts
    384.67 ±  7%     -100.0%       0.00        interrupts.CPU44.NMI:Non-maskable_interrupts
    384.67 ±  7%     -100.0%       0.00        interrupts.CPU44.PMI:Performance_monitoring_interrupts
     12821 ±  6%      +74.5%      22375 ± 31%  interrupts.CPU45.CAL:Function_call_interrupts
    454.17 ± 31%     -100.0%       0.00        interrupts.CPU45.NMI:Non-maskable_interrupts
    454.17 ± 31%     -100.0%       0.00        interrupts.CPU45.PMI:Performance_monitoring_interrupts
     12755 ±  5%      +22.2%      15581 ± 16%  interrupts.CPU46.CAL:Function_call_interrupts
    662.17 ± 31%     -100.0%       0.00        interrupts.CPU46.NMI:Non-maskable_interrupts
    662.17 ± 31%     -100.0%       0.00        interrupts.CPU46.PMI:Performance_monitoring_interrupts
     12391            +57.9%      19569 ± 31%  interrupts.CPU47.CAL:Function_call_interrupts
    560.00 ± 29%     -100.0%       0.00        interrupts.CPU47.NMI:Non-maskable_interrupts
    560.00 ± 29%     -100.0%       0.00        interrupts.CPU47.PMI:Performance_monitoring_interrupts
     12220            +20.9%      14775 ±  6%  interrupts.CPU48.CAL:Function_call_interrupts
    445.33 ± 42%     -100.0%       0.00        interrupts.CPU48.NMI:Non-maskable_interrupts
    445.33 ± 42%     -100.0%       0.00        interrupts.CPU48.PMI:Performance_monitoring_interrupts
     12259            +54.4%      18931 ± 28%  interrupts.CPU49.CAL:Function_call_interrupts
    432.00 ± 30%     -100.0%       0.00        interrupts.CPU49.NMI:Non-maskable_interrupts
    432.00 ± 30%     -100.0%       0.00        interrupts.CPU49.PMI:Performance_monitoring_interrupts
     66405 ±  2%     +100.0%     132795 ±  5%  interrupts.CPU5.CAL:Function_call_interrupts
    611.17 ± 31%     -100.0%       0.00        interrupts.CPU5.NMI:Non-maskable_interrupts
    611.17 ± 31%     -100.0%       0.00        interrupts.CPU5.PMI:Performance_monitoring_interrupts
      1328 ± 13%     +505.2%       8037 ± 15%  interrupts.CPU5.RES:Rescheduling_interrupts
     12320 ±  2%      +32.0%      16262 ± 19%  interrupts.CPU50.CAL:Function_call_interrupts
    615.83 ± 30%     -100.0%       0.00        interrupts.CPU50.NMI:Non-maskable_interrupts
    615.83 ± 30%     -100.0%       0.00        interrupts.CPU50.PMI:Performance_monitoring_interrupts
     12395 ±  3%     +144.9%      30352 ± 54%  interrupts.CPU51.CAL:Function_call_interrupts
    547.67 ± 33%     -100.0%       0.00        interrupts.CPU51.NMI:Non-maskable_interrupts
    547.67 ± 33%     -100.0%       0.00        interrupts.CPU51.PMI:Performance_monitoring_interrupts
     12246            +29.9%      15904 ± 24%  interrupts.CPU52.CAL:Function_call_interrupts
    473.17 ± 35%     -100.0%       0.00        interrupts.CPU52.NMI:Non-maskable_interrupts
    473.17 ± 35%     -100.0%       0.00        interrupts.CPU52.PMI:Performance_monitoring_interrupts
     12513 ±  2%      +19.0%      14891 ±  6%  interrupts.CPU53.CAL:Function_call_interrupts
    621.00 ± 38%     -100.0%       0.00        interrupts.CPU53.NMI:Non-maskable_interrupts
    621.00 ± 38%     -100.0%       0.00        interrupts.CPU53.PMI:Performance_monitoring_interrupts
      8152 ±  3%      +15.0%       9376 ±  4%  interrupts.CPU54.CAL:Function_call_interrupts
      8103 ±  2%      +14.4%       9271 ±  3%  interrupts.CPU55.CAL:Function_call_interrupts
      8315 ±  3%      +14.4%       9514 ±  5%  interrupts.CPU56.CAL:Function_call_interrupts
      8078            +16.3%       9398 ±  4%  interrupts.CPU58.CAL:Function_call_interrupts
      7953           +113.1%      16952 ± 85%  interrupts.CPU59.CAL:Function_call_interrupts
     64779 ±  3%      +74.6%     113096 ±  4%  interrupts.CPU6.CAL:Function_call_interrupts
    568.83 ± 32%     -100.0%       0.00        interrupts.CPU6.NMI:Non-maskable_interrupts
    568.83 ± 32%     -100.0%       0.00        interrupts.CPU6.PMI:Performance_monitoring_interrupts
      1214 ± 21%     +480.6%       7049 ±  6%  interrupts.CPU6.RES:Rescheduling_interrupts
      7913 ±  2%      +49.1%      11797 ± 20%  interrupts.CPU60.CAL:Function_call_interrupts
      8016            +50.0%      12022 ± 48%  interrupts.CPU61.CAL:Function_call_interrupts
      7989 ±  2%      +17.0%       9347 ±  3%  interrupts.CPU62.CAL:Function_call_interrupts
      4330            +17.9%       5107 ±  4%  interrupts.CPU63.CAL:Function_call_interrupts
      4409 ±  2%      +33.8%       5902 ± 22%  interrupts.CPU64.CAL:Function_call_interrupts
      4357 ±  3%      +65.6%       7217 ± 45%  interrupts.CPU65.CAL:Function_call_interrupts
      4428 ±  2%      +14.6%       5074 ±  6%  interrupts.CPU66.CAL:Function_call_interrupts
      4384 ±  2%      +15.7%       5071 ±  4%  interrupts.CPU67.CAL:Function_call_interrupts
      4388 ±  3%      +17.7%       5164 ±  4%  interrupts.CPU68.CAL:Function_call_interrupts
      4388 ±  3%      +41.1%       6192 ± 39%  interrupts.CPU69.CAL:Function_call_interrupts
     64211 ±  5%     +126.8%     145632 ± 11%  interrupts.CPU7.CAL:Function_call_interrupts
    518.00 ± 31%     -100.0%       0.00        interrupts.CPU7.NMI:Non-maskable_interrupts
    518.00 ± 31%     -100.0%       0.00        interrupts.CPU7.PMI:Performance_monitoring_interrupts
      1149 ± 26%     +577.5%       7790 ± 10%  interrupts.CPU7.RES:Rescheduling_interrupts
      4399 ±  2%      +16.2%       5111 ±  4%  interrupts.CPU70.CAL:Function_call_interrupts
      4323 ±  3%      +20.4%       5207 ±  6%  interrupts.CPU71.CAL:Function_call_interrupts
     62912 ±  2%      +78.3%     112190 ± 11%  interrupts.CPU8.CAL:Function_call_interrupts
    441.00 ± 31%     -100.0%       0.00        interrupts.CPU8.NMI:Non-maskable_interrupts
    441.00 ± 31%     -100.0%       0.00        interrupts.CPU8.PMI:Performance_monitoring_interrupts
    346.17 ± 18%     +274.9%       1297 ± 27%  interrupts.CPU8.RES:Rescheduling_interrupts
     48078 ±  7%      +69.2%      81349 ±  7%  interrupts.CPU9.CAL:Function_call_interrupts
    421.50 ± 29%     -100.0%       0.00        interrupts.CPU9.NMI:Non-maskable_interrupts
    421.50 ± 29%     -100.0%       0.00        interrupts.CPU9.PMI:Performance_monitoring_interrupts
    380.33 ± 14%     +440.1%       2054 ± 15%  interrupts.CPU9.RES:Rescheduling_interrupts
    171.33           -100.0%       0.00        interrupts.IWI:IRQ_work_interrupts
     29247 ±  5%     -100.0%       7.00        interrupts.NMI:Non-maskable_interrupts
     29247 ±  5%     -100.0%       7.00        interrupts.PMI:Performance_monitoring_interrupts
    157570 ±  3%     +186.0%     450705        interrupts.RES:Rescheduling_interrupts
      2718 ±  8%      +87.1%       5087 ± 15%  interrupts.TLB:TLB_shootdowns
      0.02 ±  4%     -100.0%       0.00        perf-sched.sch_delay.avg.ms.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown]
      0.05 ± 37%     -100.0%       0.00        perf-sched.sch_delay.avg.ms.devkmsg_read.vfs_read.ksys_read.do_syscall_64
      0.02 ± 11%     -100.0%       0.00        perf-sched.sch_delay.avg.ms.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep.do_syscall_64
      0.04 ± 28%     -100.0%       0.00        perf-sched.sch_delay.avg.ms.do_syslog.part.0.kmsg_read.vfs_read
      0.01 ± 14%     -100.0%       0.00        perf-sched.sch_delay.avg.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64
      0.03           -100.0%       0.00        perf-sched.sch_delay.avg.ms.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64
      0.01 ±  8%     -100.0%       0.00        perf-sched.sch_delay.avg.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_exc_page_fault.[unknown]
      0.00 ± 40%     -100.0%       0.00        perf-sched.sch_delay.avg.ms.exit_to_user_mode_prepare.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe.[unknown]
      0.04 ± 60%     -100.0%       0.00        perf-sched.sch_delay.avg.ms.futex_wait_queue_me.futex_wait.do_futex.__x64_sys_futex
      0.01 ±  4%     -100.0%       0.00        perf-sched.sch_delay.avg.ms.pipe_read.new_sync_read.vfs_read.ksys_read
      0.01 ± 29%     -100.0%       0.00        perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.__alloc_pages_nodemask.alloc_pages_vma.wp_page_copy
      0.03 ± 85%     -100.0%       0.00        perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.__kmalloc.btrfs_alloc_delayed_item.btrfs_insert_delayed_dir_index
      0.01 ±  5%     -100.0%       0.00        perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.copy_strings.isra.0
      0.02 ± 20%     -100.0%       0.00        perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.count.isra.0
      0.00 ± 31%     -100.0%       0.00        perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.do_user_addr_fault.exc_page_fault.asm_exc_page_fault
      0.02 ± 65%     -100.0%       0.00        perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.down_read.__btrfs_tree_read_lock.btrfs_read_lock_root_node
      0.01 ± 39%     -100.0%       0.00        perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.down_read.walk_component.link_path_walk
      0.01 ± 39%     -100.0%       0.00        perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.down_write.unlink_anon_vmas.free_pgtables
      0.00 ± 48%     -100.0%       0.00        perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.down_write.unlink_file_vma.free_pgtables
      0.00 ± 45%     -100.0%       0.00        perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.down_write.vma_link.mmap_region
      0.01 ± 34%     -100.0%       0.00        perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.down_write_killable.setup_arg_pages.load_elf_binary
      0.01 ± 36%     -100.0%       0.00        perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.down_write_killable.vm_mmap_pgoff.elf_map
      0.01 ± 35%     -100.0%       0.00        perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.dput.step_into.walk_component
      0.01 ± 23%     -100.0%       0.00        perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.dput.terminate_walk.path_openat
      0.02 ± 26%     -100.0%       0.00        perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.dput.vfs_rename.do_renameat2
      0.01 ± 39%     -100.0%       0.00        perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.__alloc_file.alloc_empty_file
      0.01 ± 49%     -100.0%       0.00        perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.getname_flags.do_sys_openat2
      0.32 ±196%     -100.0%       0.00        perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.start_transaction.btrfs_rename
      0.01 ± 44%     -100.0%       0.00        perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc_trace.perf_event_mmap.mmap_region
      0.01 ± 19%     -100.0%       0.00        perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.stop_one_cpu.sched_exec.bprm_execve
      0.01 ± 49%     -100.0%       0.00        perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.unmap_page_range.unmap_vmas.exit_mmap
      0.01 ± 32%     -100.0%       0.00        perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.wp_page_copy.__handle_mm_fault.handle_mm_fault
      0.00 ± 17%     -100.0%       0.00        perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.zap_pte_range.unmap_page_range.unmap_vmas
      0.02 ±  3%     -100.0%       0.00        perf-sched.sch_delay.avg.ms.rwsem_down_read_slowpath.__btrfs_tree_read_lock.btrfs_read_lock_root_node.btrfs_search_slot
      0.02 ±  6%     -100.0%       0.00
perf-sched.sch_delay.avg.ms.rwsem_down_read_slowpath.__btrfs_tree_read_lock.btrfs_search_slot.btrfs_check_dir_item_collision 0.02 ± 3% -100.0% 0.00 perf-sched.sch_delay.avg.ms.rwsem_down_read_slowpath.__btrfs_tree_read_lock.btrfs_search_slot.btrfs_lookup_dir_item 0.02 -100.0% 0.00 perf-sched.sch_delay.avg.ms.rwsem_down_write_slowpath.__btrfs_tree_lock.btrfs_lock_root_node.btrfs_search_slot 0.02 ± 14% -100.0% 0.00 perf-sched.sch_delay.avg.ms.rwsem_down_write_slowpath.__btrfs_tree_lock.btrfs_search_slot.btrfs_del_inode_ref 0.02 ± 5% -100.0% 0.00 perf-sched.sch_delay.avg.ms.rwsem_down_write_slowpath.__btrfs_tree_lock.btrfs_search_slot.btrfs_insert_empty_items 0.01 ± 15% -100.0% 0.00 perf-sched.sch_delay.avg.ms.rwsem_down_write_slowpath.__btrfs_tree_lock.btrfs_search_slot.btrfs_lookup_dir_item 0.03 ± 11% -100.0% 0.00 perf-sched.sch_delay.avg.ms.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait 0.11 ± 30% -100.0% 0.00 perf-sched.sch_delay.avg.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_select 0.01 ± 9% -100.0% 0.00 perf-sched.sch_delay.avg.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_sys_poll 0.02 ± 3% -100.0% 0.00 perf-sched.sch_delay.avg.ms.schedule_timeout.kcompactd.kthread.ret_from_fork 0.02 ± 7% -100.0% 0.00 perf-sched.sch_delay.avg.ms.schedule_timeout.rcu_gp_kthread.kthread.ret_from_fork 0.07 ± 48% -100.0% 0.00 perf-sched.sch_delay.avg.ms.smpboot_thread_fn.kthread.ret_from_fork 0.01 ± 16% -100.0% 0.00 perf-sched.sch_delay.avg.ms.wait_for_partner.fifo_open.do_dentry_open.do_open.isra 0.25 ±165% -100.0% 0.00 perf-sched.sch_delay.avg.ms.worker_thread.kthread.ret_from_fork 0.04 ± 22% -100.0% 0.00 perf-sched.sch_delay.max.ms.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown] 0.15 ± 87% -100.0% 0.00 perf-sched.sch_delay.max.ms.devkmsg_read.vfs_read.ksys_read.do_syscall_64 0.05 ± 50% -100.0% 0.00 
perf-sched.sch_delay.max.ms.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep.do_syscall_64 0.12 ± 73% -100.0% 0.00 perf-sched.sch_delay.max.ms.do_syslog.part.0.kmsg_read.vfs_read 0.14 ±136% -100.0% 0.00 perf-sched.sch_delay.max.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64 0.20 ± 34% -100.0% 0.00 perf-sched.sch_delay.max.ms.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64 0.03 ± 69% -100.0% 0.00 perf-sched.sch_delay.max.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_exc_page_fault.[unknown] 0.04 ± 70% -100.0% 0.00 perf-sched.sch_delay.max.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_call_function_single.[unknown] 0.09 ± 61% -100.0% 0.00 perf-sched.sch_delay.max.ms.exit_to_user_mode_prepare.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe.[unknown] 0.11 ±141% -100.0% 0.00 perf-sched.sch_delay.max.ms.futex_wait_queue_me.futex_wait.do_futex.__x64_sys_futex 0.74 ± 46% -100.0% 0.00 perf-sched.sch_delay.max.ms.pipe_read.new_sync_read.vfs_read.ksys_read 0.02 ± 32% -100.0% 0.00 perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.__alloc_pages_nodemask.alloc_pages_vma.wp_page_copy 0.17 ±115% -100.0% 0.00 perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.__kmalloc.btrfs_alloc_delayed_item.btrfs_insert_delayed_dir_index 0.02 ± 29% -100.0% 0.00 perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.copy_strings.isra.0 0.02 ± 25% -100.0% 0.00 perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.count.isra.0 0.02 ± 18% -100.0% 0.00 perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.do_user_addr_fault.exc_page_fault.asm_exc_page_fault 0.03 ± 60% -100.0% 0.00 perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.down_read.__btrfs_tree_read_lock.btrfs_read_lock_root_node 0.02 ± 23% -100.0% 0.00 perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.down_read.walk_component.link_path_walk 0.02 ± 30% -100.0% 0.00 
perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.down_write.unlink_anon_vmas.free_pgtables 0.01 ± 5% -100.0% 0.00 perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.down_write.unlink_file_vma.free_pgtables 0.01 ± 4% -100.0% 0.00 perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.down_write.vma_link.mmap_region 0.02 ± 21% -100.0% 0.00 perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.down_write_killable.setup_arg_pages.load_elf_binary 0.02 ± 44% -100.0% 0.00 perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.down_write_killable.vm_mmap_pgoff.elf_map 0.03 ±108% -100.0% 0.00 perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.dput.step_into.walk_component 0.01 ± 12% -100.0% 0.00 perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.dput.terminate_walk.path_openat 0.02 ± 12% -100.0% 0.00 perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.dput.vfs_rename.do_renameat2 0.03 ± 57% -100.0% 0.00 perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.__alloc_file.alloc_empty_file 0.01 ± 3% -100.0% 0.00 perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.getname_flags.do_sys_openat2 2.02 ±208% -100.0% 0.00 perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.start_transaction.btrfs_rename 0.02 ± 28% -100.0% 0.00 perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc_trace.perf_event_mmap.mmap_region 0.06 ± 16% -100.0% 0.00 perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.stop_one_cpu.affine_move_task.__set_cpus_allowed_ptr 0.03 ± 89% -100.0% 0.00 perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.stop_one_cpu.sched_exec.bprm_execve 0.01 ± 11% -100.0% 0.00 perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.unmap_page_range.unmap_vmas.exit_mmap 0.02 ± 16% -100.0% 0.00 
perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.wp_page_copy.__handle_mm_fault.handle_mm_fault 0.02 ± 23% -100.0% 0.00 perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.zap_pte_range.unmap_page_range.unmap_vmas 98.03 ± 37% -100.0% 0.00 perf-sched.sch_delay.max.ms.rwsem_down_read_slowpath.__btrfs_tree_read_lock.btrfs_read_lock_root_node.btrfs_search_slot 14.08 ± 80% -100.0% 0.00 perf-sched.sch_delay.max.ms.rwsem_down_read_slowpath.__btrfs_tree_read_lock.btrfs_search_slot.btrfs_check_dir_item_collision 7.30 ± 24% -100.0% 0.00 perf-sched.sch_delay.max.ms.rwsem_down_read_slowpath.__btrfs_tree_read_lock.btrfs_search_slot.btrfs_lookup_dir_item 11.49 ± 12% -100.0% 0.00 perf-sched.sch_delay.max.ms.rwsem_down_write_slowpath.__btrfs_tree_lock.btrfs_lock_root_node.btrfs_search_slot 0.05 ±118% -100.0% 0.00 perf-sched.sch_delay.max.ms.rwsem_down_write_slowpath.__btrfs_tree_lock.btrfs_search_slot.btrfs_del_inode_ref 0.05 ± 47% -100.0% 0.00 perf-sched.sch_delay.max.ms.rwsem_down_write_slowpath.__btrfs_tree_lock.btrfs_search_slot.btrfs_insert_empty_items 0.04 ± 19% -100.0% 0.00 perf-sched.sch_delay.max.ms.rwsem_down_write_slowpath.__btrfs_tree_lock.btrfs_search_slot.btrfs_lookup_dir_item 0.09 ± 41% -100.0% 0.00 perf-sched.sch_delay.max.ms.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait 0.18 ± 21% -100.0% 0.00 perf-sched.sch_delay.max.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_select 0.04 ± 14% -100.0% 0.00 perf-sched.sch_delay.max.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_sys_poll 0.04 ± 10% -100.0% 0.00 perf-sched.sch_delay.max.ms.schedule_timeout.kcompactd.kthread.ret_from_fork 2.17 ±125% -100.0% 0.00 perf-sched.sch_delay.max.ms.schedule_timeout.rcu_gp_kthread.kthread.ret_from_fork 31.37 ± 63% -100.0% 0.00 perf-sched.sch_delay.max.ms.smpboot_thread_fn.kthread.ret_from_fork 0.03 ± 35% -100.0% 0.00 
perf-sched.sch_delay.max.ms.wait_for_partner.fifo_open.do_dentry_open.do_open.isra 136.35 ±190% -100.0% 0.00 perf-sched.sch_delay.max.ms.worker_thread.kthread.ret_from_fork 0.02 ± 4% -100.0% 0.00 perf-sched.total_sch_delay.average.ms 205.23 ±112% -100.0% 0.00 perf-sched.total_sch_delay.max.ms 4.86 ± 3% -100.0% 0.00 perf-sched.total_wait_and_delay.average.ms 376743 -100.0% 0.00 perf-sched.total_wait_and_delay.count.ms 7362 ± 15% -100.0% 0.00 perf-sched.total_wait_and_delay.max.ms 4.84 ± 3% -100.0% 0.00 perf-sched.total_wait_time.average.ms 7362 ± 15% -100.0% 0.00 perf-sched.total_wait_time.max.ms 898.56 -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown] 514.03 ± 45% -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.devkmsg_read.vfs_read.ksys_read.do_syscall_64 777.57 ± 7% -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep.do_syscall_64 513.65 ± 45% -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.do_syslog.part.0.kmsg_read.vfs_read 270.51 ± 2% -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64 362.47 ±103% -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.futex_wait_queue_me.futex_wait.do_futex.__x64_sys_futex 54.16 ± 2% -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.pipe_read.new_sync_read.vfs_read.ksys_read 14.66 ± 3% -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.preempt_schedule_common.__cond_resched.stop_one_cpu.affine_move_task.__set_cpus_allowed_ptr 1.24 -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.rwsem_down_read_slowpath.__btrfs_tree_read_lock.btrfs_read_lock_root_node.btrfs_search_slot 1.36 -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.rwsem_down_read_slowpath.__btrfs_tree_read_lock.btrfs_search_slot.btrfs_check_dir_item_collision 1.23 -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.rwsem_down_read_slowpath.__btrfs_tree_read_lock.btrfs_search_slot.btrfs_lookup_dir_item 1.60 -100.0% 0.00 
perf-sched.wait_and_delay.avg.ms.rwsem_down_write_slowpath.__btrfs_tree_lock.btrfs_lock_root_node.btrfs_search_slot 346.82 ± 16% -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait 384.10 ± 9% -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_sys_poll 478.57 -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.schedule_timeout.kcompactd.kthread.ret_from_fork 4.49 ± 3% -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.schedule_timeout.rcu_gp_kthread.kthread.ret_from_fork 625.10 ± 2% -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.smpboot_thread_fn.kthread.ret_from_fork 671.88 ± 2% -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.worker_thread.kthread.ret_from_fork 20.00 -100.0% 0.00 perf-sched.wait_and_delay.count.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown] 9.33 ± 10% -100.0% 0.00 perf-sched.wait_and_delay.count.devkmsg_read.vfs_read.ksys_read.do_syscall_64 11.67 ± 8% -100.0% 0.00 perf-sched.wait_and_delay.count.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep.do_syscall_64 9.33 ± 10% -100.0% 0.00 perf-sched.wait_and_delay.count.do_syslog.part.0.kmsg_read.vfs_read 245.17 -100.0% 0.00 perf-sched.wait_and_delay.count.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64 5.67 ± 24% -100.0% 0.00 perf-sched.wait_and_delay.count.futex_wait_queue_me.futex_wait.do_futex.__x64_sys_futex 3087 -100.0% 0.00 perf-sched.wait_and_delay.count.pipe_read.new_sync_read.vfs_read.ksys_read 647.00 -100.0% 0.00 perf-sched.wait_and_delay.count.preempt_schedule_common.__cond_resched.stop_one_cpu.affine_move_task.__set_cpus_allowed_ptr 169836 -100.0% 0.00 perf-sched.wait_and_delay.count.rwsem_down_read_slowpath.__btrfs_tree_read_lock.btrfs_read_lock_root_node.btrfs_search_slot 9518 ± 2% -100.0% 0.00 perf-sched.wait_and_delay.count.rwsem_down_read_slowpath.__btrfs_tree_read_lock.btrfs_search_slot.btrfs_check_dir_item_collision 9722 
-100.0% 0.00 perf-sched.wait_and_delay.count.rwsem_down_read_slowpath.__btrfs_tree_read_lock.btrfs_search_slot.btrfs_lookup_dir_item 176879 -100.0% 0.00 perf-sched.wait_and_delay.count.rwsem_down_write_slowpath.__btrfs_tree_lock.btrfs_lock_root_node.btrfs_search_slot 32.67 ± 31% -100.0% 0.00 perf-sched.wait_and_delay.count.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait 47.33 ± 9% -100.0% 0.00 perf-sched.wait_and_delay.count.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_sys_poll 39.67 -100.0% 0.00 perf-sched.wait_and_delay.count.schedule_timeout.kcompactd.kthread.ret_from_fork 2222 ± 3% -100.0% 0.00 perf-sched.wait_and_delay.count.schedule_timeout.rcu_gp_kthread.kthread.ret_from_fork 855.67 ± 2% -100.0% 0.00 perf-sched.wait_and_delay.count.smpboot_thread_fn.kthread.ret_from_fork 632.83 ± 11% -100.0% 0.00 perf-sched.wait_and_delay.count.worker_thread.kthread.ret_from_fork 998.97 -100.0% 0.00 perf-sched.wait_and_delay.max.ms.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown] 3197 ± 30% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.devkmsg_read.vfs_read.ksys_read.do_syscall_64 1000 -100.0% 0.00 perf-sched.wait_and_delay.max.ms.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep.do_syscall_64 3197 ± 30% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.do_syslog.part.0.kmsg_read.vfs_read 1510 ± 42% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64 1695 ±111% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.futex_wait_queue_me.futex_wait.do_futex.__x64_sys_futex 3200 ± 30% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.pipe_read.new_sync_read.vfs_read.ksys_read 1000 -100.0% 0.00 perf-sched.wait_and_delay.max.ms.preempt_schedule_common.__cond_resched.stop_one_cpu.affine_move_task.__set_cpus_allowed_ptr 203.37 ± 33% -100.0% 0.00 
perf-sched.wait_and_delay.max.ms.rwsem_down_read_slowpath.__btrfs_tree_read_lock.btrfs_read_lock_root_node.btrfs_search_slot 106.02 ± 35% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.rwsem_down_read_slowpath.__btrfs_tree_read_lock.btrfs_search_slot.btrfs_check_dir_item_collision 109.95 ± 30% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.rwsem_down_read_slowpath.__btrfs_tree_read_lock.btrfs_search_slot.btrfs_lookup_dir_item 119.44 ± 29% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.rwsem_down_write_slowpath.__btrfs_tree_lock.btrfs_lock_root_node.btrfs_search_slot 2866 ± 37% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait 1000 -100.0% 0.00 perf-sched.wait_and_delay.max.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_sys_poll 505.02 -100.0% 0.00 perf-sched.wait_and_delay.max.ms.schedule_timeout.kcompactd.kthread.ret_from_fork 6.31 ± 5% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.schedule_timeout.rcu_gp_kthread.kthread.ret_from_fork 2939 ± 39% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.smpboot_thread_fn.kthread.ret_from_fork 7362 ± 15% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.worker_thread.kthread.ret_from_fork 898.54 -100.0% 0.00 perf-sched.wait_time.avg.ms.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown] 513.98 ± 45% -100.0% 0.00 perf-sched.wait_time.avg.ms.devkmsg_read.vfs_read.ksys_read.do_syscall_64 777.54 ± 7% -100.0% 0.00 perf-sched.wait_time.avg.ms.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep.do_syscall_64 513.61 ± 45% -100.0% 0.00 perf-sched.wait_time.avg.ms.do_syslog.part.0.kmsg_read.vfs_read 270.50 ± 2% -100.0% 0.00 perf-sched.wait_time.avg.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64 4.60 -100.0% 0.00 perf-sched.wait_time.avg.ms.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64 0.07 ± 15% -100.0% 0.00 
perf-sched.wait_time.avg.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_exc_page_fault.[unknown] 0.30 ±156% -100.0% 0.00 perf-sched.wait_time.avg.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_apic_timer_interrupt.[unknown] 0.10 ± 3% -100.0% 0.00 perf-sched.wait_time.avg.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_call_function_single.[unknown] 0.11 ± 51% -100.0% 0.00 perf-sched.wait_time.avg.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_reschedule_ipi.[unknown] 2.43 ±212% -100.0% 0.00 perf-sched.wait_time.avg.ms.exit_to_user_mode_prepare.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe.[unknown] 362.43 ±103% -100.0% 0.00 perf-sched.wait_time.avg.ms.futex_wait_queue_me.futex_wait.do_futex.__x64_sys_futex 54.14 ± 2% -100.0% 0.00 perf-sched.wait_time.avg.ms.pipe_read.new_sync_read.vfs_read.ksys_read 15.99 ± 88% -100.0% 0.00 perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.__alloc_pages_nodemask.alloc_pages_vma.shmem_alloc_page 0.18 ±145% -100.0% 0.00 perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.__alloc_pages_nodemask.alloc_pages_vma.wp_page_copy 10.42 ±220% -100.0% 0.00 perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.__do_fault.do_fault.__handle_mm_fault 1.66 ± 16% -100.0% 0.00 perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.__kmalloc.btrfs_alloc_delayed_item.btrfs_insert_delayed_dir_index 11.00 ±142% -100.0% 0.00 perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.do_user_addr_fault.exc_page_fault.asm_exc_page_fault 1.57 ± 28% -100.0% 0.00 perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.down_read.__btrfs_tree_read_lock.btrfs_read_lock_root_node 0.04 ± 46% -100.0% 0.00 perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.down_read.walk_component.link_path_walk 0.78 ± 98% -100.0% 0.00 
perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.down_write.__btrfs_tree_lock.btrfs_lock_root_node 0.11 ± 26% -100.0% 0.00 perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.down_write.generic_file_write_iter.new_sync_write 0.04 ± 54% -100.0% 0.00 perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.down_write.unlink_anon_vmas.free_pgtables 0.06 ± 57% -100.0% 0.00 perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.down_write.unlink_file_vma.free_pgtables 0.07 ± 47% -100.0% 0.00 perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.down_write.vma_link.mmap_region 0.07 ± 48% -100.0% 0.00 perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.down_write_killable.vm_mmap_pgoff.ksys_mmap_pgoff 0.10 ± 52% -100.0% 0.00 perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.dput.__fput.task_work_run 0.05 ± 38% -100.0% 0.00 perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.dput.step_into.path_openat 0.07 ± 32% -100.0% 0.00 perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.dput.step_into.walk_component 0.07 ± 38% -100.0% 0.00 perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.dput.terminate_walk.path_openat 2.63 ± 67% -100.0% 0.00 perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.dput.vfs_rename.do_renameat2 13.22 ± 82% -100.0% 0.00 perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.generic_perform_write.__generic_file_write_iter.generic_file_write_iter 0.07 ± 33% -100.0% 0.00 perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.__alloc_file.alloc_empty_file 0.81 ± 63% -100.0% 0.00 perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.__btrfs_unlink_inode.btrfs_rename 0.64 ±199% -100.0% 0.00 perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.__d_alloc.d_alloc 2.01 ± 68% -100.0% 0.00 
perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.btrfs_del_inode_ref.__btrfs_unlink_inode 0.09 ± 41% -100.0% 0.00 perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.getname_flags.do_sys_openat2 0.07 ± 26% -100.0% 0.00 perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.security_file_alloc.__alloc_file 1.33 ± 41% -100.0% 0.00 perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.start_transaction.btrfs_rename 0.08 ± 61% -100.0% 0.00 perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.vm_area_alloc.mmap_region 0.06 ± 78% -100.0% 0.00 perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc_trace.perf_event_mmap.mmap_region 0.09 ± 40% -100.0% 0.00 perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.mmput.m_stop.seq_read_iter 0.86 ±174% -100.0% 0.00 perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.mnt_want_write.do_renameat2.__x64_sys_rename 0.13 ± 15% -100.0% 0.00 perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.mutex_lock.__fdget_pos.ksys_write 3.79 ± 33% -100.0% 0.00 perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.mutex_lock.perf_poll.do_sys_poll 0.10 ± 72% -100.0% 0.00 perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.mutex_lock.pipe_write.new_sync_write 0.14 ± 10% -100.0% 0.00 perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.pagecache_get_page.shmem_getpage_gfp.shmem_write_begin 0.09 ± 31% -100.0% 0.00 perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.remove_vma.exit_mmap.mmput 14.66 ± 3% -100.0% 0.00 perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.stop_one_cpu.affine_move_task.__set_cpus_allowed_ptr 0.03 ± 25% -100.0% 0.00 perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.stop_one_cpu.sched_exec.bprm_execve 0.07 ± 68% 
-100.0% 0.00 perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.unmap_vmas.unmap_region.__do_munmap 14.76 ±221% -100.0% 0.00 perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.vfs_write.ksys_write.do_syscall_64 0.06 ± 51% -100.0% 0.00 perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.wp_page_copy.__handle_mm_fault.handle_mm_fault 0.08 ± 24% -100.0% 0.00 perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.zap_pte_range.unmap_page_range.unmap_vmas 1.22 -100.0% 0.00 perf-sched.wait_time.avg.ms.rwsem_down_read_slowpath.__btrfs_tree_read_lock.btrfs_read_lock_root_node.btrfs_search_slot 1.34 -100.0% 0.00 perf-sched.wait_time.avg.ms.rwsem_down_read_slowpath.__btrfs_tree_read_lock.btrfs_search_slot.btrfs_check_dir_item_collision 1.22 -100.0% 0.00 perf-sched.wait_time.avg.ms.rwsem_down_read_slowpath.__btrfs_tree_read_lock.btrfs_search_slot.btrfs_lookup_dir_item 1.59 -100.0% 0.00 perf-sched.wait_time.avg.ms.rwsem_down_write_slowpath.__btrfs_tree_lock.btrfs_lock_root_node.btrfs_search_slot 2.17 ± 7% -100.0% 0.00 perf-sched.wait_time.avg.ms.rwsem_down_write_slowpath.__btrfs_tree_lock.btrfs_search_slot.btrfs_del_inode_ref 1.99 ± 14% -100.0% 0.00 perf-sched.wait_time.avg.ms.rwsem_down_write_slowpath.__btrfs_tree_lock.btrfs_search_slot.btrfs_insert_empty_items 0.72 ± 18% -100.0% 0.00 perf-sched.wait_time.avg.ms.rwsem_down_write_slowpath.__btrfs_tree_lock.btrfs_search_slot.btrfs_lookup_dir_item 346.79 ± 16% -100.0% 0.00 perf-sched.wait_time.avg.ms.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait 384.09 ± 9% -100.0% 0.00 perf-sched.wait_time.avg.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_sys_poll 478.55 -100.0% 0.00 perf-sched.wait_time.avg.ms.schedule_timeout.kcompactd.kthread.ret_from_fork 0.09 ±134% -100.0% 0.00 perf-sched.wait_time.avg.ms.schedule_timeout.khugepaged.kthread.ret_from_fork 4.47 ± 3% -100.0% 0.00 
perf-sched.wait_time.avg.ms.schedule_timeout.rcu_gp_kthread.kthread.ret_from_fork 625.03 ± 2% -100.0% 0.00 perf-sched.wait_time.avg.ms.smpboot_thread_fn.kthread.ret_from_fork 0.05 ± 19% -100.0% 0.00 perf-sched.wait_time.avg.ms.wait_for_partner.fifo_open.do_dentry_open.do_open.isra 671.63 ± 2% -100.0% 0.00 perf-sched.wait_time.avg.ms.worker_thread.kthread.ret_from_fork 998.94 -100.0% 0.00 perf-sched.wait_time.max.ms.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown] 3197 ± 30% -100.0% 0.00 perf-sched.wait_time.max.ms.devkmsg_read.vfs_read.ksys_read.do_syscall_64 1000 -100.0% 0.00 perf-sched.wait_time.max.ms.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep.do_syscall_64 3197 ± 30% -100.0% 0.00 perf-sched.wait_time.max.ms.do_syslog.part.0.kmsg_read.vfs_read 1510 ± 42% -100.0% 0.00 perf-sched.wait_time.max.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64 130.34 -100.0% 0.00 perf-sched.wait_time.max.ms.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64 0.43 ± 22% -100.0% 0.00 perf-sched.wait_time.max.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_exc_page_fault.[unknown] 2.88 ±201% -100.0% 0.00 perf-sched.wait_time.max.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_apic_timer_interrupt.[unknown] 0.64 ± 18% -100.0% 0.00 perf-sched.wait_time.max.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_call_function_single.[unknown] 0.77 ±126% -100.0% 0.00 perf-sched.wait_time.max.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_reschedule_ipi.[unknown] 427.21 ±221% -100.0% 0.00 perf-sched.wait_time.max.ms.exit_to_user_mode_prepare.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe.[unknown] 1695 ±111% -100.0% 0.00 perf-sched.wait_time.max.ms.futex_wait_queue_me.futex_wait.do_futex.__x64_sys_futex 3200 ± 30% -100.0% 0.00 perf-sched.wait_time.max.ms.pipe_read.new_sync_read.vfs_read.ksys_read 1206 ± 78% -100.0% 0.00 
perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.__alloc_pages_nodemask.alloc_pages_vma.shmem_alloc_page
      1.01 ±184%   -100.0%      0.00  perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.__alloc_pages_nodemask.alloc_pages_vma.wp_page_copy
    431.08 ±223%   -100.0%      0.00  perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.__do_fault.do_fault.__handle_mm_fault
      4.12 ± 34%   -100.0%      0.00  perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.__kmalloc.btrfs_alloc_delayed_item.btrfs_insert_delayed_dir_index
    775.22 ±142%   -100.0%      0.00  perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.do_user_addr_fault.exc_page_fault.asm_exc_page_fault
      2.89 ± 24%   -100.0%      0.00  perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.down_read.__btrfs_tree_read_lock.btrfs_read_lock_root_node
      0.14 ± 40%   -100.0%      0.00  perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.down_read.walk_component.link_path_walk
      1.32 ±100%   -100.0%      0.00  perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.down_write.__btrfs_tree_lock.btrfs_lock_root_node
      0.36 ± 39%   -100.0%      0.00  perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.down_write.generic_file_write_iter.new_sync_write
      0.12 ± 81%   -100.0%      0.00  perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.down_write.unlink_anon_vmas.free_pgtables
      0.14 ± 53%   -100.0%      0.00  perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.down_write.unlink_file_vma.free_pgtables
      0.21 ± 34%   -100.0%      0.00  perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.down_write.vma_link.mmap_region
      0.20 ± 68%   -100.0%      0.00  perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.down_write_killable.vm_mmap_pgoff.ksys_mmap_pgoff
      0.23 ± 75%   -100.0%      0.00  perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.dput.__fput.task_work_run
      0.09 ± 42%   -100.0%      0.00  perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.dput.step_into.path_openat
      0.28 ± 11%   -100.0%      0.00  perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.dput.step_into.walk_component
      0.31 ± 44%   -100.0%      0.00  perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.dput.terminate_walk.path_openat
      3.78 ± 66%   -100.0%      0.00  perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.dput.vfs_rename.do_renameat2
      1786 ± 52%   -100.0%      0.00  perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.generic_perform_write.__generic_file_write_iter.generic_file_write_iter
      0.24 ± 40%   -100.0%      0.00  perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.__alloc_file.alloc_empty_file
      2.36 ± 47%   -100.0%      0.00  perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.__btrfs_unlink_inode.btrfs_rename
      0.65 ±194%   -100.0%      0.00  perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.__d_alloc.d_alloc
      3.77 ± 76%   -100.0%      0.00  perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.btrfs_del_inode_ref.__btrfs_unlink_inode
      0.27 ± 62%   -100.0%      0.00  perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.getname_flags.do_sys_openat2
      0.16 ± 45%   -100.0%      0.00  perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.security_file_alloc.__alloc_file
      2.27 ± 45%   -100.0%      0.00  perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.start_transaction.btrfs_rename
      0.22 ± 42%   -100.0%      0.00  perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.vm_area_alloc.mmap_region
      0.21 ± 83%   -100.0%      0.00  perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc_trace.perf_event_mmap.mmap_region
      0.14 ± 56%   -100.0%      0.00  perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.mmput.m_stop.seq_read_iter
      1.01 ±152%   -100.0%      0.00  perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.mnt_want_write.do_renameat2.__x64_sys_rename
      0.42 ± 13%   -100.0%      0.00  perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.mutex_lock.__fdget_pos.ksys_write
     41.50 ± 40%   -100.0%      0.00  perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.mutex_lock.perf_poll.do_sys_poll
      0.17 ± 62%   -100.0%      0.00  perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.mutex_lock.pipe_write.new_sync_write
      0.45 ± 15%   -100.0%      0.00  perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.pagecache_get_page.shmem_getpage_gfp.shmem_write_begin
      0.36 ± 23%   -100.0%      0.00  perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.remove_vma.exit_mmap.mmput
      1000         -100.0%      0.00  perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.stop_one_cpu.affine_move_task.__set_cpus_allowed_ptr
      0.28 ± 40%   -100.0%      0.00  perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.stop_one_cpu.sched_exec.bprm_execve
      0.19 ± 51%   -100.0%      0.00  perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.unmap_vmas.unmap_region.__do_munmap
    380.67 ±223%   -100.0%      0.00  perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.vfs_write.ksys_write.do_syscall_64
      0.22 ± 33%   -100.0%      0.00  perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.wp_page_copy.__handle_mm_fault.handle_mm_fault
      0.32 ± 34%   -100.0%      0.00  perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.zap_pte_range.unmap_page_range.unmap_vmas
    123.56 ± 31%   -100.0%      0.00  perf-sched.wait_time.max.ms.rwsem_down_read_slowpath.__btrfs_tree_read_lock.btrfs_read_lock_root_node.btrfs_search_slot
    100.18 ± 43%   -100.0%      0.00  perf-sched.wait_time.max.ms.rwsem_down_read_slowpath.__btrfs_tree_read_lock.btrfs_search_slot.btrfs_check_dir_item_collision
    109.93 ± 30%   -100.0%      0.00  perf-sched.wait_time.max.ms.rwsem_down_read_slowpath.__btrfs_tree_read_lock.btrfs_search_slot.btrfs_lookup_dir_item
    119.40 ± 29%   -100.0%      0.00  perf-sched.wait_time.max.ms.rwsem_down_write_slowpath.__btrfs_tree_lock.btrfs_lock_root_node.btrfs_search_slot
      4.52 ± 38%   -100.0%      0.00  perf-sched.wait_time.max.ms.rwsem_down_write_slowpath.__btrfs_tree_lock.btrfs_search_slot.btrfs_del_inode_ref
     34.79 ±155%   -100.0%      0.00  perf-sched.wait_time.max.ms.rwsem_down_write_slowpath.__btrfs_tree_lock.btrfs_search_slot.btrfs_insert_empty_items
     11.62 ± 90%   -100.0%      0.00  perf-sched.wait_time.max.ms.rwsem_down_write_slowpath.__btrfs_tree_lock.btrfs_search_slot.btrfs_lookup_dir_item
      2866 ± 37%   -100.0%      0.00  perf-sched.wait_time.max.ms.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait
      1000         -100.0%      0.00  perf-sched.wait_time.max.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_sys_poll
    504.99         -100.0%      0.00  perf-sched.wait_time.max.ms.schedule_timeout.kcompactd.kthread.ret_from_fork
      0.09 ±134%   -100.0%      0.00  perf-sched.wait_time.max.ms.schedule_timeout.khugepaged.kthread.ret_from_fork
      6.18 ±  5%   -100.0%      0.00  perf-sched.wait_time.max.ms.schedule_timeout.rcu_gp_kthread.kthread.ret_from_fork
      2939 ± 39%   -100.0%      0.00  perf-sched.wait_time.max.ms.smpboot_thread_fn.kthread.ret_from_fork
      0.39 ± 33%   -100.0%      0.00  perf-sched.wait_time.max.ms.wait_for_partner.fifo_open.do_dentry_open.do_open.isra
      7362 ± 15%   -100.0%      0.00  perf-sched.wait_time.max.ms.worker_thread.kthread.ret_from_fork


***************************************************************************************************
lkp-csl-2sp5: 96 threads Intel(R) Xeon(R) Gold 6252 CPU @ 2.10GHz with 192G memory
=========================================================================================
class/compiler/cpufreq_governor/disk/kconfig/nr_threads/rootfs/tbox_group/test/testcase/testtime/ucode:
  network/gcc-9/performance/1HDD/x86_64-rhel-8.3/100%/debian-10.4-x86_64-20200603.cgz/lkp-csl-2sp5/netdev/stress-ng/60s/0x5003006

commit:
  dde38d5b80 ("kernel: don't include PF_IO_WORKERs as part of same_thread_group()")
  43b2a76b1a ("proc: don't show PF_IO_WORKER threads as threads in /proc/<pid>/task/")

dde38d5b805358c7 43b2a76b1a5abcc9833b463bef1
---------------- ---------------------------
       %stddev     %change         %stddev
           \          |                \
  14497968          -16.5%   12107611        stress-ng.netdev.ops
    241632          -16.5%     201793        stress-ng.netdev.ops_per_sec
     28594 ±  2%    +10.0%      31466        stress-ng.time.involuntary_context_switches
     13224          +13.4%      15000        stress-ng.time.minor_page_faults
      9165           -2.3%       8958        stress-ng.time.percent_of_cpu_this_job_got
      5664           -2.3%       5534        stress-ng.time.system_time
     18056 ±  2%    +11.3%      20089        stress-ng.time.voluntary_context_switches
     60710 ± 39%    -67.3%      19871 ± 24%  cpuidle.POLL.time
     15084 ±108%    -73.9%       3929 ± 12%  cpuidle.POLL.usage
   1125429          +38.5%    1558306        vmstat.memory.cache
    208197           -9.3%     188909        vmstat.system.in
    163516 ± 10%   +108.3%     340627 ± 20%  numa-numastat.node0.numa_hit
    153128 ± 18%   +150.5%     383648 ± 28%  numa-numastat.node1.local_node
    200560 ±  9%   +114.4%     430050 ± 20%  numa-numastat.node1.numa_hit
     16.83 ±  2%     +9.8       26.62        mpstat.cpu.all.idle%
      0.77 ±  3%     -0.1        0.67 ±  2%  mpstat.cpu.all.irq%
      0.01 ± 12%     +0.0        0.03 ±  6%  mpstat.cpu.all.soft%
     81.84           -9.8       72.07        mpstat.cpu.all.sys%
     33717 ±  3%  +1293.4%     469813 ±  4%  meminfo.Active
     33429 ±  3%  +1304.5%     469525 ±  4%  meminfo.Active(anon)
   1040605          +41.2%    1469499        meminfo.Cached
   4582223          +10.2%    5048860        meminfo.Committed_AS
     48453          -18.8%      39324        meminfo.Mapped
   2796590          +17.1%    3275440        meminfo.Memused
     60364 ±  3%   +710.5%     489259 ±  4%  meminfo.Shmem
   2866922          +14.2%    3275440        meminfo.max_used_kB
    169442 ± 17%    +44.3%     244489 ± 17%  sched_debug.cfs_rq:/.spread0.max
    -58621          -94.4%      -3290        sched_debug.cfs_rq:/.spread0.min
     89.30 ± 30%    +92.6%     171.96 ± 31%  sched_debug.cfs_rq:/.util_est_enqueued.stddev
    428152 ±  3%    +69.4%     725479 ±  3%  sched_debug.cpu.avg_idle.avg
      2252 ±  9%  +4593.0%     105717 ± 39%  sched_debug.cpu.avg_idle.min
      3.27 ± 16%   +103.4%       6.66 ± 34%  sched_debug.cpu.clock.stddev
    -19.33          +47.8%     -28.58        sched_debug.cpu.nr_uninterruptible.min
      8.38 ±  6%    +39.9%      11.73 ±  4%  sched_debug.cpu.nr_uninterruptible.stddev
      1416 ± 29%  +13684.2%    195252 ±
40%  numa-meminfo.node0.Active
      1224 ± 28%  +15817.9%    194968 ± 40%  numa-meminfo.node0.Active(anon)
    506064 ±  2%    +37.3%     695013 ± 10%  numa-meminfo.node0.FilePages
     10933 ± 59%  +1788.2%     206439 ± 40%  numa-meminfo.node0.Shmem
     31962 ±  4%   +760.2%     274929 ± 34%  numa-meminfo.node1.Active
     31866 ±  4%   +762.7%     274927 ± 34%  numa-meminfo.node1.Active(anon)
     13564 ± 93%   +206.2%      41535 ± 44%  numa-meminfo.node1.AnonHugePages
    534978 ±  2%    +44.8%     774868 ± 11%  numa-meminfo.node1.FilePages
     26054 ± 11%    -27.9%      18774 ± 15%  numa-meminfo.node1.Mapped
   1275440 ±  5%    +34.5%    1715156 ±  7%  numa-meminfo.node1.MemUsed
     49864 ± 13%   +467.9%     283197 ± 34%  numa-meminfo.node1.Shmem
    305.83 ± 28%  +15868.6%     48837 ± 40%  numa-vmstat.node0.nr_active_anon
    126512 ±  2%    +37.4%     173847 ± 10%  numa-vmstat.node0.nr_file_pages
      2729 ± 59%  +1794.0%      51703 ± 40%  numa-vmstat.node0.nr_shmem
    305.83 ± 28%  +15868.4%     48836 ± 40%  numa-vmstat.node0.nr_zone_active_anon
      7974 ±  4%   +763.9%      68889 ± 34%  numa-vmstat.node1.nr_active_anon
    133839 ±  2%    +44.9%     193872 ± 11%  numa-vmstat.node1.nr_file_pages
      6683 ± 10%    -28.2%       4800 ± 14%  numa-vmstat.node1.nr_mapped
     12560 ± 12%   +464.9%      70955 ± 34%  numa-vmstat.node1.nr_shmem
      7974 ±  4%   +763.9%      68889 ± 34%  numa-vmstat.node1.nr_zone_active_anon
    752032 ±  7%    +28.0%     962711 ±  5%  numa-vmstat.node1.numa_hit
    589148 ± 15%    +36.4%     803358 ± 10%  numa-vmstat.node1.numa_local
     30372 ±  3%    +82.7%      55486 ±  4%  slabinfo.filp.active_objs
    956.50 ±  3%    +81.7%       1737 ±  4%  slabinfo.filp.active_slabs
     30621 ±  3%    +81.6%      55618 ±  4%  slabinfo.filp.num_objs
    956.50 ±  3%    +81.7%       1737 ±  4%  slabinfo.filp.num_slabs
      8410          +22.1%      10269        slabinfo.kmalloc-256.active_objs
      8430          +23.0%      10368 ±  2%  slabinfo.kmalloc-256.num_objs
    506.67 ± 19%    +35.8%     688.00 ±  9%  slabinfo.kmalloc-rcl-128.active_objs
    506.67 ± 19%    +35.8%     688.00 ±  9%  slabinfo.kmalloc-rcl-128.num_objs
      1939 ±  9%    +18.1%       2289 ±  7%  slabinfo.kmalloc-rcl-96.active_objs
      1939 ±  9%    +18.1%       2289 ±  7%  slabinfo.kmalloc-rcl-96.num_objs
     14010          -27.9%      10102        slabinfo.proc_inode_cache.active_objs
     14010          -27.9%      10102        slabinfo.proc_inode_cache.num_objs
     16173           +9.8%      17756        slabinfo.radix_tree_node.active_objs
     16173           +9.8%      17756        slabinfo.radix_tree_node.num_objs
      8266 ±  3%  +1318.3%     117249 ±  4%  proc-vmstat.nr_active_anon
     63314           +2.4%      64806        proc-vmstat.nr_anon_pages
    260073          +41.2%     367248        proc-vmstat.nr_file_pages
     12281          -19.0%       9954        proc-vmstat.nr_mapped
      2636           +2.2%       2694        proc-vmstat.nr_page_table_pages
     15012 ±  2%   +713.9%     122187 ±  4%  proc-vmstat.nr_shmem
     20917           -3.1%      20259        proc-vmstat.nr_slab_reclaimable
     46901           +4.3%      48911        proc-vmstat.nr_slab_unreclaimable
      8266 ±  3%  +1318.3%     117249 ±  4%  proc-vmstat.nr_zone_active_anon
    396216         +102.8%     803626 ±  2%  proc-vmstat.numa_hit
    309617         +131.6%     717021 ±  3%  proc-vmstat.numa_local
     23737 ±  3%    -32.5%      16011 ±  4%  proc-vmstat.pgactivate
    410071         +149.2%    1021949 ±  3%  proc-vmstat.pgalloc_normal
    245232           -9.7%     221534        proc-vmstat.pgfault
    251959         +143.0%     612297 ±  3%  proc-vmstat.pgfree
     99.74          -99.7        0.00        perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe
     99.48          -99.5        0.00        perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe
     99.43          -99.4        0.00        perf-profile.calltrace.cycles-pp.__x64_sys_ioctl.do_syscall_64.entry_SYSCALL_64_after_hwframe
     99.30          -99.3        0.00        perf-profile.calltrace.cycles-pp.sock_ioctl.__x64_sys_ioctl.do_syscall_64.entry_SYSCALL_64_after_hwframe
     99.23          -99.2        0.00        perf-profile.calltrace.cycles-pp.sock_do_ioctl.sock_ioctl.__x64_sys_ioctl.do_syscall_64.entry_SYSCALL_64_after_hwframe
     57.64          -57.6        0.00        perf-profile.calltrace.cycles-pp.inet_ioctl.sock_do_ioctl.sock_ioctl.__x64_sys_ioctl.do_syscall_64
     57.45          -57.5        0.00        perf-profile.calltrace.cycles-pp.devinet_ioctl.inet_ioctl.sock_do_ioctl.sock_ioctl.__x64_sys_ioctl
     56.99          -57.0        0.00        perf-profile.calltrace.cycles-pp.__mutex_lock.devinet_ioctl.inet_ioctl.sock_do_ioctl.sock_ioctl
     56.38          -56.4        0.00        perf-profile.calltrace.cycles-pp.osq_lock.__mutex_lock.devinet_ioctl.inet_ioctl.sock_do_ioctl
     40.66 ±  3%    -40.7        0.00
perf-profile.calltrace.cycles-pp.__mutex_lock.sock_do_ioctl.sock_ioctl.__x64_sys_ioctl.do_syscall_64
     40.23 ±  3%    -40.2        0.00        perf-profile.calltrace.cycles-pp.osq_lock.__mutex_lock.sock_do_ioctl.sock_ioctl.__x64_sys_ioctl
     99.76          -99.8        0.00        perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
     99.50          -99.5        0.00        perf-profile.children.cycles-pp.do_syscall_64
     99.44          -99.4        0.00        perf-profile.children.cycles-pp.__x64_sys_ioctl
     99.30          -99.3        0.00        perf-profile.children.cycles-pp.sock_ioctl
     99.23          -99.2        0.00        perf-profile.children.cycles-pp.sock_do_ioctl
     97.65          -97.6        0.00        perf-profile.children.cycles-pp.__mutex_lock
     96.66          -96.7        0.00        perf-profile.children.cycles-pp.osq_lock
     57.64          -57.6        0.00        perf-profile.children.cycles-pp.inet_ioctl
     57.46          -57.5        0.00        perf-profile.children.cycles-pp.devinet_ioctl
     96.18          -96.2        0.00        perf-profile.self.cycles-pp.osq_lock
      1.04 ±  5%    +16.4%       1.21 ±  2%  perf-stat.i.MPKI
 1.408e+10           +5.7%  1.488e+10        perf-stat.i.branch-instructions
      0.32 ±  2%     -0.1        0.27 ±  3%  perf-stat.i.branch-miss-rate%
  27033821 ±  2%     +7.0%   28938875 ±  3%  perf-stat.i.branch-misses
  64343782 ±  2%    +31.3%   84507289 ±  2%  perf-stat.i.cache-references
      3.58           -5.5%       3.38        perf-stat.i.cpi
    157.98 ±  2%    +90.7%     301.33        perf-stat.i.cpu-migrations
 1.835e+10           +4.2%  1.911e+10        perf-stat.i.dTLB-loads
      0.00 ± 14%     -0.0        0.00 ± 15%  perf-stat.i.dTLB-store-miss-rate%
 8.139e+08          +89.3%  1.541e+09 ±  2%  perf-stat.i.dTLB-stores
  10420036 ±  6%    +19.8%   12487661 ±  6%  perf-stat.i.iTLB-load-misses
 6.938e+10           +5.8%  7.337e+10        perf-stat.i.instructions
      0.30           +7.2%       0.32        perf-stat.i.ipc
     12.10          -14.5%      10.34        perf-stat.i.major-faults
    347.06           +7.0%     371.27        perf-stat.i.metric.M/sec
      3578          -11.6%       3164        perf-stat.i.minor-faults
   3943395          +49.0%    5875481 ± 11%  perf-stat.i.node-store-misses
     22391 ±  7%   +115.7%      48297 ±  8%  perf-stat.i.node-stores
      3590          -11.6%       3174        perf-stat.i.page-faults
      0.93 ±  2%    +24.2%       1.15        perf-stat.overall.MPKI
      3.66           -5.5%       3.46        perf-stat.overall.cpi
      0.00 ± 10%     -0.0        0.00 ± 15%  perf-stat.overall.dTLB-store-miss-rate%
      0.27           +5.8%       0.29        perf-stat.overall.ipc
     92.12           -2.3       89.85        perf-stat.overall.node-load-miss-rate%
 1.386e+10           +5.7%  1.464e+10        perf-stat.ps.branch-instructions
  26542496 ±  2%     +7.4%   28498462 ±  3%  perf-stat.ps.branch-misses
  63337390 ±  2%    +31.3%   83145045 ±  2%  perf-stat.ps.cache-references
    156.45 ±  2%    +89.5%     296.49        perf-stat.ps.cpu-migrations
 1.806e+10           +4.1%   1.88e+10        perf-stat.ps.dTLB-loads
 8.007e+08          +89.3%  1.516e+09 ±  2%  perf-stat.ps.dTLB-stores
  10251812 ±  6%    +19.8%   12285306 ±  6%  perf-stat.ps.iTLB-load-misses
 6.828e+10           +5.7%  7.218e+10        perf-stat.ps.instructions
     11.88          -13.2%      10.31        perf-stat.ps.major-faults
      3512          -11.2%       3117        perf-stat.ps.minor-faults
   3882790          +48.9%    5779960 ± 11%  perf-stat.ps.node-store-misses
     22040 ±  7%   +116.1%      47630 ±  8%  perf-stat.ps.node-stores
      3524          -11.2%       3128        perf-stat.ps.page-faults
 4.307e+12           +6.1%  4.569e+12        perf-stat.total.instructions
      0.01 ± 68%   -100.0%       0.00        perf-sched.sch_delay.avg.ms.do_syslog.part.0.kmsg_read.vfs_read
      0.00 ± 95%   -100.0%       0.00        perf-sched.sch_delay.avg.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64
      0.01 ± 35%   -100.0%       0.00        perf-sched.sch_delay.avg.ms.exit_to_user_mode_prepare.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe.[unknown]
      0.00 ± 40%   -100.0%       0.00        perf-sched.sch_delay.avg.ms.pipe_read.new_sync_read.vfs_read.ksys_read
      0.49 ±157%   -100.0%       0.00        perf-sched.sch_delay.avg.ms.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait
      0.01 ± 22%   -100.0%       0.00        perf-sched.sch_delay.avg.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_select
      0.27 ±138%   -100.0%       0.00        perf-sched.sch_delay.avg.ms.schedule_timeout.rcu_gp_kthread.kthread.ret_from_fork
      0.02 ±168%   -100.0%       0.00        perf-sched.sch_delay.avg.ms.smpboot_thread_fn.kthread.ret_from_fork
      0.57 ± 47%   -100.0%       0.00        perf-sched.sch_delay.avg.ms.worker_thread.kthread.ret_from_fork
      0.02 ± 99%   -100.0%       0.00        perf-sched.sch_delay.max.ms.do_syslog.part.0.kmsg_read.vfs_read
      0.01 ±127%   -100.0%       0.00
perf-sched.sch_delay.max.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64
      0.01 ± 27%   -100.0%       0.00        perf-sched.sch_delay.max.ms.exit_to_user_mode_prepare.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe.[unknown]
      0.08 ± 98%   -100.0%       0.00        perf-sched.sch_delay.max.ms.pipe_read.new_sync_read.vfs_read.ksys_read
      2.59 ±147%   -100.0%       0.00        perf-sched.sch_delay.max.ms.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait
      0.01 ± 22%   -100.0%       0.00        perf-sched.sch_delay.max.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_select
      1.34 ±139%   -100.0%       0.00        perf-sched.sch_delay.max.ms.schedule_timeout.rcu_gp_kthread.kthread.ret_from_fork
      1.79 ±218%   -100.0%       0.00        perf-sched.sch_delay.max.ms.smpboot_thread_fn.kthread.ret_from_fork
      2.93 ±  4%   -100.0%       0.00        perf-sched.sch_delay.max.ms.worker_thread.kthread.ret_from_fork
      0.03 ± 72%   -100.0%       0.00        perf-sched.total_sch_delay.average.ms
      4.60 ± 61%   -100.0%       0.00        perf-sched.total_sch_delay.max.ms
      0.19 ± 31%   -100.0%       0.00        perf-sched.total_wait_and_delay.average.ms
    313.33 ± 17%   -100.0%       0.00        perf-sched.total_wait_and_delay.count.ms
     10.21 ± 28%   -100.0%       0.00        perf-sched.total_wait_and_delay.max.ms
      0.16 ± 27%   -100.0%       0.00        perf-sched.total_wait_time.average.ms
     10.20 ± 28%   -100.0%       0.00        perf-sched.total_wait_time.max.ms
      1.32 ± 10%   -100.0%       0.00        perf-sched.wait_and_delay.avg.ms.do_syslog.part.0.kmsg_read.vfs_read
      0.24 ±207%   -100.0%       0.00        perf-sched.wait_and_delay.avg.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64
      0.01 ± 35%   -100.0%       0.00        perf-sched.wait_and_delay.avg.ms.exit_to_user_mode_prepare.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe.[unknown]
      0.09 ±113%   -100.0%       0.00        perf-sched.wait_and_delay.avg.ms.pipe_read.new_sync_read.vfs_read.ksys_read
      2.57 ± 21%   -100.0%       0.00        perf-sched.wait_and_delay.avg.ms.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait
      0.01 ± 22%   -100.0%       0.00        perf-sched.wait_and_delay.avg.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_select
      3.52 ± 10%   -100.0%       0.00        perf-sched.wait_and_delay.avg.ms.schedule_timeout.rcu_gp_kthread.kthread.ret_from_fork
      0.02 ±168%   -100.0%       0.00        perf-sched.wait_and_delay.avg.ms.smpboot_thread_fn.kthread.ret_from_fork
      0.75 ± 42%   -100.0%       0.00        perf-sched.wait_and_delay.avg.ms.worker_thread.kthread.ret_from_fork
      2.00         -100.0%       0.00        perf-sched.wait_and_delay.count.do_syslog.part.0.kmsg_read.vfs_read
      6.50 ± 38%   -100.0%       0.00        perf-sched.wait_and_delay.count.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64
      1.17 ± 31%   -100.0%       0.00        perf-sched.wait_and_delay.count.exit_to_user_mode_prepare.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe.[unknown]
     86.17 ± 62%   -100.0%       0.00        perf-sched.wait_and_delay.count.pipe_read.new_sync_read.vfs_read.ksys_read
     96.00         -100.0%       0.00        perf-sched.wait_and_delay.count.preempt_schedule_common.__cond_resched.stop_one_cpu.affine_move_task.__set_cpus_allowed_ptr
      7.17 ± 30%   -100.0%       0.00        perf-sched.wait_and_delay.count.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait
      1.00         -100.0%       0.00        perf-sched.wait_and_delay.count.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_select
      6.00 ± 19%   -100.0%       0.00        perf-sched.wait_and_delay.count.schedule_timeout.rcu_gp_kthread.kthread.ret_from_fork
     96.83         -100.0%       0.00        perf-sched.wait_and_delay.count.smpboot_thread_fn.kthread.ret_from_fork
      6.83 ± 56%   -100.0%       0.00        perf-sched.wait_and_delay.count.worker_thread.kthread.ret_from_fork
      2.61 ± 10%   -100.0%       0.00        perf-sched.wait_and_delay.max.ms.do_syslog.part.0.kmsg_read.vfs_read
      2.12 ±215%   -100.0%       0.00        perf-sched.wait_and_delay.max.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64
      0.01 ± 27%   -100.0%       0.00        perf-sched.wait_and_delay.max.ms.exit_to_user_mode_prepare.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe.[unknown]
      0.58 ± 71%   -100.0%       0.00        perf-sched.wait_and_delay.max.ms.pipe_read.new_sync_read.vfs_read.ksys_read
      9.95 ± 27%   -100.0%       0.00        perf-sched.wait_and_delay.max.ms.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait
      0.01 ± 22%   -100.0%       0.00        perf-sched.wait_and_delay.max.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_select
      4.82 ±  7%   -100.0%       0.00        perf-sched.wait_and_delay.max.ms.schedule_timeout.rcu_gp_kthread.kthread.ret_from_fork
      1.79 ±218%   -100.0%       0.00        perf-sched.wait_and_delay.max.ms.smpboot_thread_fn.kthread.ret_from_fork
      4.19 ± 66%   -100.0%       0.00        perf-sched.wait_and_delay.max.ms.worker_thread.kthread.ret_from_fork
      1.30 ± 10%   -100.0%       0.00        perf-sched.wait_time.avg.ms.do_syslog.part.0.kmsg_read.vfs_read
      0.09 ±120%   -100.0%       0.00        perf-sched.wait_time.avg.ms.pipe_read.new_sync_read.vfs_read.ksys_read
      2.08 ± 28%   -100.0%       0.00        perf-sched.wait_time.avg.ms.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait
      3.25 ±  5%   -100.0%       0.00        perf-sched.wait_time.avg.ms.schedule_timeout.rcu_gp_kthread.kthread.ret_from_fork
      2.61 ± 10%   -100.0%       0.00        perf-sched.wait_time.max.ms.do_syslog.part.0.kmsg_read.vfs_read
      0.58 ± 71%   -100.0%       0.00        perf-sched.wait_time.max.ms.pipe_read.new_sync_read.vfs_read.ksys_read
      8.56 ± 44%   -100.0%       0.00        perf-sched.wait_time.max.ms.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait
      4.81 ±  7%   -100.0%       0.00        perf-sched.wait_time.max.ms.schedule_timeout.rcu_gp_kthread.kthread.ret_from_fork
     57741           -2.9%      56095        interrupts.CAL:Function_call_interrupts
      7723 ±  3%   -100.0%       0.00        interrupts.CPU0.NMI:Non-maskable_interrupts
      7723 ±  3%   -100.0%       0.00        interrupts.CPU0.PMI:Performance_monitoring_interrupts
      7236 ± 20%   -100.0%       0.00        interrupts.CPU1.NMI:Non-maskable_interrupts
      7236 ± 20%   -100.0%       0.00        interrupts.CPU1.PMI:Performance_monitoring_interrupts
      6579 ± 28%   -100.0%       0.00        interrupts.CPU10.NMI:Non-maskable_interrupts
      6579 ± 28%   -100.0%       0.00        interrupts.CPU10.PMI:Performance_monitoring_interrupts
      7237 ± 20%   -100.0%       0.00
interrupts.CPU11.NMI:Non-maskable_interrupts
      7237 ± 20%   -100.0%       0.00        interrupts.CPU11.PMI:Performance_monitoring_interrupts
      7236 ± 20%   -100.0%       0.00        interrupts.CPU12.NMI:Non-maskable_interrupts
      7236 ± 20%   -100.0%       0.00        interrupts.CPU12.PMI:Performance_monitoring_interrupts
      6578 ± 28%   -100.0%       0.00        interrupts.CPU13.NMI:Non-maskable_interrupts
      6578 ± 28%   -100.0%       0.00        interrupts.CPU13.PMI:Performance_monitoring_interrupts
      7237 ± 20%   -100.0%       0.00        interrupts.CPU14.NMI:Non-maskable_interrupts
      7237 ± 20%   -100.0%       0.00        interrupts.CPU14.PMI:Performance_monitoring_interrupts
      6579 ± 28%   -100.0%       0.00        interrupts.CPU15.NMI:Non-maskable_interrupts
      6579 ± 28%   -100.0%       0.00        interrupts.CPU15.PMI:Performance_monitoring_interrupts
      6578 ± 28%   -100.0%       0.00        interrupts.CPU16.NMI:Non-maskable_interrupts
      6578 ± 28%   -100.0%       0.00        interrupts.CPU16.PMI:Performance_monitoring_interrupts
      6578 ± 28%   -100.0%       0.00        interrupts.CPU17.NMI:Non-maskable_interrupts
      6578 ± 28%   -100.0%       0.00        interrupts.CPU17.PMI:Performance_monitoring_interrupts
      6579 ± 28%   -100.0%       0.00        interrupts.CPU18.NMI:Non-maskable_interrupts
      6579 ± 28%   -100.0%       0.00        interrupts.CPU18.PMI:Performance_monitoring_interrupts
      6579 ± 28%   -100.0%       0.00        interrupts.CPU19.NMI:Non-maskable_interrupts
      6579 ± 28%   -100.0%       0.00        interrupts.CPU19.PMI:Performance_monitoring_interrupts
      7236 ± 20%   -100.0%       0.00        interrupts.CPU2.NMI:Non-maskable_interrupts
      7236 ± 20%   -100.0%       0.00        interrupts.CPU2.PMI:Performance_monitoring_interrupts
     99.50 ± 12%    -18.9%      80.67 ±  4%  interrupts.CPU2.RES:Rescheduling_interrupts
      6579 ± 28%   -100.0%       0.00        interrupts.CPU20.NMI:Non-maskable_interrupts
      6579 ± 28%   -100.0%       0.00        interrupts.CPU20.PMI:Performance_monitoring_interrupts
      7226 ± 20%   -100.0%       0.00        interrupts.CPU21.NMI:Non-maskable_interrupts
      7226 ± 20%   -100.0%       0.00        interrupts.CPU21.PMI:Performance_monitoring_interrupts
      7193 ± 20%   -100.0%       0.00        interrupts.CPU22.NMI:Non-maskable_interrupts
      7193 ± 20%   -100.0%       0.00        interrupts.CPU22.PMI:Performance_monitoring_interrupts
      6578 ± 28%   -100.0%       0.00        interrupts.CPU23.NMI:Non-maskable_interrupts
      6578 ± 28%   -100.0%       0.00        interrupts.CPU23.PMI:Performance_monitoring_interrupts
    804.50 ± 10%    -20.8%     637.00 ±  4%  interrupts.CPU24.CAL:Function_call_interrupts
      7858         -100.0%       0.00        interrupts.CPU24.NMI:Non-maskable_interrupts
      7858         -100.0%       0.00        interrupts.CPU24.PMI:Performance_monitoring_interrupts
    897.50 ± 40%    -33.4%     597.67 ±  2%  interrupts.CPU25.CAL:Function_call_interrupts
      7889         -100.0%       0.00        interrupts.CPU25.NMI:Non-maskable_interrupts
      7889         -100.0%       0.00        interrupts.CPU25.PMI:Performance_monitoring_interrupts
    691.33 ± 11%    -16.8%     575.50 ± 13%  interrupts.CPU26.CAL:Function_call_interrupts
      7844         -100.0%       0.00        interrupts.CPU26.NMI:Non-maskable_interrupts
      7844         -100.0%       0.00        interrupts.CPU26.PMI:Performance_monitoring_interrupts
      7897         -100.0%       0.00        interrupts.CPU27.NMI:Non-maskable_interrupts
      7897         -100.0%       0.00        interrupts.CPU27.PMI:Performance_monitoring_interrupts
      7237 ± 20%   -100.0%       0.00        interrupts.CPU28.NMI:Non-maskable_interrupts
      7237 ± 20%   -100.0%       0.00        interrupts.CPU28.PMI:Performance_monitoring_interrupts
      7237 ± 20%   -100.0%       0.00        interrupts.CPU29.NMI:Non-maskable_interrupts
      7237 ± 20%   -100.0%       0.00        interrupts.CPU29.PMI:Performance_monitoring_interrupts
      6576 ± 28%   -100.0%       0.00        interrupts.CPU3.NMI:Non-maskable_interrupts
      6576 ± 28%   -100.0%       0.00        interrupts.CPU3.PMI:Performance_monitoring_interrupts
      7897         -100.0%       0.00        interrupts.CPU30.NMI:Non-maskable_interrupts
      7897         -100.0%       0.00        interrupts.CPU30.PMI:Performance_monitoring_interrupts
      7897         -100.0%       0.00        interrupts.CPU31.NMI:Non-maskable_interrupts
      7897         -100.0%       0.00        interrupts.CPU31.PMI:Performance_monitoring_interrupts
      7239 ± 20%   -100.0%       0.00        interrupts.CPU32.NMI:Non-maskable_interrupts
      7239 ± 20%   -100.0%       0.00        interrupts.CPU32.PMI:Performance_monitoring_interrupts
      7239 ± 20%   -100.0%       0.00        interrupts.CPU33.NMI:Non-maskable_interrupts
      7239 ± 20%   -100.0%       0.00        interrupts.CPU33.PMI:Performance_monitoring_interrupts
      7239 ± 20%   -100.0%       0.00        interrupts.CPU34.NMI:Non-maskable_interrupts
      7239 ± 20%   -100.0%       0.00        interrupts.CPU34.PMI:Performance_monitoring_interrupts
      7238 ± 20%   -100.0%       0.00        interrupts.CPU35.NMI:Non-maskable_interrupts
      7238 ± 20%   -100.0%       0.00        interrupts.CPU35.PMI:Performance_monitoring_interrupts
      7896         -100.0%       0.00        interrupts.CPU36.NMI:Non-maskable_interrupts
      7896         -100.0%       0.00        interrupts.CPU36.PMI:Performance_monitoring_interrupts
      7238 ± 20%   -100.0%       0.00        interrupts.CPU37.NMI:Non-maskable_interrupts
      7238 ± 20%   -100.0%       0.00        interrupts.CPU37.PMI:Performance_monitoring_interrupts
      7235 ± 20%   -100.0%       0.00        interrupts.CPU38.NMI:Non-maskable_interrupts
      7235 ± 20%   -100.0%       0.00        interrupts.CPU38.PMI:Performance_monitoring_interrupts
      7237 ± 20%   -100.0%       0.00        interrupts.CPU39.NMI:Non-maskable_interrupts
      7237 ± 20%   -100.0%       0.00        interrupts.CPU39.PMI:Performance_monitoring_interrupts
      7238 ± 20%   -100.0%       0.00        interrupts.CPU4.NMI:Non-maskable_interrupts
      7238 ± 20%   -100.0%       0.00        interrupts.CPU4.PMI:Performance_monitoring_interrupts
      7896         -100.0%       0.00        interrupts.CPU40.NMI:Non-maskable_interrupts
      7896         -100.0%       0.00        interrupts.CPU40.PMI:Performance_monitoring_interrupts
      7896         -100.0%       0.00        interrupts.CPU41.NMI:Non-maskable_interrupts
      7896         -100.0%       0.00        interrupts.CPU41.PMI:Performance_monitoring_interrupts
      7896         -100.0%       0.00        interrupts.CPU42.NMI:Non-maskable_interrupts
      7896         -100.0%       0.00        interrupts.CPU42.PMI:Performance_monitoring_interrupts
      7896         -100.0%       0.00        interrupts.CPU43.NMI:Non-maskable_interrupts
      7896         -100.0%       0.00        interrupts.CPU43.PMI:Performance_monitoring_interrupts
      7896         -100.0%       0.00        interrupts.CPU44.NMI:Non-maskable_interrupts
      7896         -100.0%       0.00        interrupts.CPU44.PMI:Performance_monitoring_interrupts
      7895         -100.0%       1.00 ±223%  interrupts.CPU45.NMI:Non-maskable_interrupts
      7895         -100.0%       1.00 ±223%  interrupts.CPU45.PMI:Performance_monitoring_interrupts
      7237 ± 20%   -100.0%       0.00        interrupts.CPU46.NMI:Non-maskable_interrupts
      7237 ± 20%   -100.0%       0.00        interrupts.CPU46.PMI:Performance_monitoring_interrupts
      7220 ± 20%   -100.0%       0.00        interrupts.CPU47.NMI:Non-maskable_interrupts
      7220 ± 20%   -100.0%       0.00        interrupts.CPU47.PMI:Performance_monitoring_interrupts
    758.00 ±  7%    -17.0%     629.33 ±  6%  interrupts.CPU48.CAL:Function_call_interrupts
      6478 ± 27%   -100.0%       0.00        interrupts.CPU48.NMI:Non-maskable_interrupts
      6478 ± 27%   -100.0%       0.00        interrupts.CPU48.PMI:Performance_monitoring_interrupts
    757.00 ± 30%    -28.2%     543.50 ±  4%  interrupts.CPU49.CAL:Function_call_interrupts
      6556 ± 28%   -100.0%       0.00        interrupts.CPU49.NMI:Non-maskable_interrupts
      6556 ± 28%   -100.0%       0.00        interrupts.CPU49.PMI:Performance_monitoring_interrupts
    108.83 ± 21%    -23.0%      83.83 ±  3%  interrupts.CPU49.RES:Rescheduling_interrupts
      7237 ± 20%   -100.0%       0.00        interrupts.CPU5.NMI:Non-maskable_interrupts
      7237 ± 20%   -100.0%       0.00        interrupts.CPU5.PMI:Performance_monitoring_interrupts
      6575 ± 28%   -100.0%       0.00        interrupts.CPU50.NMI:Non-maskable_interrupts
      6575 ± 28%   -100.0%       0.00        interrupts.CPU50.PMI:Performance_monitoring_interrupts
      6576 ± 28%   -100.0%       0.00        interrupts.CPU51.NMI:Non-maskable_interrupts
      6576 ± 28%   -100.0%       0.00        interrupts.CPU51.PMI:Performance_monitoring_interrupts
      6577 ± 28%   -100.0%       0.00        interrupts.CPU52.NMI:Non-maskable_interrupts
      6577 ± 28%   -100.0%       0.00        interrupts.CPU52.PMI:Performance_monitoring_interrupts
      7235 ± 20%   -100.0%       0.00        interrupts.CPU53.NMI:Non-maskable_interrupts
      7235 ± 20%   -100.0%       0.00        interrupts.CPU53.PMI:Performance_monitoring_interrupts
      7234 ± 20%   -100.0%       0.00        interrupts.CPU54.NMI:Non-maskable_interrupts
      7234 ± 20%   -100.0%       0.00        interrupts.CPU54.PMI:Performance_monitoring_interrupts
      7154 ± 20%   -100.0%       0.00        interrupts.CPU55.NMI:Non-maskable_interrupts
      7154 ± 20%   -100.0%       0.00        interrupts.CPU55.PMI:Performance_monitoring_interrupts
      7235 ± 20%   -100.0%       0.00        interrupts.CPU56.NMI:Non-maskable_interrupts
      7235 ± 20%   -100.0%       0.00        interrupts.CPU56.PMI:Performance_monitoring_interrupts
      7235 ± 20%   -100.0%       0.00        interrupts.CPU57.NMI:Non-maskable_interrupts
      7235 ± 20%   -100.0%       0.00        interrupts.CPU57.PMI:Performance_monitoring_interrupts
      7235 ± 20%   -100.0%       0.00        interrupts.CPU58.NMI:Non-maskable_interrupts
      7235 ± 20%   -100.0%       0.00        interrupts.CPU58.PMI:Performance_monitoring_interrupts
      7235 ± 20%   -100.0%       0.00        interrupts.CPU59.NMI:Non-maskable_interrupts
      7235 ± 20%   -100.0%       0.00        interrupts.CPU59.PMI:Performance_monitoring_interrupts
      7237 ± 20%   -100.0%       0.00        interrupts.CPU6.NMI:Non-maskable_interrupts
      7237 ± 20%   -100.0%       0.00        interrupts.CPU6.PMI:Performance_monitoring_interrupts
      6578 ± 28%   -100.0%       0.00        interrupts.CPU60.NMI:Non-maskable_interrupts
      6578 ± 28%   -100.0%       0.00        interrupts.CPU60.PMI:Performance_monitoring_interrupts
      5919 ± 33%   -100.0%       0.00        interrupts.CPU61.NMI:Non-maskable_interrupts
      5919 ± 33%   -100.0%       0.00        interrupts.CPU61.PMI:Performance_monitoring_interrupts
      6578 ± 28%   -100.0%       0.00        interrupts.CPU62.NMI:Non-maskable_interrupts
      6578 ± 28%   -100.0%       0.00        interrupts.CPU62.PMI:Performance_monitoring_interrupts
      5262 ± 35%   -100.0%       0.00        interrupts.CPU63.NMI:Non-maskable_interrupts
      5262 ± 35%   -100.0%       0.00        interrupts.CPU63.PMI:Performance_monitoring_interrupts
      5262 ± 35%   -100.0%       0.00        interrupts.CPU64.NMI:Non-maskable_interrupts
      5262 ± 35%   -100.0%       0.00        interrupts.CPU64.PMI:Performance_monitoring_interrupts
      6580 ± 28%   -100.0%       0.00        interrupts.CPU65.NMI:Non-maskable_interrupts
      6580 ± 28%   -100.0%       0.00        interrupts.CPU65.PMI:Performance_monitoring_interrupts
      5921 ± 33%   -100.0%       0.00        interrupts.CPU66.NMI:Non-maskable_interrupts
      5921 ± 33%   -100.0%       0.00        interrupts.CPU66.PMI:Performance_monitoring_interrupts
      5919 ± 33%   -100.0%       0.00        interrupts.CPU67.NMI:Non-maskable_interrupts
      5919 ± 33%   -100.0%       0.00        interrupts.CPU67.PMI:Performance_monitoring_interrupts
      6579 ± 28%   -100.0%       0.00        interrupts.CPU68.NMI:Non-maskable_interrupts
      6579 ± 28%   -100.0%       0.00        interrupts.CPU68.PMI:Performance_monitoring_interrupts
      6580 ± 28%   -100.0%       0.00        interrupts.CPU69.NMI:Non-maskable_interrupts
      6580 ± 28%   -100.0%       0.00        interrupts.CPU69.PMI:Performance_monitoring_interrupts
      7238 ± 20%   -100.0%       0.00        interrupts.CPU7.NMI:Non-maskable_interrupts
      7238 ± 20%   -100.0%       0.00        interrupts.CPU7.PMI:Performance_monitoring_interrupts
      5921 ± 33%   -100.0%       0.00        interrupts.CPU70.NMI:Non-maskable_interrupts
      5921 ± 33%   -100.0%       0.00        interrupts.CPU70.PMI:Performance_monitoring_interrupts
      5922 ± 33%   -100.0%       0.00        interrupts.CPU71.NMI:Non-maskable_interrupts
      5922 ± 33%   -100.0%       0.00        interrupts.CPU71.PMI:Performance_monitoring_interrupts
      5877 ± 32%   -100.0%       0.00        interrupts.CPU72.NMI:Non-maskable_interrupts
      5877 ± 32%   -100.0%       0.00        interrupts.CPU72.PMI:Performance_monitoring_interrupts
      6578 ± 28%   -100.0%       0.00        interrupts.CPU73.NMI:Non-maskable_interrupts
      6578 ± 28%   -100.0%       0.00        interrupts.CPU73.PMI:Performance_monitoring_interrupts
      6578 ± 28%   -100.0%       0.00        interrupts.CPU74.NMI:Non-maskable_interrupts
      6578 ± 28%   -100.0%       0.00        interrupts.CPU74.PMI:Performance_monitoring_interrupts
      5921 ± 33%   -100.0%       0.00        interrupts.CPU75.NMI:Non-maskable_interrupts
      5921 ± 33%   -100.0%       0.00        interrupts.CPU75.PMI:Performance_monitoring_interrupts
      5908 ± 33%   -100.0%       0.00        interrupts.CPU76.NMI:Non-maskable_interrupts
      5908 ± 33%   -100.0%       0.00        interrupts.CPU76.PMI:Performance_monitoring_interrupts
      6579 ± 28%   -100.0%       0.00        interrupts.CPU77.NMI:Non-maskable_interrupts
      6579 ± 28%   -100.0%       0.00        interrupts.CPU77.PMI:Performance_monitoring_interrupts
      5919 ± 33%   -100.0%       2.00 ±141%  interrupts.CPU78.NMI:Non-maskable_interrupts
      5919 ± 33%   -100.0%       2.00 ±141%  interrupts.CPU78.PMI:Performance_monitoring_interrupts
      5920 ± 33%   -100.0%       0.00        interrupts.CPU79.NMI:Non-maskable_interrupts
      5920 ± 33%   -100.0%       0.00        interrupts.CPU79.PMI:Performance_monitoring_interrupts
      7237 ± 20%   -100.0%       0.00        interrupts.CPU8.NMI:Non-maskable_interrupts
      7237 ± 20%   -100.0%       0.00        interrupts.CPU8.PMI:Performance_monitoring_interrupts
      5263 ± 35%   -100.0%       0.00        interrupts.CPU80.NMI:Non-maskable_interrupts
      5263 ± 35%   -100.0%       0.00        interrupts.CPU80.PMI:Performance_monitoring_interrupts
      6579 ± 28%   -100.0%       0.00        interrupts.CPU81.NMI:Non-maskable_interrupts
      6579 ± 28%   -100.0%       0.00        interrupts.CPU81.PMI:Performance_monitoring_interrupts
      5263 ± 35%   -100.0%       1.00 ±223%  interrupts.CPU82.NMI:Non-maskable_interrupts
      5263 ± 35%   -100.0%       1.00 ±223%  interrupts.CPU82.PMI:Performance_monitoring_interrupts
      5922 ± 33%   -100.0%       0.00        interrupts.CPU83.NMI:Non-maskable_interrupts
      5922 ± 33%   -100.0%       0.00        interrupts.CPU83.PMI:Performance_monitoring_interrupts
      5920 ± 33%   -100.0%       0.00        interrupts.CPU84.NMI:Non-maskable_interrupts
      5920 ± 33%   -100.0%       0.00        interrupts.CPU84.PMI:Performance_monitoring_interrupts
      5262 ± 35%   -100.0%       0.00        interrupts.CPU85.NMI:Non-maskable_interrupts
      5262 ± 35%   -100.0%       0.00        interrupts.CPU85.PMI:Performance_monitoring_interrupts
      7237 ± 20%   -100.0%       0.00        interrupts.CPU86.NMI:Non-maskable_interrupts
      7237 ± 20%   -100.0%       0.00        interrupts.CPU86.PMI:Performance_monitoring_interrupts
      6555 ± 28%   -100.0%       0.00        interrupts.CPU87.NMI:Non-maskable_interrupts
      6555 ± 28%   -100.0%       0.00        interrupts.CPU87.PMI:Performance_monitoring_interrupts
      6578 ± 28%   -100.0%       1.00 ±223%  interrupts.CPU88.NMI:Non-maskable_interrupts
      6578 ± 28%   -100.0%       1.00 ±223%  interrupts.CPU88.PMI:Performance_monitoring_interrupts
      6579 ± 28%   -100.0%       0.00        interrupts.CPU89.NMI:Non-maskable_interrupts
      6579 ± 28%   -100.0%       0.00        interrupts.CPU89.PMI:Performance_monitoring_interrupts
      7237 ± 20%   -100.0%       0.00        interrupts.CPU9.NMI:Non-maskable_interrupts
      7237 ± 20%   -100.0%       0.00        interrupts.CPU9.PMI:Performance_monitoring_interrupts
      5921 ± 33%   -100.0%       0.00        interrupts.CPU90.NMI:Non-maskable_interrupts
      5921 ± 33%   -100.0%       0.00        interrupts.CPU90.PMI:Performance_monitoring_interrupts
      5920 ± 33%   -100.0%       0.00        interrupts.CPU91.NMI:Non-maskable_interrupts
      5920 ± 33%   -100.0%       0.00        interrupts.CPU91.PMI:Performance_monitoring_interrupts
      5919 ± 33%   -100.0%       0.00        interrupts.CPU92.NMI:Non-maskable_interrupts
      5919 ± 33%   -100.0%       0.00        interrupts.CPU92.PMI:Performance_monitoring_interrupts
      5921 ± 33%   -100.0%       0.00        interrupts.CPU93.NMI:Non-maskable_interrupts
      5921 ± 33%   -100.0%       0.00        interrupts.CPU93.PMI:Performance_monitoring_interrupts
      6567 ± 28%   -100.0%       0.00        interrupts.CPU94.NMI:Non-maskable_interrupts
      6567 ± 28%
-100.0% 0.00 interrupts.CPU94.PMI:Performance_monitoring_interrupts 70.83 ± 5% +25.6% 89.00 ± 10% interrupts.CPU94.RES:Rescheduling_interrupts 697.83 ± 3% -18.1% 571.83 ± 4% interrupts.CPU95.CAL:Function_call_interrupts 5916 ± 33% -100.0% 0.00 interrupts.CPU95.NMI:Non-maskable_interrupts 5916 ± 33% -100.0% 0.00 interrupts.CPU95.PMI:Performance_monitoring_interrupts 140.67 ± 7% -39.8% 84.67 ± 11% interrupts.CPU95.RES:Rescheduling_interrupts 165.50 ± 3% -100.0% 0.00 interrupts.IWI:IRQ_work_interrupts 652667 ± 3% -100.0% 5.00 ± 44% interrupts.NMI:Non-maskable_interrupts 652667 ± 3% -100.0% 5.00 ± 44% interrupts.PMI:Performance_monitoring_interrupts *************************************************************************************************** lkp-knm01: 288 threads Intel(R) Xeon Phi(TM) CPU 7295 @ 1.50GHz with 80G memory ========================================================================================= compiler/cpufreq_governor/directio/disk/fstype/kconfig/media/rootfs/tbox_group/test/testcase/ucode: gcc-9/performance/directio/1HDD/xfs/x86_64-rhel-8.3/hdd/debian-10.4-x86_64-20200603.cgz/lkp-knm01/DRBH/fxmark/0x11 commit: dde38d5b80 ("kernel: don't include PF_IO_WORKERs as part of same_thread_group()") 43b2a76b1a ("proc: don't show PF_IO_WORKER threads as threads in /proc/<pid>/task/") dde38d5b805358c7 43b2a76b1a5abcc9833b463bef1 ---------------- --------------------------- %stddev %change %stddev \ | \ 470.47 -11.9% 414.40 fxmark.hdd_xfs_DRBH_18_directio.iowait_sec 87.65 -11.7% 77.37 fxmark.hdd_xfs_DRBH_18_directio.iowait_util 8.97 +15.9% 10.40 fxmark.hdd_xfs_DRBH_18_directio.softirq_sec 1.67 +16.2% 1.94 fxmark.hdd_xfs_DRBH_18_directio.softirq_util 25.00 +191.2% 72.81 fxmark.hdd_xfs_DRBH_18_directio.sys_sec 4.66 +191.9% 13.59 fxmark.hdd_xfs_DRBH_18_directio.sys_util 2.56 +276.5% 9.62 fxmark.hdd_xfs_DRBH_18_directio.user_sec 0.48 +277.3% 1.80 fxmark.hdd_xfs_DRBH_18_directio.user_util 0.04 ± 23% -100.0% 0.00 fxmark.hdd_xfs_DRBH_1_directio.idle_sec 0.14 ± 29% 
-100.0% 0.00 fxmark.hdd_xfs_DRBH_1_directio.idle_util 8.03 -100.0% 0.00 fxmark.hdd_xfs_DRBH_1_directio.iowait_sec 28.45 ± 6% -100.0% 0.00 fxmark.hdd_xfs_DRBH_1_directio.iowait_util 2.28 -13.4% 1.98 fxmark.hdd_xfs_DRBH_1_directio.irq_sec 8.09 ± 6% -20.8% 6.41 fxmark.hdd_xfs_DRBH_1_directio.irq_util 3.10 -18.1% 2.54 ± 3% fxmark.hdd_xfs_DRBH_1_directio.softirq_sec 11.00 ± 7% -25.2% 8.23 ± 3% fxmark.hdd_xfs_DRBH_1_directio.softirq_util 13.52 ± 14% +68.3% 22.74 fxmark.hdd_xfs_DRBH_1_directio.sys_sec 47.43 ± 7% +55.3% 73.65 fxmark.hdd_xfs_DRBH_1_directio.sys_util 1.38 ± 16% +162.6% 3.61 ± 5% fxmark.hdd_xfs_DRBH_1_directio.user_sec 4.88 ± 17% +139.8% 11.71 ± 5% fxmark.hdd_xfs_DRBH_1_directio.user_util 338041 -42.5% 194212 fxmark.hdd_xfs_DRBH_1_directio.works 11267 -42.5% 6473 fxmark.hdd_xfs_DRBH_1_directio.works/sec 9.62 +15.9% 11.16 fxmark.hdd_xfs_DRBH_27_directio.softirq_sec 1.19 +16.1% 1.38 fxmark.hdd_xfs_DRBH_27_directio.softirq_util 25.20 +194.0% 74.11 fxmark.hdd_xfs_DRBH_27_directio.sys_sec 3.12 +194.5% 9.18 fxmark.hdd_xfs_DRBH_27_directio.sys_util 2.81 +258.7% 10.07 ± 2% fxmark.hdd_xfs_DRBH_27_directio.user_sec 0.35 +259.2% 1.25 ± 2% fxmark.hdd_xfs_DRBH_27_directio.user_util 15.54 -100.0% 0.00 ±141% fxmark.hdd_xfs_DRBH_2_directio.idle_sec 27.73 ± 3% -100.0% 0.01 ±141% fxmark.hdd_xfs_DRBH_2_directio.idle_util 9.24 ± 2% -99.0% 0.10 ± 45% fxmark.hdd_xfs_DRBH_2_directio.iowait_sec 16.47 ± 2% -99.0% 0.16 ± 45% fxmark.hdd_xfs_DRBH_2_directio.iowait_util 5.40 +13.6% 6.13 fxmark.hdd_xfs_DRBH_2_directio.softirq_sec 20.05 ± 8% +122.5% 44.62 fxmark.hdd_xfs_DRBH_2_directio.sys_sec 35.68 ± 5% +105.6% 73.36 fxmark.hdd_xfs_DRBH_2_directio.sys_util 1.79 ± 3% +212.5% 5.60 ± 7% fxmark.hdd_xfs_DRBH_2_directio.user_sec 3.19 ± 3% +188.2% 9.21 ± 7% fxmark.hdd_xfs_DRBH_2_directio.user_util 613930 -14.3% 525831 fxmark.hdd_xfs_DRBH_2_directio.works 20464 -14.3% 17528 fxmark.hdd_xfs_DRBH_2_directio.works/sec 13.87 +13.8% 15.77 fxmark.hdd_xfs_DRBH_36_directio.softirq_sec 1.29 +13.8% 1.47 
fxmark.hdd_xfs_DRBH_36_directio.softirq_util 30.05 +163.1% 79.06 fxmark.hdd_xfs_DRBH_36_directio.sys_sec 2.80 +163.3% 7.39 fxmark.hdd_xfs_DRBH_36_directio.sys_util 3.17 +228.7% 10.41 ± 3% fxmark.hdd_xfs_DRBH_36_directio.user_sec 0.30 +228.9% 0.97 ± 3% fxmark.hdd_xfs_DRBH_36_directio.user_util 14.18 +13.5% 16.09 fxmark.hdd_xfs_DRBH_45_directio.softirq_sec 1.06 +13.6% 1.20 fxmark.hdd_xfs_DRBH_45_directio.softirq_util 30.46 +162.2% 79.85 fxmark.hdd_xfs_DRBH_45_directio.sys_sec 2.27 +162.4% 5.96 fxmark.hdd_xfs_DRBH_45_directio.sys_util 3.43 +210.4% 10.63 ± 2% fxmark.hdd_xfs_DRBH_45_directio.user_sec 0.26 +210.7% 0.79 ± 2% fxmark.hdd_xfs_DRBH_45_directio.user_util 19.51 -74.8% 4.92 ± 20% fxmark.hdd_xfs_DRBH_4_directio.idle_sec 17.11 -75.4% 4.21 ± 19% fxmark.hdd_xfs_DRBH_4_directio.idle_util 55.35 -52.7% 26.19 ± 2% fxmark.hdd_xfs_DRBH_4_directio.iowait_sec 48.53 -53.8% 22.43 ± 3% fxmark.hdd_xfs_DRBH_4_directio.iowait_util 6.29 +11.5% 7.01 fxmark.hdd_xfs_DRBH_4_directio.irq_sec 7.34 +22.0% 8.96 fxmark.hdd_xfs_DRBH_4_directio.softirq_sec 6.44 +19.2% 7.67 fxmark.hdd_xfs_DRBH_4_directio.softirq_util 23.53 +164.8% 62.31 fxmark.hdd_xfs_DRBH_4_directio.sys_sec 20.63 +158.6% 53.35 fxmark.hdd_xfs_DRBH_4_directio.sys_util 2.03 ± 2% +264.3% 7.41 fxmark.hdd_xfs_DRBH_4_directio.user_sec 1.78 ± 2% +255.7% 6.35 fxmark.hdd_xfs_DRBH_4_directio.user_util 20.25 -18.9% 16.42 fxmark.hdd_xfs_DRBH_54_directio.softirq_sec 1.27 -19.4% 1.02 fxmark.hdd_xfs_DRBH_54_directio.softirq_util 40.95 +97.1% 80.70 fxmark.hdd_xfs_DRBH_54_directio.sys_sec 2.56 +95.9% 5.02 fxmark.hdd_xfs_DRBH_54_directio.sys_util 14.90 -25.2% 11.15 ± 3% fxmark.hdd_xfs_DRBH_54_directio.user_sec 0.93 -25.6% 0.69 ± 3% fxmark.hdd_xfs_DRBH_54_directio.user_util 31.39 +158.4% 81.11 fxmark.hdd_xfs_DRBH_63_directio.sys_sec 1.67 +158.4% 4.33 fxmark.hdd_xfs_DRBH_63_directio.sys_util 4.05 ± 2% +181.2% 11.39 ± 3% fxmark.hdd_xfs_DRBH_63_directio.user_sec 0.22 ± 2% +181.1% 0.61 ± 3% fxmark.hdd_xfs_DRBH_63_directio.user_util 36.80 +135.7% 
86.72 fxmark.hdd_xfs_DRBH_72_directio.sys_sec 1.72 +135.6% 4.06 fxmark.hdd_xfs_DRBH_72_directio.sys_util 9.47 ± 3% +59.6% 15.11 ± 3% fxmark.hdd_xfs_DRBH_72_directio.user_sec 0.44 ± 3% +59.6% 0.71 ± 3% fxmark.hdd_xfs_DRBH_72_directio.user_util 20.09 ± 2% -22.8% 15.50 ± 10% fxmark.hdd_xfs_DRBH_9_directio.idle_sec 7.57 ± 2% -23.1% 5.82 ± 10% fxmark.hdd_xfs_DRBH_9_directio.idle_util 201.58 -23.4% 154.36 fxmark.hdd_xfs_DRBH_9_directio.iowait_sec 75.95 -23.7% 57.97 fxmark.hdd_xfs_DRBH_9_directio.iowait_util 8.02 +17.3% 9.41 fxmark.hdd_xfs_DRBH_9_directio.softirq_sec 3.02 +16.9% 3.53 fxmark.hdd_xfs_DRBH_9_directio.softirq_util 24.79 +178.5% 69.04 fxmark.hdd_xfs_DRBH_9_directio.sys_sec 9.34 +177.6% 25.93 fxmark.hdd_xfs_DRBH_9_directio.sys_util 2.23 ± 3% +287.0% 8.62 fxmark.hdd_xfs_DRBH_9_directio.user_sec 0.84 ± 3% +285.7% 3.24 fxmark.hdd_xfs_DRBH_9_directio.user_util 422.70 +3.1% 435.93 fxmark.time.elapsed_time 422.70 +3.1% 435.93 fxmark.time.elapsed_time.max 8600363 -17.3% 7115503 fxmark.time.file_system_inputs 6835 ± 10% +46.0% 9982 ± 9% fxmark.time.involuntary_context_switches 1092432 -17.0% 906910 fxmark.time.voluntary_context_switches 19970495 -16.0% 16780815 ± 5% cpuidle.POLL.time 347745 -19.1% 281382 ± 4% cpuidle.POLL.usage 1659415 +12.0% 1858153 numa-numastat.node0.local_node 1659381 +12.0% 1858103 numa-numastat.node0.numa_hit 17.22 -22.5% 13.33 iostat.cpu.idle 59.21 -18.3% 48.40 iostat.cpu.iowait 15.94 +74.8% 27.87 iostat.cpu.system 1.78 ± 3% +114.4% 3.81 iostat.cpu.user 26.94 -4.7 22.24 mpstat.cpu.all.idle% 55.03 -9.3 45.69 mpstat.cpu.all.iowait% 10.67 ± 2% +12.0 22.66 mpstat.cpu.all.sys% 1.78 ± 2% +2.0 3.78 mpstat.cpu.all.usr% 16.00 -25.0% 12.00 vmstat.cpu.id 59.50 -17.9% 48.83 vmstat.cpu.wa 75287 -6.0% 70807 vmstat.io.bi 6734 -3.1% 6528 vmstat.io.bo 1442765 +25.5% 1810880 vmstat.memory.cache 2.33 ± 20% +78.6% 4.17 ± 8% vmstat.procs.r 49866 -4.1% 47798 vmstat.system.cs 18886 ± 3% +28.5% 24270 ± 2% slabinfo.filp.active_objs 7665 +40.7% 10783 
slabinfo.kmalloc-2k.active_objs 522.50 +36.2% 711.83 slabinfo.kmalloc-2k.active_slabs 8367 +36.2% 11400 slabinfo.kmalloc-2k.num_objs 522.50 +36.2% 711.83 slabinfo.kmalloc-2k.num_slabs 630.83 +31.5% 829.50 slabinfo.kmalloc-8k.active_objs 675.67 +29.6% 875.33 slabinfo.kmalloc-8k.num_objs 3971 ± 4% -12.3% 3481 ± 8% slabinfo.proc_inode_cache.active_objs 4331 ± 3% -10.2% 3889 ± 7% slabinfo.proc_inode_cache.num_objs 3498 ± 9% +178.9% 9756 numa-vmstat.node0.nr_active_anon 65053 +21.2% 78835 numa-vmstat.node0.nr_anon_pages 95.83 +23.3% 118.17 numa-vmstat.node0.nr_anon_transparent_hugepages 211917 +42.4% 301736 numa-vmstat.node0.nr_file_pages 135208 +72.0% 232530 numa-vmstat.node0.nr_inactive_anon 7948 ± 3% -19.2% 6424 numa-vmstat.node0.nr_mapped 1241 +16.4% 1444 numa-vmstat.node0.nr_page_table_pages 73768 +121.8% 163603 numa-vmstat.node0.nr_shmem 3498 ± 9% +178.9% 9756 numa-vmstat.node0.nr_zone_active_anon 135208 +72.0% 232529 numa-vmstat.node0.nr_zone_inactive_anon 13794 ± 10% +182.5% 38969 numa-meminfo.node0.Active 13794 ± 10% +182.5% 38969 numa-meminfo.node0.Active(anon) 196927 +23.4% 243029 numa-meminfo.node0.AnonHugePages 259988 +21.3% 315285 numa-meminfo.node0.AnonPages 848027 +42.7% 1209988 numa-meminfo.node0.FilePages 541178 +72.4% 933175 numa-meminfo.node0.Inactive 541172 +72.4% 933169 numa-meminfo.node0.Inactive(anon) 30980 ± 3% -19.0% 25107 numa-meminfo.node0.Mapped 2296401 +74.5% 4008127 numa-meminfo.node0.MemUsed 4963 +16.3% 5773 numa-meminfo.node0.PageTables 295428 +122.5% 657458 numa-meminfo.node0.Shmem 13781 ± 10% +182.1% 38874 meminfo.Active 13781 ± 10% +182.1% 38874 meminfo.Active(anon) 196971 +23.4% 243088 meminfo.AnonHugePages 260004 +21.3% 315288 meminfo.AnonPages 1359260 +26.7% 1722280 meminfo.Cached 633128 +71.9% 1088631 meminfo.Committed_AS 541295 +72.6% 934452 meminfo.Inactive 541290 +72.6% 934447 meminfo.Inactive(anon) 44382 ± 2% -12.3% 38919 meminfo.Mapped 2897620 +59.1% 4608971 meminfo.Memused 4963 +16.4% 5776 meminfo.PageTables 295522 +122.9% 
658634 meminfo.Shmem 13185 ±114% +288.5% 51229 ± 18% sched_debug.cfs_rq:/.load.min 22.81 ± 48% +183.6% 64.69 ± 14% sched_debug.cfs_rq:/.load_avg.min 0.21 ± 11% +84.0% 0.38 ± 13% sched_debug.cfs_rq:/.nr_running.avg 215.82 ± 21% +94.8% 420.43 ± 10% sched_debug.cfs_rq:/.runnable_avg.avg 979.69 ± 11% +31.9% 1292 ± 5% sched_debug.cfs_rq:/.runnable_avg.max 131.04 ± 37% +110.1% 275.27 ± 19% sched_debug.cfs_rq:/.runnable_avg.min 176.18 ± 19% +44.7% 254.98 ± 9% sched_debug.cfs_rq:/.runnable_avg.stddev 177.79 ± 17% +58.5% 281.81 ± 7% sched_debug.cfs_rq:/.util_avg.avg 842.38 ± 3% +21.6% 1024 sched_debug.cfs_rq:/.util_avg.max 104.27 ± 37% +69.8% 177.04 ± 21% sched_debug.cfs_rq:/.util_avg.min 149.99 ± 16% +36.6% 204.88 ± 8% sched_debug.cfs_rq:/.util_avg.stddev 35.23 ± 57% +619.4% 253.47 ± 10% sched_debug.cfs_rq:/.util_est_enqueued.avg 366.06 ± 36% +191.2% 1066 sched_debug.cfs_rq:/.util_est_enqueued.max 8.73 ±222% +1455.1% 135.75 ± 12% sched_debug.cfs_rq:/.util_est_enqueued.min 65.74 ± 42% +248.8% 229.27 ± 6% sched_debug.cfs_rq:/.util_est_enqueued.stddev 421568 ± 13% +20.7% 508771 ± 11% sched_debug.cpu.avg_idle.min 37.46 ± 3% -16.7% 31.22 sched_debug.cpu.clock.stddev 0.33 ± 12% +81.7% 0.60 ± 13% sched_debug.cpu.nr_running.avg 2.10 ± 11% +21.8% 2.56 ± 9% sched_debug.cpu.nr_running.max 41051 ± 2% +12.7% 46249 sched_debug.cpu.nr_switches.stddev 0.10 ± 7% -0.0 0.08 ± 8% perf-stat.i.branch-miss-rate% 0.21 ± 6% -0.1 0.16 ± 7% perf-stat.i.cache-miss-rate% 49683 -3.9% 47769 perf-stat.i.context-switches 0.17 ± 7% -24.3% 0.13 ± 10% perf-stat.i.cpi 104.82 ± 3% -8.5% 95.86 ± 2% perf-stat.i.cpu-migrations 38.28 ± 6% -13.5% 33.12 ± 10% perf-stat.i.cycles-between-cache-misses 0.02 ± 7% -0.0 0.02 ± 10% perf-stat.i.iTLB-load-miss-rate% 1.33 ± 5% -25.4% 0.99 ± 6% perf-stat.i.major-faults 3337 -7.7% 3081 perf-stat.i.minor-faults 3338 -7.7% 3082 perf-stat.i.page-faults 28.69 +3.1% 29.57 perf-stat.overall.MPKI 6.86 -0.5 6.37 perf-stat.overall.branch-miss-rate% 15.22 -2.2 12.98 
perf-stat.overall.cache-miss-rate% 11.78 -14.5% 10.08 perf-stat.overall.cpi 2698 -2.7% 2626 perf-stat.overall.cycles-between-cache-misses 1.35 -0.1 1.20 perf-stat.overall.iTLB-load-miss-rate% 73.56 +12.2% 82.52 perf-stat.overall.instructions-per-iTLB-miss 0.08 +16.9% 0.10 perf-stat.overall.ipc 49401 -4.1% 47400 perf-stat.ps.context-switches 39243 -1.6% 38632 perf-stat.ps.cpu-clock 105.09 ± 3% -8.7% 95.95 ± 2% perf-stat.ps.cpu-migrations 1.32 ± 5% -25.3% 0.99 ± 6% perf-stat.ps.major-faults 3329 -7.7% 3073 perf-stat.ps.minor-faults 3331 -7.7% 3074 perf-stat.ps.page-faults 39243 -1.6% 38632 perf-stat.ps.task-clock 3462 ± 10% +180.2% 9703 proc-vmstat.nr_active_anon 64982 +21.4% 78859 proc-vmstat.nr_anon_pages 95.83 +23.8% 118.67 proc-vmstat.nr_anon_transparent_hugepages 1969074 -2.2% 1926210 proc-vmstat.nr_dirty_background_threshold 3942964 -2.2% 3857131 proc-vmstat.nr_dirty_threshold 340110 +26.7% 430962 proc-vmstat.nr_file_pages 19848480 -2.2% 19419218 proc-vmstat.nr_free_pages 135581 +72.6% 234056 proc-vmstat.nr_inactive_anon 11305 ± 2% -12.5% 9890 proc-vmstat.nr_mapped 1242 +16.4% 1445 proc-vmstat.nr_page_table_pages 74175 +122.5% 165050 proc-vmstat.nr_shmem 54453 +6.9% 58200 proc-vmstat.nr_slab_unreclaimable 3462 ± 10% +180.2% 9703 proc-vmstat.nr_zone_active_anon 135581 +72.6% 234056 proc-vmstat.nr_zone_inactive_anon 1682585 +11.9% 1882216 proc-vmstat.numa_hit 1682584 +11.9% 1882215 proc-vmstat.numa_local 127918 ± 37% +52.5% 195065 ± 2% proc-vmstat.numa_pte_updates 36660 ± 20% -43.1% 20848 proc-vmstat.pgactivate 1815043 +11.2% 2018000 proc-vmstat.pgalloc_normal 94923 +181.1% 266864 proc-vmstat.pgdeactivate 1519750 -3.9% 1460276 proc-vmstat.pgfault 1714835 -27.0% 1252410 proc-vmstat.pgfree 31796238 -3.0% 30848515 proc-vmstat.pgpgin 136917 +2.3% 140094 proc-vmstat.pgreuse 77387 -22.0% 60332 proc-vmstat.slabs_scanned 4489856 +1.5% 4558720 proc-vmstat.unevictable_pgs_scanned 55797 ± 10% +144.3% 136315 ± 3% softirqs.CPU0.RCU 7042 ± 32% +78.3% 12555 ± 6% 
softirqs.CPU0.TIMER 47705 ± 18% +148.2% 118408 ± 6% softirqs.CPU1.RCU 39916 ± 6% -12.3% 34996 ± 6% softirqs.CPU1.SCHED 30751 ± 14% +141.1% 74134 ± 7% softirqs.CPU10.RCU 31373 ± 14% +127.3% 71311 ± 5% softirqs.CPU11.RCU 31147 ± 13% +135.1% 73240 ± 7% softirqs.CPU12.RCU 31794 ± 8% +121.3% 70359 ± 2% softirqs.CPU13.RCU 32297 ± 13% +141.3% 77923 ± 8% softirqs.CPU14.RCU 31270 ± 16% +124.1% 70067 ± 4% softirqs.CPU15.RCU 28106 ± 12% +105.1% 57657 ± 3% softirqs.CPU16.RCU 29182 ± 15% +106.5% 60263 ± 7% softirqs.CPU17.RCU 21390 ± 45% +24.7% 26670 ± 34% softirqs.CPU17.SCHED 26529 ± 12% +129.9% 60998 ± 4% softirqs.CPU18.RCU 26869 ± 14% +118.5% 58721 ± 4% softirqs.CPU19.RCU 42573 ± 15% +139.4% 101901 ± 6% softirqs.CPU2.RCU 27072 ± 11% +120.6% 59730 softirqs.CPU20.RCU 26933 ± 12% +117.6% 58601 ± 6% softirqs.CPU21.RCU 26943 ± 11% +127.5% 61307 ± 11% softirqs.CPU22.RCU 27159 ± 10% +115.7% 58579 ± 2% softirqs.CPU23.RCU 26878 ± 14% +116.5% 58188 ± 3% softirqs.CPU24.RCU 26682 ± 12% +128.6% 60985 ± 16% softirqs.CPU25.RCU 26935 ± 12% +117.8% 58660 ± 6% softirqs.CPU26.RCU 22797 ± 11% +110.7% 48035 ± 5% softirqs.CPU27.RCU 23921 ± 15% +109.6% 50138 ± 12% softirqs.CPU28.RCU 22953 ± 10% +118.9% 50241 ± 11% softirqs.CPU29.RCU 41432 ± 15% +130.1% 95337 ± 3% softirqs.CPU3.RCU 22785 ± 12% +127.8% 51898 ± 15% softirqs.CPU30.RCU 23938 ± 19% +93.5% 46325 softirqs.CPU31.RCU 22679 ± 12% +97.9% 44881 ± 10% softirqs.CPU32.RCU 21791 ± 10% +102.6% 44140 ± 6% softirqs.CPU33.RCU 21633 ± 12% +166.8% 57712 ± 26% softirqs.CPU34.RCU 22156 ± 22% +107.7% 46019 ± 14% softirqs.CPU35.RCU 18669 ± 9% +103.8% 38055 ± 7% softirqs.CPU36.RCU 18684 ± 9% +96.1% 36634 softirqs.CPU37.RCU 18272 ± 10% +125.0% 41104 ± 7% softirqs.CPU38.RCU 20002 ± 12% +96.4% 39285 ± 17% softirqs.CPU39.RCU 37060 ± 14% +148.4% 92048 ± 7% softirqs.CPU4.RCU 18438 ± 9% +123.1% 41136 ± 9% softirqs.CPU40.RCU 19532 ± 12% +92.9% 37674 ± 7% softirqs.CPU41.RCU 18464 ± 8% +106.1% 38060 ± 5% softirqs.CPU42.RCU 18394 ± 8% +110.8% 38783 ± 6% 
softirqs.CPU43.RCU 18483 ± 9% +127.9% 42119 ± 14% softirqs.CPU44.RCU 14672 ± 5% +98.7% 29152 ± 7% softirqs.CPU45.RCU 9319 ± 4% +11.3% 10372 ± 7% softirqs.CPU45.SCHED 14395 ± 3% +114.4% 30860 ± 15% softirqs.CPU46.RCU 15039 ± 12% +81.9% 27352 ± 2% softirqs.CPU47.RCU 13375 ± 4% +159.0% 34644 ± 23% softirqs.CPU48.RCU 13377 ± 3% +107.8% 27801 ± 17% softirqs.CPU49.RCU 38383 ± 14% +112.5% 81556 ± 2% softirqs.CPU5.RCU 13280 ± 4% +140.0% 31874 ± 19% softirqs.CPU50.RCU 13741 ± 12% +86.1% 25573 softirqs.CPU51.RCU 13880 ± 10% +111.3% 29323 ± 18% softirqs.CPU52.RCU 14224 ± 6% +115.7% 30685 ± 22% softirqs.CPU53.RCU 9898 ± 3% +107.8% 20565 ± 12% softirqs.CPU54.RCU 9875 ± 4% +98.1% 19561 ± 4% softirqs.CPU55.RCU 10156 ± 4% +84.0% 18692 ± 2% softirqs.CPU56.RCU 10053 ± 3% +90.2% 19122 ± 6% softirqs.CPU57.RCU 9985 ± 4% +127.7% 22738 ± 15% softirqs.CPU58.RCU 9932 ± 4% +114.5% 21299 ± 16% softirqs.CPU59.RCU 37614 ± 16% +117.6% 81853 ± 8% softirqs.CPU6.RCU 10274 ± 6% +89.4% 19462 ± 5% softirqs.CPU60.RCU 10097 ± 4% +90.5% 19231 ± 6% softirqs.CPU61.RCU 10045 ± 5% +104.0% 20491 ± 11% softirqs.CPU62.RCU 6993 ± 9% +63.2% 11416 ± 7% softirqs.CPU63.RCU 5868 ± 6% +75.6% 10303 ± 7% softirqs.CPU64.RCU 5953 ± 7% +73.2% 10310 ± 11% softirqs.CPU65.RCU 5965 ± 6% +144.7% 14598 ± 45% softirqs.CPU66.RCU 5957 ± 5% +74.7% 10411 ± 13% softirqs.CPU67.RCU 5980 ± 6% +101.3% 12040 ± 21% softirqs.CPU68.RCU 6153 ± 11% +77.5% 10921 ± 10% softirqs.CPU69.RCU 36924 ± 14% +132.3% 85788 ± 12% softirqs.CPU7.RCU 6067 ± 8% +66.2% 10083 ± 4% softirqs.CPU71.RCU 37038 ± 15% +118.3% 80838 ± 6% softirqs.CPU8.RCU 32361 ± 14% +114.0% 69238 ± 2% softirqs.CPU9.RCU 1961760 ± 8% +95.1% 3827733 softirqs.RCU 50835 +46.3% 74363 ± 3% softirqs.TIMER 41.27 ± 3% -41.3 0.00 perf-profile.calltrace.cycles-pp.secondary_startup_64_no_verify 40.48 ± 3% -40.5 0.00 perf-profile.calltrace.cycles-pp.start_secondary.secondary_startup_64_no_verify 40.48 ± 3% -40.5 0.00 
perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify 40.39 ± 3% -40.4 0.00 perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify 36.27 ± 2% -36.3 0.00 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe 36.03 -36.0 0.00 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe 35.48 ± 2% -35.5 0.00 perf-profile.calltrace.cycles-pp.ksys_pread64.do_syscall_64.entry_SYSCALL_64_after_hwframe 35.33 ± 2% -35.3 0.00 perf-profile.calltrace.cycles-pp.vfs_read.ksys_pread64.do_syscall_64.entry_SYSCALL_64_after_hwframe 34.75 ± 2% -34.8 0.00 perf-profile.calltrace.cycles-pp.new_sync_read.vfs_read.ksys_pread64.do_syscall_64.entry_SYSCALL_64_after_hwframe 29.03 ± 44% -29.0 0.00 perf-profile.calltrace.cycles-pp.xfs_file_read_iter.new_sync_read.vfs_read.ksys_pread64.do_syscall_64 28.91 ± 44% -28.9 0.00 perf-profile.calltrace.cycles-pp.xfs_file_dio_read.xfs_file_read_iter.new_sync_read.vfs_read.ksys_pread64 28.18 ± 44% -28.2 0.00 perf-profile.calltrace.cycles-pp.iomap_dio_rw.xfs_file_dio_read.xfs_file_read_iter.new_sync_read.vfs_read 28.16 ± 44% -28.2 0.00 perf-profile.calltrace.cycles-pp.__iomap_dio_rw.iomap_dio_rw.xfs_file_dio_read.xfs_file_read_iter.new_sync_read 21.22 -21.2 0.00 perf-profile.calltrace.cycles-pp.__schedule.schedule.io_schedule.__iomap_dio_rw.iomap_dio_rw 19.03 ± 5% -19.0 0.00 perf-profile.calltrace.cycles-pp.cpuidle_enter.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify 18.54 ± 5% -18.5 0.00 perf-profile.calltrace.cycles-pp.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry.start_secondary 17.88 ± 44% -17.9 0.00 perf-profile.calltrace.cycles-pp.io_schedule.__iomap_dio_rw.iomap_dio_rw.xfs_file_dio_read.xfs_file_read_iter 17.84 ± 44% -17.8 0.00 perf-profile.calltrace.cycles-pp.schedule.io_schedule.__iomap_dio_rw.iomap_dio_rw.xfs_file_dio_read 12.48 ± 2% -12.5 0.00 
perf-profile.calltrace.cycles-pp.flush_smp_call_function_from_idle.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify 12.42 ± 4% -12.4 0.00 perf-profile.calltrace.cycles-pp.perf_prepare_sample.perf_event_output_forward.__perf_event_overflow.perf_swevent_overflow.perf_tp_event 12.23 ± 2% -12.2 0.00 perf-profile.calltrace.cycles-pp.do_softirq.flush_smp_call_function_from_idle.do_idle.cpu_startup_entry.start_secondary 12.15 ± 2% -12.1 0.00 perf-profile.calltrace.cycles-pp.__softirqentry_text_start.do_softirq.flush_smp_call_function_from_idle.do_idle.cpu_startup_entry 11.92 ± 2% -11.9 0.00 perf-profile.calltrace.cycles-pp.blk_complete_reqs.__softirqentry_text_start.do_softirq.flush_smp_call_function_from_idle.do_idle 11.28 ± 2% -11.3 0.00 perf-profile.calltrace.cycles-pp.scsi_io_completion.blk_complete_reqs.__softirqentry_text_start.do_softirq.flush_smp_call_function_from_idle 11.22 ± 3% -11.2 0.00 perf-profile.calltrace.cycles-pp.scsi_end_request.scsi_io_completion.blk_complete_reqs.__softirqentry_text_start.do_softirq 11.12 -11.1 0.00 perf-profile.calltrace.cycles-pp.pick_next_task_fair.__schedule.schedule.io_schedule.__iomap_dio_rw 11.05 -11.0 0.00 perf-profile.calltrace.cycles-pp.newidle_balance.pick_next_task_fair.__schedule.schedule.io_schedule 10.97 ± 4% -11.0 0.00 perf-profile.calltrace.cycles-pp.perf_callchain.perf_prepare_sample.perf_event_output_forward.__perf_event_overflow.perf_swevent_overflow 10.85 ± 4% -10.9 0.00 perf-profile.calltrace.cycles-pp.get_perf_callchain.perf_callchain.perf_prepare_sample.perf_event_output_forward.__perf_event_overflow 10.13 -10.1 0.00 perf-profile.calltrace.cycles-pp.load_balance.newidle_balance.pick_next_task_fair.__schedule.schedule 9.56 ± 3% -9.6 0.00 perf-profile.calltrace.cycles-pp.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry 9.32 -9.3 0.00 perf-profile.calltrace.cycles-pp.find_busiest_group.load_balance.newidle_balance.pick_next_task_fair.__schedule 
9.20 -9.2 0.00 perf-profile.calltrace.cycles-pp.update_sd_lb_stats.find_busiest_group.load_balance.newidle_balance.pick_next_task_fair 9.06 ± 3% -9.1 0.00 perf-profile.calltrace.cycles-pp.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter.do_idle 9.03 ± 5% -9.0 0.00 perf-profile.calltrace.cycles-pp.perf_callchain_kernel.get_perf_callchain.perf_callchain.perf_prepare_sample.perf_event_output_forward 7.38 -7.4 0.00 perf-profile.calltrace.cycles-pp.ret_from_fork 7.38 -7.4 0.00 perf-profile.calltrace.cycles-pp.kthread.ret_from_fork 7.07 -7.1 0.00 perf-profile.calltrace.cycles-pp.worker_thread.kthread.ret_from_fork 6.48 ± 4% -6.5 0.00 perf-profile.calltrace.cycles-pp.__perf_event_overflow.perf_swevent_overflow.perf_tp_event.perf_trace_sched_switch.__schedule 6.48 ± 4% -6.5 0.00 perf-profile.calltrace.cycles-pp.__sysvec_apic_timer_interrupt.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter 6.38 ± 4% -6.4 0.00 perf-profile.calltrace.cycles-pp.perf_event_output_forward.__perf_event_overflow.perf_swevent_overflow.perf_tp_event.perf_trace_sched_switch 6.15 ± 4% -6.2 0.00 perf-profile.calltrace.cycles-pp.hrtimer_interrupt.__sysvec_apic_timer_interrupt.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state 5.71 ± 8% -5.7 0.00 perf-profile.calltrace.cycles-pp.blk_mq_flush_plug_list.blk_flush_plug_list.blk_finish_plug.__iomap_dio_rw.iomap_dio_rw 5.63 ± 8% -5.6 0.00 perf-profile.calltrace.cycles-pp.blk_mq_sched_insert_requests.blk_mq_flush_plug_list.blk_flush_plug_list.blk_finish_plug.__iomap_dio_rw 5.52 ± 3% -5.5 0.00 perf-profile.calltrace.cycles-pp.dequeue_task_fair.__schedule.schedule.io_schedule.__iomap_dio_rw 5.44 ± 4% -5.4 0.00 perf-profile.calltrace.cycles-pp.blk_update_request.scsi_end_request.scsi_io_completion.blk_complete_reqs.__softirqentry_text_start 5.40 ± 3% -5.4 0.00 
perf-profile.calltrace.cycles-pp.dequeue_entity.dequeue_task_fair.__schedule.schedule.io_schedule 4.94 ± 44% -4.9 0.00 perf-profile.calltrace.cycles-pp.iomap_apply.__iomap_dio_rw.iomap_dio_rw.xfs_file_dio_read.xfs_file_read_iter 4.88 ± 45% -4.9 0.00 perf-profile.calltrace.cycles-pp.blk_finish_plug.__iomap_dio_rw.iomap_dio_rw.xfs_file_dio_read.xfs_file_read_iter 4.86 ± 45% -4.9 0.00 perf-profile.calltrace.cycles-pp.blk_flush_plug_list.blk_finish_plug.__iomap_dio_rw.iomap_dio_rw.xfs_file_dio_read 41.44 -41.4 0.00 perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe 41.27 ± 3% -41.3 0.00 perf-profile.children.cycles-pp.secondary_startup_64_no_verify 41.27 ± 3% -41.3 0.00 perf-profile.children.cycles-pp.cpu_startup_entry 41.27 ± 3% -41.3 0.00 perf-profile.children.cycles-pp.do_idle 41.03 -41.0 0.00 perf-profile.children.cycles-pp.do_syscall_64 40.48 ± 3% -40.5 0.00 perf-profile.children.cycles-pp.start_secondary 36.25 -36.3 0.00 perf-profile.children.cycles-pp.vfs_read 35.49 ± 2% -35.5 0.00 perf-profile.children.cycles-pp.ksys_pread64 34.99 ± 2% -35.0 0.00 perf-profile.children.cycles-pp.new_sync_read 34.58 ± 2% -34.6 0.00 perf-profile.children.cycles-pp.xfs_file_read_iter 33.61 ± 2% -33.6 0.00 perf-profile.children.cycles-pp.iomap_dio_rw 33.59 ± 2% -33.6 0.00 perf-profile.children.cycles-pp.__iomap_dio_rw 28.95 ± 44% -28.9 0.00 perf-profile.children.cycles-pp.xfs_file_dio_read 28.58 -28.6 0.00 perf-profile.children.cycles-pp.__schedule 25.31 -25.3 0.00 perf-profile.children.cycles-pp.schedule 21.34 -21.3 0.00 perf-profile.children.cycles-pp.io_schedule 19.34 ± 4% -19.3 0.00 perf-profile.children.cycles-pp.cpuidle_enter 19.25 ± 4% -19.3 0.00 perf-profile.children.cycles-pp.cpuidle_enter_state 17.26 ± 3% -17.3 0.00 perf-profile.children.cycles-pp.perf_tp_event 16.62 ± 3% -16.6 0.00 perf-profile.children.cycles-pp.perf_swevent_overflow 16.48 ± 3% -16.5 0.00 perf-profile.children.cycles-pp.__perf_event_overflow 16.26 ± 3% -16.3 0.00 
perf-profile.children.cycles-pp.perf_event_output_forward 15.05 -15.0 0.00 perf-profile.children.cycles-pp.__softirqentry_text_start 13.59 ± 3% -13.6 0.00 perf-profile.children.cycles-pp.perf_prepare_sample 13.46 -13.5 0.00 perf-profile.children.cycles-pp.blk_complete_reqs 12.78 -12.8 0.00 perf-profile.children.cycles-pp.flush_smp_call_function_from_idle 12.73 ± 2% -12.7 0.00 perf-profile.children.cycles-pp.scsi_io_completion 12.68 ± 2% -12.7 0.00 perf-profile.children.cycles-pp.scsi_end_request 12.52 -12.5 0.00 perf-profile.children.cycles-pp.do_softirq 12.20 -12.2 0.00 perf-profile.children.cycles-pp.pick_next_task_fair 12.01 ± 3% -12.0 0.00 perf-profile.children.cycles-pp.perf_callchain 11.89 ± 3% -11.9 0.00 perf-profile.children.cycles-pp.get_perf_callchain 11.60 ± 2% -11.6 0.00 perf-profile.children.cycles-pp.asm_sysvec_apic_timer_interrupt 11.31 -11.3 0.00 perf-profile.children.cycles-pp.newidle_balance 10.82 -10.8 0.00 perf-profile.children.cycles-pp.load_balance 10.79 ± 2% -10.8 0.00 perf-profile.children.cycles-pp.sysvec_apic_timer_interrupt 10.03 ± 4% -10.0 0.00 perf-profile.children.cycles-pp.perf_callchain_kernel 9.87 -9.9 0.00 perf-profile.children.cycles-pp.find_busiest_group 9.76 -9.8 0.00 perf-profile.children.cycles-pp.update_sd_lb_stats 8.79 ± 3% -8.8 0.00 perf-profile.children.cycles-pp.try_to_wake_up 8.08 ± 15% -8.1 0.00 perf-profile.children.cycles-pp.cmd_record 7.88 ± 3% -7.9 0.00 perf-profile.children.cycles-pp.__sysvec_apic_timer_interrupt 7.51 ± 3% -7.5 0.00 perf-profile.children.cycles-pp.hrtimer_interrupt 7.39 -7.4 0.00 perf-profile.children.cycles-pp.ret_from_fork 7.38 -7.4 0.00 perf-profile.children.cycles-pp.kthread 7.32 ± 4% -7.3 0.00 perf-profile.children.cycles-pp.perf_trace_sched_switch 7.27 ± 4% -7.3 0.00 perf-profile.children.cycles-pp.unwind_next_frame 7.12 ± 6% -7.1 0.00 perf-profile.children.cycles-pp.__blk_mq_run_hw_queue 7.08 ± 6% -7.1 0.00 perf-profile.children.cycles-pp.blk_mq_sched_dispatch_requests 7.08 -7.1 0.00 
perf-profile.children.cycles-pp.worker_thread 7.00 ± 6% -7.0 0.00 perf-profile.children.cycles-pp.__blk_mq_sched_dispatch_requests 6.18 ± 3% -6.2 0.00 perf-profile.children.cycles-pp.blk_update_request 6.07 ± 3% -6.1 0.00 perf-profile.children.cycles-pp.dequeue_task_fair 5.93 ± 3% -5.9 0.00 perf-profile.children.cycles-pp.iomap_apply 5.82 ± 3% -5.8 0.00 perf-profile.children.cycles-pp.dequeue_entity 5.78 ± 8% -5.8 0.00 perf-profile.children.cycles-pp.blk_finish_plug 5.76 ± 8% -5.8 0.00 perf-profile.children.cycles-pp.blk_flush_plug_list 5.71 ± 8% -5.7 0.00 perf-profile.children.cycles-pp.blk_mq_flush_plug_list 5.68 ± 6% -5.7 0.00 perf-profile.children.cycles-pp.blk_mq_dispatch_rq_list 5.63 ± 8% -5.6 0.00 perf-profile.children.cycles-pp.blk_mq_sched_insert_requests 5.62 ± 3% -5.6 0.00 perf-profile.children.cycles-pp.update_curr 5.60 ± 4% -5.6 0.00 perf-profile.children.cycles-pp.perf_trace_sched_wakeup_template 5.52 ± 3% -5.5 0.00 perf-profile.children.cycles-pp.iomap_dio_bio_end_io 5.13 ± 4% -5.1 0.00 perf-profile.children.cycles-pp.perf_trace_sched_stat_runtime 4.77 ± 8% -4.8 0.00 perf-profile.children.cycles-pp.__blk_mq_delay_run_hw_queue 6.11 -6.1 0.00 perf-profile.self.cycles-pp.update_sd_lb_stats 543393 +123.6% 1214974 ± 10% interrupts.CAL:Function_call_interrupts 616.83 ± 33% -100.0% 0.00 interrupts.CPU0.NMI:Non-maskable_interrupts 616.83 ± 33% -100.0% 0.00 interrupts.CPU0.PMI:Performance_monitoring_interrupts 6571 ± 17% +227.3% 21510 ± 24% interrupts.CPU1.CAL:Function_call_interrupts 394.83 ± 6% -100.0% 0.00 interrupts.CPU1.NMI:Non-maskable_interrupts 394.83 ± 6% -100.0% 0.00 interrupts.CPU1.PMI:Performance_monitoring_interrupts 4209 ± 7% +289.0% 16374 ± 29% interrupts.CPU10.CAL:Function_call_interrupts 509.33 ± 32% -100.0% 0.00 interrupts.CPU10.NMI:Non-maskable_interrupts 509.33 ± 32% -100.0% 0.00 interrupts.CPU10.PMI:Performance_monitoring_interrupts 65.83 ± 16% +112.9% 140.17 ± 43% interrupts.CPU10.RES:Rescheduling_interrupts 4663 ± 20% +275.0% 17490 ± 
20%  interrupts.CPU11.CAL:Function_call_interrupts
    752.50 ± 69%    -100.0%       0.00        interrupts.CPU11.NMI:Non-maskable_interrupts
    752.50 ± 69%    -100.0%       0.00        interrupts.CPU11.PMI:Performance_monitoring_interrupts
      4340 ±  8%    +180.2%      12161 ± 41%  interrupts.CPU12.CAL:Function_call_interrupts
    635.00 ± 27%    -100.0%       0.00        interrupts.CPU12.NMI:Non-maskable_interrupts
    635.00 ± 27%    -100.0%       0.00        interrupts.CPU12.PMI:Performance_monitoring_interrupts
      4648 ± 19%    +216.5%      14713 ± 18%  interrupts.CPU13.CAL:Function_call_interrupts
    508.33 ± 35%    -100.0%       0.00        interrupts.CPU13.NMI:Non-maskable_interrupts
    508.33 ± 35%    -100.0%       0.00        interrupts.CPU13.PMI:Performance_monitoring_interrupts
      4835 ± 20%    +291.4%      18922 ± 35%  interrupts.CPU14.CAL:Function_call_interrupts
    723.50 ± 61%    -100.0%       0.00        interrupts.CPU14.NMI:Non-maskable_interrupts
    723.50 ± 61%    -100.0%       0.00        interrupts.CPU14.PMI:Performance_monitoring_interrupts
      4650 ± 11%    +216.9%      14735 ± 26%  interrupts.CPU15.CAL:Function_call_interrupts
    523.00 ± 40%    -100.0%       0.00        interrupts.CPU15.NMI:Non-maskable_interrupts
    523.00 ± 40%    -100.0%       0.00        interrupts.CPU15.PMI:Performance_monitoring_interrupts
      1283 ± 69%    -100.0%       0.00        interrupts.CPU16.NMI:Non-maskable_interrupts
      1283 ± 69%    -100.0%       0.00        interrupts.CPU16.PMI:Performance_monitoring_interrupts
    508.50 ±121%    +214.4%       1598 ± 96%  interrupts.CPU16.RES:Rescheduling_interrupts
      1626 ±  5%     +18.5%       1928 ± 21%  interrupts.CPU17.CAL:Function_call_interrupts
    685.50 ± 78%    -100.0%       0.00        interrupts.CPU17.NMI:Non-maskable_interrupts
    685.50 ± 78%    -100.0%       0.00        interrupts.CPU17.PMI:Performance_monitoring_interrupts
      3385 ±  4%    +244.6%      11666 ± 26%  interrupts.CPU18.CAL:Function_call_interrupts
    442.67 ± 32%    -100.0%       0.00        interrupts.CPU18.NMI:Non-maskable_interrupts
    442.67 ± 32%    -100.0%       0.00        interrupts.CPU18.PMI:Performance_monitoring_interrupts
      3192 ±  5%    +143.8%       7784 ± 18%  interrupts.CPU19.CAL:Function_call_interrupts
    566.83 ± 34%    -100.0%       0.00        interrupts.CPU19.NMI:Non-maskable_interrupts
    566.83 ± 34%    -100.0%       0.00        interrupts.CPU19.PMI:Performance_monitoring_interrupts
     23332 ±  8%    +600.3%     163397 ±  7%  interrupts.CPU2.CAL:Function_call_interrupts
    765.33 ± 54%    -100.0%       0.00        interrupts.CPU2.NMI:Non-maskable_interrupts
    765.33 ± 54%    -100.0%       0.00        interrupts.CPU2.PMI:Performance_monitoring_interrupts
    481.00 ± 11%     +57.0%     755.00 ± 18%  interrupts.CPU2.RES:Rescheduling_interrupts
      3300 ±  4%    +189.4%       9553 ± 22%  interrupts.CPU20.CAL:Function_call_interrupts
    444.00 ± 31%    -100.0%       0.00        interrupts.CPU20.NMI:Non-maskable_interrupts
    444.00 ± 31%    -100.0%       0.00        interrupts.CPU20.PMI:Performance_monitoring_interrupts
      3269 ±  3%    +144.3%       7987 ± 50%  interrupts.CPU21.CAL:Function_call_interrupts
    633.00 ± 29%    -100.0%       0.00        interrupts.CPU21.NMI:Non-maskable_interrupts
    633.00 ± 29%    -100.0%       0.00        interrupts.CPU21.PMI:Performance_monitoring_interrupts
      3337 ±  8%    +222.8%      10775 ± 46%  interrupts.CPU22.CAL:Function_call_interrupts
    380.50 ±  2%    -100.0%       0.00        interrupts.CPU22.NMI:Non-maskable_interrupts
    380.50 ±  2%    -100.0%       0.00        interrupts.CPU22.PMI:Performance_monitoring_interrupts
      3379 ±  7%    +151.4%       8498 ± 31%  interrupts.CPU23.CAL:Function_call_interrupts
    472.50 ± 31%    -100.0%       0.00        interrupts.CPU23.NMI:Non-maskable_interrupts
    472.50 ± 31%    -100.0%       0.00        interrupts.CPU23.PMI:Performance_monitoring_interrupts
      3462 ±  9%    +128.2%       7901 ± 38%  interrupts.CPU24.CAL:Function_call_interrupts
    461.83 ± 41%    -100.0%       0.00        interrupts.CPU24.NMI:Non-maskable_interrupts
    461.83 ± 41%    -100.0%       0.00        interrupts.CPU24.PMI:Performance_monitoring_interrupts
      3337 ± 14%    +188.4%       9627 ± 69%  interrupts.CPU25.CAL:Function_call_interrupts
    536.67 ± 43%    -100.0%       0.00        interrupts.CPU25.NMI:Non-maskable_interrupts
    536.67 ± 43%    -100.0%       0.00        interrupts.CPU25.PMI:Performance_monitoring_interrupts
      3131 ± 13%    +162.6%       8224 ± 49%  interrupts.CPU26.CAL:Function_call_interrupts
    570.17 ± 31%    -100.0%       0.00        interrupts.CPU26.NMI:Non-maskable_interrupts
    570.17 ± 31%    -100.0%       0.00        interrupts.CPU26.PMI:Performance_monitoring_interrupts
    510.83 ± 36%    -100.0%       0.00        interrupts.CPU27.NMI:Non-maskable_interrupts
    510.83 ± 36%    -100.0%       0.00        interrupts.CPU27.PMI:Performance_monitoring_interrupts
    566.83 ± 33%    -100.0%       0.00        interrupts.CPU28.NMI:Non-maskable_interrupts
    566.83 ± 33%    -100.0%       0.00        interrupts.CPU28.PMI:Performance_monitoring_interrupts
      2678 ±  7%    +134.4%       6278 ± 59%  interrupts.CPU29.CAL:Function_call_interrupts
    373.67 ±  2%    -100.0%       0.00        interrupts.CPU29.NMI:Non-maskable_interrupts
    373.67 ±  2%    -100.0%       0.00        interrupts.CPU29.PMI:Performance_monitoring_interrupts
     22321 ±  8%    +526.1%     139744 ±  9%  interrupts.CPU3.CAL:Function_call_interrupts
    631.17 ± 26%    -100.0%       0.00        interrupts.CPU3.NMI:Non-maskable_interrupts
    631.17 ± 26%    -100.0%       0.00        interrupts.CPU3.PMI:Performance_monitoring_interrupts
      2766 ±  8%    +216.1%       8743 ± 76%  interrupts.CPU30.CAL:Function_call_interrupts
    437.83 ± 30%    -100.0%       0.00        interrupts.CPU30.NMI:Non-maskable_interrupts
    437.83 ± 30%    -100.0%       0.00        interrupts.CPU30.PMI:Performance_monitoring_interrupts
    558.33 ± 32%    -100.0%       0.00        interrupts.CPU31.NMI:Non-maskable_interrupts
    558.33 ± 32%    -100.0%       0.00        interrupts.CPU31.PMI:Performance_monitoring_interrupts
    618.00 ± 39%    -100.0%       0.00        interrupts.CPU32.NMI:Non-maskable_interrupts
    618.00 ± 39%    -100.0%       0.00        interrupts.CPU32.PMI:Performance_monitoring_interrupts
    382.67 ±  6%    -100.0%       0.00        interrupts.CPU33.NMI:Non-maskable_interrupts
    382.67 ±  6%    -100.0%       0.00        interrupts.CPU33.PMI:Performance_monitoring_interrupts
      2641 ± 16%    +480.9%      15342 ± 78%  interrupts.CPU34.CAL:Function_call_interrupts
    562.67 ± 34%    -100.0%       0.00        interrupts.CPU34.NMI:Non-maskable_interrupts
    562.67 ± 34%    -100.0%       0.00        interrupts.CPU34.PMI:Performance_monitoring_interrupts
    619.83 ± 67%    -100.0%       0.00        interrupts.CPU35.NMI:Non-maskable_interrupts
    619.83 ± 67%    -100.0%       0.00        interrupts.CPU35.PMI:Performance_monitoring_interrupts
    394.17 ±  9%    -100.0%       0.00        interrupts.CPU36.NMI:Non-maskable_interrupts
    394.17 ±  9%    -100.0%       0.00        interrupts.CPU36.PMI:Performance_monitoring_interrupts
      1036 ± 78%    -100.0%       0.00        interrupts.CPU37.NMI:Non-maskable_interrupts
      1036 ± 78%    -100.0%       0.00        interrupts.CPU37.PMI:Performance_monitoring_interrupts
    511.00 ± 37%    -100.0%       0.00        interrupts.CPU38.NMI:Non-maskable_interrupts
    511.00 ± 37%    -100.0%       0.00        interrupts.CPU38.PMI:Performance_monitoring_interrupts
    443.17 ± 33%     -99.7%       1.33 ±223%  interrupts.CPU39.NMI:Non-maskable_interrupts
    443.17 ± 33%     -99.7%       1.33 ±223%  interrupts.CPU39.PMI:Performance_monitoring_interrupts
      8695 ± 15%    +445.5%      47432 ± 10%  interrupts.CPU4.CAL:Function_call_interrupts
    524.50 ± 31%    -100.0%       0.00        interrupts.CPU4.NMI:Non-maskable_interrupts
    524.50 ± 31%    -100.0%       0.00        interrupts.CPU4.PMI:Performance_monitoring_interrupts
    510.00 ± 36%    -100.0%       0.00        interrupts.CPU40.NMI:Non-maskable_interrupts
    510.00 ± 36%    -100.0%       0.00        interrupts.CPU40.PMI:Performance_monitoring_interrupts
    499.33 ± 35%    -100.0%       0.00        interrupts.CPU41.NMI:Non-maskable_interrupts
    499.33 ± 35%    -100.0%       0.00        interrupts.CPU41.PMI:Performance_monitoring_interrupts
      2264 ±  5%     +95.8%       4433 ± 51%  interrupts.CPU42.CAL:Function_call_interrupts
    520.83 ± 30%    -100.0%       0.00        interrupts.CPU42.NMI:Non-maskable_interrupts
    520.83 ± 30%    -100.0%       0.00        interrupts.CPU42.PMI:Performance_monitoring_interrupts
    904.00 ± 84%    -100.0%       0.00        interrupts.CPU43.NMI:Non-maskable_interrupts
    904.00 ± 84%    -100.0%       0.00        interrupts.CPU43.PMI:Performance_monitoring_interrupts
    397.67 ± 10%    -100.0%       0.00        interrupts.CPU44.NMI:Non-maskable_interrupts
    397.67 ± 10%    -100.0%       0.00        interrupts.CPU44.PMI:Performance_monitoring_interrupts
    902.50 ± 91%    -100.0%       0.00        interrupts.CPU45.NMI:Non-maskable_interrupts
    902.50 ± 91%    -100.0%       0.00        interrupts.CPU45.PMI:Performance_monitoring_interrupts
    690.50 ± 70%    -100.0%       0.00        interrupts.CPU46.NMI:Non-maskable_interrupts
    690.50 ± 70%    -100.0%       0.00        interrupts.CPU46.PMI:Performance_monitoring_interrupts
    502.33 ± 32%    -100.0%       0.00        interrupts.CPU47.NMI:Non-maskable_interrupts
    502.33 ± 32%    -100.0%       0.00        interrupts.CPU47.PMI:Performance_monitoring_interrupts
    601.17 ± 26%    -100.0%       0.00        interrupts.CPU48.NMI:Non-maskable_interrupts
    601.17 ± 26%    -100.0%       0.00        interrupts.CPU48.PMI:Performance_monitoring_interrupts
    564.00 ± 34%    -100.0%       0.00        interrupts.CPU49.NMI:Non-maskable_interrupts
    564.00 ± 34%    -100.0%       0.00        interrupts.CPU49.PMI:Performance_monitoring_interrupts
      8558 ± 18%    +311.3%      35202 ± 13%  interrupts.CPU5.CAL:Function_call_interrupts
    657.67 ± 28%    -100.0%       0.00        interrupts.CPU5.NMI:Non-maskable_interrupts
    657.67 ± 28%    -100.0%       0.00        interrupts.CPU5.PMI:Performance_monitoring_interrupts
      2137 ± 17%    +200.3%       6419 ± 83%  interrupts.CPU50.CAL:Function_call_interrupts
    581.33 ± 35%    -100.0%       0.00        interrupts.CPU50.NMI:Non-maskable_interrupts
    581.33 ± 35%    -100.0%       0.00        interrupts.CPU50.PMI:Performance_monitoring_interrupts
    558.33 ± 33%     -99.8%       1.17 ±223%  interrupts.CPU51.NMI:Non-maskable_interrupts
    558.33 ± 33%     -99.8%       1.17 ±223%  interrupts.CPU51.PMI:Performance_monitoring_interrupts
    859.17 ± 93%    -100.0%       0.00        interrupts.CPU52.NMI:Non-maskable_interrupts
    859.17 ± 93%    -100.0%       0.00        interrupts.CPU52.PMI:Performance_monitoring_interrupts
    835.83 ± 21%    -100.0%       0.00        interrupts.CPU53.NMI:Non-maskable_interrupts
    835.83 ± 21%    -100.0%       0.00        interrupts.CPU53.PMI:Performance_monitoring_interrupts
      8262 ± 17%    +386.4%      40189 ± 10%  interrupts.CPU6.CAL:Function_call_interrupts
    537.17 ± 38%    -100.0%       0.00        interrupts.CPU6.NMI:Non-maskable_interrupts
    537.17 ± 38%    -100.0%       0.00        interrupts.CPU6.PMI:Performance_monitoring_interrupts
    124.67 ± 31%    +148.0%     309.17 ± 47%  interrupts.CPU6.RES:Rescheduling_interrupts
      1676 ±  2%     +38.2%       2316 ± 29%  interrupts.CPU60.CAL:Function_call_interrupts
      1493           +32.4%       1976 ± 23%  interrupts.CPU69.CAL:Function_call_interrupts
      8081 ± 12%    +408.7%      41110 ± 17%  interrupts.CPU7.CAL:Function_call_interrupts
    598.00 ± 32%    -100.0%       0.00        interrupts.CPU7.NMI:Non-maskable_interrupts
    598.00 ± 32%    -100.0%       0.00        interrupts.CPU7.PMI:Performance_monitoring_interrupts
    167.83 ± 39%    +102.2%     339.33 ± 40%  interrupts.CPU7.RES:Rescheduling_interrupts
     22704 ± 10%     -15.4%      19197 ± 13%  interrupts.CPU72.LOC:Local_timer_interrupts
     21150 ±  7%     -16.9%      17577 ±  5%  interrupts.CPU73.LOC:Local_timer_interrupts
      6560 ± 19%    +180.1%      18375 ± 24%  interrupts.CPU8.CAL:Function_call_interrupts
    675.83 ± 58%    -100.0%       0.00        interrupts.CPU8.NMI:Non-maskable_interrupts
    675.83 ± 58%    -100.0%       0.00        interrupts.CPU8.PMI:Performance_monitoring_interrupts
     53.00 ± 27%     +69.2%      89.67 ± 24%  interrupts.CPU8.TLB:TLB_shootdowns
      5863 ± 27%    +151.4%      14741 ± 19%  interrupts.CPU9.CAL:Function_call_interrupts
      1115 ± 88%    -100.0%       0.00        interrupts.CPU9.NMI:Non-maskable_interrupts
      1115 ± 88%    -100.0%       0.00        interrupts.CPU9.PMI:Performance_monitoring_interrupts
    173.17           -99.9%       0.17 ±223%  interrupts.IWI:IRQ_work_interrupts
     32594 ±  4%    -100.0%       7.33 ±  6%  interrupts.NMI:Non-maskable_interrupts
     32594 ±  4%    -100.0%       7.33 ±  6%  interrupts.PMI:Performance_monitoring_interrupts
     52040 ±  6%    +773.3%     454473        interrupts.RES:Rescheduling_interrupts
      3224 ±  5%     +80.3%       5815 ±  9%  interrupts.TLB:TLB_shootdowns
      0.01 ± 41%    -100.0%       0.00        perf-sched.sch_delay.avg.ms.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown]
      0.04 ± 23%    -100.0%       0.00        perf-sched.sch_delay.avg.ms.devkmsg_read.vfs_read.ksys_read.do_syscall_64
      0.02 ± 27%    -100.0%       0.00        perf-sched.sch_delay.avg.ms.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep.do_syscall_64
      0.05 ± 30%    -100.0%       0.00        perf-sched.sch_delay.avg.ms.do_syslog.part.0.kmsg_read.vfs_read
      0.00 ± 31%    -100.0%       0.00        perf-sched.sch_delay.avg.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64
      0.03 ±  5%    -100.0%       0.00        perf-sched.sch_delay.avg.ms.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64
      0.01 ± 11%    -100.0%       0.00        perf-sched.sch_delay.avg.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_exc_page_fault.[unknown]
      0.02 ±112%    -100.0%       0.00        perf-sched.sch_delay.avg.ms.exit_to_user_mode_prepare.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe.[unknown]
      0.04 ± 41%    -100.0%       0.00        perf-sched.sch_delay.avg.ms.futex_wait_queue_me.futex_wait.do_futex.__x64_sys_futex
      0.05           -100.0%       0.00        perf-sched.sch_delay.avg.ms.io_schedule.__iomap_dio_rw.iomap_dio_rw.xfs_file_dio_read
      0.01 ±  6%    -100.0%       0.00        perf-sched.sch_delay.avg.ms.pipe_read.new_sync_read.vfs_read.ksys_read
      0.02 ±  6%    -100.0%       0.00        perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.__kmalloc.security_task_alloc.copy_process
      0.02 ± 10%    -100.0%       0.00        perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.copy_pte_range.copy_page_range.dup_mmap
      0.00 ± 20%    -100.0%       0.00        perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.do_user_addr_fault.exc_page_fault.asm_exc_page_fault
      0.01 ± 35%    -100.0%       0.00        perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.down_read.walk_component.link_path_walk
      0.01 ± 20%    -100.0%       0.00        perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.down_read.walk_component.path_lookupat
      0.10 ± 59%    -100.0%       0.00        perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.down_read.xfs_ilock.xfs_ilock_iocb
      0.12 ± 73%    -100.0%       0.00        perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.down_read.xfs_ilock_for_iomap.xfs_read_iomap_begin
      0.01 ± 34%    -100.0%       0.00        perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.down_write.vma_link.mmap_region
      0.01 ± 15%    -100.0%       0.00        perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.down_write_killable.vm_mmap_pgoff.elf_map
      0.01 ± 25%    -100.0%       0.00        perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.dput.path_openat.do_filp_open
      0.01 ±  8%    -100.0%       0.00        perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.dput.step_into.walk_component
      0.01 ± 22%    -100.0%       0.00        perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.dput.terminate_walk.path_openat
      0.02 ±  3%    -100.0%       0.00        perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.drm_gem_put_pages.drm_gem_shmem_put_pages_locked.drm_gem_shmem_put_pages
      0.01 ± 30%    -100.0%       0.00        perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.filemap_read.__kernel_read.exec_binprm
      0.01 ± 23%    -100.0%       0.00        perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.__alloc_file.alloc_empty_file
      0.01 ± 35%    -100.0%       0.00        perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.__d_alloc.d_alloc
      0.01 ± 34%    -100.0%       0.00        perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.getname_flags.do_sys_openat2
      0.00 ± 27%    -100.0%       0.00        perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.vm_area_alloc.mmap_region
      0.01 ± 24%    -100.0%       0.00        perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.vm_area_dup.__split_vma
      0.02 ± 10%    -100.0%       0.00        perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.vm_area_dup.dup_mmap
      0.01 ± 28%    -100.0%       0.00        perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc_trace.perf_event_mmap.mmap_region
      0.08 ± 28%    -100.0%       0.00        perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.mempool_alloc.bio_alloc_bioset.iomap_dio_bio_actor
     13.91 ± 33%    -100.0%       0.00        perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.mutex_lock.drm_gem_shmem_vunmap.mgag200_handle_damage
      0.01 ± 28%    -100.0%       0.00        perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.mutex_lock.perf_poll.do_sys_poll
      0.02 ±  4%    -100.0%       0.00        perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.pagecache_get_page.shmem_getpage_gfp.shmem_read_mapping_page_gfp
      0.02 ± 18%    -100.0%       0.00        perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.rcu_gp_kthread.kthread.ret_from_fork
      0.00 ± 45%    -100.0%       0.00        perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.remove_vma.exit_mmap.mmput
      0.12 ± 95%    -100.0%       0.00        perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.set_page_dirty_lock.bio_set_pages_dirty.iomap_dio_bio_actor
      0.12 ±120%    -100.0%       0.00        perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.smpboot_thread_fn.kthread.ret_from_fork
      0.01 ±  8%    -100.0%       0.00        perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.stop_one_cpu.sched_exec.bprm_execve
      0.02 ± 40%    -100.0%       0.00        perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.unmap_vmas.exit_mmap.mmput
      0.01 ± 20%    -100.0%       0.00        perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.unmap_vmas.unmap_region.__do_munmap
      0.01 ± 32%    -100.0%       0.00        perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.wp_page_copy.__handle_mm_fault.handle_mm_fault
      0.03 ± 70%    -100.0%       0.00        perf-sched.sch_delay.avg.ms.rcu_gp_kthread.kthread.ret_from_fork
      0.03 ± 20%    -100.0%       0.00        perf-sched.sch_delay.avg.ms.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait
      0.09 ± 33%    -100.0%       0.00        perf-sched.sch_delay.avg.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_select
      0.01 ± 22%    -100.0%       0.00        perf-sched.sch_delay.avg.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_sys_poll
      0.02 ±  7%    -100.0%       0.00        perf-sched.sch_delay.avg.ms.schedule_timeout.kcompactd.kthread.ret_from_fork
      0.02 ± 23%    -100.0%       0.00        perf-sched.sch_delay.avg.ms.schedule_timeout.rcu_gp_kthread.kthread.ret_from_fork
      0.02 ± 32%    -100.0%       0.00        perf-sched.sch_delay.avg.ms.schedule_timeout.wait_for_completion.__flush_work.lru_add_drain_all
      0.06 ± 23%    -100.0%       0.00        perf-sched.sch_delay.avg.ms.smpboot_thread_fn.kthread.ret_from_fork
      0.01 ± 16%    -100.0%       0.00        perf-sched.sch_delay.avg.ms.wait_for_partner.fifo_open.do_dentry_open.do_open.isra
      0.01 ±  3%    -100.0%       0.00        perf-sched.sch_delay.avg.ms.worker_thread.kthread.ret_from_fork
      0.06 ± 68%    -100.0%       0.00        perf-sched.sch_delay.max.ms.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown]
      0.11 ± 54%    -100.0%       0.00        perf-sched.sch_delay.max.ms.devkmsg_read.vfs_read.ksys_read.do_syscall_64
      0.07 ± 48%    -100.0%       0.00        perf-sched.sch_delay.max.ms.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep.do_syscall_64
      0.15 ± 72%    -100.0%       0.00        perf-sched.sch_delay.max.ms.do_syslog.part.0.kmsg_read.vfs_read
      0.07 ± 39%    -100.0%       0.00        perf-sched.sch_delay.max.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64
      0.16 ± 27%    -100.0%       0.00        perf-sched.sch_delay.max.ms.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64
      0.03 ± 44%    -100.0%       0.00        perf-sched.sch_delay.max.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_exc_page_fault.[unknown]
      0.08 ± 49%    -100.0%       0.00        perf-sched.sch_delay.max.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_call_function_single.[unknown]
      2.22 ±175%    -100.0%       0.00        perf-sched.sch_delay.max.ms.exit_to_user_mode_prepare.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe.[unknown]
      0.10 ± 64%    -100.0%       0.00        perf-sched.sch_delay.max.ms.futex_wait_queue_me.futex_wait.do_futex.__x64_sys_futex
     88.07 ± 25%    -100.0%       0.00        perf-sched.sch_delay.max.ms.io_schedule.__iomap_dio_rw.iomap_dio_rw.xfs_file_dio_read
      0.95 ± 62%    -100.0%       0.00        perf-sched.sch_delay.max.ms.pipe_read.new_sync_read.vfs_read.ksys_read
      0.02 ±  8%    -100.0%       0.00        perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.__kmalloc.security_task_alloc.copy_process
      0.02 ± 21%    -100.0%       0.00        perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.copy_pte_range.copy_page_range.dup_mmap
      0.02 ± 19%    -100.0%       0.00        perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.do_user_addr_fault.exc_page_fault.asm_exc_page_fault
      0.02 ± 17%    -100.0%       0.00        perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.down_read.walk_component.link_path_walk
      0.01 ±  3%    -100.0%       0.00        perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.down_read.walk_component.path_lookupat
      0.29 ±118%    -100.0%       0.00        perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.down_read.xfs_ilock.xfs_ilock_iocb
      0.18 ± 93%    -100.0%       0.00        perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.down_read.xfs_ilock_for_iomap.xfs_read_iomap_begin
      0.02 ± 32%    -100.0%       0.00        perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.down_write.vma_link.mmap_region
      0.02 ± 13%    -100.0%       0.00        perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.down_write_killable.vm_mmap_pgoff.elf_map
      0.02 ± 31%    -100.0%       0.00        perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.dput.path_openat.do_filp_open
      0.02 ± 14%    -100.0%       0.00        perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.dput.step_into.walk_component
      0.02 ± 26%    -100.0%       0.00        perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.dput.terminate_walk.path_openat
      0.02 ±  7%    -100.0%       0.00        perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.drm_gem_put_pages.drm_gem_shmem_put_pages_locked.drm_gem_shmem_put_pages
      0.02 ± 26%    -100.0%       0.00        perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.filemap_read.__kernel_read.exec_binprm
      0.02 ± 13%    -100.0%       0.00        perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.__alloc_file.alloc_empty_file
      0.01 ±  2%    -100.0%       0.00        perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.__d_alloc.d_alloc
      0.01 ±  8%    -100.0%       0.00        perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.getname_flags.do_sys_openat2
      0.02 ± 18%    -100.0%       0.00        perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.vm_area_alloc.mmap_region
      0.02 ± 42%    -100.0%       0.00        perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.vm_area_dup.__split_vma
      0.03 ± 30%    -100.0%       0.00        perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.vm_area_dup.dup_mmap
      0.02 ± 24%    -100.0%       0.00        perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc_trace.perf_event_mmap.mmap_region
      0.13 ± 75%    -100.0%       0.00        perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.mempool_alloc.bio_alloc_bioset.iomap_dio_bio_actor
     40.43 ±  9%    -100.0%       0.00        perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.mutex_lock.drm_gem_shmem_vunmap.mgag200_handle_damage
      0.03 ± 18%    -100.0%       0.00        perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.mutex_lock.perf_poll.do_sys_poll
      0.03 ± 19%    -100.0%       0.00        perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.pagecache_get_page.shmem_getpage_gfp.shmem_read_mapping_page_gfp
      0.02 ±  7%    -100.0%       0.00        perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.rcu_gp_kthread.kthread.ret_from_fork
      0.02 ± 19%    -100.0%       0.00        perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.remove_vma.exit_mmap.mmput
      0.19 ±118%    -100.0%       0.00        perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.set_page_dirty_lock.bio_set_pages_dirty.iomap_dio_bio_actor
      0.82 ±177%    -100.0%       0.00        perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.smpboot_thread_fn.kthread.ret_from_fork
      0.07 ± 36%    -100.0%       0.00        perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.stop_one_cpu.affine_move_task.__set_cpus_allowed_ptr
      0.03 ± 20%    -100.0%       0.00        perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.stop_one_cpu.sched_exec.bprm_execve
      0.02 ± 27%    -100.0%       0.00        perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.unmap_vmas.exit_mmap.mmput
      0.01 ±  3%    -100.0%       0.00        perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.unmap_vmas.unmap_region.__do_munmap
      0.02 ± 22%    -100.0%       0.00        perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.wp_page_copy.__handle_mm_fault.handle_mm_fault
      2.08 ±133%    -100.0%       0.00        perf-sched.sch_delay.max.ms.rcu_gp_kthread.kthread.ret_from_fork
      0.09 ± 27%    -100.0%       0.00        perf-sched.sch_delay.max.ms.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait
      0.18 ± 52%    -100.0%       0.00        perf-sched.sch_delay.max.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_select
      0.06 ± 60%    -100.0%       0.00        perf-sched.sch_delay.max.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_sys_poll
      0.04 ± 27%    -100.0%       0.00        perf-sched.sch_delay.max.ms.schedule_timeout.kcompactd.kthread.ret_from_fork
      4.09 ±146%    -100.0%       0.00        perf-sched.sch_delay.max.ms.schedule_timeout.rcu_gp_kthread.kthread.ret_from_fork
      0.02 ± 21%    -100.0%       0.00        perf-sched.sch_delay.max.ms.schedule_timeout.wait_for_completion.__flush_work.lru_add_drain_all
     42.31 ± 36%    -100.0%       0.00        perf-sched.sch_delay.max.ms.smpboot_thread_fn.kthread.ret_from_fork
      0.07 ± 97%    -100.0%       0.00        perf-sched.sch_delay.max.ms.wait_for_partner.fifo_open.do_dentry_open.do_open.isra
     32.80 ± 47%    -100.0%       0.00        perf-sched.sch_delay.max.ms.worker_thread.kthread.ret_from_fork
      0.03           -100.0%       0.00        perf-sched.total_sch_delay.average.ms
     88.07 ± 25%    -100.0%       0.00        perf-sched.total_sch_delay.max.ms
      5.64           -100.0%       0.00        perf-sched.total_wait_and_delay.average.ms
    503697           -100.0%       0.00        perf-sched.total_wait_and_delay.count.ms
      6059 ± 13%    -100.0%       0.00        perf-sched.total_wait_and_delay.max.ms
      5.61           -100.0%       0.00        perf-sched.total_wait_time.average.ms
      6059 ± 13%    -100.0%       0.00        perf-sched.total_wait_time.max.ms
    431.86 ± 40%    -100.0%       0.00        perf-sched.wait_and_delay.avg.ms.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown]
    391.41 ± 57%    -100.0%       0.00        perf-sched.wait_and_delay.avg.ms.devkmsg_read.vfs_read.ksys_read.do_syscall_64
    484.10 ± 24%    -100.0%       0.00        perf-sched.wait_and_delay.avg.ms.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep.do_syscall_64
    394.02 ± 62%    -100.0%       0.00        perf-sched.wait_and_delay.avg.ms.do_syslog.part.0.kmsg_read.vfs_read
    102.10 ± 13%    -100.0%       0.00        perf-sched.wait_and_delay.avg.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64
     50.13 ± 30%    -100.0%       0.00        perf-sched.wait_and_delay.avg.ms.exit_to_user_mode_prepare.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe.[unknown]
    265.46 ±105%    -100.0%       0.00        perf-sched.wait_and_delay.avg.ms.futex_wait_queue_me.futex_wait.do_futex.__x64_sys_futex
      2.06           -100.0%       0.00        perf-sched.wait_and_delay.avg.ms.io_schedule.__iomap_dio_rw.iomap_dio_rw.xfs_file_dio_read
     50.46 ±  3%    -100.0%       0.00        perf-sched.wait_and_delay.avg.ms.pipe_read.new_sync_read.vfs_read.ksys_read
    141.30 ± 32%    -100.0%       0.00        perf-sched.wait_and_delay.avg.ms.preempt_schedule_common.__cond_resched.drm_gem_put_pages.drm_gem_shmem_put_pages_locked.drm_gem_shmem_put_pages
    510.84 ± 61%    -100.0%       0.00        perf-sched.wait_and_delay.avg.ms.preempt_schedule_common.__cond_resched.mutex_lock.drm_gem_shmem_vunmap.mgag200_handle_damage
    442.36 ± 23%    -100.0%       0.00        perf-sched.wait_and_delay.avg.ms.preempt_schedule_common.__cond_resched.mutex_lock.perf_poll.do_sys_poll
    156.53 ± 10%    -100.0%       0.00        perf-sched.wait_and_delay.avg.ms.preempt_schedule_common.__cond_resched.pagecache_get_page.shmem_getpage_gfp.shmem_read_mapping_page_gfp
    207.09 ± 45%    -100.0%       0.00        perf-sched.wait_and_delay.avg.ms.preempt_schedule_common.__cond_resched.smpboot_thread_fn.kthread.ret_from_fork
     13.85           -100.0%       0.00        perf-sched.wait_and_delay.avg.ms.preempt_schedule_common.__cond_resched.stop_one_cpu.affine_move_task.__set_cpus_allowed_ptr
    213.05 ± 46%    -100.0%       0.00        perf-sched.wait_and_delay.avg.ms.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait
    380.44 ± 15%    -100.0%       0.00        perf-sched.wait_and_delay.avg.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_sys_poll
    478.44           -100.0%       0.00        perf-sched.wait_and_delay.avg.ms.schedule_timeout.kcompactd.kthread.ret_from_fork
      8.79 ± 16%    -100.0%       0.00        perf-sched.wait_and_delay.avg.ms.schedule_timeout.rcu_gp_kthread.kthread.ret_from_fork
    324.78 ±  7%    -100.0%       0.00        perf-sched.wait_and_delay.avg.ms.smpboot_thread_fn.kthread.ret_from_fork
      4.17 ±  4%    -100.0%       0.00        perf-sched.wait_and_delay.avg.ms.worker_thread.kthread.ret_from_fork
     20.33 ±  3%    -100.0%       0.00        perf-sched.wait_and_delay.count.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown]
      9.00 ± 12%    -100.0%       0.00        perf-sched.wait_and_delay.count.devkmsg_read.vfs_read.ksys_read.do_syscall_64
     12.50 ± 10%    -100.0%       0.00        perf-sched.wait_and_delay.count.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep.do_syscall_64
      8.83 ± 13%    -100.0%       0.00        perf-sched.wait_and_delay.count.do_syslog.part.0.kmsg_read.vfs_read
    242.50           -100.0%       0.00        perf-sched.wait_and_delay.count.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64
    258.50 ±  6%    -100.0%       0.00        perf-sched.wait_and_delay.count.exit_to_user_mode_prepare.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe.[unknown]
      6.67 ± 26%    -100.0%       0.00        perf-sched.wait_and_delay.count.futex_wait_queue_me.futex_wait.do_futex.__x64_sys_futex
    257396           -100.0%       0.00        perf-sched.wait_and_delay.count.io_schedule.__iomap_dio_rw.iomap_dio_rw.xfs_file_dio_read
      2917 ±  2%    -100.0%       0.00        perf-sched.wait_and_delay.count.pipe_read.new_sync_read.vfs_read.ksys_read
      3.50 ± 51%    -100.0%       0.00        perf-sched.wait_and_delay.count.preempt_schedule_common.__cond_resched.drm_gem_put_pages.drm_gem_shmem_put_pages_locked.drm_gem_shmem_put_pages
      3.50 ± 35%    -100.0%       0.00        perf-sched.wait_and_delay.count.preempt_schedule_common.__cond_resched.mutex_lock.drm_gem_shmem_vunmap.mgag200_handle_damage
     13.17 ± 42%    -100.0%       0.00        perf-sched.wait_and_delay.count.preempt_schedule_common.__cond_resched.mutex_lock.perf_poll.do_sys_poll
      9.33 ± 60%    -100.0%       0.00        perf-sched.wait_and_delay.count.preempt_schedule_common.__cond_resched.pagecache_get_page.shmem_getpage_gfp.shmem_read_mapping_page_gfp
      8.17 ± 34%    -100.0%       0.00        perf-sched.wait_and_delay.count.preempt_schedule_common.__cond_resched.smpboot_thread_fn.kthread.ret_from_fork
    652.00           -100.0%       0.00        perf-sched.wait_and_delay.count.preempt_schedule_common.__cond_resched.stop_one_cpu.affine_move_task.__set_cpus_allowed_ptr
     29.00 ± 32%    -100.0%       0.00        perf-sched.wait_and_delay.count.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait
     35.50           -100.0%       0.00        perf-sched.wait_and_delay.count.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_sys_poll
     39.50           -100.0%       0.00        perf-sched.wait_and_delay.count.schedule_timeout.kcompactd.kthread.ret_from_fork
      1055 ± 13%    -100.0%       0.00        perf-sched.wait_and_delay.count.schedule_timeout.rcu_gp_kthread.kthread.ret_from_fork
      3188 ±  6%    -100.0%       0.00        perf-sched.wait_and_delay.count.smpboot_thread_fn.kthread.ret_from_fork
    233390           -100.0%       0.00        perf-sched.wait_and_delay.count.worker_thread.kthread.ret_from_fork
    998.86           -100.0%       0.00        perf-sched.wait_and_delay.max.ms.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown]
      2321 ± 45%    -100.0%       0.00        perf-sched.wait_and_delay.max.ms.devkmsg_read.vfs_read.ksys_read.do_syscall_64
      1000           -100.0%       0.00        perf-sched.wait_and_delay.max.ms.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep.do_syscall_64
      2214 ± 47%    -100.0%       0.00        perf-sched.wait_and_delay.max.ms.do_syslog.part.0.kmsg_read.vfs_read
      1000           -100.0%       0.00        perf-sched.wait_and_delay.max.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64
      1306 ± 52%    -100.0%       0.00        perf-sched.wait_and_delay.max.ms.exit_to_user_mode_prepare.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe.[unknown]
      1069 ±112%    -100.0%       0.00        perf-sched.wait_and_delay.max.ms.futex_wait_queue_me.futex_wait.do_futex.__x64_sys_futex
    219.50 ± 25%    -100.0%       0.00        perf-sched.wait_and_delay.max.ms.io_schedule.__iomap_dio_rw.iomap_dio_rw.xfs_file_dio_read
      2684 ± 13%    -100.0%       0.00        perf-sched.wait_and_delay.max.ms.pipe_read.new_sync_read.vfs_read.ksys_read
    180.44 ± 23%    -100.0%       0.00        perf-sched.wait_and_delay.max.ms.preempt_schedule_common.__cond_resched.drm_gem_put_pages.drm_gem_shmem_put_pages_locked.drm_gem_shmem_put_pages
      1026 ± 56%    -100.0%       0.00        perf-sched.wait_and_delay.max.ms.preempt_schedule_common.__cond_resched.mutex_lock.drm_gem_shmem_vunmap.mgag200_handle_damage
      1683 ± 16%    -100.0%       0.00        perf-sched.wait_and_delay.max.ms.preempt_schedule_common.__cond_resched.mutex_lock.perf_poll.do_sys_poll
    207.45           -100.0%       0.00        perf-sched.wait_and_delay.max.ms.preempt_schedule_common.__cond_resched.pagecache_get_page.shmem_getpage_gfp.shmem_read_mapping_page_gfp
    665.25 ± 69%    -100.0%       0.00        perf-sched.wait_and_delay.max.ms.preempt_schedule_common.__cond_resched.smpboot_thread_fn.kthread.ret_from_fork
      1001           -100.0%       0.00        perf-sched.wait_and_delay.max.ms.preempt_schedule_common.__cond_resched.stop_one_cpu.affine_move_task.__set_cpus_allowed_ptr
      1418 ± 71%    -100.0%       0.00        perf-sched.wait_and_delay.max.ms.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait
      1000           -100.0%       0.00        perf-sched.wait_and_delay.max.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_sys_poll
    505.01           -100.0%       0.00        perf-sched.wait_and_delay.max.ms.schedule_timeout.kcompactd.kthread.ret_from_fork
    187.12 ± 47%    -100.0%       0.00        perf-sched.wait_and_delay.max.ms.schedule_timeout.rcu_gp_kthread.kthread.ret_from_fork
      2022 ±  9%    -100.0%       0.00        perf-sched.wait_and_delay.max.ms.smpboot_thread_fn.kthread.ret_from_fork
      6059 ± 13%    -100.0%       0.00        perf-sched.wait_and_delay.max.ms.worker_thread.kthread.ret_from_fork
    431.84 ± 40%    -100.0%       0.00        perf-sched.wait_time.avg.ms.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown]
    391.37 ± 57%    -100.0%       0.00        perf-sched.wait_time.avg.ms.devkmsg_read.vfs_read.ksys_read.do_syscall_64
    484.08 ± 24%    -100.0%       0.00        perf-sched.wait_time.avg.ms.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep.do_syscall_64
    393.98 ± 62%    -100.0%       0.00        perf-sched.wait_time.avg.ms.do_syslog.part.0.kmsg_read.vfs_read
    102.10 ± 13%    -100.0%       0.00        perf-sched.wait_time.avg.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64
      5.26 ± 34%    -100.0%       0.00        perf-sched.wait_time.avg.ms.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64
     22.72 ± 58%    -100.0%       0.00        perf-sched.wait_time.avg.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_exc_page_fault.[unknown]
     11.52 ±221%    -100.0%       0.00        perf-sched.wait_time.avg.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_apic_timer_interrupt.[unknown]
      3.65 ± 37%    -100.0%       0.00        perf-sched.wait_time.avg.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_call_function_single.[unknown]
      0.36 ± 77%    -100.0%       0.00        perf-sched.wait_time.avg.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_reschedule_ipi.[unknown]
     50.12 ± 30%    -100.0%       0.00        perf-sched.wait_time.avg.ms.exit_to_user_mode_prepare.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe.[unknown]
    265.42 ±105%    -100.0%       0.00        perf-sched.wait_time.avg.ms.futex_wait_queue_me.futex_wait.do_futex.__x64_sys_futex
      2.01           -100.0%       0.00        perf-sched.wait_time.avg.ms.io_schedule.__iomap_dio_rw.iomap_dio_rw.xfs_file_dio_read
     50.45 ±  3%    -100.0%       0.00        perf-sched.wait_time.avg.ms.pipe_read.new_sync_read.vfs_read.ksys_read
     42.46 ±222%    -100.0%       0.00        perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.__alloc_pages_nodemask.alloc_pages_vma.do_anonymous_page
      0.13 ± 18%    -100.0%       0.00        perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.__alloc_pages_nodemask.alloc_pages_vma.shmem_alloc_page
     24.19 ±221%    -100.0%       0.00        perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.__alloc_pages_nodemask.alloc_pages_vma.wp_page_copy
      0.11 ±  4%    -100.0%       0.00        perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.__do_fault.do_fault.__handle_mm_fault
    260.79 ±110%    -100.0%       0.00        perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.__kmalloc.security_task_alloc.copy_process
      0.11 ± 12%    -100.0%       0.00        perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.change_p4d_range.change_protection.change_prot_numa
      0.08 ± 20%    -100.0%       0.00        perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.change_p4d_range.change_protection.mprotect_fixup
    115.72 ±136%    -100.0%       0.00        perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.copy_pte_range.copy_page_range.dup_mmap
     15.33 ± 92%    -100.0%       0.00        perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.do_user_addr_fault.exc_page_fault.asm_exc_page_fault
      2.27 ± 26%    -100.0%       0.00        perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.down_read.xfs_ilock.xfs_ilock_iocb
      2.06 ±  3%    -100.0%       0.00        perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.down_read.xfs_ilock_for_iomap.xfs_read_iomap_begin
      0.11 ± 34%    -100.0%       0.00        perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.down_write.__vma_adjust.__split_vma
      0.10 ±  2%    -100.0%       0.00        perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.down_write.generic_file_write_iter.new_sync_write
      0.07 ± 26%    -100.0%       0.00        perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.down_write.vma_link.mmap_region
      0.09 ± 35%    -100.0%       0.00        perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.down_write_killable.vm_mmap_pgoff.ksys_mmap_pgoff
     75.09 ±142%    -100.0%       0.00        perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.dput.__fput.task_work_run
      0.05 ± 33%    -100.0%       0.00        perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.dput.path_openat.do_filp_open
     16.71 ±223%    -100.0%       0.00        perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.dput.step_into.walk_component
      9.88 ±221%    -100.0%       0.00        perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.dput.terminate_walk.path_openat
    141.28 ± 32%    -100.0%       0.00        perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.drm_gem_put_pages.drm_gem_shmem_put_pages_locked.drm_gem_shmem_put_pages
      0.98 ±199%    -100.0%       0.00        perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.generic_perform_write.__generic_file_write_iter.generic_file_write_iter
     62.61 ± 74%    -100.0%       0.00        perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.__alloc_file.alloc_empty_file
      0.12 ± 11%    -100.0%       0.00        perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.__d_alloc.d_alloc_cursor
      0.86 ±206%    -100.0%       0.00        perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.getname_flags.do_sys_openat2
     23.89 ±223%    -100.0%       0.00        perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.security_file_alloc.__alloc_file
      0.08 ± 23%    -100.0%       0.00        perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.vm_area_alloc.mmap_region
      0.05 ± 35%    -100.0%       0.00        perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.vm_area_dup.__split_vma
    378.01 ±106%    -100.0%       0.00        perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.vm_area_dup.dup_mmap
      1.99           -100.0%       0.00        perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.mempool_alloc.bio_alloc_bioset.iomap_dio_bio_actor
      0.13 ± 15%    -100.0%       0.00        perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.mmput.m_stop.seq_read_iter
      0.10 ±  2%    -100.0%       0.00        perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.mutex_lock.__fdget_pos.ksys_write
    496.93 ± 62%    -100.0%       0.00        perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.mutex_lock.drm_gem_shmem_vunmap.mgag200_handle_damage
      0.06 ± 12%    -100.0%       0.00        perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.mutex_lock.perf_event_ctx_lock_nested.isra
    442.35 ± 23%    -100.0%       0.00        perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.mutex_lock.perf_poll.do_sys_poll
     71.69 ±140%    -100.0%       0.00        perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.mutex_lock.pipe_read.new_sync_read
0.09 ± 21% -100.0% 0.00 perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.mutex_lock.pipe_write.new_sync_write 0.09 ± 17% -100.0% 0.00 perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.mutex_lock_interruptible.devkmsg_read.vfs_read 156.51 ± 10% -100.0% 0.00 perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.pagecache_get_page.shmem_getpage_gfp.shmem_read_mapping_page_gfp 0.78 ±193% -100.0% 0.00 perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.pagecache_get_page.shmem_getpage_gfp.shmem_write_begin 5.49 ± 98% -100.0% 0.00 perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.rcu_gp_kthread.kthread.ret_from_fork 23.88 ±222% -100.0% 0.00 perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.remove_vma.exit_mmap.mmput 2.00 ± 11% -100.0% 0.00 perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.set_page_dirty_lock.bio_set_pages_dirty.iomap_dio_bio_actor 206.97 ± 45% -100.0% 0.00 perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.smpboot_thread_fn.kthread.ret_from_fork 13.85 -100.0% 0.00 perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.stop_one_cpu.affine_move_task.__set_cpus_allowed_ptr 0.01 ± 59% -100.0% 0.00 perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.stop_one_cpu.sched_exec.bprm_execve 0.08 ± 43% -100.0% 0.00 perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.task_work_run.exit_to_user_mode_prepare.syscall_exit_to_user_mode 116.73 ±159% -100.0% 0.00 perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.unmap_page_range.unmap_vmas.exit_mmap 0.05 ± 21% -100.0% 0.00 perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.unmap_vmas.unmap_region.__do_munmap 2.55 ±214% -100.0% 0.00 perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.vfs_write.ksys_write.do_syscall_64 82.61 ± 59% -100.0% 0.00 
perf-sched.wait_time.avg.ms.preempt_schedule_common.__cond_resched.zap_pte_range.unmap_page_range.unmap_vmas 3.59 ± 9% -100.0% 0.00 perf-sched.wait_time.avg.ms.rcu_gp_kthread.kthread.ret_from_fork 213.02 ± 46% -100.0% 0.00 perf-sched.wait_time.avg.ms.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait 380.43 ± 15% -100.0% 0.00 perf-sched.wait_time.avg.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_sys_poll 478.42 -100.0% 0.00 perf-sched.wait_time.avg.ms.schedule_timeout.kcompactd.kthread.ret_from_fork 0.04 ± 38% -100.0% 0.00 perf-sched.wait_time.avg.ms.schedule_timeout.khugepaged.kthread.ret_from_fork 8.77 ± 16% -100.0% 0.00 perf-sched.wait_time.avg.ms.schedule_timeout.rcu_gp_kthread.kthread.ret_from_fork 0.03 ± 7% -100.0% 0.00 perf-sched.wait_time.avg.ms.schedule_timeout.wait_for_completion.stop_one_cpu.affine_move_task 324.72 ± 7% -100.0% 0.00 perf-sched.wait_time.avg.ms.smpboot_thread_fn.kthread.ret_from_fork 2.41 ±217% -100.0% 0.00 perf-sched.wait_time.avg.ms.wait_for_partner.fifo_open.do_dentry_open.do_open.isra 4.15 ± 4% -100.0% 0.00 perf-sched.wait_time.avg.ms.worker_thread.kthread.ret_from_fork 998.82 -100.0% 0.00 perf-sched.wait_time.max.ms.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown] 2321 ± 45% -100.0% 0.00 perf-sched.wait_time.max.ms.devkmsg_read.vfs_read.ksys_read.do_syscall_64 1000 -100.0% 0.00 perf-sched.wait_time.max.ms.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep.do_syscall_64 2214 ± 47% -100.0% 0.00 perf-sched.wait_time.max.ms.do_syslog.part.0.kmsg_read.vfs_read 1000 -100.0% 0.00 perf-sched.wait_time.max.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64 403.97 ±105% -100.0% 0.00 perf-sched.wait_time.max.ms.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64 834.54 ± 44% -100.0% 0.00 perf-sched.wait_time.max.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_exc_page_fault.[unknown] 167.37 ±222% -100.0% 0.00 
perf-sched.wait_time.max.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_apic_timer_interrupt.[unknown] 1527 ± 40% -100.0% 0.00 perf-sched.wait_time.max.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_call_function_single.[unknown] 1.07 ± 88% -100.0% 0.00 perf-sched.wait_time.max.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_reschedule_ipi.[unknown] 1306 ± 52% -100.0% 0.00 perf-sched.wait_time.max.ms.exit_to_user_mode_prepare.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe.[unknown] 1069 ±112% -100.0% 0.00 perf-sched.wait_time.max.ms.futex_wait_queue_me.futex_wait.do_futex.__x64_sys_futex 131.43 ± 26% -100.0% 0.00 perf-sched.wait_time.max.ms.io_schedule.__iomap_dio_rw.iomap_dio_rw.xfs_file_dio_read 2684 ± 13% -100.0% 0.00 perf-sched.wait_time.max.ms.pipe_read.new_sync_read.vfs_read.ksys_read 169.49 ±223% -100.0% 0.00 perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.__alloc_pages_nodemask.alloc_pages_vma.do_anonymous_page 1.18 ±113% -100.0% 0.00 perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.__alloc_pages_nodemask.alloc_pages_vma.shmem_alloc_page 167.94 ±222% -100.0% 0.00 perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.__alloc_pages_nodemask.alloc_pages_vma.wp_page_copy 0.37 ± 54% -100.0% 0.00 perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.__do_fault.do_fault.__handle_mm_fault 504.70 ± 98% -100.0% 0.00 perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.__kmalloc.security_task_alloc.copy_process 0.12 ± 20% -100.0% 0.00 perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.change_p4d_range.change_protection.change_prot_numa 0.14 ± 25% -100.0% 0.00 perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.change_p4d_range.change_protection.mprotect_fixup 339.24 ±138% -100.0% 0.00 perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.copy_pte_range.copy_page_range.dup_mmap 
670.24 ± 69% -100.0% 0.00 perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.do_user_addr_fault.exc_page_fault.asm_exc_page_fault 3.61 ± 94% -100.0% 0.00 perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.down_read.xfs_ilock.xfs_ilock_iocb 2.13 ± 7% -100.0% 0.00 perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.down_read.xfs_ilock_for_iomap.xfs_read_iomap_begin 0.13 ± 58% -100.0% 0.00 perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.down_write.__vma_adjust.__split_vma 0.22 ± 52% -100.0% 0.00 perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.down_write.generic_file_write_iter.new_sync_write 0.14 ± 16% -100.0% 0.00 perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.down_write.vma_link.mmap_region 0.17 ± 6% -100.0% 0.00 perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.down_write_killable.vm_mmap_pgoff.ksys_mmap_pgoff 333.42 ±141% -100.0% 0.00 perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.dput.__fput.task_work_run 0.13 ± 13% -100.0% 0.00 perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.dput.path_openat.do_filp_open 166.88 ±223% -100.0% 0.00 perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.dput.step_into.walk_component 166.82 ±223% -100.0% 0.00 perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.dput.terminate_walk.path_openat 180.42 ± 23% -100.0% 0.00 perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.drm_gem_put_pages.drm_gem_shmem_put_pages_locked.drm_gem_shmem_put_pages 305.10 ±223% -100.0% 0.00 perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.generic_perform_write.__generic_file_write_iter.generic_file_write_iter 667.15 ± 70% -100.0% 0.00 perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.__alloc_file.alloc_empty_file 0.13 ± 17% -100.0% 0.00 
perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.__d_alloc.d_alloc_cursor 4.87 ±216% -100.0% 0.00 perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.getname_flags.do_sys_openat2 167.05 ±223% -100.0% 0.00 perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.security_file_alloc.__alloc_file 0.17 ± 14% -100.0% 0.00 perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.vm_area_alloc.mmap_region 0.13 ± 23% -100.0% 0.00 perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.vm_area_dup.__split_vma 504.67 ± 98% -100.0% 0.00 perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.vm_area_dup.dup_mmap 2.06 ± 4% -100.0% 0.00 perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.mempool_alloc.bio_alloc_bioset.iomap_dio_bio_actor 0.15 ± 17% -100.0% 0.00 perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.mmput.m_stop.seq_read_iter 0.18 ± 7% -100.0% 0.00 perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.mutex_lock.__fdget_pos.ksys_write 1006 ± 58% -100.0% 0.00 perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.mutex_lock.drm_gem_shmem_vunmap.mgag200_handle_damage 0.14 ± 9% -100.0% 0.00 perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.mutex_lock.perf_event_ctx_lock_nested.isra 1683 ± 16% -100.0% 0.00 perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.mutex_lock.perf_poll.do_sys_poll 338.71 ±139% -100.0% 0.00 perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.mutex_lock.pipe_read.new_sync_read 0.13 ± 18% -100.0% 0.00 perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.mutex_lock.pipe_write.new_sync_write 0.10 ± 25% -100.0% 0.00 perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.mutex_lock_interruptible.devkmsg_read.vfs_read 207.43 -100.0% 0.00 
perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.pagecache_get_page.shmem_getpage_gfp.shmem_read_mapping_page_gfp 167.77 ±222% -100.0% 0.00 perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.pagecache_get_page.shmem_getpage_gfp.shmem_write_begin 14.30 ±156% -100.0% 0.00 perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.rcu_gp_kthread.kthread.ret_from_fork 166.78 ±223% -100.0% 0.00 perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.remove_vma.exit_mmap.mmput 2.17 ± 13% -100.0% 0.00 perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.set_page_dirty_lock.bio_set_pages_dirty.iomap_dio_bio_actor 665.19 ± 69% -100.0% 0.00 perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.smpboot_thread_fn.kthread.ret_from_fork 1000 -100.0% 0.00 perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.stop_one_cpu.affine_move_task.__set_cpus_allowed_ptr 0.13 ± 24% -100.0% 0.00 perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.stop_one_cpu.sched_exec.bprm_execve 0.17 ± 47% -100.0% 0.00 perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.task_work_run.exit_to_user_mode_prepare.syscall_exit_to_user_mode 333.41 ±141% -100.0% 0.00 perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.unmap_page_range.unmap_vmas.exit_mmap 0.14 ± 12% -100.0% 0.00 perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.unmap_vmas.unmap_region.__do_munmap 166.63 ±223% -100.0% 0.00 perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.vfs_write.ksys_write.do_syscall_64 833.28 ± 44% -100.0% 0.00 perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.zap_pte_range.unmap_page_range.unmap_vmas 8.66 ± 47% -100.0% 0.00 perf-sched.wait_time.max.ms.rcu_gp_kthread.kthread.ret_from_fork 1418 ± 71% -100.0% 0.00 perf-sched.wait_time.max.ms.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait 1000 -100.0% 0.00 
perf-sched.wait_time.max.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_sys_poll 504.98 -100.0% 0.00 perf-sched.wait_time.max.ms.schedule_timeout.kcompactd.kthread.ret_from_fork 0.04 ± 38% -100.0% 0.00 perf-sched.wait_time.max.ms.schedule_timeout.khugepaged.kthread.ret_from_fork 187.10 ± 47% -100.0% 0.00 perf-sched.wait_time.max.ms.schedule_timeout.rcu_gp_kthread.kthread.ret_from_fork 0.04 ± 19% -100.0% 0.00 perf-sched.wait_time.max.ms.schedule_timeout.wait_for_completion.stop_one_cpu.affine_move_task 2022 ± 9% -100.0% 0.00 perf-sched.wait_time.max.ms.smpboot_thread_fn.kthread.ret_from_fork 166.81 ±223% -100.0% 0.00 perf-sched.wait_time.max.ms.wait_for_partner.fifo_open.do_dentry_open.do_open.isra 6059 ± 13% -100.0% 0.00 perf-sched.wait_time.max.ms.worker_thread.kthread.ret_from_fork *************************************************************************************************** lkp-ivb-2ep1: 48 threads Intel(R) Xeon(R) CPU E5-2697 v2 @ 2.70GHz with 112G memory ========================================================================================= class/compiler/cpufreq_governor/disk/fs/kconfig/nr_threads/rootfs/tbox_group/test/testcase/testtime/ucode: filesystem/gcc-9/performance/1HDD/btrfs/x86_64-rhel-8.3/10%/debian-10.4-x86_64-20200603.cgz/lkp-ivb-2ep1/fanotify/stress-ng/60s/0x42e commit: dde38d5b80 ("kernel: don't include PF_IO_WORKERs as part of same_thread_group()") 43b2a76b1a ("proc: don't show PF_IO_WORKER threads as threads in /proc/<pid>/task/") dde38d5b805358c7 43b2a76b1a5abcc9833b463bef1 ---------------- --------------------------- %stddev %change %stddev \ | \ 12490216 ± 2% +192.8% 36570607 stress-ng.fanotify.ops 208133 ± 2% +192.8% 609421 stress-ng.fanotify.ops_per_sec 9316177 ± 2% +9.7% 10221786 stress-ng.time.file_system_outputs 31713 ± 2% -65.1% 11073 ± 2% stress-ng.time.involuntary_context_switches 600.00 +12.0% 672.00 stress-ng.time.percent_of_cpu_this_job_got 346.49 +12.3% 389.09 stress-ng.time.system_time 6396933 
             -6.6%    5971834        stress-ng.time.voluntary_context_switches
   2759525           -82.9%     472204 ±199%  cpuidle.C1E.usage
     85.38            -6.4%      79.88        iostat.cpu.idle
     13.47           +38.5%      18.65        iostat.cpu.system
      1.15 ±  5%     +28.3%       1.47 ±  8%  iostat.cpu.user
    972791 ± 15%     +53.2%    1490045 ± 13%  numa-numastat.node0.local_node
   1000563 ± 13%     +51.1%    1511645 ± 14%  numa-numastat.node0.numa_hit
   1034189 ± 12%     +49.8%    1548719 ± 13%  numa-numastat.node1.local_node
   1049813 ± 11%     +49.6%    1570497 ± 14%  numa-numastat.node1.numa_hit
      0.00 ± 18%      -0.0        0.00 ± 21%  mpstat.cpu.all.iowait%
      1.29            -0.3        1.03        mpstat.cpu.all.irq%
      0.18 ±  3%      +0.1        0.32        mpstat.cpu.all.soft%
     11.00            +3.1       14.13        mpstat.cpu.all.sys%
      1.09 ±  5%      +0.2        1.28 ±  8%  mpstat.cpu.all.usr%
     84.83            -6.9%      79.00        vmstat.cpu.id
   1253877           +12.6%    1412418        vmstat.memory.cache
      6.17 ±  6%     +45.9%       9.00        vmstat.procs.r
    217718            -5.4%     205891        vmstat.system.cs
    101985            -3.0%      98976 ±  3%  vmstat.system.in
    113948 ±  3%    +138.5%     271740 ±  5%  meminfo.Active
      5426 ±  3%   +3778.0%     210448 ±  2%  meminfo.Active(anon)
    108520 ±  3%     -43.5%      61291 ± 22%  meminfo.Active(file)
   1176827           +13.5%    1335425        meminfo.Cached
    672554           +33.9%     900305        meminfo.Committed_AS
      4476 ± 10%     -91.4%     383.00 ± 34%  meminfo.Dirty
     19734         +1037.6%     224492 ±  2%  meminfo.Shmem
    964.50 ± 24%   +9627.8%      93824 ± 66%  numa-meminfo.node0.Active(anon)
      2077 ± 19%     -88.6%     236.50 ± 47%  numa-meminfo.node0.Dirty
    564057           +17.3%     661420 ±  9%  numa-meminfo.node0.FilePages
    173.83 ± 29%    +696.8%       1385 ± 47%  numa-meminfo.node0.Inactive(file)
   1087988 ±  6%     +12.8%    1227049 ±  6%  numa-meminfo.node0.MemUsed
      1906 ± 18%     +33.4%       2543 ± 13%  numa-meminfo.node0.PageTables
      8467 ± 46%   +1147.0%     105588 ± 56%  numa-meminfo.node0.Shmem
      4461 ±  6%   +2519.6%     116877 ± 54%  numa-meminfo.node1.Active(anon)
     73443 ± 17%     -59.4%      29806 ± 39%  numa-meminfo.node1.Active(file)
      2508 ± 11%     -89.8%     255.17 ± 15%  numa-meminfo.node1.Dirty
     19022 ± 12%     -18.6%      15489 ± 11%  numa-meminfo.node1.Mapped
     11265 ± 36%    +957.8%     119162 ± 52%  numa-meminfo.node1.Shmem
     18172 ±  5%     -45.7%       9859 ± 13%  slabinfo.Acpi-State.active_objs
    357.50 ±  5%     -45.5%     194.67 ±
 13%  slabinfo.Acpi-State.active_slabs
     18247 ±  5%     -45.4%       9959 ± 13%  slabinfo.Acpi-State.num_objs
    357.50 ±  5%     -45.5%     194.67 ± 13%  slabinfo.Acpi-State.num_slabs
      9587 ±  3%     -41.3%       5625 ± 10%  slabinfo.btrfs_delayed_node.active_objs
      9738 ±  3%     -40.5%       5789 ±  9%  slabinfo.btrfs_delayed_node.num_objs
      4002 ±  4%     -21.9%       3127 ±  3%  slabinfo.btrfs_inode.active_objs
      4094 ±  4%     -21.7%       3206 ±  3%  slabinfo.btrfs_inode.num_objs
     26574 ±  2%     +88.4%      50068 ±  3%  slabinfo.filp.active_objs
    835.83 ±  2%     +88.2%       1572 ±  3%  slabinfo.filp.active_slabs
     26760 ±  2%     +88.1%      50342 ±  3%  slabinfo.filp.num_objs
    835.83 ±  2%     +88.2%       1572 ±  3%  slabinfo.filp.num_slabs
      4265           -13.9%       3672 ±  2%  slabinfo.kmalloc-1k.active_objs
      4348           -13.9%       3743        slabinfo.kmalloc-1k.num_objs
     11516 ±  4%     +16.6%      13429 ±  5%  slabinfo.kmalloc-256.active_objs
     12697 ±  2%     +20.7%      15330 ±  3%  slabinfo.kmalloc-256.num_objs
      1356 ±  3%   +3784.9%      52691 ±  2%  proc-vmstat.nr_active_anon
     27200 ±  3%     -43.6%      15351 ± 22%  proc-vmstat.nr_active_file
   1164755 ±  2%      +9.7%    1277850        proc-vmstat.nr_dirtied
      1141 ± 10%     -91.7%      95.17 ± 46%  proc-vmstat.nr_dirty
    294523           +13.5%     334210        proc-vmstat.nr_file_pages
      8937            -1.2%       8829        proc-vmstat.nr_mapped
      1182            +2.6%       1212        proc-vmstat.nr_page_table_pages
      4932         +1039.4%      56205 ±  2%  proc-vmstat.nr_shmem
     19024            -3.6%      18348        proc-vmstat.nr_slab_reclaimable
     29534            +2.6%      30298        proc-vmstat.nr_slab_unreclaimable
      2294 ± 11%     -80.2%     454.33 ±  7%  proc-vmstat.nr_written
      1356 ±  3%   +3784.9%      52691 ±  2%  proc-vmstat.nr_zone_active_anon
     27200 ±  3%     -43.6%      15351 ± 22%  proc-vmstat.nr_zone_active_file
    912.83 ±  7%     -85.5%     132.17 ± 29%  proc-vmstat.nr_zone_write_pending
   2066836 ±  2%     +49.6%    3092305        proc-vmstat.numa_hit
   2023423 ±  2%     +50.7%    3048905        proc-vmstat.numa_local
     92817 ±  2%      +9.3%     101441 ±  3%  proc-vmstat.pgactivate
   2781261 ±  3%     +69.4%    4712670        proc-vmstat.pgalloc_normal
   2654251 ±  4%     +69.0%    4486672        proc-vmstat.pgfree
      2448           -17.6%       2016 ±  3%  proc-vmstat.pgpgout
    240.83 ± 24%   +9661.1%      23508 ± 66%  numa-vmstat.node0.nr_active_anon
    517.00 ± 17%     -88.9%      57.33 ± 48%
numa-vmstat.node0.nr_dirty
    141040           +17.3%     165420 ±  9%  numa-vmstat.node0.nr_file_pages
     43.17 ± 29%    +701.5%     346.00 ± 48%  numa-vmstat.node0.nr_inactive_file
    474.83 ± 18%     +33.9%     636.00 ± 13%  numa-vmstat.node0.nr_page_table_pages
      2116 ± 46%   +1149.7%      26449 ± 56%  numa-vmstat.node0.nr_shmem
    240.83 ± 24%   +9661.1%      23508 ± 66%  numa-vmstat.node0.nr_zone_active_anon
     43.17 ± 29%    +701.5%     346.00 ± 48%  numa-vmstat.node0.nr_zone_inactive_file
    405.17 ± 17%     -79.9%      81.50 ± 30%  numa-vmstat.node0.nr_zone_write_pending
    893940 ±  6%     +33.0%    1188603 ±  9%  numa-vmstat.node0.numa_hit
    862956 ±  6%     +35.0%    1165016 ±  8%  numa-vmstat.node0.numa_local
      1115 ±  6%   +2526.7%      29292 ± 54%  numa-vmstat.node1.nr_active_anon
     18390 ± 17%     -59.4%       7466 ± 39%  numa-vmstat.node1.nr_active_file
    611.17 ±  9%     -89.1%      66.50 ± 16%  numa-vmstat.node1.nr_dirty
      4842 ± 12%     -18.3%       3957 ± 11%  numa-vmstat.node1.nr_mapped
      2816 ± 36%    +960.4%      29859 ± 52%  numa-vmstat.node1.nr_shmem
      1115 ±  6%   +2526.7%      29292 ± 54%  numa-vmstat.node1.nr_zone_active_anon
     18390 ± 17%     -59.4%       7465 ± 39%  numa-vmstat.node1.nr_zone_active_file
    513.67 ± 10%     -84.1%      81.83 ± 10%  numa-vmstat.node1.nr_zone_write_pending
   1065486 ±  3%     +19.9%    1277991 ±  9%  numa-vmstat.node1.numa_hit
    896498 ±  4%     +22.9%    1101875 ±  9%  numa-vmstat.node1.numa_local
     40423 ± 22%     -69.1%      12498 ± 24%  softirqs.CPU0.RCU
     14292 ±  8%     -17.6%      11774 ± 10%  softirqs.CPU0.SCHED
     44809 ± 12%     -71.5%      12755 ±  7%  softirqs.CPU1.RCU
     40049 ± 55%     -72.8%      10873 ± 12%  softirqs.CPU11.RCU
     33535 ± 42%     -65.8%      11474 ± 20%  softirqs.CPU13.RCU
     12441 ±  3%     -16.0%      10450 ±  9%  softirqs.CPU13.SCHED
     24762 ± 25%     -56.0%      10894 ± 21%  softirqs.CPU14.RCU
     31219 ± 49%     -64.2%      11172 ± 20%  softirqs.CPU15.RCU
     11891 ±  4%     -14.8%      10126 ± 12%  softirqs.CPU15.SCHED
     25893 ± 16%     -61.8%       9879 ± 30%  softirqs.CPU17.RCU
     28328 ± 32%     -62.0%      10762 ± 14%  softirqs.CPU18.RCU
     12636 ±  3%     -12.6%      11048 ±  5%  softirqs.CPU18.SCHED
     33125 ± 28%     -62.5%      12422 ± 52%  softirqs.CPU20.RCU
     34019 ± 57%     -71.8%       9589 ± 20%  softirqs.CPU21.RCU
     12134 ±  9%     -17.5%      10015 ± 12%  softirqs.CPU21.SCHED
     29111 ± 36%     -68.4%       9200 ± 23%  softirqs.CPU23.RCU
     32512 ± 39%     -68.7%      10168 ± 17%  softirqs.CPU24.RCU
     34099 ± 15%     -69.5%      10399 ± 17%  softirqs.CPU27.RCU
     43034 ± 47%     -72.5%      11845 ± 10%  softirqs.CPU28.RCU
     33745 ± 64%     -67.9%      10833 ± 16%  softirqs.CPU29.RCU
     52529 ± 35%     -79.4%      10807 ±  6%  softirqs.CPU3.RCU
     12544 ±  4%     -17.4%      10358 ± 11%  softirqs.CPU3.SCHED
     29590 ± 52%     -64.7%      10446 ± 17%  softirqs.CPU31.RCU
     27190 ± 26%     -65.0%       9523 ± 12%  softirqs.CPU32.RCU
     44168 ± 50%     -80.3%       8709 ± 23%  softirqs.CPU34.RCU
     25047 ± 46%     -64.1%       8993 ± 14%  softirqs.CPU35.RCU
     28064 ± 33%     -60.8%      11004 ± 56%  softirqs.CPU36.RCU
     11942 ±  5%      -9.9%      10762 ±  5%  softirqs.CPU36.SCHED
     27214 ± 19%     -66.2%       9189 ± 28%  softirqs.CPU37.RCU
     12314 ±  3%     -16.2%      10325 ± 11%  softirqs.CPU37.SCHED
     25848 ± 43%     -65.3%       8974 ± 25%  softirqs.CPU38.RCU
     52780 ± 38%     -73.2%      14126 ± 45%  softirqs.CPU4.RCU
     26546 ± 24%     -61.0%      10351 ± 17%  softirqs.CPU41.RCU
     31949 ± 35%     -64.6%      11306 ± 35%  softirqs.CPU42.RCU
     23559 ± 19%     -55.8%      10421 ± 35%  softirqs.CPU43.RCU
     22268 ± 18%     -57.4%       9491 ± 23%  softirqs.CPU44.RCU
     22746 ± 16%     -58.9%       9344 ± 25%  softirqs.CPU45.RCU
     28030 ± 34%     -73.4%       7467 ± 18%  softirqs.CPU46.RCU
     12431 ±  4%     -22.6%       9623 ± 15%  softirqs.CPU46.SCHED
     29131 ± 23%     -70.0%       8729 ± 19%  softirqs.CPU47.RCU
     38774 ± 25%     -74.1%      10043 ± 11%  softirqs.CPU5.RCU
     38783 ± 33%     -73.7%      10219 ± 18%  softirqs.CPU6.RCU
     12164 ±  7%     -13.0%      10579 ± 10%  softirqs.CPU6.SCHED
     44630 ± 35%     -77.3%      10114 ± 16%  softirqs.CPU7.RCU
     33702 ± 26%     -69.6%      10237 ± 13%  softirqs.CPU8.RCU
     12171 ±  5%     -16.0%      10222 ± 14%  softirqs.CPU8.SCHED
     38790 ± 32%     -66.2%      13113 ± 54%  softirqs.CPU9.RCU
   1593503 ±  9%     -63.3%     585545 ±  6%  softirqs.RCU
    574058 ±  2%     -11.6%     507256        softirqs.SCHED
  [perf-sched.sch_delay.*, perf-sched.total_*, perf-sched.wait_and_delay.* and perf-sched.wait_time.* entries: all -100.0% to 0.00]
     13.03           -14.2%      11.18        perf-stat.i.MPKI
 2.646e+09           +37.0%  3.624e+09        perf-stat.i.branch-instructions
      1.48            -0.2        1.31        perf-stat.i.branch-miss-rate%
  37824895 ±  2%     +24.1%   46937107        perf-stat.i.branch-misses
     15.02 ±  7%      +4.5       19.49 ±  6%  perf-stat.i.cache-miss-rate%
  24387642 ±  7%     +55.4%   37890527 ±  5%
perf-stat.i.cache-misses
 1.679e+08           +17.1%  1.966e+08        perf-stat.i.cache-references
    224406            -5.4%     212341        perf-stat.i.context-switches
 2.316e+10           +33.5%  3.093e+10        perf-stat.i.cpu-cycles
     92.43 ±  2%     -11.4%      81.90        perf-stat.i.cpu-migrations
    986.91 ±  7%     -15.3%     835.87 ±  6%  perf-stat.i.cycles-between-cache-misses
  43585425 ±  8%     +23.8%   53963530 ±  4%  perf-stat.i.dTLB-load-misses
 3.437e+09           +34.5%  4.624e+09        perf-stat.i.dTLB-loads
      0.24 ±  6%      -0.0        0.22 ±  4%  perf-stat.i.dTLB-store-miss-rate%
   4653429 ±  7%     +28.7%    5988923 ±  5%  perf-stat.i.dTLB-store-misses
 1.913e+09           +38.0%  2.641e+09        perf-stat.i.dTLB-stores
     42.79 ±  2%      +1.4       44.23        perf-stat.i.iTLB-load-miss-rate%
   3977420 ±  3%     +13.3%    4508153 ±  4%  perf-stat.i.iTLB-load-misses
   5675446 ±  2%      +5.9%    6007965 ±  2%  perf-stat.i.iTLB-loads
 1.282e+10           +36.3%  1.747e+10        perf-stat.i.instructions
      3319 ±  3%     +18.8%       3943 ±  4%  perf-stat.i.instructions-per-iTLB-miss
      0.56            +2.7%       0.57        perf-stat.i.ipc
     10.59            -6.3%       9.93 ±  3%  perf-stat.i.major-faults
      0.48           +33.5%       0.64        perf-stat.i.metric.GHz
    171.12           +35.9%     232.61        perf-stat.i.metric.M/sec
  12351183 ±  7%     +81.0%   22361766 ±  6%  perf-stat.i.node-load-misses
  12870201 ±  7%     +80.4%   23218665 ±  6%  perf-stat.i.node-loads
   5934617 ± 15%     +58.6%    9413921 ±  8%  perf-stat.i.node-store-misses
   7873670 ± 10%     +58.6%   12485981 ±  7%  perf-stat.i.node-stores
     13.10           -14.1%      11.25        perf-stat.overall.MPKI
      1.43            -0.1        1.30        perf-stat.overall.branch-miss-rate%
     14.53 ±  7%      +4.8       19.29 ±  6%  perf-stat.overall.cache-miss-rate%
    954.74 ±  7%     -14.2%     818.89 ±  5%  perf-stat.overall.cycles-between-cache-misses
      0.24 ±  6%      -0.0        0.23 ±  4%  perf-stat.overall.dTLB-store-miss-rate%
     41.21 ±  2%      +1.7       42.86        perf-stat.overall.iTLB-load-miss-rate%
      3227 ±  3%     +20.3%       3882 ±  4%  perf-stat.overall.instructions-per-iTLB-miss
 2.604e+09           +37.0%  3.566e+09        perf-stat.ps.branch-instructions
  37237300 ±  2%     +24.1%   46198979        perf-stat.ps.branch-misses
  23998156 ±  7%     +55.4%   37282284 ±  5%  perf-stat.ps.cache-misses
 1.653e+08           +17.1%  1.934e+08        perf-stat.ps.cache-references
    220793            -5.4%     208922
perf-stat.ps.context-switches
 2.279e+10           +33.5%  3.043e+10        perf-stat.ps.cpu-cycles
     90.96 ±  2%     -11.4%      80.61        perf-stat.ps.cpu-migrations
  42883940 ±  8%     +23.8%   53096759 ±  4%  perf-stat.ps.dTLB-load-misses
 3.382e+09           +34.6%   4.55e+09        perf-stat.ps.dTLB-loads
   4578524 ±  7%     +28.7%    5892864 ±  5%  perf-stat.ps.dTLB-store-misses
 1.883e+09           +38.0%  2.599e+09        perf-stat.ps.dTLB-stores
   3913697 ±  3%     +13.3%    4436087 ±  4%  perf-stat.ps.iTLB-load-misses
   5584105 ±  2%      +5.9%    5911288 ±  2%  perf-stat.ps.iTLB-loads
 1.262e+10           +36.3%   1.72e+10        perf-stat.ps.instructions
     10.45 ±  2%      -5.8%       9.84 ±  3%  perf-stat.ps.major-faults
  12152327 ±  7%     +81.0%   22001242 ±  6%  perf-stat.ps.node-load-misses
  12663089 ±  7%     +80.4%   22844459 ±  6%  perf-stat.ps.node-loads
   5839615 ± 15%     +58.6%    9262515 ±  8%  perf-stat.ps.node-store-misses
   7747947 ± 10%     +58.6%   12285487 ±  7%  perf-stat.ps.node-stores
 7.979e+11           +36.5%  1.089e+12        perf-stat.total.instructions
  [perf-profile.calltrace.cycles-pp.* and perf-profile.children.cycles-pp.* entries: all dropped to 0.00]
perf-profile.children.cycles-pp.btrfs_unlink 7.36 ± 4% -7.4 0.00 perf-profile.children.cycles-pp.fsnotify 7.25 ± 4% -7.2 0.00 perf-profile.children.cycles-pp.__fsnotify_parent 6.55 ± 5% -6.6 0.00 perf-profile.children.cycles-pp.evict 6.51 ± 4% -6.5 0.00 perf-profile.children.cycles-pp.fanotify_handle_event 6.48 ± 5% -6.5 0.00 perf-profile.children.cycles-pp.btrfs_evict_inode 5.78 ± 10% -5.8 0.00 perf-profile.children.cycles-pp.__x64_sys_fanotify_init 33.16 ± 3% -33.2 0.00 perf-profile.self.cycles-pp.intel_idle 341793 ± 5% +22.4% 418250 ± 3% interrupts.CAL:Function_call_interrupts 1535 ± 44% -100.0% 0.00 interrupts.CPU0.NMI:Non-maskable_interrupts 1535 ± 44% -100.0% 0.00 interrupts.CPU0.PMI:Performance_monitoring_interrupts 1589 ± 40% -100.0% 0.00 interrupts.CPU1.NMI:Non-maskable_interrupts 1589 ± 40% -100.0% 0.00 interrupts.CPU1.PMI:Performance_monitoring_interrupts 1189 ± 46% -100.0% 0.00 interrupts.CPU10.NMI:Non-maskable_interrupts 1189 ± 46% -100.0% 0.00 interrupts.CPU10.PMI:Performance_monitoring_interrupts 1067 ± 48% -100.0% 0.00 interrupts.CPU11.NMI:Non-maskable_interrupts 1067 ± 48% -100.0% 0.00 interrupts.CPU11.PMI:Performance_monitoring_interrupts 2372 ± 24% -100.0% 0.00 interrupts.CPU12.NMI:Non-maskable_interrupts 2372 ± 24% -100.0% 0.00 interrupts.CPU12.PMI:Performance_monitoring_interrupts 1361 ± 27% -100.0% 0.00 interrupts.CPU13.NMI:Non-maskable_interrupts 1361 ± 27% -100.0% 0.00 interrupts.CPU13.PMI:Performance_monitoring_interrupts 572.50 ± 30% -42.3% 330.50 ± 39% interrupts.CPU13.RES:Rescheduling_interrupts 1154 ± 57% -100.0% 0.00 interrupts.CPU14.NMI:Non-maskable_interrupts 1154 ± 57% -100.0% 0.00 interrupts.CPU14.PMI:Performance_monitoring_interrupts 921.00 ± 60% -100.0% 0.00 interrupts.CPU15.NMI:Non-maskable_interrupts 921.00 ± 60% -100.0% 0.00 interrupts.CPU15.PMI:Performance_monitoring_interrupts 896.67 ± 25% -100.0% 0.00 interrupts.CPU16.NMI:Non-maskable_interrupts 896.67 ± 25% -100.0% 0.00 
interrupts.CPU16.PMI:Performance_monitoring_interrupts 1434 ± 48% -100.0% 0.00 interrupts.CPU17.NMI:Non-maskable_interrupts 1434 ± 48% -100.0% 0.00 interrupts.CPU17.PMI:Performance_monitoring_interrupts 1521 ± 41% -100.0% 0.00 interrupts.CPU18.NMI:Non-maskable_interrupts 1521 ± 41% -100.0% 0.00 interrupts.CPU18.PMI:Performance_monitoring_interrupts 1022 ± 39% -100.0% 0.00 interrupts.CPU19.NMI:Non-maskable_interrupts 1022 ± 39% -100.0% 0.00 interrupts.CPU19.PMI:Performance_monitoring_interrupts 1232 ± 74% -100.0% 0.00 interrupts.CPU2.NMI:Non-maskable_interrupts 1232 ± 74% -100.0% 0.00 interrupts.CPU2.PMI:Performance_monitoring_interrupts 949.83 ± 40% -100.0% 0.00 interrupts.CPU20.NMI:Non-maskable_interrupts 949.83 ± 40% -100.0% 0.00 interrupts.CPU20.PMI:Performance_monitoring_interrupts 920.67 ± 33% -100.0% 0.00 interrupts.CPU21.NMI:Non-maskable_interrupts 920.67 ± 33% -100.0% 0.00 interrupts.CPU21.PMI:Performance_monitoring_interrupts 999.67 ± 40% -100.0% 0.00 interrupts.CPU22.NMI:Non-maskable_interrupts 999.67 ± 40% -100.0% 0.00 interrupts.CPU22.PMI:Performance_monitoring_interrupts 1613 ± 44% -100.0% 0.00 interrupts.CPU23.NMI:Non-maskable_interrupts 1613 ± 44% -100.0% 0.00 interrupts.CPU23.PMI:Performance_monitoring_interrupts 2218 ± 41% -100.0% 0.00 interrupts.CPU24.NMI:Non-maskable_interrupts 2218 ± 41% -100.0% 0.00 interrupts.CPU24.PMI:Performance_monitoring_interrupts 2063 ± 54% -100.0% 0.00 interrupts.CPU25.NMI:Non-maskable_interrupts 2063 ± 54% -100.0% 0.00 interrupts.CPU25.PMI:Performance_monitoring_interrupts 1177 ± 55% -99.9% 1.17 ±223% interrupts.CPU26.NMI:Non-maskable_interrupts 1177 ± 55% -99.9% 1.17 ±223% interrupts.CPU26.PMI:Performance_monitoring_interrupts 1914 ± 50% -100.0% 0.00 interrupts.CPU27.NMI:Non-maskable_interrupts 1914 ± 50% -100.0% 0.00 interrupts.CPU27.PMI:Performance_monitoring_interrupts 5403 ± 33% +118.9% 11827 ± 18% interrupts.CPU28.CAL:Function_call_interrupts 1955 ± 45% -100.0% 0.00 interrupts.CPU28.NMI:Non-maskable_interrupts 
      1955 ± 45%  -100.0%       0.00        interrupts.CPU28.PMI:Performance_monitoring_interrupts
      1217 ± 67%  -100.0%       0.00        interrupts.CPU29.NMI:Non-maskable_interrupts
      1217 ± 67%  -100.0%       0.00        interrupts.CPU29.PMI:Performance_monitoring_interrupts
      1833 ± 48%  -100.0%       0.00        interrupts.CPU3.NMI:Non-maskable_interrupts
      1833 ± 48%  -100.0%       0.00        interrupts.CPU3.PMI:Performance_monitoring_interrupts
      1084 ± 46%  -100.0%       0.00        interrupts.CPU30.NMI:Non-maskable_interrupts
      1084 ± 46%  -100.0%       0.00        interrupts.CPU30.PMI:Performance_monitoring_interrupts
      2134 ± 74%  -100.0%       0.00        interrupts.CPU31.NMI:Non-maskable_interrupts
      2134 ± 74%  -100.0%       0.00        interrupts.CPU31.PMI:Performance_monitoring_interrupts
      1189 ± 47%  -100.0%       0.00        interrupts.CPU32.NMI:Non-maskable_interrupts
      1189 ± 47%  -100.0%       0.00        interrupts.CPU32.PMI:Performance_monitoring_interrupts
      1308 ± 87%  -100.0%       0.00        interrupts.CPU33.NMI:Non-maskable_interrupts
      1308 ± 87%  -100.0%       0.00        interrupts.CPU33.PMI:Performance_monitoring_interrupts
      1190 ± 80%  -100.0%       0.00        interrupts.CPU34.NMI:Non-maskable_interrupts
      1190 ± 80%  -100.0%       0.00        interrupts.CPU34.PMI:Performance_monitoring_interrupts
      1183 ± 47%  -100.0%       0.00        interrupts.CPU35.NMI:Non-maskable_interrupts
      1183 ± 47%  -100.0%       0.00        interrupts.CPU35.PMI:Performance_monitoring_interrupts
      1972 ± 43%  -100.0%       0.00        interrupts.CPU36.NMI:Non-maskable_interrupts
      1972 ± 43%  -100.0%       0.00        interrupts.CPU36.PMI:Performance_monitoring_interrupts
      2061 ± 58%  -100.0%       0.00        interrupts.CPU37.NMI:Non-maskable_interrupts
      2061 ± 58%  -100.0%       0.00        interrupts.CPU37.PMI:Performance_monitoring_interrupts
      1439 ± 36%  -100.0%       0.00        interrupts.CPU38.NMI:Non-maskable_interrupts
      1439 ± 36%  -100.0%       0.00        interrupts.CPU38.PMI:Performance_monitoring_interrupts
    949.17 ± 66%  -100.0%       0.00        interrupts.CPU39.NMI:Non-maskable_interrupts
    949.17 ± 66%  -100.0%       0.00        interrupts.CPU39.PMI:Performance_monitoring_interrupts
      1669 ± 80%  -100.0%       0.00        interrupts.CPU4.NMI:Non-maskable_interrupts
      1669 ± 80%  -100.0%       0.00        interrupts.CPU4.PMI:Performance_monitoring_interrupts
      1150 ± 58%  -100.0%       0.00        interrupts.CPU40.NMI:Non-maskable_interrupts
      1150 ± 58%  -100.0%       0.00        interrupts.CPU40.PMI:Performance_monitoring_interrupts
      1250 ± 33%  -100.0%       0.00        interrupts.CPU41.NMI:Non-maskable_interrupts
      1250 ± 33%  -100.0%       0.00        interrupts.CPU41.PMI:Performance_monitoring_interrupts
      1153 ± 46%  -100.0%       0.00        interrupts.CPU42.NMI:Non-maskable_interrupts
      1153 ± 46%  -100.0%       0.00        interrupts.CPU42.PMI:Performance_monitoring_interrupts
    541.00 ± 14%   -48.9%     276.50 ± 36%  interrupts.CPU42.RES:Rescheduling_interrupts
      1463 ± 34%  -100.0%       0.00        interrupts.CPU43.NMI:Non-maskable_interrupts
      1463 ± 34%  -100.0%       0.00        interrupts.CPU43.PMI:Performance_monitoring_interrupts
      1061 ± 44%  -100.0%       0.00        interrupts.CPU44.NMI:Non-maskable_interrupts
      1061 ± 44%  -100.0%       0.00        interrupts.CPU44.PMI:Performance_monitoring_interrupts
      1622 ± 36%  -100.0%       0.00        interrupts.CPU45.NMI:Non-maskable_interrupts
      1622 ± 36%  -100.0%       0.00        interrupts.CPU45.PMI:Performance_monitoring_interrupts
      8354 ± 27%   -46.7%       4454 ± 46%  interrupts.CPU46.CAL:Function_call_interrupts
      1516 ± 18%  -100.0%       0.00        interrupts.CPU46.NMI:Non-maskable_interrupts
      1516 ± 18%  -100.0%       0.00        interrupts.CPU46.PMI:Performance_monitoring_interrupts
    480.17 ± 31%   -66.8%     159.50 ± 56%  interrupts.CPU46.RES:Rescheduling_interrupts
      1445 ± 27%  -100.0%       0.00        interrupts.CPU47.NMI:Non-maskable_interrupts
      1445 ± 27%  -100.0%       0.00        interrupts.CPU47.PMI:Performance_monitoring_interrupts
      1636 ± 86%  -100.0%       0.00        interrupts.CPU5.NMI:Non-maskable_interrupts
      1636 ± 86%  -100.0%       0.00        interrupts.CPU5.PMI:Performance_monitoring_interrupts
      1166 ± 51%  -100.0%       0.00        interrupts.CPU6.NMI:Non-maskable_interrupts
      1166 ± 51%  -100.0%       0.00        interrupts.CPU6.PMI:Performance_monitoring_interrupts
      1576 ± 62%  -100.0%       0.00        interrupts.CPU7.NMI:Non-maskable_interrupts
      1576 ± 62%  -100.0%       0.00        interrupts.CPU7.PMI:Performance_monitoring_interrupts
      1397 ± 61%  -100.0%       0.00        interrupts.CPU8.NMI:Non-maskable_interrupts
      1397 ± 61%  -100.0%       0.00        interrupts.CPU8.PMI:Performance_monitoring_interrupts
      1100 ± 32%  -100.0%       0.00        interrupts.CPU9.NMI:Non-maskable_interrupts
      1100 ± 32%  -100.0%       0.00        interrupts.CPU9.PMI:Performance_monitoring_interrupts
     67888 ±  6%  -100.0%       1.17 ±223%  interrupts.NMI:Non-maskable_interrupts
     67888 ±  6%  -100.0%       1.17 ±223%  interrupts.PMI:Performance_monitoring_interrupts
     22575 ±  7%   -21.5%      17715 ±  7%  interrupts.RES:Rescheduling_interrupts

***************************************************************************************************
lkp-knm01: 288 threads Intel(R) Xeon Phi(TM) CPU 7295 @ 1.50GHz with 80G memory
=========================================================================================
compiler/cpufreq_governor/directio/disk/fstype/kconfig/media/rootfs/tbox_group/test/testcase/ucode:
  gcc-9/performance/bufferedio/1HDD/xfs/x86_64-rhel-8.3/hdd/debian-10.4-x86_64-20200603.cgz/lkp-knm01/MWCM/fxmark/0x11

commit:
  dde38d5b80 ("kernel: don't include PF_IO_WORKERs as part of same_thread_group()")
  43b2a76b1a ("proc: don't show PF_IO_WORKER threads as threads in /proc/<pid>/task/")

dde38d5b805358c7 43b2a76b1a5abcc9833b463bef1
---------------- ---------------------------
       %stddev     %change         %stddev
           \          |                \
      2.99 ± 26%   -43.8%       1.68 ± 41%  fxmark.hdd_xfs_MWCM_18_bufferedio.iowait_sec
      0.55 ± 26%   -43.8%       0.31 ± 41%  fxmark.hdd_xfs_MWCM_18_bufferedio.iowait_util
      1.05 ±  2%   +28.3%       1.34 ±  3%  fxmark.hdd_xfs_MWCM_18_bufferedio.softirq_sec
      0.19 ±  2%   +28.2%       0.25 ±  3%  fxmark.hdd_xfs_MWCM_18_bufferedio.softirq_util
      3.56 ±  3%  +124.7%       8.01        fxmark.hdd_xfs_MWCM_18_bufferedio.user_sec
      0.65 ±  3%  +124.5%       1.47        fxmark.hdd_xfs_MWCM_18_bufferedio.user_util
      1.39          -8.1%       1.28        fxmark.hdd_xfs_MWCM_1_bufferedio.irq_sec
      4.60         -10.6%       4.11 ±  2%  fxmark.hdd_xfs_MWCM_1_bufferedio.irq_util
      2.40         +62.0%       3.90 ±  2%  fxmark.hdd_xfs_MWCM_1_bufferedio.user_sec
      7.95         +57.8%      12.54 ±  2%  fxmark.hdd_xfs_MWCM_1_bufferedio.user_util
    143586         -74.9%      36089        fxmark.hdd_xfs_MWCM_1_bufferedio.works
      4786         -74.9%       1202        fxmark.hdd_xfs_MWCM_1_bufferedio.works/sec
    129.85         +15.3%     149.78 ±  4%  fxmark.hdd_xfs_MWCM_27_bufferedio.idle_sec
     15.91         +15.2%      18.33 ±  4%  fxmark.hdd_xfs_MWCM_27_bufferedio.idle_util
      1.20 ±  4%   +27.0%       1.52 ±  7%  fxmark.hdd_xfs_MWCM_27_bufferedio.softirq_sec
      0.15 ±  4%   +26.8%       0.19 ±  7%  fxmark.hdd_xfs_MWCM_27_bufferedio.softirq_util
      3.82 ±  2%  +115.2%       8.23        fxmark.hdd_xfs_MWCM_27_bufferedio.user_sec
      0.47 ±  2%  +115.0%       1.01        fxmark.hdd_xfs_MWCM_27_bufferedio.user_util
      1.05 ± 11%   -99.8%       0.00 ±223%  fxmark.hdd_xfs_MWCM_2_bufferedio.idle_sec
      1.73 ± 11%   -99.8%       0.00 ±223%  fxmark.hdd_xfs_MWCM_2_bufferedio.idle_util
      0.60 ± 18%   +42.8%       0.86 ± 14%  fxmark.hdd_xfs_MWCM_2_bufferedio.softirq_sec
      2.92 ±  2%   +98.9%       5.81        fxmark.hdd_xfs_MWCM_2_bufferedio.user_sec
      4.81 ±  2%   +96.5%       9.44        fxmark.hdd_xfs_MWCM_2_bufferedio.user_util
    234726         -51.3%     114375        fxmark.hdd_xfs_MWCM_2_bufferedio.works
      7820         -51.3%       3812        fxmark.hdd_xfs_MWCM_2_bufferedio.works/sec
    174.58 ±  2%    +9.7%     191.57 ±  3%  fxmark.hdd_xfs_MWCM_36_bufferedio.idle_sec
     16.02 ±  2%    +9.8%      17.58 ±  3%  fxmark.hdd_xfs_MWCM_36_bufferedio.idle_util
      4.39 ±  5%   +12.2%       4.93 ±  3%  fxmark.hdd_xfs_MWCM_36_bufferedio.iowait_sec
      0.40 ±  5%   +12.2%       0.45 ±  3%  fxmark.hdd_xfs_MWCM_36_bufferedio.iowait_util
      1.38 ±  3%   +17.6%       1.63 ±  2%  fxmark.hdd_xfs_MWCM_36_bufferedio.softirq_sec
      0.13 ±  3%   +17.6%       0.15 ±  2%  fxmark.hdd_xfs_MWCM_36_bufferedio.softirq_util
      4.05        +108.9%       8.46        fxmark.hdd_xfs_MWCM_36_bufferedio.user_sec
      0.37        +108.9%       0.78        fxmark.hdd_xfs_MWCM_36_bufferedio.user_util
    190.50 ±  3%   +30.1%     247.93 ±  2%  fxmark.hdd_xfs_MWCM_45_bufferedio.idle_sec
     13.99 ±  3%   +30.1%      18.20 ±  2%  fxmark.hdd_xfs_MWCM_45_bufferedio.idle_util
      1.46         +26.6%       1.85 ±  2%  fxmark.hdd_xfs_MWCM_45_bufferedio.softirq_sec
      0.11         +26.5%       0.14 ±  2%  fxmark.hdd_xfs_MWCM_45_bufferedio.softirq_util
      4.34 ±  2%  +103.6%       8.83        fxmark.hdd_xfs_MWCM_45_bufferedio.user_sec
      0.32 ±  2%  +103.5%       0.65        fxmark.hdd_xfs_MWCM_45_bufferedio.user_util
     22.08 ±  4%   -39.2%      13.42 ±  3%  fxmark.hdd_xfs_MWCM_4_bufferedio.idle_sec
     18.34 ±  4%   -39.5%      11.09 ±  3%  fxmark.hdd_xfs_MWCM_4_bufferedio.idle_util
      0.72 ±  7%   +46.1%       1.05 ± 17%  fxmark.hdd_xfs_MWCM_4_bufferedio.softirq_sec
      0.60 ±  7%   +45.3%       0.87 ± 17%  fxmark.hdd_xfs_MWCM_4_bufferedio.softirq_util
      3.06 ±  2%  +131.5%       7.08        fxmark.hdd_xfs_MWCM_4_bufferedio.user_sec
      2.54 ±  2%  +130.4%       5.85        fxmark.hdd_xfs_MWCM_4_bufferedio.user_util
    215767 ±  3%   -24.7%     162553 ±  5%  fxmark.hdd_xfs_MWCM_4_bufferedio.works
      7191 ±  3%   -24.7%       5418 ±  5%  fxmark.hdd_xfs_MWCM_4_bufferedio.works/sec
    219.32         +14.4%     250.82 ±  3%  fxmark.hdd_xfs_MWCM_54_bufferedio.idle_sec
     13.43         +14.3%      15.35 ±  3%  fxmark.hdd_xfs_MWCM_54_bufferedio.idle_util
      0.28 ±  7%   +10.9%       0.31 ±  5%  fxmark.hdd_xfs_MWCM_54_bufferedio.iowait_util
      1.62         +19.2%       1.94        fxmark.hdd_xfs_MWCM_54_bufferedio.softirq_sec
      0.10         +19.2%       0.12        fxmark.hdd_xfs_MWCM_54_bufferedio.softirq_util
      4.60         +98.4%       9.13 ±  2%  fxmark.hdd_xfs_MWCM_54_bufferedio.user_sec
      0.28         +98.3%       0.56 ±  2%  fxmark.hdd_xfs_MWCM_54_bufferedio.user_util
      4.38 ±  7%   +17.9%       5.17 ±  3%  fxmark.hdd_xfs_MWCM_63_bufferedio.iowait_sec
      0.23 ±  7%   +17.9%       0.27 ±  3%  fxmark.hdd_xfs_MWCM_63_bufferedio.iowait_util
     45.48         -16.0%      38.21        fxmark.hdd_xfs_MWCM_63_bufferedio.irq_sec
      2.39         -16.0%       2.00        fxmark.hdd_xfs_MWCM_63_bufferedio.irq_util
      5.74 ±  5%   +63.3%       9.37        fxmark.hdd_xfs_MWCM_63_bufferedio.user_sec
      0.30 ±  5%   +63.2%       0.49        fxmark.hdd_xfs_MWCM_63_bufferedio.user_util
    274.07         +11.9%     306.65 ±  3%  fxmark.hdd_xfs_MWCM_72_bufferedio.idle_sec
     12.58         +11.9%      14.07 ±  3%  fxmark.hdd_xfs_MWCM_72_bufferedio.idle_util
      1.84         +17.2%       2.15 ±  2%  fxmark.hdd_xfs_MWCM_72_bufferedio.softirq_sec
      0.08         +17.1%       0.10 ±  2%  fxmark.hdd_xfs_MWCM_72_bufferedio.softirq_util
     10.14 ±  5%   +27.0%      12.88        fxmark.hdd_xfs_MWCM_72_bufferedio.user_sec
      0.47 ±  5%   +27.0%       0.59        fxmark.hdd_xfs_MWCM_72_bufferedio.user_util
      0.83 ±  6%   +29.8%       1.08 ±  5%  fxmark.hdd_xfs_MWCM_9_bufferedio.softirq_sec
      0.31 ±  6%   +29.5%       0.40 ±  5%  fxmark.hdd_xfs_MWCM_9_bufferedio.softirq_util
      3.23 ±  2%  +136.1%       7.62        fxmark.hdd_xfs_MWCM_9_bufferedio.user_sec
      1.19 ±  2%  +135.6%       2.80        fxmark.hdd_xfs_MWCM_9_bufferedio.user_util
    211922          -9.0%     192843        fxmark.hdd_xfs_MWCM_9_bufferedio.works
      7063          -9.0%       6427        fxmark.hdd_xfs_MWCM_9_bufferedio.works/sec
    658.14          -3.1%     637.71        fxmark.time.elapsed_time
    658.14          -3.1%     637.71        fxmark.time.elapsed_time.max
   8689574         -17.0%    7211125        fxmark.time.file_system_outputs
     39138 ±  2%   -47.9%      20404 ±  2%  fxmark.time.involuntary_context_switches
     47.83         -14.6%      40.83        fxmark.time.percent_of_cpu_this_job_got
    307.08         -17.2%     254.15        fxmark.time.system_time
     10.15 ±  5%   -11.8%       8.96        fxmark.time.user_time
    118598 ±  6%   -40.7%      70339 ±  3%  fxmark.time.voluntary_context_switches
     46.96           -4.5      42.49        mpstat.cpu.all.idle%
      1.38 ±  4%     -0.8       0.62 ±  7%  mpstat.cpu.all.iowait%
      1.70           +1.7       3.40        mpstat.cpu.all.usr%
     42.36         -12.4%      37.10        iostat.cpu.idle
      1.44 ±  4%   -54.8%       0.65 ±  7%  iostat.cpu.iowait
     50.72          +6.9%      54.22        iostat.cpu.system
      1.71        +101.3%       3.45        iostat.cpu.user
     41.83         -12.0%      36.83        vmstat.cpu.id
     53.33          +7.5%      57.33        vmstat.cpu.sy
     23611          -9.5%      21357        vmstat.io.bo
      2.00 ± 40%   -83.3%       0.33 ±141%  vmstat.memory.buff
   1905010         +25.9%    2398319        vmstat.memory.cache
     13.33 ±  3%   +10.0%      14.67 ±  3%  vmstat.procs.r
      5683         -21.9%       4441        vmstat.system.cs
      3958 ± 60%  +724.1%      32625        meminfo.Active
      3958 ± 60%  +724.1%      32625        meminfo.Active(anon)
    196386         +26.0%     247543        meminfo.AnonHugePages
    252331         +24.3%     313609        meminfo.AnonPages
   1658698         +31.4%    2179643        meminfo.Cached
    523157        +134.2%    1225039        meminfo.Committed_AS
    257914         -19.6%     207401 ±  2%  meminfo.Dirty
    842565         +65.7%    1395908        meminfo.Inactive
    449485        +140.4%    1080471        meminfo.Inactive(anon)
    393080         -19.8%     315436 ±  2%  meminfo.Inactive(file)
    243200         -11.4%     215570        meminfo.KReclaimable
   3418497         +58.5%    5418471        meminfo.Memused
      1867 ± 21%   -16.9%       1551 ± 27%  meminfo.Mlocked
      4496         +21.9%       5481        meminfo.PageTables
    243200         -11.4%     215570        meminfo.SReclaimable
    201510 ±  2%  +296.9%     799894        meminfo.Shmem
      4010 ± 60%  +713.6%      32626        numa-meminfo.node0.Active
      4010 ± 60%  +713.6%      32626        numa-meminfo.node0.Active(anon)
    196385         +26.0%     247474        numa-meminfo.node0.AnonHugePages
    252331         +24.3%     313610        numa-meminfo.node0.AnonPages
    351993          -8.5%     321980        numa-meminfo.node0.AnonPages.max
    257349         -19.2%     207877 ±  2%  numa-meminfo.node0.Dirty
   1147258         +45.4%    1667753        numa-meminfo.node0.FilePages
    842171         +65.7%    1395101        numa-meminfo.node0.Inactive
    449378        +140.0%    1078579        numa-meminfo.node0.Inactive(anon)
    392793         -19.4%     316521 ±  2%  numa-meminfo.node0.Inactive(file)
    225656         -12.4%     197697        numa-meminfo.node0.KReclaimable
   2819028         +70.9%    4817962        numa-meminfo.node0.MemUsed
      1070 ± 21%   -17.2%     886.67 ± 27%  numa-meminfo.node0.Mlocked
      4489         +22.1%       5481        numa-meminfo.node0.PageTables
    225656         -12.4%     197697        numa-meminfo.node0.SReclaimable
    201454 ±  2%  +296.1%     798007        numa-meminfo.node0.Shmem
    994.00 ± 59%  +723.3%       8183        numa-vmstat.node0.nr_active_anon
     63109         +24.2%      78411        numa-vmstat.node0.nr_anon_pages
     95.17         +26.3%     120.17        numa-vmstat.node0.nr_anon_transparent_hugepages
   1176854          -9.9%    1060636        numa-vmstat.node0.nr_dirtied
     64225         -19.0%      52004 ±  2%  numa-vmstat.node0.nr_dirty
    286640         +45.3%     416479        numa-vmstat.node0.nr_file_pages
    112357        +139.4%     268978        numa-vmstat.node0.nr_inactive_anon
     98047         -19.1%      79315 ±  2%  numa-vmstat.node0.nr_inactive_file
    267.33 ± 21%   -17.4%     220.83 ± 27%  numa-vmstat.node0.nr_mlock
      1122         +22.1%       1369        numa-vmstat.node0.nr_page_table_pages
     50342 ±  2%  +295.0%     198855        numa-vmstat.node0.nr_shmem
     56463         -12.2%      49577        numa-vmstat.node0.nr_slab_reclaimable
   1112391          -9.3%    1008414        numa-vmstat.node0.nr_written
    994.00 ± 59%  +723.3%       8183        numa-vmstat.node0.nr_zone_active_anon
    112356        +139.4%     268978        numa-vmstat.node0.nr_zone_inactive_anon
     98047         -19.1%      79315 ±  2%  numa-vmstat.node0.nr_zone_inactive_file
     64468         -19.0%      52227 ±  2%  numa-vmstat.node0.nr_zone_write_pending
     95.82          -95.8       0.00        perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe
     95.51          -95.5       0.00        perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe
     95.00          -95.0       0.00        perf-profile.calltrace.cycles-pp.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe
     94.99          -95.0       0.00        perf-profile.calltrace.cycles-pp.do_sys_openat2.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe
     94.93          -94.9       0.00        perf-profile.calltrace.cycles-pp.do_filp_open.do_sys_openat2.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe
     94.92          -94.9       0.00        perf-profile.calltrace.cycles-pp.path_openat.do_filp_open.do_sys_openat2.do_sys_open.do_syscall_64
     92.57          -92.6       0.00        perf-profile.calltrace.cycles-pp.rwsem_down_write_slowpath.path_openat.do_filp_open.do_sys_openat2.do_sys_open
     90.03          -90.0       0.00        perf-profile.calltrace.cycles-pp.osq_lock.rwsem_down_write_slowpath.path_openat.do_filp_open.do_sys_openat2
     96.60          -96.6       0.00        perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
     96.27          -96.3       0.00        perf-profile.children.cycles-pp.do_syscall_64
     95.02          -95.0       0.00        perf-profile.children.cycles-pp.do_sys_open
     95.02          -95.0       0.00        perf-profile.children.cycles-pp.do_sys_openat2
     94.96          -95.0       0.00        perf-profile.children.cycles-pp.do_filp_open
     94.95          -95.0       0.00        perf-profile.children.cycles-pp.path_openat
     92.57          -92.6       0.00        perf-profile.children.cycles-pp.rwsem_down_write_slowpath
     90.07          -90.1       0.00        perf-profile.children.cycles-pp.osq_lock
     87.72          -87.7       0.00        perf-profile.self.cycles-pp.osq_lock
    110141 ± 75%  +365.9%     513125 ± 50%  sched_debug.cfs_rq:/.MIN_vruntime.avg
     61835 ± 22%  +132.7%     143865 ± 48%  sched_debug.cfs_rq:/.load.min
     84.70 ± 34%  +110.9%     178.65 ± 31%  sched_debug.cfs_rq:/.load_avg.min
    110141 ± 75%  +365.9%     513125 ± 50%  sched_debug.cfs_rq:/.max_vruntime.avg
      0.43 ±  8%   +36.1%       0.58 ±  7%  sched_debug.cfs_rq:/.nr_running.avg
    590.27 ± 12%   +72.4%       1017 ±  9%  sched_debug.cfs_rq:/.runnable_avg.avg
      1241 ±  4%   +42.5%       1768 ±  4%  sched_debug.cfs_rq:/.runnable_avg.max
    450.96 ± 19%   +63.9%     739.03 ± 18%  sched_debug.cfs_rq:/.runnable_avg.min
    183.25 ±  4%   +72.8%     316.58 ± 12%  sched_debug.cfs_rq:/.runnable_avg.stddev
    981.57 ±  2%   +12.5%       1104 ±  2%  sched_debug.cfs_rq:/.util_avg.max
    140.93 ± 10%   +58.8%     223.86 ± 12%  sched_debug.cfs_rq:/.util_avg.stddev
    244.58 ± 29%  +138.2%     582.55 ±  3%  sched_debug.cfs_rq:/.util_est_enqueued.avg
    715.45 ± 12%   +88.9%       1351 ±  3%  sched_debug.cfs_rq:/.util_est_enqueued.max
     74.40 ± 51%  +291.1%     290.98 ± 18%  sched_debug.cfs_rq:/.util_est_enqueued.min
    160.96 ± 20%  +100.4%     322.60 ± 14%  sched_debug.cfs_rq:/.util_est_enqueued.stddev
    890384 ±  2%   +15.1%    1024883 ±  2%  sched_debug.cpu.avg_idle.avg
      0.64 ±  8%   +68.0%       1.07 ±  7%  sched_debug.cpu.nr_running.avg
    106640 ±  4%   -20.4%      84865 ±  2%  sched_debug.cpu.nr_switches.avg
    162029 ± 11%   -31.5%     111019 ±  3%  sched_debug.cpu.nr_switches.max
     19452 ± 16%   -54.0%       8943 ± 11%  sched_debug.cpu.nr_switches.stddev
     30.79 ± 17%   +58.9%      48.93 ± 22%  sched_debug.cpu.nr_uninterruptible.stddev
   7531035 ±  7%   +20.0%    9037300 ±  8%  perf-stat.i.branch-instructions
   1042350 ±  8%   +26.1%    1314230 ±  9%  perf-stat.i.cache-references
      5460         -23.6%       4169        perf-stat.i.context-switches
     34881          +4.3%      36364        perf-stat.i.cpu-clock
     86.46         +22.8%     106.15        perf-stat.i.cpu-migrations
  35737707 ±  7%   +21.4%   43388326 ±  8%  perf-stat.i.iTLB-loads
  36020458 ±  7%   +20.5%   43420402 ±  8%  perf-stat.i.instructions
      1.68 ±  3%   -19.2%       1.36 ±  2%  perf-stat.i.major-faults
      0.47          -3.4%       0.46        perf-stat.i.metric.K/sec
      0.16 ±  7%   +21.2%       0.19 ±  8%  perf-stat.i.metric.M/sec
      3196          -4.6%       3049        perf-stat.i.minor-faults
      3197          -4.6%       3050        perf-stat.i.page-faults
     34881          +4.3%      36364        perf-stat.i.task-clock
     28.94          +4.6%      30.29 ±  2%  perf-stat.overall.MPKI
      6.58 ±  3%     -0.3       6.26 ±  2%  perf-stat.overall.branch-miss-rate%
     15.05           -2.3      12.77        perf-stat.overall.cache-miss-rate%
     11.72 ±  2%   -13.5%      10.14 ±  2%  perf-stat.overall.cpi
      1.34 ±  2%     -0.1       1.20 ±  3%  perf-stat.overall.iTLB-load-miss-rate%
     74.06 ±  2%   +11.5%      82.56 ±  3%  perf-stat.overall.instructions-per-iTLB-miss
      0.09 ±  2%   +15.6%       0.10 ±  2%  perf-stat.overall.ipc
   7980203 ±  8%   +20.5%    9616659 ±  8%  perf-stat.ps.branch-instructions
   1105480 ±  8%   +26.6%    1400073 ±  9%  perf-stat.ps.cache-references
      5431         -23.5%       4156        perf-stat.ps.context-switches
     35097          +4.3%      36600        perf-stat.ps.cpu-clock
     86.57         +22.5%     106.08        perf-stat.ps.cpu-migrations
  37869860 ±  8%   +21.9%   46164154 ±  8%  perf-stat.ps.iTLB-loads
  38170494 ±  7%   +21.0%   46199974 ±  8%  perf-stat.ps.instructions
      1.67 ±  3%   -19.1%       1.35 ±  2%  perf-stat.ps.major-faults
      3196          -4.8%       3042        perf-stat.ps.minor-faults
      3197          -4.8%       3044        perf-stat.ps.page-faults
     35097          +4.3%      36600        perf-stat.ps.task-clock
      1001 ± 60%  +716.0%       8169        proc-vmstat.nr_active_anon
     63064         +24.4%      78430        proc-vmstat.nr_anon_pages
     95.67         +26.3%     120.83        proc-vmstat.nr_anon_transparent_hugepages
   2261662         -15.4%    1914471        proc-vmstat.nr_dirtied
     64596         -19.1%      52255 ±  2%  proc-vmstat.nr_dirty
   1965793          -2.6%    1913817        proc-vmstat.nr_dirty_background_threshold
   3936393          -2.6%    3832315        proc-vmstat.nr_dirty_threshold
    415053         +31.5%     545772        proc-vmstat.nr_file_pages
  19717145          -2.5%   19215555        proc-vmstat.nr_free_pages
    112498        +140.2%     270263        proc-vmstat.nr_inactive_anon
     98489         -19.2%      79564 ±  2%  proc-vmstat.nr_inactive_file
     44997 ±  2%    +7.4%      48349 ±  2%  proc-vmstat.nr_kernel_stack
     10053          -1.9%       9861        proc-vmstat.nr_mapped
      1123         +22.1%       1371        proc-vmstat.nr_page_table_pages
     50534 ±  2%  +296.0%     200107        proc-vmstat.nr_shmem
     61027         -11.3%      54136        proc-vmstat.nr_slab_reclaimable
    101172          -1.0%     100132        proc-vmstat.nr_slab_unreclaimable
   2261662         -15.4%    1914471        proc-vmstat.nr_written
      1001 ± 60%  +716.0%       8169        proc-vmstat.nr_zone_active_anon
    112498        +140.2%     270262        proc-vmstat.nr_zone_inactive_anon
     98489         -19.2%      79564 ±  2%  proc-vmstat.nr_zone_inactive_file
     64831         -19.0%      52485 ±  2%  proc-vmstat.nr_zone_write_pending
      3299 ± 63%  +364.5%      15328 ±  3%  proc-vmstat.numa_hint_faults
      3299 ± 63%  +364.5%      15328 ±  3%  proc-vmstat.numa_hint_faults_local
   5266464          -3.9%    5060321        proc-vmstat.numa_hit
   5266463          -3.9%    5060320        proc-vmstat.numa_local
    101583 ± 15%   +93.1%     196200 ±  7%  proc-vmstat.numa_pte_updates
   8559071          -7.8%    7894373        proc-vmstat.pgalloc_normal
     28661 ± 58%  +1100.4%     344036       proc-vmstat.pgdeactivate
   2226732          -6.9%    2073834        proc-vmstat.pgfault
   8494793         -17.1%    7041676        proc-vmstat.pgfree
  15444714         -12.6%   13500210        proc-vmstat.pgpgout
    190432          -3.4%     184002        proc-vmstat.pgreuse
   1507431 ±  2%   -16.6%    1257190 ±  5%  proc-vmstat.pgrotated
   6176896          -2.2%    6039424        proc-vmstat.unevictable_pgs_scanned
     18103 ±  2%   +53.9%      27860        slabinfo.filp.active_objs
     31744         +13.8%      36134        slabinfo.filp.num_objs
    153835         -14.6%     131344        slabinfo.kmalloc-16.active_objs
    158996         -13.6%     137344        slabinfo.kmalloc-16.num_objs
     99930         -15.4%      84508        slabinfo.kmalloc-1k.active_objs
      3237         -15.0%       2750        slabinfo.kmalloc-1k.active_slabs
    103601         -15.0%      88018        slabinfo.kmalloc-1k.num_objs
      3237         -15.0%       2750        slabinfo.kmalloc-1k.num_slabs
      7414         +51.8%      11255        slabinfo.kmalloc-2k.active_objs
    493.50         +48.3%     732.00        slabinfo.kmalloc-2k.active_slabs
      7905         +48.2%      11718        slabinfo.kmalloc-2k.num_objs
    493.50         +48.3%     732.00        slabinfo.kmalloc-2k.num_slabs
      2596         +10.0%       2854        slabinfo.kmalloc-4k.active_objs
    137671         -10.1%     123701        slabinfo.kmalloc-512.active_objs
      2632         -11.5%       2330        slabinfo.kmalloc-512.active_slabs
    168512         -11.5%     149177        slabinfo.kmalloc-512.num_objs
      2632         -11.5%       2330        slabinfo.kmalloc-512.num_slabs
      5342 ±  2%   -11.0%       4754 ±  2%  slabinfo.kmalloc-rcl-256.active_objs
      5597 ±  2%   -10.5%       5008        slabinfo.kmalloc-rcl-256.num_objs
     14228         +14.6%      16310        slabinfo.lsm_file_cache.active_objs
      3648 ±  7%   -12.5%       3190 ±  6%  slabinfo.proc_inode_cache.active_objs
     17438         +10.7%      19312        slabinfo.radix_tree_node.active_objs
      2909 ±  2%    +9.9%       3196 ±  2%  slabinfo.task_struct.active_objs
      2911 ±  2%    +9.9%       3199 ±  2%  slabinfo.task_struct.active_slabs
      2911 ±  2%    +9.9%       3199 ±  2%  slabinfo.task_struct.num_objs
      2911 ±  2%    +9.9%       3199 ±  2%  slabinfo.task_struct.num_slabs
     20109 ±  2%   +13.7%      22869        slabinfo.vm_area_struct.active_objs
     20704 ±  2%   +13.2%      23438        slabinfo.vm_area_struct.num_objs
      7176         -12.5%       6275        slabinfo.xfs_buf.active_objs
      7282         -12.9%       6341        slabinfo.xfs_buf.num_objs
    126500         -17.7%     104047        slabinfo.xfs_ili.active_objs
      3013         -17.7%       2479        slabinfo.xfs_ili.active_slabs
    126583         -17.7%     104134        slabinfo.xfs_ili.num_objs
      3013         -17.7%       2479        slabinfo.xfs_ili.num_slabs
    126041         -17.8%     103637        slabinfo.xfs_inode.active_objs
      3940         -17.8%       3240        slabinfo.xfs_inode.active_slabs
    126114         -17.8%     103714        slabinfo.xfs_inode.num_objs
      3940         -17.8%       3240        slabinfo.xfs_inode.num_slabs
     57565 ±  2%   -10.9%      51311 ±  2%  softirqs.BLOCK
     12356 ± 47%   -81.6%       2278 ±  8%  softirqs.CPU0.BLOCK
     98286 ±  2%   +41.2%     138764 ±  4%  softirqs.CPU0.RCU
     49207 ±  4%   -13.8%      42413 ±  2%  softirqs.CPU0.SCHED
     46829 ±  3%  +101.7%      94478 ±  6%  softirqs.CPU1.RCU
     22741 ±  4%   +61.8%      36787 ± 14%  softirqs.CPU10.RCU
     24689 ±  8%   +28.0%      31592 ±  8%  softirqs.CPU11.RCU
     23968 ±  5%   +59.7%      38276 ±  8%  softirqs.CPU12.RCU
     22492 ±  4%   +35.8%      30550 ±  6%  softirqs.CPU13.RCU
     22638 ± 11%   +77.9%      40266 ± 12%  softirqs.CPU14.RCU
     21903 ±  8%   +35.6%      29691 ±  6%  softirqs.CPU15.RCU
     21035 ±  5%   +54.8%      32555 ±  6%  softirqs.CPU16.RCU
     21930 ±  6%   +26.0%      27634 ±  6%  softirqs.CPU17.RCU
     18842 ±  5%   +48.6%      27998 ± 14%  softirqs.CPU18.RCU
     18532 ±  4%   +38.6%      25676 ± 11%  softirqs.CPU19.RCU
     33684 ±  3%  +140.0%      80842 ± 10%  softirqs.CPU2.RCU
     42433 ±  2%   -18.5%      34603 ±  5%  softirqs.CPU2.SCHED
     18682 ±  4%   +71.7%      32080 ± 18%  softirqs.CPU20.RCU
     18869 ±  3%   +25.4%      23665 ±  7%  softirqs.CPU21.RCU
     18576 ±  5%   +71.6%      31870 ± 18%  softirqs.CPU22.RCU
     19038 ±  6%   +37.4%      26156 ± 14%  softirqs.CPU23.RCU
     18889 ±  4%   +77.6%      33543 ± 20%  softirqs.CPU24.RCU
     18974 ±  4%   +27.8%      24248 ±  7%  softirqs.CPU25.RCU
     18375 ±  4%   +47.3%      27068 ± 11%  softirqs.CPU26.RCU
     15781 ±  5%   +28.7%      20318 ±  4%  softirqs.CPU27.RCU
     15262 ±  7%   +64.8%      25160 ± 23%  softirqs.CPU28.RCU
     15320 ±  8%   +39.4%      21355 ±  9%  softirqs.CPU29.RCU
     33234 ±  6%   +85.4%      61610 ±  5%  softirqs.CPU3.RCU
     42951 ±  2%   -10.8%      38333        softirqs.CPU3.SCHED
     15557 ±  5%   +75.3%      27271 ± 18%  softirqs.CPU30.RCU
     15457 ±  5%   +30.2%      20119 ± 12%  softirqs.CPU31.RCU
     14668 ±  6%   +58.6%      23264 ± 28%  softirqs.CPU32.RCU
     15163 ±  8%   +22.4%      18564 ±  5%  softirqs.CPU33.RCU
     14968 ±  6%   +49.5%      22379 ± 17%  softirqs.CPU34.RCU
     14708 ±  4%   +24.5%      18304 ±  3%  softirqs.CPU35.RCU
     12790 ±  7%   +35.7%      17359 ± 10%  softirqs.CPU36.RCU
     12618 ±  4%   +45.2%      18317 ± 20%  softirqs.CPU38.RCU
     12796 ±  6%   +38.8%      17766 ± 22%  softirqs.CPU39.RCU
     27231 ±  5%   +97.7%      53841 ± 10%  softirqs.CPU4.RCU
     12450 ±  5%   +41.3%      17589 ±  9%  softirqs.CPU40.RCU
     12556 ±  5%   +26.2%      15840 ±  4%  softirqs.CPU41.RCU
     12812 ±  8%   +56.4%      20045 ± 18%  softirqs.CPU42.RCU
     12827 ±  6%   +36.5%      17514 ± 14%  softirqs.CPU43.RCU
     12348 ±  5%   +65.0%      20370 ± 19%  softirqs.CPU44.RCU
     10154 ±  5%   +36.8%      13887 ± 13%  softirqs.CPU45.RCU
     10455 ±  8%   +39.7%      14607 ± 16%  softirqs.CPU46.RCU
     10142 ±  4%   +37.0%      13897 ± 15%  softirqs.CPU47.RCU
      9445 ±  6%   +47.9%      13971 ± 30%  softirqs.CPU48.RCU
      9324 ±  6%   +22.9%      11456 ±  8%  softirqs.CPU49.RCU
     27127 ±  6%   +64.4%      44584 ± 11%  softirqs.CPU5.RCU
      9469 ±  3%   +32.5%      12550 ± 22%  softirqs.CPU50.RCU
      9345 ±  5%   +22.0%      11400 ±  7%  softirqs.CPU51.RCU
      9314 ±  6%   +24.0%      11550        softirqs.CPU52.RCU
      9635 ±  6%   +15.3%      11108 ±  2%  softirqs.CPU53.RCU
      7425 ±  7%   +65.9%      12317 ± 26%  softirqs.CPU54.RCU
      7463 ±  7%   +22.6%       9148 ±  9%  softirqs.CPU55.RCU
      7526 ±  6%   +20.8%       9088 ±  8%  softirqs.CPU56.RCU
      7468 ±  7%   +27.9%       9555 ± 16%  softirqs.CPU57.RCU
      7428 ±  6%   +23.9%       9205 ± 14%  softirqs.CPU59.RCU
     27507 ±  4%   +82.1%      50087 ± 14%  softirqs.CPU6.RCU
      7441 ±  6%   +24.2%       9242 ± 17%  softirqs.CPU61.RCU
      7497 ±  6%   +35.9%      10191 ± 23%  softirqs.CPU62.RCU
      4464 ±  4%   +54.9%       6915 ± 23%  softirqs.CPU66.RCU
      4602 ±  5%   +77.0%       8147 ± 31%  softirqs.CPU68.RCU
      4662 ±  3%   +45.0%       6759 ± 36%  softirqs.CPU69.RCU
     27435 ±  7%   +51.9%      41665 ±  6%  softirqs.CPU7.RCU
      4556 ±  4%   +50.3%       6848 ± 30%  softirqs.CPU71.RCU
     25751 ±  4%   +59.7%      41133 ±  8%  softirqs.CPU8.RCU
     23146 ±  3%   +33.5%      30910 ±  7%  softirqs.CPU9.RCU
   1599985 ±  2%   +38.5%    2215703        softirqs.RCU
    555802          +5.2%     584573        interrupts.CAL:Function_call_interrupts
      3484 ± 41%  -100.0%       1.00 ±223%  interrupts.CPU0.NMI:Non-maskable_interrupts
      3484 ± 41%  -100.0%       1.00 ±223%  interrupts.CPU0.PMI:Performance_monitoring_interrupts
      5232 ± 25%  +158.1%      13506 ± 11%  interrupts.CPU0.RES:Rescheduling_interrupts
      2903 ± 20%  -100.0%       0.00        interrupts.CPU1.NMI:Non-maskable_interrupts
      2903 ± 20%  -100.0%       0.00        interrupts.CPU1.PMI:Performance_monitoring_interrupts
      3042 ± 38%  -100.0%       0.00        interrupts.CPU10.NMI:Non-maskable_interrupts
      3042 ± 38%  -100.0%       0.00        interrupts.CPU10.PMI:Performance_monitoring_interrupts
    284.00 ±  8%   +42.5%     404.83 ± 20%  interrupts.CPU10.RES:Rescheduling_interrupts
      3109 ± 41%  -100.0%       0.00        interrupts.CPU11.NMI:Non-maskable_interrupts
      3109 ± 41%  -100.0%       0.00        interrupts.CPU11.PMI:Performance_monitoring_interrupts
    272.33 ±  9%   +39.0%     378.67 ± 13%  interrupts.CPU11.RES:Rescheduling_interrupts
      3823 ± 28%  -100.0%       0.00        interrupts.CPU12.NMI:Non-maskable_interrupts
      3823 ± 28%  -100.0%       0.00        interrupts.CPU12.PMI:Performance_monitoring_interrupts
      3069 ± 37%  -100.0%       0.00        interrupts.CPU13.NMI:Non-maskable_interrupts
      3069 ± 37%  -100.0%       0.00        interrupts.CPU13.PMI:Performance_monitoring_interrupts
    247.50 ± 11%   +25.7%     311.00 ±  6%  interrupts.CPU13.RES:Rescheduling_interrupts
      3006 ± 20%  -100.0%       0.00        interrupts.CPU14.NMI:Non-maskable_interrupts
      3006 ± 20%  -100.0%       0.00        interrupts.CPU14.PMI:Performance_monitoring_interrupts
    257.00 ± 11%   +40.8%     361.83 ± 13%  interrupts.CPU14.RES:Rescheduling_interrupts
     75.33 ±  7%  +148.2%     187.00 ± 78%  interrupts.CPU14.TLB:TLB_shootdowns
      3441 ± 34%  -100.0%       0.00        interrupts.CPU15.NMI:Non-maskable_interrupts
      3441 ± 34%  -100.0%       0.00        interrupts.CPU15.PMI:Performance_monitoring_interrupts
    218.00 ±  8%   +47.2%     320.83 ± 10%  interrupts.CPU15.RES:Rescheduling_interrupts
     53.00 ±  8%  +119.5%     116.33 ± 88%  interrupts.CPU15.TLB:TLB_shootdowns
      3431 ± 16%   +35.7%       4656 ±  9%  interrupts.CPU16.CAL:Function_call_interrupts
      2574 ± 13%  -100.0%       0.00        interrupts.CPU16.NMI:Non-maskable_interrupts
      2574 ± 13%  -100.0%       0.00        interrupts.CPU16.PMI:Performance_monitoring_interrupts
      2899 ± 18%  -100.0%       0.00        interrupts.CPU17.NMI:Non-maskable_interrupts
      2899 ± 18%  -100.0%       0.00        interrupts.CPU17.PMI:Performance_monitoring_interrupts
      3865 ± 36%  -100.0%       0.00        interrupts.CPU18.NMI:Non-maskable_interrupts
      3865 ± 36%  -100.0%       0.00        interrupts.CPU18.PMI:Performance_monitoring_interrupts
    186.50 ±  8%   +34.2%     250.33 ± 22%  interrupts.CPU18.RES:Rescheduling_interrupts
      3049 ± 38%  -100.0%       0.00        interrupts.CPU19.NMI:Non-maskable_interrupts
      3049 ± 38%  -100.0%       0.00        interrupts.CPU19.PMI:Performance_monitoring_interrupts
      3115 ± 38%  -100.0%       0.00        interrupts.CPU2.NMI:Non-maskable_interrupts
      3115 ± 38%  -100.0%       0.00        interrupts.CPU2.PMI:Performance_monitoring_interrupts
      2295 ±  6%   +69.8%       3897 ± 10%  interrupts.CPU2.RES:Rescheduling_interrupts
      3218 ± 37%  -100.0%       0.00        interrupts.CPU20.NMI:Non-maskable_interrupts
      3218 ± 37%  -100.0%       0.00        interrupts.CPU20.PMI:Performance_monitoring_interrupts
      4378 ± 34%  -100.0%       0.00        interrupts.CPU21.NMI:Non-maskable_interrupts
      4378 ± 34%  -100.0%       0.00        interrupts.CPU21.PMI:Performance_monitoring_interrupts
    174.83 ±  8%   +31.6%     230.00 ±  9%  interrupts.CPU21.RES:Rescheduling_interrupts
      3107 ± 39%  -100.0%       0.00        interrupts.CPU22.NMI:Non-maskable_interrupts
      3107 ± 39%  -100.0%       0.00        interrupts.CPU22.PMI:Performance_monitoring_interrupts
    166.33 ± 10%   +44.3%     240.00 ±  9%  interrupts.CPU22.RES:Rescheduling_interrupts
      3004 ± 19%  -100.0%       0.00        interrupts.CPU23.NMI:Non-maskable_interrupts
      3004 ± 19%  -100.0%       0.00        interrupts.CPU23.PMI:Performance_monitoring_interrupts
    166.17 ± 13%   +42.7%     237.17 ± 18%  interrupts.CPU23.RES:Rescheduling_interrupts
      2960 ± 20%  -100.0%       0.00        interrupts.CPU24.NMI:Non-maskable_interrupts
      2960 ± 20%  -100.0%       0.00        interrupts.CPU24.PMI:Performance_monitoring_interrupts
      3946 ± 35%  -100.0%       0.00        interrupts.CPU25.NMI:Non-maskable_interrupts
      3946 ± 35%  -100.0%       0.00        interrupts.CPU25.PMI:Performance_monitoring_interrupts
      3174 ± 11%   +25.7%       3990 ±  7%  interrupts.CPU26.CAL:Function_call_interrupts
      3511 ± 42%  -100.0%       0.00        interrupts.CPU26.NMI:Non-maskable_interrupts
      3511 ± 42%  -100.0%       0.00        interrupts.CPU26.PMI:Performance_monitoring_interrupts
    139.83 ± 10%   +27.1%     177.67 ±  4%  interrupts.CPU26.RES:Rescheduling_interrupts
      3909 ± 40%  -100.0%       0.00        interrupts.CPU27.NMI:Non-maskable_interrupts
      3909 ± 40%  -100.0%       0.00        interrupts.CPU27.PMI:Performance_monitoring_interrupts
    126.83 ± 13%   +17.9%     149.50 ±  3%  interrupts.CPU27.RES:Rescheduling_interrupts
      3361 ± 34%  -100.0%       0.00        interrupts.CPU28.NMI:Non-maskable_interrupts
      3361 ± 34%  -100.0%       0.00        interrupts.CPU28.PMI:Performance_monitoring_interrupts
    122.83 ±
5% +34.7% 165.50 ± 7% interrupts.CPU28.RES:Rescheduling_interrupts 3511 ± 43% -100.0% 0.00 interrupts.CPU29.NMI:Non-maskable_interrupts 3511 ± 43% -100.0% 0.00 interrupts.CPU29.PMI:Performance_monitoring_interrupts 4237 ± 26% -100.0% 0.00 interrupts.CPU3.NMI:Non-maskable_interrupts 4237 ± 26% -100.0% 0.00 interrupts.CPU3.PMI:Performance_monitoring_interrupts 1918 ± 12% +91.3% 3670 ± 15% interrupts.CPU3.RES:Rescheduling_interrupts 204.83 ± 13% -23.4% 157.00 ± 16% interrupts.CPU3.TLB:TLB_shootdowns 3407 ± 36% -100.0% 0.00 interrupts.CPU30.NMI:Non-maskable_interrupts 3407 ± 36% -100.0% 0.00 interrupts.CPU30.PMI:Performance_monitoring_interrupts 2958 ± 21% -100.0% 0.00 interrupts.CPU31.NMI:Non-maskable_interrupts 2958 ± 21% -100.0% 0.00 interrupts.CPU31.PMI:Performance_monitoring_interrupts 121.83 ± 9% +22.4% 149.17 ± 6% interrupts.CPU31.RES:Rescheduling_interrupts 2865 ± 10% +18.4% 3392 ± 6% interrupts.CPU32.CAL:Function_call_interrupts 3409 ± 34% -100.0% 0.00 interrupts.CPU32.NMI:Non-maskable_interrupts 3409 ± 34% -100.0% 0.00 interrupts.CPU32.PMI:Performance_monitoring_interrupts 3036 ± 37% -100.0% 0.00 interrupts.CPU33.NMI:Non-maskable_interrupts 3036 ± 37% -100.0% 0.00 interrupts.CPU33.PMI:Performance_monitoring_interrupts 3938 ± 36% -100.0% 0.00 interrupts.CPU34.NMI:Non-maskable_interrupts 3938 ± 36% -100.0% 0.00 interrupts.CPU34.PMI:Performance_monitoring_interrupts 3899 ± 34% -100.0% 0.00 interrupts.CPU35.NMI:Non-maskable_interrupts 3899 ± 34% -100.0% 0.00 interrupts.CPU35.PMI:Performance_monitoring_interrupts 3522 ± 43% -100.0% 0.00 interrupts.CPU36.NMI:Non-maskable_interrupts 3522 ± 43% -100.0% 0.00 interrupts.CPU36.PMI:Performance_monitoring_interrupts 3030 ± 38% -100.0% 0.00 interrupts.CPU37.NMI:Non-maskable_interrupts 3030 ± 38% -100.0% 0.00 interrupts.CPU37.PMI:Performance_monitoring_interrupts 3446 ± 32% -100.0% 0.00 interrupts.CPU38.NMI:Non-maskable_interrupts 3446 ± 32% -100.0% 0.00 interrupts.CPU38.PMI:Performance_monitoring_interrupts 88.00 ± 6% 
+27.3% 112.00 ± 6% interrupts.CPU38.RES:Rescheduling_interrupts 4665 ± 33% -100.0% 0.00 interrupts.CPU39.NMI:Non-maskable_interrupts 4665 ± 33% -100.0% 0.00 interrupts.CPU39.PMI:Performance_monitoring_interrupts 4347 ± 32% -100.0% 0.00 interrupts.CPU4.NMI:Non-maskable_interrupts 4347 ± 32% -100.0% 0.00 interrupts.CPU4.PMI:Performance_monitoring_interrupts 2140 ± 14% +42.3% 3045 ± 11% interrupts.CPU40.CAL:Function_call_interrupts 3382 ± 33% -100.0% 0.00 interrupts.CPU40.NMI:Non-maskable_interrupts 3382 ± 33% -100.0% 0.00 interrupts.CPU40.PMI:Performance_monitoring_interrupts 2914 ± 21% -100.0% 0.00 interrupts.CPU41.NMI:Non-maskable_interrupts 2914 ± 21% -100.0% 0.00 interrupts.CPU41.PMI:Performance_monitoring_interrupts 2903 ± 21% -100.0% 0.00 interrupts.CPU42.NMI:Non-maskable_interrupts 2903 ± 21% -100.0% 0.00 interrupts.CPU42.PMI:Performance_monitoring_interrupts 3031 ± 37% -100.0% 0.00 interrupts.CPU43.NMI:Non-maskable_interrupts 3031 ± 37% -100.0% 0.00 interrupts.CPU43.PMI:Performance_monitoring_interrupts 4301 ± 32% -100.0% 0.00 interrupts.CPU44.NMI:Non-maskable_interrupts 4301 ± 32% -100.0% 0.00 interrupts.CPU44.PMI:Performance_monitoring_interrupts 4244 ± 28% -100.0% 0.00 interrupts.CPU45.NMI:Non-maskable_interrupts 4244 ± 28% -100.0% 0.00 interrupts.CPU45.PMI:Performance_monitoring_interrupts 3340 ± 35% -100.0% 0.00 interrupts.CPU46.NMI:Non-maskable_interrupts 3340 ± 35% -100.0% 0.00 interrupts.CPU46.PMI:Performance_monitoring_interrupts 3743 ± 28% -100.0% 0.00 interrupts.CPU47.NMI:Non-maskable_interrupts 3743 ± 28% -100.0% 0.00 interrupts.CPU47.PMI:Performance_monitoring_interrupts 3068 ± 39% -100.0% 0.00 interrupts.CPU48.NMI:Non-maskable_interrupts 3068 ± 39% -100.0% 0.00 interrupts.CPU48.PMI:Performance_monitoring_interrupts 3738 ± 27% -100.0% 0.00 interrupts.CPU49.NMI:Non-maskable_interrupts 3738 ± 27% -100.0% 0.00 interrupts.CPU49.PMI:Performance_monitoring_interrupts 3921 ± 36% -100.0% 0.00 interrupts.CPU5.NMI:Non-maskable_interrupts 3921 ± 36% -100.0% 
0.00 interrupts.CPU5.PMI:Performance_monitoring_interrupts 553.67 ± 18% +31.2% 726.33 ± 9% interrupts.CPU5.RES:Rescheduling_interrupts 1952 ± 7% +22.4% 2389 ± 9% interrupts.CPU50.CAL:Function_call_interrupts 2941 ± 21% -100.0% 0.00 interrupts.CPU50.NMI:Non-maskable_interrupts 2941 ± 21% -100.0% 0.00 interrupts.CPU50.PMI:Performance_monitoring_interrupts 3383 ± 33% -100.0% 0.00 interrupts.CPU51.NMI:Non-maskable_interrupts 3383 ± 33% -100.0% 0.00 interrupts.CPU51.PMI:Performance_monitoring_interrupts 2927 ± 23% -100.0% 0.00 interrupts.CPU52.NMI:Non-maskable_interrupts 2927 ± 23% -100.0% 0.00 interrupts.CPU52.PMI:Performance_monitoring_interrupts 58.17 ± 15% +37.8% 80.17 ± 22% interrupts.CPU52.RES:Rescheduling_interrupts 3860 ± 36% -100.0% 0.00 interrupts.CPU53.NMI:Non-maskable_interrupts 3860 ± 36% -100.0% 0.00 interrupts.CPU53.PMI:Performance_monitoring_interrupts 2889 ± 21% -100.0% 0.00 interrupts.CPU54.NMI:Non-maskable_interrupts 2889 ± 21% -100.0% 0.00 interrupts.CPU54.PMI:Performance_monitoring_interrupts 1555 ± 3% +13.2% 1761 ± 5% interrupts.CPU55.CAL:Function_call_interrupts 3436 ± 33% -100.0% 0.00 interrupts.CPU55.NMI:Non-maskable_interrupts 3436 ± 33% -100.0% 0.00 interrupts.CPU55.PMI:Performance_monitoring_interrupts 3727 ± 29% -100.0% 0.00 interrupts.CPU56.NMI:Non-maskable_interrupts 3727 ± 29% -100.0% 0.00 interrupts.CPU56.PMI:Performance_monitoring_interrupts 3014 ± 37% -100.0% 0.00 interrupts.CPU57.NMI:Non-maskable_interrupts 3014 ± 37% -100.0% 0.00 interrupts.CPU57.PMI:Performance_monitoring_interrupts 3869 ± 35% -100.0% 0.00 interrupts.CPU58.NMI:Non-maskable_interrupts 3869 ± 35% -100.0% 0.00 interrupts.CPU58.PMI:Performance_monitoring_interrupts 3399 ± 34% -100.0% 0.00 interrupts.CPU59.NMI:Non-maskable_interrupts 3399 ± 34% -100.0% 0.00 interrupts.CPU59.PMI:Performance_monitoring_interrupts 3752 ± 27% -100.0% 0.00 interrupts.CPU6.NMI:Non-maskable_interrupts 3752 ± 27% -100.0% 0.00 interrupts.CPU6.PMI:Performance_monitoring_interrupts 496.67 ± 8% 
+70.4% 846.17 ± 44% interrupts.CPU6.RES:Rescheduling_interrupts 3022 ± 37% -100.0% 0.00 interrupts.CPU60.NMI:Non-maskable_interrupts 3022 ± 37% -100.0% 0.00 interrupts.CPU60.PMI:Performance_monitoring_interrupts 3077 ± 35% -100.0% 0.00 interrupts.CPU61.NMI:Non-maskable_interrupts 3077 ± 35% -100.0% 0.00 interrupts.CPU61.PMI:Performance_monitoring_interrupts 3340 ± 21% -100.0% 0.00 interrupts.CPU62.NMI:Non-maskable_interrupts 3340 ± 21% -100.0% 0.00 interrupts.CPU62.PMI:Performance_monitoring_interrupts 3404 ± 33% -100.0% 0.00 interrupts.CPU7.NMI:Non-maskable_interrupts 3404 ± 33% -100.0% 0.00 interrupts.CPU7.PMI:Performance_monitoring_interrupts 499.83 ± 12% +47.1% 735.33 ± 11% interrupts.CPU7.RES:Rescheduling_interrupts 3055 ± 39% -100.0% 0.00 interrupts.CPU8.NMI:Non-maskable_interrupts 3055 ± 39% -100.0% 0.00 interrupts.CPU8.PMI:Performance_monitoring_interrupts 3539 ± 45% -100.0% 0.00 interrupts.CPU9.NMI:Non-maskable_interrupts 3539 ± 45% -100.0% 0.00 interrupts.CPU9.PMI:Performance_monitoring_interrupts 215427 ± 9% -100.0% 13.00 ±103% interrupts.NMI:Non-maskable_interrupts 215427 ± 9% -100.0% 13.00 ±103% interrupts.PMI:Performance_monitoring_interrupts 25669 ± 4% +64.0% 42084 ± 6% interrupts.RES:Rescheduling_interrupts 4748 ± 6% +67.2% 7939 ± 5% interrupts.TLB:TLB_shootdowns 0.04 ± 15% -100.0% 0.00 perf-sched.sch_delay.avg.ms.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown] 1.94 ±199% -100.0% 0.00 perf-sched.sch_delay.avg.ms.devkmsg_read.vfs_read.ksys_read.do_syscall_64 0.04 ± 27% -100.0% 0.00 perf-sched.sch_delay.avg.ms.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep.do_syscall_64 0.23 ±169% -100.0% 0.00 perf-sched.sch_delay.avg.ms.do_syslog.part.0.kmsg_read.vfs_read 0.03 ± 67% -100.0% 0.00 perf-sched.sch_delay.avg.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64 0.12 ± 39% -100.0% 0.00 perf-sched.sch_delay.avg.ms.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64 0.02 ± 88% -100.0% 0.00 
perf-sched.sch_delay.avg.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_exc_page_fault.[unknown] 0.13 ±127% -100.0% 0.00 perf-sched.sch_delay.avg.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_apic_timer_interrupt.[unknown] 0.05 ± 83% -100.0% 0.00 perf-sched.sch_delay.avg.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_call_function_single.[unknown] 0.00 ± 57% -100.0% 0.00 perf-sched.sch_delay.avg.ms.exit_to_user_mode_prepare.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe.[unknown] 0.01 ± 17% -100.0% 0.00 perf-sched.sch_delay.avg.ms.exit_to_user_mode_prepare.syscall_exit_to_user_mode.ret_from_fork.[unknown] 0.07 ± 34% -100.0% 0.00 perf-sched.sch_delay.avg.ms.futex_wait_queue_me.futex_wait.do_futex.__x64_sys_futex 0.01 ± 37% -100.0% 0.00 perf-sched.sch_delay.avg.ms.io_schedule.rq_qos_wait.wbt_wait.__rq_qos_throttle 0.02 ± 11% -100.0% 0.00 perf-sched.sch_delay.avg.ms.kthreadd.ret_from_fork 0.04 ± 12% -100.0% 0.00 perf-sched.sch_delay.avg.ms.pipe_read.new_sync_read.vfs_read.ksys_read 0.02 ± 79% -100.0% 0.00 perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.__alloc_pages_nodemask.alloc_pages_vma.wp_page_copy 0.02 ± 28% -100.0% 0.00 perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.__alloc_pages_nodemask.pagecache_get_page.grab_cache_page_write_begin 0.33 ± 70% -100.0% 0.00 perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.__kmalloc.kmem_alloc.kmem_alloc_large 1.21 ± 90% -100.0% 0.00 perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.__kmalloc.kmem_alloc.xfs_dir_createname 0.74 ±137% -100.0% 0.00 perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.__kmalloc.kmem_alloc.xfs_dir_lookup 0.51 ±193% -100.0% 0.00 perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.__kmalloc.kmem_alloc.xfs_iext_insert 0.01 ± 55% -100.0% 0.00 
perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.do_user_addr_fault.exc_page_fault.asm_exc_page_fault 1.17 ±188% -100.0% 0.00 perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.down_read.xfs_log_commit_cil.__xfs_trans_commit 0.02 ± 53% -100.0% 0.00 perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.down_read.xfs_map_blocks.iomap_writepage_map 0.09 ± 77% -100.0% 0.00 perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.down_write.xfs_ilock.xfs_bmapi_convert_delalloc 0.42 ± 64% -100.0% 0.00 perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.down_write.xfs_ilock.xfs_trans_alloc_inode 0.24 ±104% -100.0% 0.00 perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.dput.__fput.task_work_run 1.71 ± 91% -100.0% 0.00 perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.dput.path_openat.do_filp_open 0.01 ±153% -100.0% 0.00 perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.dput.step_into.walk_component 0.41 ± 63% -100.0% 0.00 perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.dput.terminate_walk.path_openat 1.34 ± 76% -100.0% 0.00 perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.iomap_write_actor.iomap_apply.iomap_file_buffered_write 0.23 ±144% -100.0% 0.00 perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.__alloc_file.alloc_empty_file 0.83 ± 52% -100.0% 0.00 perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.__d_alloc.d_alloc 0.13 ± 66% -100.0% 0.00 perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.xfs_allocbt_init_common.xfs_allocbt_init_cursor 0.07 ± 50% -100.0% 0.00 perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.xfs_buf_item_init._xfs_trans_bjoin 2.11 ±118% -100.0% 0.00 
perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.xfs_inobt_init_common.xfs_inobt_init_cursor 1.31 ±107% -100.0% 0.00 perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.xfs_inode_alloc.xfs_iget 0.48 ± 94% -100.0% 0.00 perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.xfs_inode_item_init.xfs_trans_ijoin 1.36 ± 93% -100.0% 0.00 perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.xfs_trans_alloc.xfs_trans_alloc_icreate 0.29 ± 94% -100.0% 0.00 perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.xfs_trans_alloc.xfs_trans_alloc_inode 0.33 ±138% -100.0% 0.00 perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.xlog_ticket_alloc.xfs_log_reserve 0.08 ± 50% -100.0% 0.00 perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.mempool_alloc.bio_alloc_bioset.iomap_writepage_map 0.20 ± 26% -100.0% 0.00 perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.process_one_work.worker_thread.kthread 0.36 ± 19% -100.0% 0.00 perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.smpboot_thread_fn.kthread.ret_from_fork 0.02 ± 73% -100.0% 0.00 perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.stop_one_cpu.sched_exec.bprm_execve 0.04 ± 9% -100.0% 0.00 perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.submit_bio_checks.submit_bio_noacct.submit_bio 0.02 ± 77% -100.0% 0.00 perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.task_work_run.exit_to_user_mode_prepare.syscall_exit_to_user_mode 0.38 ±209% -100.0% 0.00 perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.wait_for_completion.xfs_btree_split.xfs_btree_make_block_unfull 0.06 ± 67% -100.0% 0.00 perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.write_cache_pages.iomap_writepages.xfs_vm_writepages 0.05 ± 25% -100.0% 0.00 
perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.writeback_sb_inodes.__writeback_inodes_wb.wb_writeback 0.15 ± 39% -100.0% 0.00 perf-sched.sch_delay.avg.ms.preempt_schedule_common.__cond_resched.xfs_trans_alloc.xfs_trans_alloc_inode.xfs_iomap_write_unwritten 0.03 ± 4% -100.0% 0.00 perf-sched.sch_delay.avg.ms.rwsem_down_read_slowpath.xfs_log_commit_cil.__xfs_trans_commit.xfs_create 0.02 ± 16% -100.0% 0.00 perf-sched.sch_delay.avg.ms.rwsem_down_read_slowpath.xfs_log_commit_cil.__xfs_trans_commit.xfs_iomap_write_unwritten 0.12 ± 7% -100.0% 0.00 perf-sched.sch_delay.avg.ms.rwsem_down_write_slowpath.path_openat.do_filp_open.do_sys_openat2 0.06 ± 29% -100.0% 0.00 perf-sched.sch_delay.avg.ms.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait 0.14 ± 57% -100.0% 0.00 perf-sched.sch_delay.avg.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_select 0.01 ± 5% -100.0% 0.00 perf-sched.sch_delay.avg.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_sys_poll 0.01 ± 42% -100.0% 0.00 perf-sched.sch_delay.avg.ms.schedule_preempt_disabled.kthread.ret_from_fork 0.31 ± 78% -100.0% 0.00 perf-sched.sch_delay.avg.ms.schedule_timeout.__down.down.xfs_buf_lock 0.05 ± 28% -100.0% 0.00 perf-sched.sch_delay.avg.ms.schedule_timeout.kcompactd.kthread.ret_from_fork 0.06 ± 12% -100.0% 0.00 perf-sched.sch_delay.avg.ms.schedule_timeout.rcu_gp_kthread.kthread.ret_from_fork 0.06 ± 27% -100.0% 0.00 perf-sched.sch_delay.avg.ms.schedule_timeout.wait_for_completion_killable.__kthread_create_on_node.kthread_create_on_node 0.10 ± 24% -100.0% 0.00 perf-sched.sch_delay.avg.ms.smpboot_thread_fn.kthread.ret_from_fork 0.06 ± 77% -100.0% 0.00 perf-sched.sch_delay.avg.ms.wait_for_partner.fifo_open.do_dentry_open.do_open.isra 0.15 ± 16% -100.0% 0.00 perf-sched.sch_delay.avg.ms.worker_thread.kthread.ret_from_fork 0.02 ± 25% -100.0% 0.00 
perf-sched.sch_delay.avg.ms.xlog_state_get_iclog_space.xlog_write.xlog_cil_push_work.process_one_work 0.11 ± 99% -100.0% 0.00 perf-sched.sch_delay.max.ms.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown] 13.67 ±197% -100.0% 0.00 perf-sched.sch_delay.max.ms.devkmsg_read.vfs_read.ksys_read.do_syscall_64 0.11 ±123% -100.0% 0.00 perf-sched.sch_delay.max.ms.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep.do_syscall_64 1.75 ±197% -100.0% 0.00 perf-sched.sch_delay.max.ms.do_syslog.part.0.kmsg_read.vfs_read 3.80 ±120% -100.0% 0.00 perf-sched.sch_delay.max.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64 9.99 ± 33% -100.0% 0.00 perf-sched.sch_delay.max.ms.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64 1.12 ±122% -100.0% 0.00 perf-sched.sch_delay.max.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_exc_page_fault.[unknown] 2.09 ±126% -100.0% 0.00 perf-sched.sch_delay.max.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_apic_timer_interrupt.[unknown] 7.09 ±108% -100.0% 0.00 perf-sched.sch_delay.max.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_call_function_single.[unknown] 4.94 ± 57% -100.0% 0.00 perf-sched.sch_delay.max.ms.exit_to_user_mode_prepare.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe.[unknown] 0.02 ± 18% -100.0% 0.00 perf-sched.sch_delay.max.ms.exit_to_user_mode_prepare.syscall_exit_to_user_mode.ret_from_fork.[unknown] 0.17 ± 77% -100.0% 0.00 perf-sched.sch_delay.max.ms.futex_wait_queue_me.futex_wait.do_futex.__x64_sys_futex 0.80 ± 71% -100.0% 0.00 perf-sched.sch_delay.max.ms.io_schedule.rq_qos_wait.wbt_wait.__rq_qos_throttle 0.05 ± 15% -100.0% 0.00 perf-sched.sch_delay.max.ms.kthreadd.ret_from_fork 11.62 ± 46% -100.0% 0.00 perf-sched.sch_delay.max.ms.pipe_read.new_sync_read.vfs_read.ksys_read 0.11 ± 85% -100.0% 0.00 perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.__alloc_pages_nodemask.alloc_pages_vma.wp_page_copy 0.03 ± 19% -100.0% 
0.00 perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.__alloc_pages_nodemask.pagecache_get_page.grab_cache_page_write_begin 32.99 ±122% -100.0% 0.00 perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.__kmalloc.kmem_alloc.kmem_alloc_large 5.53 ± 88% -100.0% 0.00 perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.__kmalloc.kmem_alloc.xfs_dir_createname 3.61 ±140% -100.0% 0.00 perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.__kmalloc.kmem_alloc.xfs_dir_lookup 2.06 ±191% -100.0% 0.00 perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.__kmalloc.kmem_alloc.xfs_iext_insert 0.32 ± 97% -100.0% 0.00 perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.do_user_addr_fault.exc_page_fault.asm_exc_page_fault 25.05 ±204% -100.0% 0.00 perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.down_read.xfs_log_commit_cil.__xfs_trans_commit 0.15 ± 48% -100.0% 0.00 perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.down_read.xfs_map_blocks.iomap_writepage_map 0.94 ± 78% -100.0% 0.00 perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.down_write.xfs_ilock.xfs_bmapi_convert_delalloc 4.92 ± 95% -100.0% 0.00 perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.down_write.xfs_ilock.xfs_trans_alloc_inode 6.02 ±114% -100.0% 0.00 perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.dput.__fput.task_work_run 29.19 ±142% -100.0% 0.00 perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.dput.path_openat.do_filp_open 0.16 ±167% -100.0% 0.00 perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.dput.step_into.walk_component 9.74 ± 79% -100.0% 0.00 perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.dput.terminate_walk.path_openat 9.53 ± 74% -100.0% 0.00 perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.iomap_write_actor.iomap_apply.iomap_file_buffered_write 4.52 ±146% -100.0% 
0.00 perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.__alloc_file.alloc_empty_file 9.53 ± 48% -100.0% 0.00 perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.__d_alloc.d_alloc 2.23 ± 69% -100.0% 0.00 perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.xfs_allocbt_init_common.xfs_allocbt_init_cursor 1.50 ± 53% -100.0% 0.00 perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.xfs_buf_item_init._xfs_trans_bjoin 19.39 ±145% -100.0% 0.00 perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.xfs_inobt_init_common.xfs_inobt_init_cursor 6.72 ± 92% -100.0% 0.00 perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.xfs_inode_alloc.xfs_iget 2.77 ±105% -100.0% 0.00 perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.xfs_inode_item_init.xfs_trans_ijoin 14.49 ± 81% -100.0% 0.00 perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.xfs_trans_alloc.xfs_trans_alloc_icreate 2.41 ±188% -100.0% 0.00 perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.xfs_trans_alloc.xfs_trans_alloc_inode 2.40 ±185% -100.0% 0.00 perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.kmem_cache_alloc.xlog_ticket_alloc.xfs_log_reserve 1.59 ± 81% -100.0% 0.00 perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.mempool_alloc.bio_alloc_bioset.iomap_writepage_map 4.64 ±112% -100.0% 0.00 perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.process_one_work.worker_thread.kthread 5.62 ± 60% -100.0% 0.00 perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.smpboot_thread_fn.kthread.ret_from_fork 0.08 ± 21% -100.0% 0.00 perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.stop_one_cpu.affine_move_task.__set_cpus_allowed_ptr 0.19 ±115% -100.0% 0.00 
perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.stop_one_cpu.sched_exec.bprm_execve 1.05 ± 24% -100.0% 0.00 perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.submit_bio_checks.submit_bio_noacct.submit_bio 0.09 ± 84% -100.0% 0.00 perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.task_work_run.exit_to_user_mode_prepare.syscall_exit_to_user_mode 1.09 ±218% -100.0% 0.00 perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.wait_for_completion.xfs_btree_split.xfs_btree_make_block_unfull 1.57 ± 64% -100.0% 0.00 perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.write_cache_pages.iomap_writepages.xfs_vm_writepages 0.73 ± 51% -100.0% 0.00 perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.writeback_sb_inodes.__writeback_inodes_wb.wb_writeback 0.27 ± 23% -100.0% 0.00 perf-sched.sch_delay.max.ms.preempt_schedule_common.__cond_resched.xfs_trans_alloc.xfs_trans_alloc_inode.xfs_iomap_write_unwritten 0.03 ± 4% -100.0% 0.00 perf-sched.sch_delay.max.ms.rwsem_down_read_slowpath.xfs_log_commit_cil.__xfs_trans_commit.xfs_create 0.13 ± 71% -100.0% 0.00 perf-sched.sch_delay.max.ms.rwsem_down_read_slowpath.xfs_log_commit_cil.__xfs_trans_commit.xfs_iomap_write_unwritten 73.71 ± 38% -100.0% 0.00 perf-sched.sch_delay.max.ms.rwsem_down_write_slowpath.path_openat.do_filp_open.do_sys_openat2 0.53 ± 99% -100.0% 0.00 perf-sched.sch_delay.max.ms.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait 0.22 ± 56% -100.0% 0.00 perf-sched.sch_delay.max.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_select 0.05 ± 8% -100.0% 0.00 perf-sched.sch_delay.max.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_sys_poll 0.16 ±198% -100.0% 0.00 perf-sched.sch_delay.max.ms.schedule_preempt_disabled.kthread.ret_from_fork 47.53 ± 97% -100.0% 0.00 perf-sched.sch_delay.max.ms.schedule_timeout.__down.down.xfs_buf_lock 0.45 ± 88% -100.0% 0.00 
perf-sched.sch_delay.max.ms.schedule_timeout.kcompactd.kthread.ret_from_fork 15.35 ± 79% -100.0% 0.00 perf-sched.sch_delay.max.ms.schedule_timeout.rcu_gp_kthread.kthread.ret_from_fork 0.11 ± 35% -100.0% 0.00 perf-sched.sch_delay.max.ms.schedule_timeout.wait_for_completion_killable.__kthread_create_on_node.kthread_create_on_node 16.02 ± 63% -100.0% 0.00 perf-sched.sch_delay.max.ms.smpboot_thread_fn.kthread.ret_from_fork 2.95 ± 98% -100.0% 0.00 perf-sched.sch_delay.max.ms.wait_for_partner.fifo_open.do_dentry_open.do_open.isra 38.70 ± 45% -100.0% 0.00 perf-sched.sch_delay.max.ms.worker_thread.kthread.ret_from_fork 0.28 ± 32% -100.0% 0.00 perf-sched.sch_delay.max.ms.xlog_state_get_iclog_space.xlog_write.xlog_cil_push_work.process_one_work 0.11 ± 9% -100.0% 0.00 perf-sched.total_sch_delay.average.ms 109.33 ± 28% -100.0% 0.00 perf-sched.total_sch_delay.max.ms 43.12 ± 3% -100.0% 0.00 perf-sched.total_wait_and_delay.average.ms 48053 -100.0% 0.00 perf-sched.total_wait_and_delay.count.ms 8044 ± 3% -100.0% 0.00 perf-sched.total_wait_and_delay.max.ms 43.01 ± 3% -100.0% 0.00 perf-sched.total_wait_time.average.ms 8044 ± 3% -100.0% 0.00 perf-sched.total_wait_time.max.ms 881.95 ± 4% -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown] 529.71 ± 54% -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.devkmsg_read.vfs_read.ksys_read.do_syscall_64 720.34 ± 10% -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep.do_syscall_64 518.81 ± 59% -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.do_syslog.part.0.kmsg_read.vfs_read 237.03 ± 4% -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64 44.52 ± 13% -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64 0.43 ± 31% -100.0% 0.00 
    145.88 ± 37%    -100.0%      0.00        perf-sched.wait_and_delay.avg.ms.kthreadd.ret_from_fork
     46.93 ±  9%    -100.0%      0.00        perf-sched.wait_and_delay.avg.ms.pipe_read.new_sync_read.vfs_read.ksys_read
    478.58          -100.0%      0.00        perf-sched.wait_and_delay.avg.ms.schedule_timeout.kcompactd.kthread.ret_from_fork
      3326 ±  9%    -100.0%      0.00        perf-sched.wait_and_delay.count.pipe_read.new_sync_read.vfs_read.ksys_read
      2663 ±  4%    -100.0%      0.00        perf-sched.wait_and_delay.max.ms.pipe_read.new_sync_read.vfs_read.ksys_read
     46.89 ±  9%    -100.0%      0.00        perf-sched.wait_time.avg.ms.pipe_read.new_sync_read.vfs_read.ksys_read
      2662 ±  4%    -100.0%      0.00        perf-sched.wait_time.max.ms.pipe_read.new_sync_read.vfs_read.ksys_read

[ remaining perf-sched.wait_and_delay.{avg,count,max} and perf-sched.wait_time.{avg,max} rows trimmed: every perf-sched metric in this comparison changes by -100.0% to 0.00 ]

Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
--- 0DAY/LKP+ Test Infrastructure Open Source Technology Center https://lists.01.org/hyperkitty/list/lkp@lists.01.org Intel Corporation Thanks, Oliver Sang [-- Attachment #2: config-5.12.0-rc2-00298-g43b2a76b1a5a --] [-- Type: text/plain, Size: 172899 bytes --] # # Automatically generated file; DO NOT EDIT. # Linux/x86_64 5.12.0-rc2 Kernel Configuration # CONFIG_CC_VERSION_TEXT="gcc-9 (Debian 9.3.0-22) 9.3.0" CONFIG_CC_IS_GCC=y CONFIG_GCC_VERSION=90300 CONFIG_CLANG_VERSION=0 CONFIG_LD_IS_BFD=y CONFIG_LD_VERSION=23502 CONFIG_LLD_VERSION=0 CONFIG_CC_CAN_LINK=y CONFIG_CC_CAN_LINK_STATIC=y CONFIG_CC_HAS_ASM_GOTO=y CONFIG_CC_HAS_ASM_INLINE=y CONFIG_IRQ_WORK=y CONFIG_BUILDTIME_TABLE_SORT=y CONFIG_THREAD_INFO_IN_TASK=y # # General setup # CONFIG_INIT_ENV_ARG_LIMIT=32 # CONFIG_COMPILE_TEST is not set CONFIG_LOCALVERSION="" CONFIG_LOCALVERSION_AUTO=y CONFIG_BUILD_SALT="" CONFIG_HAVE_KERNEL_GZIP=y CONFIG_HAVE_KERNEL_BZIP2=y CONFIG_HAVE_KERNEL_LZMA=y CONFIG_HAVE_KERNEL_XZ=y CONFIG_HAVE_KERNEL_LZO=y CONFIG_HAVE_KERNEL_LZ4=y CONFIG_HAVE_KERNEL_ZSTD=y CONFIG_KERNEL_GZIP=y # CONFIG_KERNEL_BZIP2 is not set # CONFIG_KERNEL_LZMA is not set # CONFIG_KERNEL_XZ is not set # CONFIG_KERNEL_LZO is not set # CONFIG_KERNEL_LZ4 is not set # CONFIG_KERNEL_ZSTD is not set CONFIG_DEFAULT_INIT="" CONFIG_DEFAULT_HOSTNAME="(none)" CONFIG_SWAP=y CONFIG_SYSVIPC=y CONFIG_SYSVIPC_SYSCTL=y CONFIG_POSIX_MQUEUE=y CONFIG_POSIX_MQUEUE_SYSCTL=y # CONFIG_WATCH_QUEUE is not set CONFIG_CROSS_MEMORY_ATTACH=y # CONFIG_USELIB is not set CONFIG_AUDIT=y CONFIG_HAVE_ARCH_AUDITSYSCALL=y CONFIG_AUDITSYSCALL=y # # IRQ subsystem # CONFIG_GENERIC_IRQ_PROBE=y CONFIG_GENERIC_IRQ_SHOW=y CONFIG_GENERIC_IRQ_EFFECTIVE_AFF_MASK=y CONFIG_GENERIC_PENDING_IRQ=y CONFIG_GENERIC_IRQ_MIGRATION=y CONFIG_GENERIC_IRQ_INJECTION=y CONFIG_HARDIRQS_SW_RESEND=y CONFIG_IRQ_DOMAIN=y CONFIG_IRQ_DOMAIN_HIERARCHY=y CONFIG_GENERIC_MSI_IRQ=y CONFIG_GENERIC_MSI_IRQ_DOMAIN=y CONFIG_IRQ_MSI_IOMMU=y CONFIG_GENERIC_IRQ_MATRIX_ALLOCATOR=y 
CONFIG_GENERIC_IRQ_RESERVATION_MODE=y CONFIG_IRQ_FORCED_THREADING=y CONFIG_SPARSE_IRQ=y # CONFIG_GENERIC_IRQ_DEBUGFS is not set # end of IRQ subsystem CONFIG_CLOCKSOURCE_WATCHDOG=y CONFIG_ARCH_CLOCKSOURCE_INIT=y CONFIG_CLOCKSOURCE_VALIDATE_LAST_CYCLE=y CONFIG_GENERIC_TIME_VSYSCALL=y CONFIG_GENERIC_CLOCKEVENTS=y CONFIG_GENERIC_CLOCKEVENTS_BROADCAST=y CONFIG_GENERIC_CLOCKEVENTS_MIN_ADJUST=y CONFIG_GENERIC_CMOS_UPDATE=y CONFIG_HAVE_POSIX_CPU_TIMERS_TASK_WORK=y CONFIG_POSIX_CPU_TIMERS_TASK_WORK=y # # Timers subsystem # CONFIG_TICK_ONESHOT=y CONFIG_NO_HZ_COMMON=y # CONFIG_HZ_PERIODIC is not set # CONFIG_NO_HZ_IDLE is not set CONFIG_NO_HZ_FULL=y CONFIG_CONTEXT_TRACKING=y # CONFIG_CONTEXT_TRACKING_FORCE is not set CONFIG_NO_HZ=y CONFIG_HIGH_RES_TIMERS=y # end of Timers subsystem # CONFIG_PREEMPT_NONE is not set CONFIG_PREEMPT_VOLUNTARY=y # CONFIG_PREEMPT is not set CONFIG_PREEMPT_COUNT=y # # CPU/Task time and stats accounting # CONFIG_VIRT_CPU_ACCOUNTING=y CONFIG_VIRT_CPU_ACCOUNTING_GEN=y CONFIG_IRQ_TIME_ACCOUNTING=y CONFIG_HAVE_SCHED_AVG_IRQ=y CONFIG_BSD_PROCESS_ACCT=y CONFIG_BSD_PROCESS_ACCT_V3=y CONFIG_TASKSTATS=y CONFIG_TASK_DELAY_ACCT=y CONFIG_TASK_XACCT=y CONFIG_TASK_IO_ACCOUNTING=y # CONFIG_PSI is not set # end of CPU/Task time and stats accounting CONFIG_CPU_ISOLATION=y # # RCU Subsystem # CONFIG_TREE_RCU=y # CONFIG_RCU_EXPERT is not set CONFIG_SRCU=y CONFIG_TREE_SRCU=y CONFIG_TASKS_RCU_GENERIC=y CONFIG_TASKS_RCU=y CONFIG_TASKS_RUDE_RCU=y CONFIG_TASKS_TRACE_RCU=y CONFIG_RCU_STALL_COMMON=y CONFIG_RCU_NEED_SEGCBLIST=y CONFIG_RCU_NOCB_CPU=y # end of RCU Subsystem CONFIG_BUILD_BIN2C=y CONFIG_IKCONFIG=y CONFIG_IKCONFIG_PROC=y # CONFIG_IKHEADERS is not set CONFIG_LOG_BUF_SHIFT=20 CONFIG_LOG_CPU_MAX_BUF_SHIFT=12 CONFIG_PRINTK_SAFE_LOG_BUF_SHIFT=13 CONFIG_HAVE_UNSTABLE_SCHED_CLOCK=y # # Scheduler features # # CONFIG_UCLAMP_TASK is not set # end of Scheduler features CONFIG_ARCH_SUPPORTS_NUMA_BALANCING=y CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH=y CONFIG_CC_HAS_INT128=y 
CONFIG_ARCH_SUPPORTS_INT128=y CONFIG_NUMA_BALANCING=y CONFIG_NUMA_BALANCING_DEFAULT_ENABLED=y CONFIG_CGROUPS=y CONFIG_PAGE_COUNTER=y CONFIG_MEMCG=y CONFIG_MEMCG_SWAP=y CONFIG_MEMCG_KMEM=y CONFIG_BLK_CGROUP=y CONFIG_CGROUP_WRITEBACK=y CONFIG_CGROUP_SCHED=y CONFIG_FAIR_GROUP_SCHED=y CONFIG_CFS_BANDWIDTH=y CONFIG_RT_GROUP_SCHED=y CONFIG_CGROUP_PIDS=y CONFIG_CGROUP_RDMA=y CONFIG_CGROUP_FREEZER=y CONFIG_CGROUP_HUGETLB=y CONFIG_CPUSETS=y CONFIG_PROC_PID_CPUSET=y CONFIG_CGROUP_DEVICE=y CONFIG_CGROUP_CPUACCT=y CONFIG_CGROUP_PERF=y CONFIG_CGROUP_BPF=y # CONFIG_CGROUP_DEBUG is not set CONFIG_SOCK_CGROUP_DATA=y CONFIG_NAMESPACES=y CONFIG_UTS_NS=y CONFIG_TIME_NS=y CONFIG_IPC_NS=y CONFIG_USER_NS=y CONFIG_PID_NS=y CONFIG_NET_NS=y # CONFIG_CHECKPOINT_RESTORE is not set CONFIG_SCHED_AUTOGROUP=y # CONFIG_SYSFS_DEPRECATED is not set CONFIG_RELAY=y CONFIG_BLK_DEV_INITRD=y CONFIG_INITRAMFS_SOURCE="" CONFIG_RD_GZIP=y CONFIG_RD_BZIP2=y CONFIG_RD_LZMA=y CONFIG_RD_XZ=y CONFIG_RD_LZO=y CONFIG_RD_LZ4=y CONFIG_RD_ZSTD=y # CONFIG_BOOT_CONFIG is not set CONFIG_CC_OPTIMIZE_FOR_PERFORMANCE=y # CONFIG_CC_OPTIMIZE_FOR_SIZE is not set CONFIG_LD_ORPHAN_WARN=y CONFIG_SYSCTL=y CONFIG_HAVE_UID16=y CONFIG_SYSCTL_EXCEPTION_TRACE=y CONFIG_HAVE_PCSPKR_PLATFORM=y CONFIG_BPF=y # CONFIG_EXPERT is not set CONFIG_UID16=y CONFIG_MULTIUSER=y CONFIG_SGETMASK_SYSCALL=y CONFIG_SYSFS_SYSCALL=y CONFIG_FHANDLE=y CONFIG_POSIX_TIMERS=y CONFIG_PRINTK=y CONFIG_PRINTK_NMI=y CONFIG_BUG=y CONFIG_ELF_CORE=y CONFIG_PCSPKR_PLATFORM=y CONFIG_BASE_FULL=y CONFIG_FUTEX=y CONFIG_FUTEX_PI=y CONFIG_EPOLL=y CONFIG_SIGNALFD=y CONFIG_TIMERFD=y CONFIG_EVENTFD=y CONFIG_SHMEM=y CONFIG_AIO=y CONFIG_IO_URING=y CONFIG_ADVISE_SYSCALLS=y CONFIG_HAVE_ARCH_USERFAULTFD_WP=y CONFIG_MEMBARRIER=y CONFIG_KALLSYMS=y CONFIG_KALLSYMS_ALL=y CONFIG_KALLSYMS_ABSOLUTE_PERCPU=y CONFIG_KALLSYMS_BASE_RELATIVE=y # CONFIG_BPF_LSM is not set CONFIG_BPF_SYSCALL=y CONFIG_ARCH_WANT_DEFAULT_BPF_JIT=y CONFIG_BPF_JIT_ALWAYS_ON=y CONFIG_BPF_JIT_DEFAULT_ON=y # 
CONFIG_BPF_PRELOAD is not set CONFIG_USERFAULTFD=y CONFIG_ARCH_HAS_MEMBARRIER_SYNC_CORE=y CONFIG_KCMP=y CONFIG_RSEQ=y # CONFIG_EMBEDDED is not set CONFIG_HAVE_PERF_EVENTS=y # # Kernel Performance Events And Counters # CONFIG_PERF_EVENTS=y # CONFIG_DEBUG_PERF_USE_VMALLOC is not set # end of Kernel Performance Events And Counters CONFIG_VM_EVENT_COUNTERS=y CONFIG_SLUB_DEBUG=y # CONFIG_COMPAT_BRK is not set # CONFIG_SLAB is not set CONFIG_SLUB=y CONFIG_SLAB_MERGE_DEFAULT=y CONFIG_SLAB_FREELIST_RANDOM=y # CONFIG_SLAB_FREELIST_HARDENED is not set CONFIG_SHUFFLE_PAGE_ALLOCATOR=y CONFIG_SLUB_CPU_PARTIAL=y CONFIG_SYSTEM_DATA_VERIFICATION=y CONFIG_PROFILING=y CONFIG_TRACEPOINTS=y # end of General setup CONFIG_64BIT=y CONFIG_X86_64=y CONFIG_X86=y CONFIG_INSTRUCTION_DECODER=y CONFIG_OUTPUT_FORMAT="elf64-x86-64" CONFIG_LOCKDEP_SUPPORT=y CONFIG_STACKTRACE_SUPPORT=y CONFIG_MMU=y CONFIG_ARCH_MMAP_RND_BITS_MIN=28 CONFIG_ARCH_MMAP_RND_BITS_MAX=32 CONFIG_ARCH_MMAP_RND_COMPAT_BITS_MIN=8 CONFIG_ARCH_MMAP_RND_COMPAT_BITS_MAX=16 CONFIG_GENERIC_ISA_DMA=y CONFIG_GENERIC_BUG=y CONFIG_GENERIC_BUG_RELATIVE_POINTERS=y CONFIG_ARCH_MAY_HAVE_PC_FDC=y CONFIG_GENERIC_CALIBRATE_DELAY=y CONFIG_ARCH_HAS_CPU_RELAX=y CONFIG_ARCH_HAS_CACHE_LINE_SIZE=y CONFIG_ARCH_HAS_FILTER_PGPROT=y CONFIG_HAVE_SETUP_PER_CPU_AREA=y CONFIG_NEED_PER_CPU_EMBED_FIRST_CHUNK=y CONFIG_NEED_PER_CPU_PAGE_FIRST_CHUNK=y CONFIG_ARCH_HIBERNATION_POSSIBLE=y CONFIG_ARCH_SUSPEND_POSSIBLE=y CONFIG_ARCH_WANT_GENERAL_HUGETLB=y CONFIG_ZONE_DMA32=y CONFIG_AUDIT_ARCH=y CONFIG_HAVE_INTEL_TXT=y CONFIG_X86_64_SMP=y CONFIG_ARCH_SUPPORTS_UPROBES=y CONFIG_FIX_EARLYCON_MEM=y CONFIG_DYNAMIC_PHYSICAL_MASK=y CONFIG_PGTABLE_LEVELS=5 CONFIG_CC_HAS_SANE_STACKPROTECTOR=y # # Processor type and features # CONFIG_ZONE_DMA=y CONFIG_SMP=y CONFIG_X86_FEATURE_NAMES=y CONFIG_X86_X2APIC=y CONFIG_X86_MPPARSE=y # CONFIG_GOLDFISH is not set CONFIG_RETPOLINE=y CONFIG_X86_CPU_RESCTRL=y CONFIG_X86_EXTENDED_PLATFORM=y # CONFIG_X86_NUMACHIP is not set # CONFIG_X86_VSMP 
is not set CONFIG_X86_UV=y # CONFIG_X86_GOLDFISH is not set # CONFIG_X86_INTEL_MID is not set CONFIG_X86_INTEL_LPSS=y CONFIG_X86_AMD_PLATFORM_DEVICE=y CONFIG_IOSF_MBI=y # CONFIG_IOSF_MBI_DEBUG is not set CONFIG_X86_SUPPORTS_MEMORY_FAILURE=y # CONFIG_SCHED_OMIT_FRAME_POINTER is not set CONFIG_HYPERVISOR_GUEST=y CONFIG_PARAVIRT=y # CONFIG_PARAVIRT_DEBUG is not set CONFIG_PARAVIRT_SPINLOCKS=y CONFIG_X86_HV_CALLBACK_VECTOR=y CONFIG_XEN=y # CONFIG_XEN_PV is not set CONFIG_XEN_PVHVM=y CONFIG_XEN_PVHVM_SMP=y CONFIG_XEN_PVHVM_GUEST=y CONFIG_XEN_SAVE_RESTORE=y # CONFIG_XEN_DEBUG_FS is not set # CONFIG_XEN_PVH is not set CONFIG_KVM_GUEST=y CONFIG_ARCH_CPUIDLE_HALTPOLL=y # CONFIG_PVH is not set CONFIG_PARAVIRT_TIME_ACCOUNTING=y CONFIG_PARAVIRT_CLOCK=y # CONFIG_JAILHOUSE_GUEST is not set # CONFIG_ACRN_GUEST is not set # CONFIG_MK8 is not set # CONFIG_MPSC is not set # CONFIG_MCORE2 is not set # CONFIG_MATOM is not set CONFIG_GENERIC_CPU=y CONFIG_X86_INTERNODE_CACHE_SHIFT=6 CONFIG_X86_L1_CACHE_SHIFT=6 CONFIG_X86_TSC=y CONFIG_X86_CMPXCHG64=y CONFIG_X86_CMOV=y CONFIG_X86_MINIMUM_CPU_FAMILY=64 CONFIG_X86_DEBUGCTLMSR=y CONFIG_IA32_FEAT_CTL=y CONFIG_X86_VMX_FEATURE_NAMES=y CONFIG_CPU_SUP_INTEL=y CONFIG_CPU_SUP_AMD=y CONFIG_CPU_SUP_HYGON=y CONFIG_CPU_SUP_CENTAUR=y CONFIG_CPU_SUP_ZHAOXIN=y CONFIG_HPET_TIMER=y CONFIG_HPET_EMULATE_RTC=y CONFIG_DMI=y # CONFIG_GART_IOMMU is not set CONFIG_MAXSMP=y CONFIG_NR_CPUS_RANGE_BEGIN=8192 CONFIG_NR_CPUS_RANGE_END=8192 CONFIG_NR_CPUS_DEFAULT=8192 CONFIG_NR_CPUS=8192 CONFIG_SCHED_SMT=y CONFIG_SCHED_MC=y CONFIG_SCHED_MC_PRIO=y CONFIG_X86_LOCAL_APIC=y CONFIG_X86_IO_APIC=y CONFIG_X86_REROUTE_FOR_BROKEN_BOOT_IRQS=y CONFIG_X86_MCE=y CONFIG_X86_MCELOG_LEGACY=y CONFIG_X86_MCE_INTEL=y CONFIG_X86_MCE_AMD=y CONFIG_X86_MCE_THRESHOLD=y CONFIG_X86_MCE_INJECT=m # # Performance monitoring # CONFIG_PERF_EVENTS_INTEL_UNCORE=m CONFIG_PERF_EVENTS_INTEL_RAPL=m CONFIG_PERF_EVENTS_INTEL_CSTATE=m CONFIG_PERF_EVENTS_AMD_POWER=m # end of Performance monitoring 
CONFIG_X86_16BIT=y CONFIG_X86_ESPFIX64=y CONFIG_X86_VSYSCALL_EMULATION=y CONFIG_X86_IOPL_IOPERM=y CONFIG_I8K=m CONFIG_MICROCODE=y CONFIG_MICROCODE_INTEL=y CONFIG_MICROCODE_AMD=y CONFIG_MICROCODE_OLD_INTERFACE=y CONFIG_X86_MSR=y CONFIG_X86_CPUID=y CONFIG_X86_5LEVEL=y CONFIG_X86_DIRECT_GBPAGES=y # CONFIG_X86_CPA_STATISTICS is not set CONFIG_AMD_MEM_ENCRYPT=y # CONFIG_AMD_MEM_ENCRYPT_ACTIVE_BY_DEFAULT is not set CONFIG_NUMA=y CONFIG_AMD_NUMA=y CONFIG_X86_64_ACPI_NUMA=y CONFIG_NUMA_EMU=y CONFIG_NODES_SHIFT=10 CONFIG_ARCH_SPARSEMEM_ENABLE=y CONFIG_ARCH_SPARSEMEM_DEFAULT=y CONFIG_ARCH_SELECT_MEMORY_MODEL=y # CONFIG_ARCH_MEMORY_PROBE is not set CONFIG_ARCH_PROC_KCORE_TEXT=y CONFIG_ILLEGAL_POINTER_VALUE=0xdead000000000000 CONFIG_X86_PMEM_LEGACY_DEVICE=y CONFIG_X86_PMEM_LEGACY=m CONFIG_X86_CHECK_BIOS_CORRUPTION=y # CONFIG_X86_BOOTPARAM_MEMORY_CORRUPTION_CHECK is not set CONFIG_X86_RESERVE_LOW=64 CONFIG_MTRR=y CONFIG_MTRR_SANITIZER=y CONFIG_MTRR_SANITIZER_ENABLE_DEFAULT=1 CONFIG_MTRR_SANITIZER_SPARE_REG_NR_DEFAULT=1 CONFIG_X86_PAT=y CONFIG_ARCH_USES_PG_UNCACHED=y CONFIG_ARCH_RANDOM=y CONFIG_X86_SMAP=y CONFIG_X86_UMIP=y CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS=y CONFIG_X86_INTEL_TSX_MODE_OFF=y # CONFIG_X86_INTEL_TSX_MODE_ON is not set # CONFIG_X86_INTEL_TSX_MODE_AUTO is not set # CONFIG_X86_SGX is not set CONFIG_EFI=y CONFIG_EFI_STUB=y CONFIG_EFI_MIXED=y # CONFIG_HZ_100 is not set # CONFIG_HZ_250 is not set # CONFIG_HZ_300 is not set CONFIG_HZ_1000=y CONFIG_HZ=1000 CONFIG_SCHED_HRTICK=y CONFIG_KEXEC=y CONFIG_KEXEC_FILE=y CONFIG_ARCH_HAS_KEXEC_PURGATORY=y # CONFIG_KEXEC_SIG is not set CONFIG_CRASH_DUMP=y CONFIG_KEXEC_JUMP=y CONFIG_PHYSICAL_START=0x1000000 CONFIG_RELOCATABLE=y CONFIG_RANDOMIZE_BASE=y CONFIG_X86_NEED_RELOCS=y CONFIG_PHYSICAL_ALIGN=0x200000 CONFIG_DYNAMIC_MEMORY_LAYOUT=y CONFIG_RANDOMIZE_MEMORY=y CONFIG_RANDOMIZE_MEMORY_PHYSICAL_PADDING=0xa CONFIG_HOTPLUG_CPU=y CONFIG_BOOTPARAM_HOTPLUG_CPU0=y # CONFIG_DEBUG_HOTPLUG_CPU0 is not set # CONFIG_COMPAT_VDSO is not set 
CONFIG_LEGACY_VSYSCALL_EMULATE=y # CONFIG_LEGACY_VSYSCALL_XONLY is not set # CONFIG_LEGACY_VSYSCALL_NONE is not set # CONFIG_CMDLINE_BOOL is not set CONFIG_MODIFY_LDT_SYSCALL=y CONFIG_HAVE_LIVEPATCH=y CONFIG_LIVEPATCH=y # end of Processor type and features CONFIG_ARCH_HAS_ADD_PAGES=y CONFIG_ARCH_ENABLE_MEMORY_HOTPLUG=y CONFIG_ARCH_ENABLE_MEMORY_HOTREMOVE=y CONFIG_USE_PERCPU_NUMA_NODE_ID=y CONFIG_ARCH_ENABLE_SPLIT_PMD_PTLOCK=y CONFIG_ARCH_ENABLE_HUGEPAGE_MIGRATION=y CONFIG_ARCH_ENABLE_THP_MIGRATION=y # # Power management and ACPI options # CONFIG_ARCH_HIBERNATION_HEADER=y CONFIG_SUSPEND=y CONFIG_SUSPEND_FREEZER=y CONFIG_HIBERNATE_CALLBACKS=y CONFIG_HIBERNATION=y CONFIG_HIBERNATION_SNAPSHOT_DEV=y CONFIG_PM_STD_PARTITION="" CONFIG_PM_SLEEP=y CONFIG_PM_SLEEP_SMP=y # CONFIG_PM_AUTOSLEEP is not set # CONFIG_PM_WAKELOCKS is not set CONFIG_PM=y CONFIG_PM_DEBUG=y # CONFIG_PM_ADVANCED_DEBUG is not set # CONFIG_PM_TEST_SUSPEND is not set CONFIG_PM_SLEEP_DEBUG=y # CONFIG_PM_TRACE_RTC is not set CONFIG_PM_CLK=y # CONFIG_WQ_POWER_EFFICIENT_DEFAULT is not set # CONFIG_ENERGY_MODEL is not set CONFIG_ARCH_SUPPORTS_ACPI=y CONFIG_ACPI=y CONFIG_ACPI_LEGACY_TABLES_LOOKUP=y CONFIG_ARCH_MIGHT_HAVE_ACPI_PDC=y CONFIG_ACPI_SYSTEM_POWER_STATES_SUPPORT=y # CONFIG_ACPI_DEBUGGER is not set CONFIG_ACPI_SPCR_TABLE=y # CONFIG_ACPI_FPDT is not set CONFIG_ACPI_LPIT=y CONFIG_ACPI_SLEEP=y CONFIG_ACPI_REV_OVERRIDE_POSSIBLE=y CONFIG_ACPI_EC_DEBUGFS=m CONFIG_ACPI_AC=y CONFIG_ACPI_BATTERY=y CONFIG_ACPI_BUTTON=y CONFIG_ACPI_VIDEO=m CONFIG_ACPI_FAN=y CONFIG_ACPI_TAD=m CONFIG_ACPI_DOCK=y CONFIG_ACPI_CPU_FREQ_PSS=y CONFIG_ACPI_PROCESSOR_CSTATE=y CONFIG_ACPI_PROCESSOR_IDLE=y CONFIG_ACPI_CPPC_LIB=y CONFIG_ACPI_PROCESSOR=y CONFIG_ACPI_IPMI=m CONFIG_ACPI_HOTPLUG_CPU=y CONFIG_ACPI_PROCESSOR_AGGREGATOR=m CONFIG_ACPI_THERMAL=y CONFIG_ACPI_PLATFORM_PROFILE=m CONFIG_ARCH_HAS_ACPI_TABLE_UPGRADE=y CONFIG_ACPI_TABLE_UPGRADE=y # CONFIG_ACPI_DEBUG is not set CONFIG_ACPI_PCI_SLOT=y CONFIG_ACPI_CONTAINER=y 
CONFIG_ACPI_HOTPLUG_MEMORY=y CONFIG_ACPI_HOTPLUG_IOAPIC=y CONFIG_ACPI_SBS=m CONFIG_ACPI_HED=y # CONFIG_ACPI_CUSTOM_METHOD is not set CONFIG_ACPI_BGRT=y CONFIG_ACPI_NFIT=m # CONFIG_NFIT_SECURITY_DEBUG is not set CONFIG_ACPI_NUMA=y # CONFIG_ACPI_HMAT is not set CONFIG_HAVE_ACPI_APEI=y CONFIG_HAVE_ACPI_APEI_NMI=y CONFIG_ACPI_APEI=y CONFIG_ACPI_APEI_GHES=y CONFIG_ACPI_APEI_PCIEAER=y CONFIG_ACPI_APEI_MEMORY_FAILURE=y CONFIG_ACPI_APEI_EINJ=m CONFIG_ACPI_APEI_ERST_DEBUG=y # CONFIG_ACPI_DPTF is not set CONFIG_ACPI_WATCHDOG=y CONFIG_ACPI_EXTLOG=m CONFIG_ACPI_ADXL=y # CONFIG_ACPI_CONFIGFS is not set CONFIG_PMIC_OPREGION=y CONFIG_X86_PM_TIMER=y # # CPU Frequency scaling # CONFIG_CPU_FREQ=y CONFIG_CPU_FREQ_GOV_ATTR_SET=y CONFIG_CPU_FREQ_GOV_COMMON=y CONFIG_CPU_FREQ_STAT=y CONFIG_CPU_FREQ_DEFAULT_GOV_PERFORMANCE=y # CONFIG_CPU_FREQ_DEFAULT_GOV_POWERSAVE is not set # CONFIG_CPU_FREQ_DEFAULT_GOV_USERSPACE is not set # CONFIG_CPU_FREQ_DEFAULT_GOV_SCHEDUTIL is not set CONFIG_CPU_FREQ_GOV_PERFORMANCE=y CONFIG_CPU_FREQ_GOV_POWERSAVE=y CONFIG_CPU_FREQ_GOV_USERSPACE=y CONFIG_CPU_FREQ_GOV_ONDEMAND=y CONFIG_CPU_FREQ_GOV_CONSERVATIVE=y CONFIG_CPU_FREQ_GOV_SCHEDUTIL=y # # CPU frequency scaling drivers # CONFIG_X86_INTEL_PSTATE=y # CONFIG_X86_PCC_CPUFREQ is not set CONFIG_X86_ACPI_CPUFREQ=m CONFIG_X86_ACPI_CPUFREQ_CPB=y CONFIG_X86_POWERNOW_K8=m CONFIG_X86_AMD_FREQ_SENSITIVITY=m # CONFIG_X86_SPEEDSTEP_CENTRINO is not set CONFIG_X86_P4_CLOCKMOD=m # # shared options # CONFIG_X86_SPEEDSTEP_LIB=m # end of CPU Frequency scaling # # CPU Idle # CONFIG_CPU_IDLE=y # CONFIG_CPU_IDLE_GOV_LADDER is not set CONFIG_CPU_IDLE_GOV_MENU=y # CONFIG_CPU_IDLE_GOV_TEO is not set # CONFIG_CPU_IDLE_GOV_HALTPOLL is not set CONFIG_HALTPOLL_CPUIDLE=y # end of CPU Idle CONFIG_INTEL_IDLE=y # end of Power management and ACPI options # # Bus options (PCI etc.) 
# CONFIG_PCI_DIRECT=y CONFIG_PCI_MMCONFIG=y CONFIG_PCI_XEN=y CONFIG_MMCONF_FAM10H=y CONFIG_ISA_DMA_API=y CONFIG_AMD_NB=y # CONFIG_X86_SYSFB is not set # end of Bus options (PCI etc.) # # Binary Emulations # CONFIG_IA32_EMULATION=y # CONFIG_X86_X32 is not set CONFIG_COMPAT_32=y CONFIG_COMPAT=y CONFIG_COMPAT_FOR_U64_ALIGNMENT=y CONFIG_SYSVIPC_COMPAT=y # end of Binary Emulations # # Firmware Drivers # CONFIG_EDD=m # CONFIG_EDD_OFF is not set CONFIG_FIRMWARE_MEMMAP=y CONFIG_DMIID=y CONFIG_DMI_SYSFS=y CONFIG_DMI_SCAN_MACHINE_NON_EFI_FALLBACK=y # CONFIG_ISCSI_IBFT is not set CONFIG_FW_CFG_SYSFS=y # CONFIG_FW_CFG_SYSFS_CMDLINE is not set # CONFIG_GOOGLE_FIRMWARE is not set # # EFI (Extensible Firmware Interface) Support # CONFIG_EFI_VARS=y CONFIG_EFI_ESRT=y CONFIG_EFI_VARS_PSTORE=y CONFIG_EFI_VARS_PSTORE_DEFAULT_DISABLE=y CONFIG_EFI_RUNTIME_MAP=y # CONFIG_EFI_FAKE_MEMMAP is not set CONFIG_EFI_RUNTIME_WRAPPERS=y CONFIG_EFI_GENERIC_STUB_INITRD_CMDLINE_LOADER=y # CONFIG_EFI_BOOTLOADER_CONTROL is not set # CONFIG_EFI_CAPSULE_LOADER is not set # CONFIG_EFI_TEST is not set CONFIG_APPLE_PROPERTIES=y # CONFIG_RESET_ATTACK_MITIGATION is not set # CONFIG_EFI_RCI2_TABLE is not set # CONFIG_EFI_DISABLE_PCI_DMA is not set # end of EFI (Extensible Firmware Interface) Support CONFIG_UEFI_CPER=y CONFIG_UEFI_CPER_X86=y CONFIG_EFI_DEV_PATH_PARSER=y CONFIG_EFI_EARLYCON=y CONFIG_EFI_CUSTOM_SSDT_OVERLAYS=y # # Tegra firmware driver # # end of Tegra firmware driver # end of Firmware Drivers CONFIG_HAVE_KVM=y CONFIG_HAVE_KVM_IRQCHIP=y CONFIG_HAVE_KVM_IRQFD=y CONFIG_HAVE_KVM_IRQ_ROUTING=y CONFIG_HAVE_KVM_EVENTFD=y CONFIG_KVM_MMIO=y CONFIG_KVM_ASYNC_PF=y CONFIG_HAVE_KVM_MSI=y CONFIG_HAVE_KVM_CPU_RELAX_INTERCEPT=y CONFIG_KVM_VFIO=y CONFIG_KVM_GENERIC_DIRTYLOG_READ_PROTECT=y CONFIG_KVM_COMPAT=y CONFIG_HAVE_KVM_IRQ_BYPASS=y CONFIG_HAVE_KVM_NO_POLL=y CONFIG_KVM_XFER_TO_GUEST_WORK=y CONFIG_VIRTUALIZATION=y CONFIG_KVM=m CONFIG_KVM_INTEL=m # CONFIG_KVM_AMD is not set # CONFIG_KVM_XEN is not set 
CONFIG_KVM_MMU_AUDIT=y CONFIG_AS_AVX512=y CONFIG_AS_SHA1_NI=y CONFIG_AS_SHA256_NI=y CONFIG_AS_TPAUSE=y # # General architecture-dependent options # CONFIG_CRASH_CORE=y CONFIG_KEXEC_CORE=y CONFIG_HOTPLUG_SMT=y CONFIG_GENERIC_ENTRY=y CONFIG_KPROBES=y CONFIG_JUMP_LABEL=y # CONFIG_STATIC_KEYS_SELFTEST is not set # CONFIG_STATIC_CALL_SELFTEST is not set CONFIG_OPTPROBES=y CONFIG_KPROBES_ON_FTRACE=y CONFIG_UPROBES=y CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS=y CONFIG_ARCH_USE_BUILTIN_BSWAP=y CONFIG_KRETPROBES=y CONFIG_USER_RETURN_NOTIFIER=y CONFIG_HAVE_IOREMAP_PROT=y CONFIG_HAVE_KPROBES=y CONFIG_HAVE_KRETPROBES=y CONFIG_HAVE_OPTPROBES=y CONFIG_HAVE_KPROBES_ON_FTRACE=y CONFIG_HAVE_FUNCTION_ERROR_INJECTION=y CONFIG_HAVE_NMI=y CONFIG_HAVE_ARCH_TRACEHOOK=y CONFIG_HAVE_DMA_CONTIGUOUS=y CONFIG_GENERIC_SMP_IDLE_THREAD=y CONFIG_ARCH_HAS_FORTIFY_SOURCE=y CONFIG_ARCH_HAS_SET_MEMORY=y CONFIG_ARCH_HAS_SET_DIRECT_MAP=y CONFIG_HAVE_ARCH_THREAD_STRUCT_WHITELIST=y CONFIG_ARCH_WANTS_DYNAMIC_TASK_STRUCT=y CONFIG_HAVE_ASM_MODVERSIONS=y CONFIG_HAVE_REGS_AND_STACK_ACCESS_API=y CONFIG_HAVE_RSEQ=y CONFIG_HAVE_FUNCTION_ARG_ACCESS_API=y CONFIG_HAVE_HW_BREAKPOINT=y CONFIG_HAVE_MIXED_BREAKPOINTS_REGS=y CONFIG_HAVE_USER_RETURN_NOTIFIER=y CONFIG_HAVE_PERF_EVENTS_NMI=y CONFIG_HAVE_HARDLOCKUP_DETECTOR_PERF=y CONFIG_HAVE_PERF_REGS=y CONFIG_HAVE_PERF_USER_STACK_DUMP=y CONFIG_HAVE_ARCH_JUMP_LABEL=y CONFIG_HAVE_ARCH_JUMP_LABEL_RELATIVE=y CONFIG_MMU_GATHER_TABLE_FREE=y CONFIG_MMU_GATHER_RCU_TABLE_FREE=y CONFIG_ARCH_HAVE_NMI_SAFE_CMPXCHG=y CONFIG_HAVE_ALIGNED_STRUCT_PAGE=y CONFIG_HAVE_CMPXCHG_LOCAL=y CONFIG_HAVE_CMPXCHG_DOUBLE=y CONFIG_ARCH_WANT_COMPAT_IPC_PARSE_VERSION=y CONFIG_ARCH_WANT_OLD_COMPAT_IPC=y CONFIG_HAVE_ARCH_SECCOMP=y CONFIG_HAVE_ARCH_SECCOMP_FILTER=y CONFIG_SECCOMP=y CONFIG_SECCOMP_FILTER=y # CONFIG_SECCOMP_CACHE_DEBUG is not set CONFIG_HAVE_ARCH_STACKLEAK=y CONFIG_HAVE_STACKPROTECTOR=y CONFIG_STACKPROTECTOR=y CONFIG_STACKPROTECTOR_STRONG=y CONFIG_ARCH_SUPPORTS_LTO_CLANG=y 
CONFIG_ARCH_SUPPORTS_LTO_CLANG_THIN=y CONFIG_LTO_NONE=y CONFIG_HAVE_ARCH_WITHIN_STACK_FRAMES=y CONFIG_HAVE_CONTEXT_TRACKING=y CONFIG_HAVE_CONTEXT_TRACKING_OFFSTACK=y CONFIG_HAVE_VIRT_CPU_ACCOUNTING_GEN=y CONFIG_HAVE_IRQ_TIME_ACCOUNTING=y CONFIG_HAVE_MOVE_PUD=y CONFIG_HAVE_MOVE_PMD=y CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE=y CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD=y CONFIG_HAVE_ARCH_HUGE_VMAP=y CONFIG_ARCH_WANT_HUGE_PMD_SHARE=y CONFIG_HAVE_ARCH_SOFT_DIRTY=y CONFIG_HAVE_MOD_ARCH_SPECIFIC=y CONFIG_MODULES_USE_ELF_RELA=y CONFIG_HAVE_IRQ_EXIT_ON_IRQ_STACK=y CONFIG_HAVE_SOFTIRQ_ON_OWN_STACK=y CONFIG_ARCH_HAS_ELF_RANDOMIZE=y CONFIG_HAVE_ARCH_MMAP_RND_BITS=y CONFIG_HAVE_EXIT_THREAD=y CONFIG_ARCH_MMAP_RND_BITS=28 CONFIG_HAVE_ARCH_MMAP_RND_COMPAT_BITS=y CONFIG_ARCH_MMAP_RND_COMPAT_BITS=8 CONFIG_HAVE_ARCH_COMPAT_MMAP_BASES=y CONFIG_HAVE_STACK_VALIDATION=y CONFIG_HAVE_RELIABLE_STACKTRACE=y CONFIG_OLD_SIGSUSPEND3=y CONFIG_COMPAT_OLD_SIGACTION=y CONFIG_COMPAT_32BIT_TIME=y CONFIG_HAVE_ARCH_VMAP_STACK=y CONFIG_VMAP_STACK=y CONFIG_ARCH_HAS_STRICT_KERNEL_RWX=y CONFIG_STRICT_KERNEL_RWX=y CONFIG_ARCH_HAS_STRICT_MODULE_RWX=y CONFIG_STRICT_MODULE_RWX=y CONFIG_HAVE_ARCH_PREL32_RELOCATIONS=y CONFIG_ARCH_USE_MEMREMAP_PROT=y # CONFIG_LOCK_EVENT_COUNTS is not set CONFIG_ARCH_HAS_MEM_ENCRYPT=y CONFIG_HAVE_STATIC_CALL=y CONFIG_HAVE_STATIC_CALL_INLINE=y CONFIG_HAVE_PREEMPT_DYNAMIC=y CONFIG_ARCH_WANT_LD_ORPHAN_WARN=y CONFIG_ARCH_SUPPORTS_DEBUG_PAGEALLOC=y CONFIG_ARCH_HAS_ELFCORE_COMPAT=y # # GCOV-based kernel profiling # # CONFIG_GCOV_KERNEL is not set CONFIG_ARCH_HAS_GCOV_PROFILE_ALL=y # end of GCOV-based kernel profiling CONFIG_HAVE_GCC_PLUGINS=y # end of General architecture-dependent options CONFIG_RT_MUTEXES=y CONFIG_BASE_SMALL=0 CONFIG_MODULE_SIG_FORMAT=y CONFIG_MODULES=y CONFIG_MODULE_FORCE_LOAD=y CONFIG_MODULE_UNLOAD=y # CONFIG_MODULE_FORCE_UNLOAD is not set # CONFIG_MODVERSIONS is not set # CONFIG_MODULE_SRCVERSION_ALL is not set CONFIG_MODULE_SIG=y # CONFIG_MODULE_SIG_FORCE is not set 
CONFIG_MODULE_SIG_ALL=y # CONFIG_MODULE_SIG_SHA1 is not set # CONFIG_MODULE_SIG_SHA224 is not set CONFIG_MODULE_SIG_SHA256=y # CONFIG_MODULE_SIG_SHA384 is not set # CONFIG_MODULE_SIG_SHA512 is not set CONFIG_MODULE_SIG_HASH="sha256" # CONFIG_MODULE_COMPRESS is not set # CONFIG_MODULE_ALLOW_MISSING_NAMESPACE_IMPORTS is not set CONFIG_MODULES_TREE_LOOKUP=y CONFIG_BLOCK=y CONFIG_BLK_SCSI_REQUEST=y CONFIG_BLK_CGROUP_RWSTAT=y CONFIG_BLK_DEV_BSG=y CONFIG_BLK_DEV_BSGLIB=y CONFIG_BLK_DEV_INTEGRITY=y CONFIG_BLK_DEV_INTEGRITY_T10=m CONFIG_BLK_DEV_ZONED=y CONFIG_BLK_DEV_THROTTLING=y # CONFIG_BLK_DEV_THROTTLING_LOW is not set # CONFIG_BLK_CMDLINE_PARSER is not set CONFIG_BLK_WBT=y # CONFIG_BLK_CGROUP_IOLATENCY is not set # CONFIG_BLK_CGROUP_IOCOST is not set CONFIG_BLK_WBT_MQ=y CONFIG_BLK_DEBUG_FS=y CONFIG_BLK_DEBUG_FS_ZONED=y # CONFIG_BLK_SED_OPAL is not set # CONFIG_BLK_INLINE_ENCRYPTION is not set # # Partition Types # CONFIG_PARTITION_ADVANCED=y # CONFIG_ACORN_PARTITION is not set # CONFIG_AIX_PARTITION is not set CONFIG_OSF_PARTITION=y CONFIG_AMIGA_PARTITION=y # CONFIG_ATARI_PARTITION is not set CONFIG_MAC_PARTITION=y CONFIG_MSDOS_PARTITION=y CONFIG_BSD_DISKLABEL=y CONFIG_MINIX_SUBPARTITION=y CONFIG_SOLARIS_X86_PARTITION=y CONFIG_UNIXWARE_DISKLABEL=y # CONFIG_LDM_PARTITION is not set CONFIG_SGI_PARTITION=y # CONFIG_ULTRIX_PARTITION is not set CONFIG_SUN_PARTITION=y CONFIG_KARMA_PARTITION=y CONFIG_EFI_PARTITION=y # CONFIG_SYSV68_PARTITION is not set # CONFIG_CMDLINE_PARTITION is not set # end of Partition Types CONFIG_BLOCK_COMPAT=y CONFIG_BLK_MQ_PCI=y CONFIG_BLK_MQ_VIRTIO=y CONFIG_BLK_MQ_RDMA=y CONFIG_BLK_PM=y # # IO Schedulers # CONFIG_MQ_IOSCHED_DEADLINE=y CONFIG_MQ_IOSCHED_KYBER=y CONFIG_IOSCHED_BFQ=y CONFIG_BFQ_GROUP_IOSCHED=y # CONFIG_BFQ_CGROUP_DEBUG is not set # end of IO Schedulers CONFIG_PREEMPT_NOTIFIERS=y CONFIG_PADATA=y CONFIG_ASN1=y CONFIG_INLINE_SPIN_UNLOCK_IRQ=y CONFIG_INLINE_READ_UNLOCK=y CONFIG_INLINE_READ_UNLOCK_IRQ=y CONFIG_INLINE_WRITE_UNLOCK=y 
CONFIG_INLINE_WRITE_UNLOCK_IRQ=y CONFIG_ARCH_SUPPORTS_ATOMIC_RMW=y CONFIG_MUTEX_SPIN_ON_OWNER=y CONFIG_RWSEM_SPIN_ON_OWNER=y CONFIG_LOCK_SPIN_ON_OWNER=y CONFIG_ARCH_USE_QUEUED_SPINLOCKS=y CONFIG_QUEUED_SPINLOCKS=y CONFIG_ARCH_USE_QUEUED_RWLOCKS=y CONFIG_QUEUED_RWLOCKS=y CONFIG_ARCH_HAS_NON_OVERLAPPING_ADDRESS_SPACE=y CONFIG_ARCH_HAS_SYNC_CORE_BEFORE_USERMODE=y CONFIG_ARCH_HAS_SYSCALL_WRAPPER=y CONFIG_FREEZER=y # # Executable file formats # CONFIG_BINFMT_ELF=y CONFIG_COMPAT_BINFMT_ELF=y CONFIG_ELFCORE=y CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS=y CONFIG_BINFMT_SCRIPT=y CONFIG_BINFMT_MISC=m CONFIG_COREDUMP=y # end of Executable file formats # # Memory Management options # CONFIG_SELECT_MEMORY_MODEL=y CONFIG_SPARSEMEM_MANUAL=y CONFIG_SPARSEMEM=y CONFIG_NEED_MULTIPLE_NODES=y CONFIG_SPARSEMEM_EXTREME=y CONFIG_SPARSEMEM_VMEMMAP_ENABLE=y CONFIG_SPARSEMEM_VMEMMAP=y CONFIG_HAVE_FAST_GUP=y CONFIG_NUMA_KEEP_MEMINFO=y CONFIG_MEMORY_ISOLATION=y CONFIG_HAVE_BOOTMEM_INFO_NODE=y CONFIG_MEMORY_HOTPLUG=y CONFIG_MEMORY_HOTPLUG_SPARSE=y # CONFIG_MEMORY_HOTPLUG_DEFAULT_ONLINE is not set CONFIG_MEMORY_HOTREMOVE=y CONFIG_SPLIT_PTLOCK_CPUS=4 CONFIG_MEMORY_BALLOON=y CONFIG_BALLOON_COMPACTION=y CONFIG_COMPACTION=y CONFIG_PAGE_REPORTING=y CONFIG_MIGRATION=y CONFIG_CONTIG_ALLOC=y CONFIG_PHYS_ADDR_T_64BIT=y CONFIG_BOUNCE=y CONFIG_VIRT_TO_BUS=y CONFIG_MMU_NOTIFIER=y CONFIG_KSM=y CONFIG_DEFAULT_MMAP_MIN_ADDR=4096 CONFIG_ARCH_SUPPORTS_MEMORY_FAILURE=y CONFIG_MEMORY_FAILURE=y CONFIG_HWPOISON_INJECT=m CONFIG_TRANSPARENT_HUGEPAGE=y CONFIG_TRANSPARENT_HUGEPAGE_ALWAYS=y # CONFIG_TRANSPARENT_HUGEPAGE_MADVISE is not set CONFIG_ARCH_WANTS_THP_SWAP=y CONFIG_THP_SWAP=y CONFIG_CLEANCACHE=y CONFIG_FRONTSWAP=y CONFIG_CMA=y # CONFIG_CMA_DEBUG is not set # CONFIG_CMA_DEBUGFS is not set CONFIG_CMA_AREAS=19 CONFIG_ZSWAP=y # CONFIG_ZSWAP_COMPRESSOR_DEFAULT_DEFLATE is not set CONFIG_ZSWAP_COMPRESSOR_DEFAULT_LZO=y # CONFIG_ZSWAP_COMPRESSOR_DEFAULT_842 is not set # CONFIG_ZSWAP_COMPRESSOR_DEFAULT_LZ4 is not set # 
CONFIG_ZSWAP_COMPRESSOR_DEFAULT_LZ4HC is not set # CONFIG_ZSWAP_COMPRESSOR_DEFAULT_ZSTD is not set CONFIG_ZSWAP_COMPRESSOR_DEFAULT="lzo" CONFIG_ZSWAP_ZPOOL_DEFAULT_ZBUD=y # CONFIG_ZSWAP_ZPOOL_DEFAULT_Z3FOLD is not set # CONFIG_ZSWAP_ZPOOL_DEFAULT_ZSMALLOC is not set CONFIG_ZSWAP_ZPOOL_DEFAULT="zbud" # CONFIG_ZSWAP_DEFAULT_ON is not set CONFIG_ZPOOL=y CONFIG_ZBUD=y # CONFIG_Z3FOLD is not set CONFIG_ZSMALLOC=y CONFIG_ZSMALLOC_STAT=y CONFIG_GENERIC_EARLY_IOREMAP=y CONFIG_DEFERRED_STRUCT_PAGE_INIT=y CONFIG_IDLE_PAGE_TRACKING=y CONFIG_ARCH_HAS_PTE_DEVMAP=y CONFIG_ZONE_DEVICE=y CONFIG_DEV_PAGEMAP_OPS=y CONFIG_HMM_MIRROR=y CONFIG_DEVICE_PRIVATE=y CONFIG_VMAP_PFN=y CONFIG_ARCH_USES_HIGH_VMA_FLAGS=y CONFIG_ARCH_HAS_PKEYS=y # CONFIG_PERCPU_STATS is not set # CONFIG_GUP_TEST is not set # CONFIG_READ_ONLY_THP_FOR_FS is not set CONFIG_ARCH_HAS_PTE_SPECIAL=y CONFIG_MAPPING_DIRTY_HELPERS=y # end of Memory Management options CONFIG_NET=y CONFIG_COMPAT_NETLINK_MESSAGES=y CONFIG_NET_INGRESS=y CONFIG_NET_EGRESS=y CONFIG_SKB_EXTENSIONS=y # # Networking options # CONFIG_PACKET=y CONFIG_PACKET_DIAG=m CONFIG_UNIX=y CONFIG_UNIX_SCM=y CONFIG_UNIX_DIAG=m CONFIG_TLS=m CONFIG_TLS_DEVICE=y # CONFIG_TLS_TOE is not set CONFIG_XFRM=y CONFIG_XFRM_OFFLOAD=y CONFIG_XFRM_ALGO=y CONFIG_XFRM_USER=y # CONFIG_XFRM_USER_COMPAT is not set # CONFIG_XFRM_INTERFACE is not set CONFIG_XFRM_SUB_POLICY=y CONFIG_XFRM_MIGRATE=y CONFIG_XFRM_STATISTICS=y CONFIG_XFRM_AH=m CONFIG_XFRM_ESP=m CONFIG_XFRM_IPCOMP=m CONFIG_NET_KEY=m CONFIG_NET_KEY_MIGRATE=y # CONFIG_SMC is not set CONFIG_XDP_SOCKETS=y # CONFIG_XDP_SOCKETS_DIAG is not set CONFIG_INET=y CONFIG_IP_MULTICAST=y CONFIG_IP_ADVANCED_ROUTER=y CONFIG_IP_FIB_TRIE_STATS=y CONFIG_IP_MULTIPLE_TABLES=y CONFIG_IP_ROUTE_MULTIPATH=y CONFIG_IP_ROUTE_VERBOSE=y CONFIG_IP_ROUTE_CLASSID=y CONFIG_IP_PNP=y CONFIG_IP_PNP_DHCP=y # CONFIG_IP_PNP_BOOTP is not set # CONFIG_IP_PNP_RARP is not set CONFIG_NET_IPIP=m CONFIG_NET_IPGRE_DEMUX=m CONFIG_NET_IP_TUNNEL=m CONFIG_NET_IPGRE=m 
CONFIG_NET_IPGRE_BROADCAST=y CONFIG_IP_MROUTE_COMMON=y CONFIG_IP_MROUTE=y CONFIG_IP_MROUTE_MULTIPLE_TABLES=y CONFIG_IP_PIMSM_V1=y CONFIG_IP_PIMSM_V2=y CONFIG_SYN_COOKIES=y CONFIG_NET_IPVTI=m CONFIG_NET_UDP_TUNNEL=m # CONFIG_NET_FOU is not set # CONFIG_NET_FOU_IP_TUNNELS is not set CONFIG_INET_AH=m CONFIG_INET_ESP=m CONFIG_INET_ESP_OFFLOAD=m # CONFIG_INET_ESPINTCP is not set CONFIG_INET_IPCOMP=m CONFIG_INET_XFRM_TUNNEL=m CONFIG_INET_TUNNEL=m CONFIG_INET_DIAG=m CONFIG_INET_TCP_DIAG=m CONFIG_INET_UDP_DIAG=m CONFIG_INET_RAW_DIAG=m # CONFIG_INET_DIAG_DESTROY is not set CONFIG_TCP_CONG_ADVANCED=y CONFIG_TCP_CONG_BIC=m CONFIG_TCP_CONG_CUBIC=y CONFIG_TCP_CONG_WESTWOOD=m CONFIG_TCP_CONG_HTCP=m CONFIG_TCP_CONG_HSTCP=m CONFIG_TCP_CONG_HYBLA=m CONFIG_TCP_CONG_VEGAS=m CONFIG_TCP_CONG_NV=m CONFIG_TCP_CONG_SCALABLE=m CONFIG_TCP_CONG_LP=m CONFIG_TCP_CONG_VENO=m CONFIG_TCP_CONG_YEAH=m CONFIG_TCP_CONG_ILLINOIS=m CONFIG_TCP_CONG_DCTCP=m # CONFIG_TCP_CONG_CDG is not set CONFIG_TCP_CONG_BBR=m CONFIG_DEFAULT_CUBIC=y # CONFIG_DEFAULT_RENO is not set CONFIG_DEFAULT_TCP_CONG="cubic" CONFIG_TCP_MD5SIG=y CONFIG_IPV6=y CONFIG_IPV6_ROUTER_PREF=y CONFIG_IPV6_ROUTE_INFO=y CONFIG_IPV6_OPTIMISTIC_DAD=y CONFIG_INET6_AH=m CONFIG_INET6_ESP=m CONFIG_INET6_ESP_OFFLOAD=m # CONFIG_INET6_ESPINTCP is not set CONFIG_INET6_IPCOMP=m CONFIG_IPV6_MIP6=m # CONFIG_IPV6_ILA is not set CONFIG_INET6_XFRM_TUNNEL=m CONFIG_INET6_TUNNEL=m CONFIG_IPV6_VTI=m CONFIG_IPV6_SIT=m CONFIG_IPV6_SIT_6RD=y CONFIG_IPV6_NDISC_NODETYPE=y CONFIG_IPV6_TUNNEL=m CONFIG_IPV6_GRE=m CONFIG_IPV6_MULTIPLE_TABLES=y # CONFIG_IPV6_SUBTREES is not set CONFIG_IPV6_MROUTE=y CONFIG_IPV6_MROUTE_MULTIPLE_TABLES=y CONFIG_IPV6_PIMSM_V2=y # CONFIG_IPV6_SEG6_LWTUNNEL is not set # CONFIG_IPV6_SEG6_HMAC is not set # CONFIG_IPV6_RPL_LWTUNNEL is not set CONFIG_NETLABEL=y # CONFIG_MPTCP is not set CONFIG_NETWORK_SECMARK=y CONFIG_NET_PTP_CLASSIFY=y CONFIG_NETWORK_PHY_TIMESTAMPING=y CONFIG_NETFILTER=y CONFIG_NETFILTER_ADVANCED=y CONFIG_BRIDGE_NETFILTER=m # # 
Core Netfilter Configuration # CONFIG_NETFILTER_INGRESS=y CONFIG_NETFILTER_NETLINK=m CONFIG_NETFILTER_FAMILY_BRIDGE=y CONFIG_NETFILTER_FAMILY_ARP=y # CONFIG_NETFILTER_NETLINK_ACCT is not set CONFIG_NETFILTER_NETLINK_QUEUE=m CONFIG_NETFILTER_NETLINK_LOG=m CONFIG_NETFILTER_NETLINK_OSF=m CONFIG_NF_CONNTRACK=m CONFIG_NF_LOG_COMMON=m CONFIG_NF_LOG_NETDEV=m CONFIG_NETFILTER_CONNCOUNT=m CONFIG_NF_CONNTRACK_MARK=y CONFIG_NF_CONNTRACK_SECMARK=y CONFIG_NF_CONNTRACK_ZONES=y CONFIG_NF_CONNTRACK_PROCFS=y CONFIG_NF_CONNTRACK_EVENTS=y CONFIG_NF_CONNTRACK_TIMEOUT=y CONFIG_NF_CONNTRACK_TIMESTAMP=y CONFIG_NF_CONNTRACK_LABELS=y CONFIG_NF_CT_PROTO_DCCP=y CONFIG_NF_CT_PROTO_GRE=y CONFIG_NF_CT_PROTO_SCTP=y CONFIG_NF_CT_PROTO_UDPLITE=y CONFIG_NF_CONNTRACK_AMANDA=m CONFIG_NF_CONNTRACK_FTP=m CONFIG_NF_CONNTRACK_H323=m CONFIG_NF_CONNTRACK_IRC=m CONFIG_NF_CONNTRACK_BROADCAST=m CONFIG_NF_CONNTRACK_NETBIOS_NS=m CONFIG_NF_CONNTRACK_SNMP=m CONFIG_NF_CONNTRACK_PPTP=m CONFIG_NF_CONNTRACK_SANE=m CONFIG_NF_CONNTRACK_SIP=m CONFIG_NF_CONNTRACK_TFTP=m CONFIG_NF_CT_NETLINK=m CONFIG_NF_CT_NETLINK_TIMEOUT=m CONFIG_NF_CT_NETLINK_HELPER=m CONFIG_NETFILTER_NETLINK_GLUE_CT=y CONFIG_NF_NAT=m CONFIG_NF_NAT_AMANDA=m CONFIG_NF_NAT_FTP=m CONFIG_NF_NAT_IRC=m CONFIG_NF_NAT_SIP=m CONFIG_NF_NAT_TFTP=m CONFIG_NF_NAT_REDIRECT=y CONFIG_NF_NAT_MASQUERADE=y CONFIG_NETFILTER_SYNPROXY=m CONFIG_NF_TABLES=m CONFIG_NF_TABLES_INET=y CONFIG_NF_TABLES_NETDEV=y CONFIG_NFT_NUMGEN=m CONFIG_NFT_CT=m CONFIG_NFT_COUNTER=m CONFIG_NFT_CONNLIMIT=m CONFIG_NFT_LOG=m CONFIG_NFT_LIMIT=m CONFIG_NFT_MASQ=m CONFIG_NFT_REDIR=m CONFIG_NFT_NAT=m # CONFIG_NFT_TUNNEL is not set CONFIG_NFT_OBJREF=m CONFIG_NFT_QUEUE=m CONFIG_NFT_QUOTA=m CONFIG_NFT_REJECT=m CONFIG_NFT_REJECT_INET=m CONFIG_NFT_COMPAT=m CONFIG_NFT_HASH=m CONFIG_NFT_FIB=m CONFIG_NFT_FIB_INET=m # CONFIG_NFT_XFRM is not set CONFIG_NFT_SOCKET=m # CONFIG_NFT_OSF is not set # CONFIG_NFT_TPROXY is not set # CONFIG_NFT_SYNPROXY is not set CONFIG_NF_DUP_NETDEV=m CONFIG_NFT_DUP_NETDEV=m 
CONFIG_NFT_FWD_NETDEV=m CONFIG_NFT_FIB_NETDEV=m # CONFIG_NFT_REJECT_NETDEV is not set # CONFIG_NF_FLOW_TABLE is not set CONFIG_NETFILTER_XTABLES=y # # Xtables combined modules # CONFIG_NETFILTER_XT_MARK=m CONFIG_NETFILTER_XT_CONNMARK=m CONFIG_NETFILTER_XT_SET=m # # Xtables targets # CONFIG_NETFILTER_XT_TARGET_AUDIT=m CONFIG_NETFILTER_XT_TARGET_CHECKSUM=m CONFIG_NETFILTER_XT_TARGET_CLASSIFY=m CONFIG_NETFILTER_XT_TARGET_CONNMARK=m CONFIG_NETFILTER_XT_TARGET_CONNSECMARK=m CONFIG_NETFILTER_XT_TARGET_CT=m CONFIG_NETFILTER_XT_TARGET_DSCP=m CONFIG_NETFILTER_XT_TARGET_HL=m CONFIG_NETFILTER_XT_TARGET_HMARK=m CONFIG_NETFILTER_XT_TARGET_IDLETIMER=m # CONFIG_NETFILTER_XT_TARGET_LED is not set CONFIG_NETFILTER_XT_TARGET_LOG=m CONFIG_NETFILTER_XT_TARGET_MARK=m CONFIG_NETFILTER_XT_NAT=m CONFIG_NETFILTER_XT_TARGET_NETMAP=m CONFIG_NETFILTER_XT_TARGET_NFLOG=m CONFIG_NETFILTER_XT_TARGET_NFQUEUE=m CONFIG_NETFILTER_XT_TARGET_NOTRACK=m CONFIG_NETFILTER_XT_TARGET_RATEEST=m CONFIG_NETFILTER_XT_TARGET_REDIRECT=m CONFIG_NETFILTER_XT_TARGET_MASQUERADE=m CONFIG_NETFILTER_XT_TARGET_TEE=m CONFIG_NETFILTER_XT_TARGET_TPROXY=m CONFIG_NETFILTER_XT_TARGET_TRACE=m CONFIG_NETFILTER_XT_TARGET_SECMARK=m CONFIG_NETFILTER_XT_TARGET_TCPMSS=m CONFIG_NETFILTER_XT_TARGET_TCPOPTSTRIP=m # # Xtables matches # CONFIG_NETFILTER_XT_MATCH_ADDRTYPE=m CONFIG_NETFILTER_XT_MATCH_BPF=m CONFIG_NETFILTER_XT_MATCH_CGROUP=m CONFIG_NETFILTER_XT_MATCH_CLUSTER=m CONFIG_NETFILTER_XT_MATCH_COMMENT=m CONFIG_NETFILTER_XT_MATCH_CONNBYTES=m CONFIG_NETFILTER_XT_MATCH_CONNLABEL=m CONFIG_NETFILTER_XT_MATCH_CONNLIMIT=m CONFIG_NETFILTER_XT_MATCH_CONNMARK=m CONFIG_NETFILTER_XT_MATCH_CONNTRACK=m CONFIG_NETFILTER_XT_MATCH_CPU=m CONFIG_NETFILTER_XT_MATCH_DCCP=m CONFIG_NETFILTER_XT_MATCH_DEVGROUP=m CONFIG_NETFILTER_XT_MATCH_DSCP=m CONFIG_NETFILTER_XT_MATCH_ECN=m CONFIG_NETFILTER_XT_MATCH_ESP=m CONFIG_NETFILTER_XT_MATCH_HASHLIMIT=m CONFIG_NETFILTER_XT_MATCH_HELPER=m CONFIG_NETFILTER_XT_MATCH_HL=m # CONFIG_NETFILTER_XT_MATCH_IPCOMP is not set 
CONFIG_NETFILTER_XT_MATCH_IPRANGE=m
CONFIG_NETFILTER_XT_MATCH_IPVS=m
# CONFIG_NETFILTER_XT_MATCH_L2TP is not set
CONFIG_NETFILTER_XT_MATCH_LENGTH=m
CONFIG_NETFILTER_XT_MATCH_LIMIT=m
CONFIG_NETFILTER_XT_MATCH_MAC=m
CONFIG_NETFILTER_XT_MATCH_MARK=m
CONFIG_NETFILTER_XT_MATCH_MULTIPORT=m
# CONFIG_NETFILTER_XT_MATCH_NFACCT is not set
CONFIG_NETFILTER_XT_MATCH_OSF=m
CONFIG_NETFILTER_XT_MATCH_OWNER=m
CONFIG_NETFILTER_XT_MATCH_POLICY=m
CONFIG_NETFILTER_XT_MATCH_PHYSDEV=m
CONFIG_NETFILTER_XT_MATCH_PKTTYPE=m
CONFIG_NETFILTER_XT_MATCH_QUOTA=m
CONFIG_NETFILTER_XT_MATCH_RATEEST=m
CONFIG_NETFILTER_XT_MATCH_REALM=m
CONFIG_NETFILTER_XT_MATCH_RECENT=m
CONFIG_NETFILTER_XT_MATCH_SCTP=m
CONFIG_NETFILTER_XT_MATCH_SOCKET=m
CONFIG_NETFILTER_XT_MATCH_STATE=m
CONFIG_NETFILTER_XT_MATCH_STATISTIC=m
CONFIG_NETFILTER_XT_MATCH_STRING=m
CONFIG_NETFILTER_XT_MATCH_TCPMSS=m
# CONFIG_NETFILTER_XT_MATCH_TIME is not set
# CONFIG_NETFILTER_XT_MATCH_U32 is not set
# end of Core Netfilter Configuration

CONFIG_IP_SET=m
CONFIG_IP_SET_MAX=256
CONFIG_IP_SET_BITMAP_IP=m
CONFIG_IP_SET_BITMAP_IPMAC=m
CONFIG_IP_SET_BITMAP_PORT=m
CONFIG_IP_SET_HASH_IP=m
CONFIG_IP_SET_HASH_IPMARK=m
CONFIG_IP_SET_HASH_IPPORT=m
CONFIG_IP_SET_HASH_IPPORTIP=m
CONFIG_IP_SET_HASH_IPPORTNET=m
CONFIG_IP_SET_HASH_IPMAC=m
CONFIG_IP_SET_HASH_MAC=m
CONFIG_IP_SET_HASH_NETPORTNET=m
CONFIG_IP_SET_HASH_NET=m
CONFIG_IP_SET_HASH_NETNET=m
CONFIG_IP_SET_HASH_NETPORT=m
CONFIG_IP_SET_HASH_NETIFACE=m
CONFIG_IP_SET_LIST_SET=m
CONFIG_IP_VS=m
CONFIG_IP_VS_IPV6=y
# CONFIG_IP_VS_DEBUG is not set
CONFIG_IP_VS_TAB_BITS=12

#
# IPVS transport protocol load balancing support
#
CONFIG_IP_VS_PROTO_TCP=y
CONFIG_IP_VS_PROTO_UDP=y
CONFIG_IP_VS_PROTO_AH_ESP=y
CONFIG_IP_VS_PROTO_ESP=y
CONFIG_IP_VS_PROTO_AH=y
CONFIG_IP_VS_PROTO_SCTP=y

#
# IPVS scheduler
#
CONFIG_IP_VS_RR=m
CONFIG_IP_VS_WRR=m
CONFIG_IP_VS_LC=m
CONFIG_IP_VS_WLC=m
CONFIG_IP_VS_FO=m
CONFIG_IP_VS_OVF=m
CONFIG_IP_VS_LBLC=m
CONFIG_IP_VS_LBLCR=m
CONFIG_IP_VS_DH=m
CONFIG_IP_VS_SH=m
# CONFIG_IP_VS_MH is not set
CONFIG_IP_VS_SED=m
CONFIG_IP_VS_NQ=m
# CONFIG_IP_VS_TWOS is not set

#
# IPVS SH scheduler
#
CONFIG_IP_VS_SH_TAB_BITS=8

#
# IPVS MH scheduler
#
CONFIG_IP_VS_MH_TAB_INDEX=12

#
# IPVS application helper
#
CONFIG_IP_VS_FTP=m
CONFIG_IP_VS_NFCT=y
CONFIG_IP_VS_PE_SIP=m

#
# IP: Netfilter Configuration
#
CONFIG_NF_DEFRAG_IPV4=m
CONFIG_NF_SOCKET_IPV4=m
CONFIG_NF_TPROXY_IPV4=m
CONFIG_NF_TABLES_IPV4=y
CONFIG_NFT_REJECT_IPV4=m
CONFIG_NFT_DUP_IPV4=m
CONFIG_NFT_FIB_IPV4=m
CONFIG_NF_TABLES_ARP=y
CONFIG_NF_DUP_IPV4=m
CONFIG_NF_LOG_ARP=m
CONFIG_NF_LOG_IPV4=m
CONFIG_NF_REJECT_IPV4=m
CONFIG_NF_NAT_SNMP_BASIC=m
CONFIG_NF_NAT_PPTP=m
CONFIG_NF_NAT_H323=m
CONFIG_IP_NF_IPTABLES=m
CONFIG_IP_NF_MATCH_AH=m
CONFIG_IP_NF_MATCH_ECN=m
CONFIG_IP_NF_MATCH_RPFILTER=m
CONFIG_IP_NF_MATCH_TTL=m
CONFIG_IP_NF_FILTER=m
CONFIG_IP_NF_TARGET_REJECT=m
CONFIG_IP_NF_TARGET_SYNPROXY=m
CONFIG_IP_NF_NAT=m
CONFIG_IP_NF_TARGET_MASQUERADE=m
CONFIG_IP_NF_TARGET_NETMAP=m
CONFIG_IP_NF_TARGET_REDIRECT=m
CONFIG_IP_NF_MANGLE=m
# CONFIG_IP_NF_TARGET_CLUSTERIP is not set
CONFIG_IP_NF_TARGET_ECN=m
CONFIG_IP_NF_TARGET_TTL=m
CONFIG_IP_NF_RAW=m
CONFIG_IP_NF_SECURITY=m
CONFIG_IP_NF_ARPTABLES=m
CONFIG_IP_NF_ARPFILTER=m
CONFIG_IP_NF_ARP_MANGLE=m
# end of IP: Netfilter Configuration

#
# IPv6: Netfilter Configuration
#
CONFIG_NF_SOCKET_IPV6=m
CONFIG_NF_TPROXY_IPV6=m
CONFIG_NF_TABLES_IPV6=y
CONFIG_NFT_REJECT_IPV6=m
CONFIG_NFT_DUP_IPV6=m
CONFIG_NFT_FIB_IPV6=m
CONFIG_NF_DUP_IPV6=m
CONFIG_NF_REJECT_IPV6=m
CONFIG_NF_LOG_IPV6=m
CONFIG_IP6_NF_IPTABLES=m
CONFIG_IP6_NF_MATCH_AH=m
CONFIG_IP6_NF_MATCH_EUI64=m
CONFIG_IP6_NF_MATCH_FRAG=m
CONFIG_IP6_NF_MATCH_OPTS=m
CONFIG_IP6_NF_MATCH_HL=m
CONFIG_IP6_NF_MATCH_IPV6HEADER=m
CONFIG_IP6_NF_MATCH_MH=m
CONFIG_IP6_NF_MATCH_RPFILTER=m
CONFIG_IP6_NF_MATCH_RT=m
# CONFIG_IP6_NF_MATCH_SRH is not set
# CONFIG_IP6_NF_TARGET_HL is not set
CONFIG_IP6_NF_FILTER=m
CONFIG_IP6_NF_TARGET_REJECT=m
CONFIG_IP6_NF_TARGET_SYNPROXY=m
CONFIG_IP6_NF_MANGLE=m
CONFIG_IP6_NF_RAW=m
CONFIG_IP6_NF_SECURITY=m
CONFIG_IP6_NF_NAT=m
CONFIG_IP6_NF_TARGET_MASQUERADE=m
CONFIG_IP6_NF_TARGET_NPT=m
# end of IPv6: Netfilter Configuration

CONFIG_NF_DEFRAG_IPV6=m
CONFIG_NF_TABLES_BRIDGE=m
# CONFIG_NFT_BRIDGE_META is not set
CONFIG_NFT_BRIDGE_REJECT=m
CONFIG_NF_LOG_BRIDGE=m
# CONFIG_NF_CONNTRACK_BRIDGE is not set
CONFIG_BRIDGE_NF_EBTABLES=m
CONFIG_BRIDGE_EBT_BROUTE=m
CONFIG_BRIDGE_EBT_T_FILTER=m
CONFIG_BRIDGE_EBT_T_NAT=m
CONFIG_BRIDGE_EBT_802_3=m
CONFIG_BRIDGE_EBT_AMONG=m
CONFIG_BRIDGE_EBT_ARP=m
CONFIG_BRIDGE_EBT_IP=m
CONFIG_BRIDGE_EBT_IP6=m
CONFIG_BRIDGE_EBT_LIMIT=m
CONFIG_BRIDGE_EBT_MARK=m
CONFIG_BRIDGE_EBT_PKTTYPE=m
CONFIG_BRIDGE_EBT_STP=m
CONFIG_BRIDGE_EBT_VLAN=m
CONFIG_BRIDGE_EBT_ARPREPLY=m
CONFIG_BRIDGE_EBT_DNAT=m
CONFIG_BRIDGE_EBT_MARK_T=m
CONFIG_BRIDGE_EBT_REDIRECT=m
CONFIG_BRIDGE_EBT_SNAT=m
CONFIG_BRIDGE_EBT_LOG=m
CONFIG_BRIDGE_EBT_NFLOG=m
# CONFIG_BPFILTER is not set
# CONFIG_IP_DCCP is not set
CONFIG_IP_SCTP=m
# CONFIG_SCTP_DBG_OBJCNT is not set
# CONFIG_SCTP_DEFAULT_COOKIE_HMAC_MD5 is not set
CONFIG_SCTP_DEFAULT_COOKIE_HMAC_SHA1=y
# CONFIG_SCTP_DEFAULT_COOKIE_HMAC_NONE is not set
CONFIG_SCTP_COOKIE_HMAC_MD5=y
CONFIG_SCTP_COOKIE_HMAC_SHA1=y
CONFIG_INET_SCTP_DIAG=m
# CONFIG_RDS is not set
CONFIG_TIPC=m
# CONFIG_TIPC_MEDIA_IB is not set
CONFIG_TIPC_MEDIA_UDP=y
CONFIG_TIPC_CRYPTO=y
CONFIG_TIPC_DIAG=m
CONFIG_ATM=m
CONFIG_ATM_CLIP=m
# CONFIG_ATM_CLIP_NO_ICMP is not set
CONFIG_ATM_LANE=m
# CONFIG_ATM_MPOA is not set
CONFIG_ATM_BR2684=m
# CONFIG_ATM_BR2684_IPFILTER is not set
CONFIG_L2TP=m
CONFIG_L2TP_DEBUGFS=m
CONFIG_L2TP_V3=y
CONFIG_L2TP_IP=m
CONFIG_L2TP_ETH=m
CONFIG_STP=m
CONFIG_GARP=m
CONFIG_MRP=m
CONFIG_BRIDGE=m
CONFIG_BRIDGE_IGMP_SNOOPING=y
CONFIG_BRIDGE_VLAN_FILTERING=y
# CONFIG_BRIDGE_MRP is not set
# CONFIG_BRIDGE_CFM is not set
CONFIG_HAVE_NET_DSA=y
# CONFIG_NET_DSA is not set
CONFIG_VLAN_8021Q=m
CONFIG_VLAN_8021Q_GVRP=y
CONFIG_VLAN_8021Q_MVRP=y
# CONFIG_DECNET is not set
CONFIG_LLC=m
# CONFIG_LLC2 is not set
# CONFIG_ATALK is not set
# CONFIG_X25 is not set
# CONFIG_LAPB is not set
# CONFIG_PHONET is not set
CONFIG_6LOWPAN=m
# CONFIG_6LOWPAN_DEBUGFS is not set
# CONFIG_6LOWPAN_NHC is not set
CONFIG_IEEE802154=m
# CONFIG_IEEE802154_NL802154_EXPERIMENTAL is not set
CONFIG_IEEE802154_SOCKET=m
CONFIG_IEEE802154_6LOWPAN=m
CONFIG_MAC802154=m
CONFIG_NET_SCHED=y

#
# Queueing/Scheduling
#
CONFIG_NET_SCH_CBQ=m
CONFIG_NET_SCH_HTB=m
CONFIG_NET_SCH_HFSC=m
CONFIG_NET_SCH_ATM=m
CONFIG_NET_SCH_PRIO=m
CONFIG_NET_SCH_MULTIQ=m
CONFIG_NET_SCH_RED=m
CONFIG_NET_SCH_SFB=m
CONFIG_NET_SCH_SFQ=m
CONFIG_NET_SCH_TEQL=m
CONFIG_NET_SCH_TBF=m
# CONFIG_NET_SCH_CBS is not set
# CONFIG_NET_SCH_ETF is not set
# CONFIG_NET_SCH_TAPRIO is not set
CONFIG_NET_SCH_GRED=m
CONFIG_NET_SCH_DSMARK=m
CONFIG_NET_SCH_NETEM=m
CONFIG_NET_SCH_DRR=m
CONFIG_NET_SCH_MQPRIO=m
# CONFIG_NET_SCH_SKBPRIO is not set
CONFIG_NET_SCH_CHOKE=m
CONFIG_NET_SCH_QFQ=m
CONFIG_NET_SCH_CODEL=m
CONFIG_NET_SCH_FQ_CODEL=y
# CONFIG_NET_SCH_CAKE is not set
CONFIG_NET_SCH_FQ=m
CONFIG_NET_SCH_HHF=m
CONFIG_NET_SCH_PIE=m
# CONFIG_NET_SCH_FQ_PIE is not set
CONFIG_NET_SCH_INGRESS=m
CONFIG_NET_SCH_PLUG=m
# CONFIG_NET_SCH_ETS is not set
CONFIG_NET_SCH_DEFAULT=y
# CONFIG_DEFAULT_FQ is not set
# CONFIG_DEFAULT_CODEL is not set
CONFIG_DEFAULT_FQ_CODEL=y
# CONFIG_DEFAULT_SFQ is not set
# CONFIG_DEFAULT_PFIFO_FAST is not set
CONFIG_DEFAULT_NET_SCH="fq_codel"

#
# Classification
#
CONFIG_NET_CLS=y
CONFIG_NET_CLS_BASIC=m
CONFIG_NET_CLS_TCINDEX=m
CONFIG_NET_CLS_ROUTE4=m
CONFIG_NET_CLS_FW=m
CONFIG_NET_CLS_U32=m
CONFIG_CLS_U32_PERF=y
CONFIG_CLS_U32_MARK=y
CONFIG_NET_CLS_RSVP=m
CONFIG_NET_CLS_RSVP6=m
CONFIG_NET_CLS_FLOW=m
CONFIG_NET_CLS_CGROUP=y
CONFIG_NET_CLS_BPF=m
CONFIG_NET_CLS_FLOWER=m
CONFIG_NET_CLS_MATCHALL=m
CONFIG_NET_EMATCH=y
CONFIG_NET_EMATCH_STACK=32
CONFIG_NET_EMATCH_CMP=m
CONFIG_NET_EMATCH_NBYTE=m
CONFIG_NET_EMATCH_U32=m
CONFIG_NET_EMATCH_META=m
CONFIG_NET_EMATCH_TEXT=m
# CONFIG_NET_EMATCH_CANID is not set
CONFIG_NET_EMATCH_IPSET=m
# CONFIG_NET_EMATCH_IPT is not set
CONFIG_NET_CLS_ACT=y
CONFIG_NET_ACT_POLICE=m
CONFIG_NET_ACT_GACT=m
CONFIG_GACT_PROB=y
CONFIG_NET_ACT_MIRRED=m
CONFIG_NET_ACT_SAMPLE=m
# CONFIG_NET_ACT_IPT is not set
CONFIG_NET_ACT_NAT=m
CONFIG_NET_ACT_PEDIT=m
CONFIG_NET_ACT_SIMP=m
CONFIG_NET_ACT_SKBEDIT=m
CONFIG_NET_ACT_CSUM=m
# CONFIG_NET_ACT_MPLS is not set
CONFIG_NET_ACT_VLAN=m
CONFIG_NET_ACT_BPF=m
# CONFIG_NET_ACT_CONNMARK is not set
# CONFIG_NET_ACT_CTINFO is not set
CONFIG_NET_ACT_SKBMOD=m
# CONFIG_NET_ACT_IFE is not set
CONFIG_NET_ACT_TUNNEL_KEY=m
# CONFIG_NET_ACT_GATE is not set
# CONFIG_NET_TC_SKB_EXT is not set
CONFIG_NET_SCH_FIFO=y
CONFIG_DCB=y
CONFIG_DNS_RESOLVER=m
# CONFIG_BATMAN_ADV is not set
CONFIG_OPENVSWITCH=m
CONFIG_OPENVSWITCH_GRE=m
CONFIG_VSOCKETS=m
CONFIG_VSOCKETS_DIAG=m
CONFIG_VSOCKETS_LOOPBACK=m
CONFIG_VMWARE_VMCI_VSOCKETS=m
CONFIG_VIRTIO_VSOCKETS=m
CONFIG_VIRTIO_VSOCKETS_COMMON=m
CONFIG_HYPERV_VSOCKETS=m
CONFIG_NETLINK_DIAG=m
CONFIG_MPLS=y
CONFIG_NET_MPLS_GSO=y
CONFIG_MPLS_ROUTING=m
CONFIG_MPLS_IPTUNNEL=m
CONFIG_NET_NSH=y
# CONFIG_HSR is not set
CONFIG_NET_SWITCHDEV=y
CONFIG_NET_L3_MASTER_DEV=y
# CONFIG_QRTR is not set
# CONFIG_NET_NCSI is not set
CONFIG_RPS=y
CONFIG_RFS_ACCEL=y
CONFIG_SOCK_RX_QUEUE_MAPPING=y
CONFIG_XPS=y
CONFIG_CGROUP_NET_PRIO=y
CONFIG_CGROUP_NET_CLASSID=y
CONFIG_NET_RX_BUSY_POLL=y
CONFIG_BQL=y
CONFIG_BPF_JIT=y
CONFIG_BPF_STREAM_PARSER=y
CONFIG_NET_FLOW_LIMIT=y

#
# Network testing
#
CONFIG_NET_PKTGEN=m
CONFIG_NET_DROP_MONITOR=y
# end of Network testing
# end of Networking options

# CONFIG_HAMRADIO is not set
CONFIG_CAN=m
CONFIG_CAN_RAW=m
CONFIG_CAN_BCM=m
CONFIG_CAN_GW=m
# CONFIG_CAN_J1939 is not set
# CONFIG_CAN_ISOTP is not set

#
# CAN Device Drivers
#
CONFIG_CAN_VCAN=m
# CONFIG_CAN_VXCAN is not set
CONFIG_CAN_SLCAN=m
CONFIG_CAN_DEV=m
CONFIG_CAN_CALC_BITTIMING=y
# CONFIG_CAN_KVASER_PCIEFD is not set
CONFIG_CAN_C_CAN=m
CONFIG_CAN_C_CAN_PLATFORM=m
CONFIG_CAN_C_CAN_PCI=m
CONFIG_CAN_CC770=m
# CONFIG_CAN_CC770_ISA is not set
CONFIG_CAN_CC770_PLATFORM=m
# CONFIG_CAN_IFI_CANFD is not set
# CONFIG_CAN_M_CAN is not set
# CONFIG_CAN_PEAK_PCIEFD is not set
CONFIG_CAN_SJA1000=m
CONFIG_CAN_EMS_PCI=m
# CONFIG_CAN_F81601 is not set
CONFIG_CAN_KVASER_PCI=m
CONFIG_CAN_PEAK_PCI=m
CONFIG_CAN_PEAK_PCIEC=y
CONFIG_CAN_PLX_PCI=m
# CONFIG_CAN_SJA1000_ISA is not set
CONFIG_CAN_SJA1000_PLATFORM=m
CONFIG_CAN_SOFTING=m

#
# CAN SPI interfaces
#
# CONFIG_CAN_HI311X is not set
# CONFIG_CAN_MCP251X is not set
# CONFIG_CAN_MCP251XFD is not set
# end of CAN SPI interfaces

#
# CAN USB interfaces
#
# CONFIG_CAN_8DEV_USB is not set
# CONFIG_CAN_EMS_USB is not set
# CONFIG_CAN_ESD_USB2 is not set
# CONFIG_CAN_GS_USB is not set
# CONFIG_CAN_KVASER_USB is not set
# CONFIG_CAN_MCBA_USB is not set
# CONFIG_CAN_PEAK_USB is not set
# CONFIG_CAN_UCAN is not set
# end of CAN USB interfaces

# CONFIG_CAN_DEBUG_DEVICES is not set
# end of CAN Device Drivers

CONFIG_BT=m
CONFIG_BT_BREDR=y
CONFIG_BT_RFCOMM=m
CONFIG_BT_RFCOMM_TTY=y
CONFIG_BT_BNEP=m
CONFIG_BT_BNEP_MC_FILTER=y
CONFIG_BT_BNEP_PROTO_FILTER=y
CONFIG_BT_HIDP=m
CONFIG_BT_HS=y
CONFIG_BT_LE=y
# CONFIG_BT_6LOWPAN is not set
# CONFIG_BT_LEDS is not set
# CONFIG_BT_MSFTEXT is not set
CONFIG_BT_DEBUGFS=y
# CONFIG_BT_SELFTEST is not set

#
# Bluetooth device drivers
#
# CONFIG_BT_HCIBTUSB is not set
# CONFIG_BT_HCIBTSDIO is not set
CONFIG_BT_HCIUART=m
CONFIG_BT_HCIUART_H4=y
CONFIG_BT_HCIUART_BCSP=y
CONFIG_BT_HCIUART_ATH3K=y
# CONFIG_BT_HCIUART_INTEL is not set
# CONFIG_BT_HCIUART_AG6XX is not set
# CONFIG_BT_HCIBCM203X is not set
# CONFIG_BT_HCIBPA10X is not set
# CONFIG_BT_HCIBFUSB is not set
CONFIG_BT_HCIVHCI=m
CONFIG_BT_MRVL=m
# CONFIG_BT_MRVL_SDIO is not set
# CONFIG_BT_MTKSDIO is not set
# end of Bluetooth device drivers

# CONFIG_AF_RXRPC is not set
# CONFIG_AF_KCM is not set
CONFIG_STREAM_PARSER=y
CONFIG_FIB_RULES=y
CONFIG_WIRELESS=y
CONFIG_WEXT_CORE=y
CONFIG_WEXT_PROC=y
CONFIG_CFG80211=m
# CONFIG_NL80211_TESTMODE is not set
# CONFIG_CFG80211_DEVELOPER_WARNINGS is not set
CONFIG_CFG80211_REQUIRE_SIGNED_REGDB=y
CONFIG_CFG80211_USE_KERNEL_REGDB_KEYS=y
CONFIG_CFG80211_DEFAULT_PS=y
# CONFIG_CFG80211_DEBUGFS is not set
CONFIG_CFG80211_CRDA_SUPPORT=y
CONFIG_CFG80211_WEXT=y
CONFIG_MAC80211=m
CONFIG_MAC80211_HAS_RC=y
CONFIG_MAC80211_RC_MINSTREL=y
CONFIG_MAC80211_RC_DEFAULT_MINSTREL=y
CONFIG_MAC80211_RC_DEFAULT="minstrel_ht"
CONFIG_MAC80211_MESH=y
CONFIG_MAC80211_LEDS=y
CONFIG_MAC80211_DEBUGFS=y
# CONFIG_MAC80211_MESSAGE_TRACING is not set
# CONFIG_MAC80211_DEBUG_MENU is not set
CONFIG_MAC80211_STA_HASH_MAX_SIZE=0
CONFIG_RFKILL=m
CONFIG_RFKILL_LEDS=y
CONFIG_RFKILL_INPUT=y
# CONFIG_RFKILL_GPIO is not set
CONFIG_NET_9P=y
CONFIG_NET_9P_VIRTIO=y
# CONFIG_NET_9P_XEN is not set
# CONFIG_NET_9P_RDMA is not set
# CONFIG_NET_9P_DEBUG is not set
# CONFIG_CAIF is not set
CONFIG_CEPH_LIB=m
# CONFIG_CEPH_LIB_PRETTYDEBUG is not set
CONFIG_CEPH_LIB_USE_DNS_RESOLVER=y
# CONFIG_NFC is not set
CONFIG_PSAMPLE=m
# CONFIG_NET_IFE is not set
CONFIG_LWTUNNEL=y
CONFIG_LWTUNNEL_BPF=y
CONFIG_DST_CACHE=y
CONFIG_GRO_CELLS=y
CONFIG_SOCK_VALIDATE_XMIT=y
CONFIG_NET_SOCK_MSG=y
CONFIG_NET_DEVLINK=y
CONFIG_PAGE_POOL=y
CONFIG_FAILOVER=m
CONFIG_ETHTOOL_NETLINK=y
CONFIG_HAVE_EBPF_JIT=y

#
# Device Drivers
#
CONFIG_HAVE_EISA=y
# CONFIG_EISA is not set
CONFIG_HAVE_PCI=y
CONFIG_PCI=y
CONFIG_PCI_DOMAINS=y
CONFIG_PCIEPORTBUS=y
CONFIG_HOTPLUG_PCI_PCIE=y
CONFIG_PCIEAER=y
CONFIG_PCIEAER_INJECT=m
CONFIG_PCIE_ECRC=y
CONFIG_PCIEASPM=y
CONFIG_PCIEASPM_DEFAULT=y
# CONFIG_PCIEASPM_POWERSAVE is not set
# CONFIG_PCIEASPM_POWER_SUPERSAVE is not set
# CONFIG_PCIEASPM_PERFORMANCE is not set
CONFIG_PCIE_PME=y
CONFIG_PCIE_DPC=y
# CONFIG_PCIE_PTM is not set
# CONFIG_PCIE_EDR is not set
CONFIG_PCI_MSI=y
CONFIG_PCI_MSI_IRQ_DOMAIN=y
CONFIG_PCI_QUIRKS=y
# CONFIG_PCI_DEBUG is not set
# CONFIG_PCI_REALLOC_ENABLE_AUTO is not set
CONFIG_PCI_STUB=y
CONFIG_PCI_PF_STUB=m
# CONFIG_XEN_PCIDEV_FRONTEND is not set
CONFIG_PCI_ATS=y
CONFIG_PCI_LOCKLESS_CONFIG=y
CONFIG_PCI_IOV=y
CONFIG_PCI_PRI=y
CONFIG_PCI_PASID=y
# CONFIG_PCI_P2PDMA is not set
CONFIG_PCI_LABEL=y
CONFIG_PCI_HYPERV=m
CONFIG_HOTPLUG_PCI=y
CONFIG_HOTPLUG_PCI_ACPI=y
CONFIG_HOTPLUG_PCI_ACPI_IBM=m
# CONFIG_HOTPLUG_PCI_CPCI is not set
CONFIG_HOTPLUG_PCI_SHPC=y

#
# PCI controller drivers
#
CONFIG_VMD=y
CONFIG_PCI_HYPERV_INTERFACE=m

#
# DesignWare PCI Core Support
#
# CONFIG_PCIE_DW_PLAT_HOST is not set
# CONFIG_PCI_MESON is not set
# end of DesignWare PCI Core Support

#
# Mobiveil PCIe Core Support
#
# end of Mobiveil PCIe Core Support

#
# Cadence PCIe controllers support
#
# end of Cadence PCIe controllers support
# end of PCI controller drivers

#
# PCI Endpoint
#
# CONFIG_PCI_ENDPOINT is not set
# end of PCI Endpoint

#
# PCI switch controller drivers
#
# CONFIG_PCI_SW_SWITCHTEC is not set
# end of PCI switch controller drivers

# CONFIG_CXL_BUS is not set
# CONFIG_PCCARD is not set
# CONFIG_RAPIDIO is not set

#
# Generic Driver Options
#
# CONFIG_UEVENT_HELPER is not set
CONFIG_DEVTMPFS=y
CONFIG_DEVTMPFS_MOUNT=y
CONFIG_STANDALONE=y
CONFIG_PREVENT_FIRMWARE_BUILD=y

#
# Firmware loader
#
CONFIG_FW_LOADER=y
CONFIG_FW_LOADER_PAGED_BUF=y
CONFIG_EXTRA_FIRMWARE=""
CONFIG_FW_LOADER_USER_HELPER=y
# CONFIG_FW_LOADER_USER_HELPER_FALLBACK is not set
# CONFIG_FW_LOADER_COMPRESS is not set
CONFIG_FW_CACHE=y
# end of Firmware loader

CONFIG_ALLOW_DEV_COREDUMP=y
# CONFIG_DEBUG_DRIVER is not set
# CONFIG_DEBUG_DEVRES is not set
# CONFIG_DEBUG_TEST_DRIVER_REMOVE is not set
# CONFIG_PM_QOS_KUNIT_TEST is not set
# CONFIG_TEST_ASYNC_DRIVER_PROBE is not set
CONFIG_KUNIT_DRIVER_PE_TEST=y
CONFIG_SYS_HYPERVISOR=y
CONFIG_GENERIC_CPU_AUTOPROBE=y
CONFIG_GENERIC_CPU_VULNERABILITIES=y
CONFIG_REGMAP=y
CONFIG_REGMAP_I2C=m
CONFIG_REGMAP_SPI=m
CONFIG_DMA_SHARED_BUFFER=y
# CONFIG_DMA_FENCE_TRACE is not set
# end of Generic Driver Options

#
# Bus devices
#
# CONFIG_MHI_BUS is not set
# end of Bus devices

CONFIG_CONNECTOR=y
CONFIG_PROC_EVENTS=y
# CONFIG_GNSS is not set
# CONFIG_MTD is not set
# CONFIG_OF is not set
CONFIG_ARCH_MIGHT_HAVE_PC_PARPORT=y
CONFIG_PARPORT=m
CONFIG_PARPORT_PC=m
CONFIG_PARPORT_SERIAL=m
# CONFIG_PARPORT_PC_FIFO is not set
# CONFIG_PARPORT_PC_SUPERIO is not set
# CONFIG_PARPORT_AX88796 is not set
CONFIG_PARPORT_1284=y
CONFIG_PNP=y
# CONFIG_PNP_DEBUG_MESSAGES is not set

#
# Protocols
#
CONFIG_PNPACPI=y
CONFIG_BLK_DEV=y
CONFIG_BLK_DEV_NULL_BLK=m
CONFIG_BLK_DEV_NULL_BLK_FAULT_INJECTION=y
# CONFIG_BLK_DEV_FD is not set
CONFIG_CDROM=m
# CONFIG_PARIDE is not set
# CONFIG_BLK_DEV_PCIESSD_MTIP32XX is not set
# CONFIG_ZRAM is not set
# CONFIG_BLK_DEV_UMEM is not set
CONFIG_BLK_DEV_LOOP=m
CONFIG_BLK_DEV_LOOP_MIN_COUNT=0
# CONFIG_BLK_DEV_CRYPTOLOOP is not set
# CONFIG_BLK_DEV_DRBD is not set
CONFIG_BLK_DEV_NBD=m
# CONFIG_BLK_DEV_SX8 is not set
CONFIG_BLK_DEV_RAM=m
CONFIG_BLK_DEV_RAM_COUNT=16
CONFIG_BLK_DEV_RAM_SIZE=16384
CONFIG_CDROM_PKTCDVD=m
CONFIG_CDROM_PKTCDVD_BUFFERS=8
# CONFIG_CDROM_PKTCDVD_WCACHE is not set
# CONFIG_ATA_OVER_ETH is not set
CONFIG_XEN_BLKDEV_FRONTEND=m
CONFIG_VIRTIO_BLK=m
CONFIG_BLK_DEV_RBD=m
# CONFIG_BLK_DEV_RSXX is not set

#
# NVME Support
#
CONFIG_NVME_CORE=m
CONFIG_BLK_DEV_NVME=m
CONFIG_NVME_MULTIPATH=y
# CONFIG_NVME_HWMON is not set
CONFIG_NVME_FABRICS=m
# CONFIG_NVME_RDMA is not set
CONFIG_NVME_FC=m
# CONFIG_NVME_TCP is not set
CONFIG_NVME_TARGET=m
# CONFIG_NVME_TARGET_PASSTHRU is not set
CONFIG_NVME_TARGET_LOOP=m
# CONFIG_NVME_TARGET_RDMA is not set
CONFIG_NVME_TARGET_FC=m
CONFIG_NVME_TARGET_FCLOOP=m
# CONFIG_NVME_TARGET_TCP is not set
# end of NVME Support

#
# Misc devices
#
CONFIG_SENSORS_LIS3LV02D=m
# CONFIG_AD525X_DPOT is not set
# CONFIG_DUMMY_IRQ is not set
# CONFIG_IBM_ASM is not set
# CONFIG_PHANTOM is not set
CONFIG_TIFM_CORE=m
CONFIG_TIFM_7XX1=m
# CONFIG_ICS932S401 is not set
CONFIG_ENCLOSURE_SERVICES=m
CONFIG_SGI_XP=m
CONFIG_HP_ILO=m
CONFIG_SGI_GRU=m
# CONFIG_SGI_GRU_DEBUG is not set
CONFIG_APDS9802ALS=m
CONFIG_ISL29003=m
CONFIG_ISL29020=m
CONFIG_SENSORS_TSL2550=m
CONFIG_SENSORS_BH1770=m
CONFIG_SENSORS_APDS990X=m
# CONFIG_HMC6352 is not set
# CONFIG_DS1682 is not set
CONFIG_VMWARE_BALLOON=m
# CONFIG_LATTICE_ECP3_CONFIG is not set
# CONFIG_SRAM is not set
# CONFIG_PCI_ENDPOINT_TEST is not set
# CONFIG_XILINX_SDFEC is not set
CONFIG_MISC_RTSX=m
CONFIG_PVPANIC=y
# CONFIG_C2PORT is not set

#
# EEPROM support
#
# CONFIG_EEPROM_AT24 is not set
# CONFIG_EEPROM_AT25 is not set
CONFIG_EEPROM_LEGACY=m
CONFIG_EEPROM_MAX6875=m
CONFIG_EEPROM_93CX6=m
# CONFIG_EEPROM_93XX46 is not set
# CONFIG_EEPROM_IDT_89HPESX is not set
# CONFIG_EEPROM_EE1004 is not set
# end of EEPROM support

CONFIG_CB710_CORE=m
# CONFIG_CB710_DEBUG is not set
CONFIG_CB710_DEBUG_ASSUMPTIONS=y

#
# Texas Instruments shared transport line discipline
#
# CONFIG_TI_ST is not set
# end of Texas Instruments shared transport line discipline

CONFIG_SENSORS_LIS3_I2C=m
CONFIG_ALTERA_STAPL=m
CONFIG_INTEL_MEI=m
CONFIG_INTEL_MEI_ME=m
# CONFIG_INTEL_MEI_TXE is not set
# CONFIG_INTEL_MEI_HDCP is not set
CONFIG_VMWARE_VMCI=m
# CONFIG_GENWQE is not set
# CONFIG_ECHO is not set
# CONFIG_BCM_VK is not set
# CONFIG_MISC_ALCOR_PCI is not set
CONFIG_MISC_RTSX_PCI=m
# CONFIG_MISC_RTSX_USB is not set
# CONFIG_HABANA_AI is not set
# CONFIG_UACCE is not set
# end of Misc devices

CONFIG_HAVE_IDE=y
# CONFIG_IDE is not set

#
# SCSI device support
#
CONFIG_SCSI_MOD=y
CONFIG_RAID_ATTRS=m
CONFIG_SCSI=y
CONFIG_SCSI_DMA=y
CONFIG_SCSI_NETLINK=y
CONFIG_SCSI_PROC_FS=y

#
# SCSI support type (disk, tape, CD-ROM)
#
CONFIG_BLK_DEV_SD=m
CONFIG_CHR_DEV_ST=m
CONFIG_BLK_DEV_SR=m
CONFIG_CHR_DEV_SG=m
CONFIG_CHR_DEV_SCH=m
CONFIG_SCSI_ENCLOSURE=m
CONFIG_SCSI_CONSTANTS=y
CONFIG_SCSI_LOGGING=y
CONFIG_SCSI_SCAN_ASYNC=y

#
# SCSI Transports
#
CONFIG_SCSI_SPI_ATTRS=m
CONFIG_SCSI_FC_ATTRS=m
CONFIG_SCSI_ISCSI_ATTRS=m
CONFIG_SCSI_SAS_ATTRS=m
CONFIG_SCSI_SAS_LIBSAS=m
CONFIG_SCSI_SAS_ATA=y
CONFIG_SCSI_SAS_HOST_SMP=y
CONFIG_SCSI_SRP_ATTRS=m
# end of SCSI Transports

CONFIG_SCSI_LOWLEVEL=y
# CONFIG_ISCSI_TCP is not set
# CONFIG_ISCSI_BOOT_SYSFS is not set
# CONFIG_SCSI_CXGB3_ISCSI is not set
# CONFIG_SCSI_CXGB4_ISCSI is not set
# CONFIG_SCSI_BNX2_ISCSI is not set
# CONFIG_BE2ISCSI is not set
# CONFIG_BLK_DEV_3W_XXXX_RAID is not set
# CONFIG_SCSI_HPSA is not set
# CONFIG_SCSI_3W_9XXX is not set
# CONFIG_SCSI_3W_SAS is not set
# CONFIG_SCSI_ACARD is not set
# CONFIG_SCSI_AACRAID is not set
# CONFIG_SCSI_AIC7XXX is not set
# CONFIG_SCSI_AIC79XX is not set
# CONFIG_SCSI_AIC94XX is not set
# CONFIG_SCSI_MVSAS is not set
# CONFIG_SCSI_MVUMI is not set
# CONFIG_SCSI_DPT_I2O is not set
# CONFIG_SCSI_ADVANSYS is not set
# CONFIG_SCSI_ARCMSR is not set
# CONFIG_SCSI_ESAS2R is not set
# CONFIG_MEGARAID_NEWGEN is not set
# CONFIG_MEGARAID_LEGACY is not set
# CONFIG_MEGARAID_SAS is not set
CONFIG_SCSI_MPT3SAS=m
CONFIG_SCSI_MPT2SAS_MAX_SGE=128
CONFIG_SCSI_MPT3SAS_MAX_SGE=128
# CONFIG_SCSI_MPT2SAS is not set
# CONFIG_SCSI_SMARTPQI is not set
# CONFIG_SCSI_UFSHCD is not set
# CONFIG_SCSI_HPTIOP is not set
# CONFIG_SCSI_BUSLOGIC is not set
# CONFIG_SCSI_MYRB is not set
# CONFIG_SCSI_MYRS is not set
# CONFIG_VMWARE_PVSCSI is not set
# CONFIG_XEN_SCSI_FRONTEND is not set
CONFIG_HYPERV_STORAGE=m
# CONFIG_LIBFC is not set
# CONFIG_SCSI_SNIC is not set
# CONFIG_SCSI_DMX3191D is not set
# CONFIG_SCSI_FDOMAIN_PCI is not set
CONFIG_SCSI_ISCI=m
# CONFIG_SCSI_IPS is not set
# CONFIG_SCSI_INITIO is not set
# CONFIG_SCSI_INIA100 is not set
# CONFIG_SCSI_PPA is not set
# CONFIG_SCSI_IMM is not set
# CONFIG_SCSI_STEX is not set
# CONFIG_SCSI_SYM53C8XX_2 is not set
# CONFIG_SCSI_IPR is not set
# CONFIG_SCSI_QLOGIC_1280 is not set
# CONFIG_SCSI_QLA_FC is not set
# CONFIG_SCSI_QLA_ISCSI is not set
# CONFIG_SCSI_LPFC is not set
# CONFIG_SCSI_DC395x is not set
# CONFIG_SCSI_AM53C974 is not set
# CONFIG_SCSI_WD719X is not set
CONFIG_SCSI_DEBUG=m
# CONFIG_SCSI_PMCRAID is not set
# CONFIG_SCSI_PM8001 is not set
# CONFIG_SCSI_BFA_FC is not set
# CONFIG_SCSI_VIRTIO is not set
# CONFIG_SCSI_CHELSIO_FCOE is not set
CONFIG_SCSI_DH=y
CONFIG_SCSI_DH_RDAC=y
CONFIG_SCSI_DH_HP_SW=y
CONFIG_SCSI_DH_EMC=y
CONFIG_SCSI_DH_ALUA=y
# end of SCSI device support

CONFIG_ATA=m
CONFIG_SATA_HOST=y
CONFIG_PATA_TIMINGS=y
CONFIG_ATA_VERBOSE_ERROR=y
CONFIG_ATA_FORCE=y
CONFIG_ATA_ACPI=y
# CONFIG_SATA_ZPODD is not set
CONFIG_SATA_PMP=y

#
# Controllers with non-SFF native interface
#
CONFIG_SATA_AHCI=m
CONFIG_SATA_MOBILE_LPM_POLICY=0
CONFIG_SATA_AHCI_PLATFORM=m
# CONFIG_SATA_INIC162X is not set
# CONFIG_SATA_ACARD_AHCI is not set
# CONFIG_SATA_SIL24 is not set
CONFIG_ATA_SFF=y

#
# SFF controllers with custom DMA interface
#
# CONFIG_PDC_ADMA is not set
# CONFIG_SATA_QSTOR is not set
# CONFIG_SATA_SX4 is not set
CONFIG_ATA_BMDMA=y

#
# SATA SFF controllers with BMDMA
#
CONFIG_ATA_PIIX=m
# CONFIG_SATA_DWC is not set
# CONFIG_SATA_MV is not set
# CONFIG_SATA_NV is not set
# CONFIG_SATA_PROMISE is not set
# CONFIG_SATA_SIL is not set
# CONFIG_SATA_SIS is not set
# CONFIG_SATA_SVW is not set
# CONFIG_SATA_ULI is not set
# CONFIG_SATA_VIA is not set
# CONFIG_SATA_VITESSE is not set

#
# PATA SFF controllers with BMDMA
#
# CONFIG_PATA_ALI is not set
# CONFIG_PATA_AMD is not set
# CONFIG_PATA_ARTOP is not set
# CONFIG_PATA_ATIIXP is not set
# CONFIG_PATA_ATP867X is not set
# CONFIG_PATA_CMD64X is not set
# CONFIG_PATA_CYPRESS is not set
# CONFIG_PATA_EFAR is not set
# CONFIG_PATA_HPT366 is not set
# CONFIG_PATA_HPT37X is not set
# CONFIG_PATA_HPT3X2N is not set
# CONFIG_PATA_HPT3X3 is not set
# CONFIG_PATA_IT8213 is not set
# CONFIG_PATA_IT821X is not set
# CONFIG_PATA_JMICRON is not set
# CONFIG_PATA_MARVELL is not set
# CONFIG_PATA_NETCELL is not set
# CONFIG_PATA_NINJA32 is not set
# CONFIG_PATA_NS87415 is not set
# CONFIG_PATA_OLDPIIX is not set
# CONFIG_PATA_OPTIDMA is not set
# CONFIG_PATA_PDC2027X is not set
# CONFIG_PATA_PDC_OLD is not set
# CONFIG_PATA_RADISYS is not set
# CONFIG_PATA_RDC is not set
# CONFIG_PATA_SCH is not set
# CONFIG_PATA_SERVERWORKS is not set
# CONFIG_PATA_SIL680 is not set
# CONFIG_PATA_SIS is not set
# CONFIG_PATA_TOSHIBA is not set
# CONFIG_PATA_TRIFLEX is not set
# CONFIG_PATA_VIA is not set
# CONFIG_PATA_WINBOND is not set

#
# PIO-only SFF controllers
#
# CONFIG_PATA_CMD640_PCI is not set
# CONFIG_PATA_MPIIX is not set
# CONFIG_PATA_NS87410 is not set
# CONFIG_PATA_OPTI is not set
# CONFIG_PATA_RZ1000 is not set

#
# Generic fallback / legacy drivers
#
# CONFIG_PATA_ACPI is not set
CONFIG_ATA_GENERIC=m
# CONFIG_PATA_LEGACY is not set
CONFIG_MD=y
CONFIG_BLK_DEV_MD=y
CONFIG_MD_AUTODETECT=y
CONFIG_MD_LINEAR=m
CONFIG_MD_RAID0=m
CONFIG_MD_RAID1=m
CONFIG_MD_RAID10=m
CONFIG_MD_RAID456=m
CONFIG_MD_MULTIPATH=m
CONFIG_MD_FAULTY=m
CONFIG_MD_CLUSTER=m
# CONFIG_BCACHE is not set
CONFIG_BLK_DEV_DM_BUILTIN=y
CONFIG_BLK_DEV_DM=m
CONFIG_DM_DEBUG=y
CONFIG_DM_BUFIO=m
# CONFIG_DM_DEBUG_BLOCK_MANAGER_LOCKING is not set
CONFIG_DM_BIO_PRISON=m
CONFIG_DM_PERSISTENT_DATA=m
# CONFIG_DM_UNSTRIPED is not set
CONFIG_DM_CRYPT=m
CONFIG_DM_SNAPSHOT=m
CONFIG_DM_THIN_PROVISIONING=m
CONFIG_DM_CACHE=m
CONFIG_DM_CACHE_SMQ=m
CONFIG_DM_WRITECACHE=m
# CONFIG_DM_EBS is not set
CONFIG_DM_ERA=m
# CONFIG_DM_CLONE is not set
CONFIG_DM_MIRROR=m
CONFIG_DM_LOG_USERSPACE=m
CONFIG_DM_RAID=m
CONFIG_DM_ZERO=m
CONFIG_DM_MULTIPATH=m
CONFIG_DM_MULTIPATH_QL=m
CONFIG_DM_MULTIPATH_ST=m
# CONFIG_DM_MULTIPATH_HST is not set
# CONFIG_DM_MULTIPATH_IOA is not set
CONFIG_DM_DELAY=m
# CONFIG_DM_DUST is not set
CONFIG_DM_UEVENT=y
CONFIG_DM_FLAKEY=m
CONFIG_DM_VERITY=m
# CONFIG_DM_VERITY_VERIFY_ROOTHASH_SIG is not set
# CONFIG_DM_VERITY_FEC is not set
CONFIG_DM_SWITCH=m
CONFIG_DM_LOG_WRITES=m
CONFIG_DM_INTEGRITY=m
# CONFIG_DM_ZONED is not set
CONFIG_TARGET_CORE=m
CONFIG_TCM_IBLOCK=m
CONFIG_TCM_FILEIO=m
CONFIG_TCM_PSCSI=m
CONFIG_TCM_USER2=m
CONFIG_LOOPBACK_TARGET=m
CONFIG_ISCSI_TARGET=m
# CONFIG_SBP_TARGET is not set
# CONFIG_FUSION is not set

#
# IEEE 1394 (FireWire) support
#
CONFIG_FIREWIRE=m
CONFIG_FIREWIRE_OHCI=m
CONFIG_FIREWIRE_SBP2=m
CONFIG_FIREWIRE_NET=m
# CONFIG_FIREWIRE_NOSY is not set
# end of IEEE 1394 (FireWire) support

CONFIG_MACINTOSH_DRIVERS=y
CONFIG_MAC_EMUMOUSEBTN=y
CONFIG_NETDEVICES=y
CONFIG_MII=y
CONFIG_NET_CORE=y
# CONFIG_BONDING is not set
# CONFIG_DUMMY is not set
# CONFIG_WIREGUARD is not set
# CONFIG_EQUALIZER is not set
# CONFIG_NET_FC is not set
# CONFIG_IFB is not set
# CONFIG_NET_TEAM is not set
# CONFIG_MACVLAN is not set
# CONFIG_IPVLAN is not set
# CONFIG_VXLAN is not set
# CONFIG_GENEVE is not set
# CONFIG_BAREUDP is not set
# CONFIG_GTP is not set
# CONFIG_MACSEC is not set
CONFIG_NETCONSOLE=m
CONFIG_NETCONSOLE_DYNAMIC=y
CONFIG_NETPOLL=y
CONFIG_NET_POLL_CONTROLLER=y
CONFIG_TUN=m
# CONFIG_TUN_VNET_CROSS_LE is not set
CONFIG_VETH=m
CONFIG_VIRTIO_NET=m
# CONFIG_NLMON is not set
# CONFIG_NET_VRF is not set
# CONFIG_VSOCKMON is not set
# CONFIG_ARCNET is not set
CONFIG_ATM_DRIVERS=y
# CONFIG_ATM_DUMMY is not set
# CONFIG_ATM_TCP is not set
# CONFIG_ATM_LANAI is not set
# CONFIG_ATM_ENI is not set
# CONFIG_ATM_FIRESTREAM is not set
# CONFIG_ATM_ZATM is not set
# CONFIG_ATM_NICSTAR is not set
# CONFIG_ATM_IDT77252 is not set
# CONFIG_ATM_AMBASSADOR is not set
# CONFIG_ATM_HORIZON is not set
# CONFIG_ATM_IA is not set
# CONFIG_ATM_FORE200E is not set
# CONFIG_ATM_HE is not set
# CONFIG_ATM_SOLOS is not set

#
# Distributed Switch Architecture drivers
#
# CONFIG_NET_DSA_MV88E6XXX_PTP is not set
# end of Distributed Switch Architecture drivers

CONFIG_ETHERNET=y
CONFIG_MDIO=y
CONFIG_NET_VENDOR_3COM=y
# CONFIG_VORTEX is not set
# CONFIG_TYPHOON is not set
CONFIG_NET_VENDOR_ADAPTEC=y
# CONFIG_ADAPTEC_STARFIRE is not set
CONFIG_NET_VENDOR_AGERE=y
# CONFIG_ET131X is not set
CONFIG_NET_VENDOR_ALACRITECH=y
# CONFIG_SLICOSS is not set
CONFIG_NET_VENDOR_ALTEON=y
# CONFIG_ACENIC is not set
# CONFIG_ALTERA_TSE is not set
CONFIG_NET_VENDOR_AMAZON=y
# CONFIG_ENA_ETHERNET is not set
CONFIG_NET_VENDOR_AMD=y
# CONFIG_AMD8111_ETH is not set
# CONFIG_PCNET32 is not set
# CONFIG_AMD_XGBE is not set
CONFIG_NET_VENDOR_AQUANTIA=y
# CONFIG_AQTION is not set
CONFIG_NET_VENDOR_ARC=y
CONFIG_NET_VENDOR_ATHEROS=y
# CONFIG_ATL2 is not set
# CONFIG_ATL1 is not set
# CONFIG_ATL1E is not set
# CONFIG_ATL1C is not set
# CONFIG_ALX is not set
CONFIG_NET_VENDOR_BROADCOM=y
# CONFIG_B44 is not set
# CONFIG_BCMGENET is not set
# CONFIG_BNX2 is not set
# CONFIG_CNIC is not set
# CONFIG_TIGON3 is not set
# CONFIG_BNX2X is not set
# CONFIG_SYSTEMPORT is not set
# CONFIG_BNXT is not set
CONFIG_NET_VENDOR_BROCADE=y
# CONFIG_BNA is not set
CONFIG_NET_VENDOR_CADENCE=y
# CONFIG_MACB is not set
CONFIG_NET_VENDOR_CAVIUM=y
# CONFIG_THUNDER_NIC_PF is not set
# CONFIG_THUNDER_NIC_VF is not set
# CONFIG_THUNDER_NIC_BGX is not set
# CONFIG_THUNDER_NIC_RGX is not set
CONFIG_CAVIUM_PTP=y
# CONFIG_LIQUIDIO is not set
# CONFIG_LIQUIDIO_VF is not set
CONFIG_NET_VENDOR_CHELSIO=y
# CONFIG_CHELSIO_T1 is not set
# CONFIG_CHELSIO_T3 is not set
# CONFIG_CHELSIO_T4 is not set
# CONFIG_CHELSIO_T4VF is not set
CONFIG_NET_VENDOR_CISCO=y
# CONFIG_ENIC is not set
CONFIG_NET_VENDOR_CORTINA=y
# CONFIG_CX_ECAT is not set
# CONFIG_DNET is not set
CONFIG_NET_VENDOR_DEC=y
# CONFIG_NET_TULIP is not set
CONFIG_NET_VENDOR_DLINK=y
# CONFIG_DL2K is not set
# CONFIG_SUNDANCE is not set
CONFIG_NET_VENDOR_EMULEX=y
# CONFIG_BE2NET is not set
CONFIG_NET_VENDOR_EZCHIP=y
CONFIG_NET_VENDOR_GOOGLE=y
# CONFIG_GVE is not set
CONFIG_NET_VENDOR_HUAWEI=y
# CONFIG_HINIC is not set
CONFIG_NET_VENDOR_I825XX=y
CONFIG_NET_VENDOR_INTEL=y
# CONFIG_E100 is not set
CONFIG_E1000=y
CONFIG_E1000E=y
CONFIG_E1000E_HWTS=y
CONFIG_IGB=y
CONFIG_IGB_HWMON=y
# CONFIG_IGBVF is not set
# CONFIG_IXGB is not set
CONFIG_IXGBE=y
CONFIG_IXGBE_HWMON=y
# CONFIG_IXGBE_DCB is not set
CONFIG_IXGBE_IPSEC=y
# CONFIG_IXGBEVF is not set
CONFIG_I40E=y
# CONFIG_I40E_DCB is not set
# CONFIG_I40EVF is not set
# CONFIG_ICE is not set
# CONFIG_FM10K is not set
# CONFIG_IGC is not set
# CONFIG_JME is not set
CONFIG_NET_VENDOR_MARVELL=y
# CONFIG_MVMDIO is not set
# CONFIG_SKGE is not set
# CONFIG_SKY2 is not set
# CONFIG_PRESTERA is not set
CONFIG_NET_VENDOR_MELLANOX=y
# CONFIG_MLX4_EN is not set
# CONFIG_MLX5_CORE is not set
# CONFIG_MLXSW_CORE is not set
# CONFIG_MLXFW is not set
CONFIG_NET_VENDOR_MICREL=y
# CONFIG_KS8842 is not set
# CONFIG_KS8851 is not set
# CONFIG_KS8851_MLL is not set
# CONFIG_KSZ884X_PCI is not set
CONFIG_NET_VENDOR_MICROCHIP=y
# CONFIG_ENC28J60 is not set
# CONFIG_ENCX24J600 is not set
# CONFIG_LAN743X is not set
CONFIG_NET_VENDOR_MICROSEMI=y
CONFIG_NET_VENDOR_MYRI=y
# CONFIG_MYRI10GE is not set
# CONFIG_FEALNX is not set
CONFIG_NET_VENDOR_NATSEMI=y
# CONFIG_NATSEMI is not set
# CONFIG_NS83820 is not set
CONFIG_NET_VENDOR_NETERION=y
# CONFIG_S2IO is not set
# CONFIG_VXGE is not set
CONFIG_NET_VENDOR_NETRONOME=y
# CONFIG_NFP is not set
CONFIG_NET_VENDOR_NI=y
# CONFIG_NI_XGE_MANAGEMENT_ENET is not set
CONFIG_NET_VENDOR_8390=y
# CONFIG_NE2K_PCI is not set
CONFIG_NET_VENDOR_NVIDIA=y
# CONFIG_FORCEDETH is not set
CONFIG_NET_VENDOR_OKI=y
# CONFIG_ETHOC is not set
CONFIG_NET_VENDOR_PACKET_ENGINES=y
# CONFIG_HAMACHI is not set
# CONFIG_YELLOWFIN is not set
CONFIG_NET_VENDOR_PENSANDO=y
# CONFIG_IONIC is not set
CONFIG_NET_VENDOR_QLOGIC=y
# CONFIG_QLA3XXX is not set
# CONFIG_QLCNIC is not set
# CONFIG_NETXEN_NIC is not set
# CONFIG_QED is not set
CONFIG_NET_VENDOR_QUALCOMM=y
# CONFIG_QCOM_EMAC is not set
# CONFIG_RMNET is not set
CONFIG_NET_VENDOR_RDC=y
# CONFIG_R6040 is not set
CONFIG_NET_VENDOR_REALTEK=y
# CONFIG_ATP is not set
# CONFIG_8139CP is not set
# CONFIG_8139TOO is not set
CONFIG_R8169=y
CONFIG_NET_VENDOR_RENESAS=y
CONFIG_NET_VENDOR_ROCKER=y
# CONFIG_ROCKER is not set
CONFIG_NET_VENDOR_SAMSUNG=y
# CONFIG_SXGBE_ETH is not set
CONFIG_NET_VENDOR_SEEQ=y
CONFIG_NET_VENDOR_SOLARFLARE=y
# CONFIG_SFC is not set
# CONFIG_SFC_FALCON is not set
CONFIG_NET_VENDOR_SILAN=y
# CONFIG_SC92031 is not set
CONFIG_NET_VENDOR_SIS=y
# CONFIG_SIS900 is not set
# CONFIG_SIS190 is not set
CONFIG_NET_VENDOR_SMSC=y
# CONFIG_EPIC100 is not set
# CONFIG_SMSC911X is not set
# CONFIG_SMSC9420 is not set
CONFIG_NET_VENDOR_SOCIONEXT=y
CONFIG_NET_VENDOR_STMICRO=y
# CONFIG_STMMAC_ETH is not set
CONFIG_NET_VENDOR_SUN=y
# CONFIG_HAPPYMEAL is not set
# CONFIG_SUNGEM is not set
# CONFIG_CASSINI is not set
# CONFIG_NIU is not set
CONFIG_NET_VENDOR_SYNOPSYS=y
# CONFIG_DWC_XLGMAC is not set
CONFIG_NET_VENDOR_TEHUTI=y
# CONFIG_TEHUTI is not set
CONFIG_NET_VENDOR_TI=y
# CONFIG_TI_CPSW_PHY_SEL is not set
# CONFIG_TLAN is not set
CONFIG_NET_VENDOR_VIA=y
# CONFIG_VIA_RHINE is not set
# CONFIG_VIA_VELOCITY is not set
CONFIG_NET_VENDOR_WIZNET=y
# CONFIG_WIZNET_W5100 is not set
# CONFIG_WIZNET_W5300 is not set
CONFIG_NET_VENDOR_XILINX=y
# CONFIG_XILINX_EMACLITE is not set
# CONFIG_XILINX_AXI_EMAC is not set
# CONFIG_XILINX_LL_TEMAC is not set
# CONFIG_FDDI is not set
# CONFIG_HIPPI is not set
# CONFIG_NET_SB1000 is not set
CONFIG_PHYLIB=y
# CONFIG_LED_TRIGGER_PHY is not set
# CONFIG_FIXED_PHY is not set

#
# MII PHY device drivers
#
# CONFIG_AMD_PHY is not set
# CONFIG_ADIN_PHY is not set
# CONFIG_AQUANTIA_PHY is not set
# CONFIG_AX88796B_PHY is not set
# CONFIG_BROADCOM_PHY is not set
# CONFIG_BCM54140_PHY is not set
# CONFIG_BCM7XXX_PHY is not set
# CONFIG_BCM84881_PHY is not set
# CONFIG_BCM87XX_PHY is not set
# CONFIG_CICADA_PHY is not set
# CONFIG_CORTINA_PHY is not set
# CONFIG_DAVICOM_PHY is not set
# CONFIG_ICPLUS_PHY is not set
# CONFIG_LXT_PHY is not set
# CONFIG_INTEL_XWAY_PHY is not set
# CONFIG_LSI_ET1011C_PHY is not set
# CONFIG_MARVELL_PHY is not set
# CONFIG_MARVELL_10G_PHY is not set
# CONFIG_MICREL_PHY is not set
# CONFIG_MICROCHIP_PHY is not set
# CONFIG_MICROCHIP_T1_PHY is not set
# CONFIG_MICROSEMI_PHY is not set
# CONFIG_NATIONAL_PHY is not set
# CONFIG_NXP_TJA11XX_PHY is not set
# CONFIG_QSEMI_PHY is not set
CONFIG_REALTEK_PHY=y
# CONFIG_RENESAS_PHY is not set
# CONFIG_ROCKCHIP_PHY is not set
# CONFIG_SMSC_PHY is not set
# CONFIG_STE10XP is not set
# CONFIG_TERANETICS_PHY is not set
# CONFIG_DP83822_PHY is not set
# CONFIG_DP83TC811_PHY is not set
# CONFIG_DP83848_PHY is not set
# CONFIG_DP83867_PHY is not set
# CONFIG_DP83869_PHY is not set
# CONFIG_VITESSE_PHY is not set
# CONFIG_XILINX_GMII2RGMII is not set
# CONFIG_MICREL_KS8995MA is not set
CONFIG_MDIO_DEVICE=y
CONFIG_MDIO_BUS=y
CONFIG_MDIO_DEVRES=y
# CONFIG_MDIO_BITBANG is not set
# CONFIG_MDIO_BCM_UNIMAC is not set
# CONFIG_MDIO_MVUSB is not set
# CONFIG_MDIO_MSCC_MIIM is not set
# CONFIG_MDIO_THUNDER is not set

#
# MDIO Multiplexers
#

#
# PCS device drivers
#
# CONFIG_PCS_XPCS is not set
# end of PCS device drivers

# CONFIG_PLIP is not set
# CONFIG_PPP is not set
# CONFIG_SLIP is not set
CONFIG_USB_NET_DRIVERS=y
# CONFIG_USB_CATC is not set
# CONFIG_USB_KAWETH is not set
# CONFIG_USB_PEGASUS is not set
# CONFIG_USB_RTL8150 is not set
CONFIG_USB_RTL8152=y
# CONFIG_USB_LAN78XX is not set
CONFIG_USB_USBNET=y
CONFIG_USB_NET_AX8817X=y
CONFIG_USB_NET_AX88179_178A=y
# CONFIG_USB_NET_CDCETHER is not set
# CONFIG_USB_NET_CDC_EEM is not set
# CONFIG_USB_NET_CDC_NCM is not set
# CONFIG_USB_NET_HUAWEI_CDC_NCM is not set
# CONFIG_USB_NET_CDC_MBIM is not set
# CONFIG_USB_NET_DM9601 is not set
# CONFIG_USB_NET_SR9700 is not set
# CONFIG_USB_NET_SR9800 is not set
# CONFIG_USB_NET_SMSC75XX is not set
# CONFIG_USB_NET_SMSC95XX is not set
# CONFIG_USB_NET_GL620A is not set
# CONFIG_USB_NET_NET1080 is not set
# CONFIG_USB_NET_PLUSB is not set
# CONFIG_USB_NET_MCS7830 is not set
# CONFIG_USB_NET_RNDIS_HOST is not set
# CONFIG_USB_NET_CDC_SUBSET is not set
# CONFIG_USB_NET_ZAURUS is not set
# CONFIG_USB_NET_CX82310_ETH is not set
# CONFIG_USB_NET_KALMIA is not set
# CONFIG_USB_NET_QMI_WWAN is not set
# CONFIG_USB_HSO is not set
# CONFIG_USB_NET_INT51X1 is not set
# CONFIG_USB_IPHETH is not set
# CONFIG_USB_SIERRA_NET is not set
# CONFIG_USB_NET_CH9200 is not set
# CONFIG_USB_NET_AQC111 is not set
CONFIG_WLAN=y
CONFIG_WLAN_VENDOR_ADMTEK=y
# CONFIG_ADM8211 is not set
CONFIG_WLAN_VENDOR_ATH=y
# CONFIG_ATH_DEBUG is not set
# CONFIG_ATH5K is not set
# CONFIG_ATH5K_PCI is not set
# CONFIG_ATH9K is not set
# CONFIG_ATH9K_HTC is not set
# CONFIG_CARL9170 is not set
# CONFIG_ATH6KL is not set
# CONFIG_AR5523 is not set
# CONFIG_WIL6210
is not set # CONFIG_ATH10K is not set # CONFIG_WCN36XX is not set # CONFIG_ATH11K is not set CONFIG_WLAN_VENDOR_ATMEL=y # CONFIG_ATMEL is not set # CONFIG_AT76C50X_USB is not set CONFIG_WLAN_VENDOR_BROADCOM=y # CONFIG_B43 is not set # CONFIG_B43LEGACY is not set # CONFIG_BRCMSMAC is not set # CONFIG_BRCMFMAC is not set CONFIG_WLAN_VENDOR_CISCO=y # CONFIG_AIRO is not set CONFIG_WLAN_VENDOR_INTEL=y # CONFIG_IPW2100 is not set # CONFIG_IPW2200 is not set # CONFIG_IWL4965 is not set # CONFIG_IWL3945 is not set # CONFIG_IWLWIFI is not set CONFIG_WLAN_VENDOR_INTERSIL=y # CONFIG_HOSTAP is not set # CONFIG_HERMES is not set # CONFIG_P54_COMMON is not set # CONFIG_PRISM54 is not set CONFIG_WLAN_VENDOR_MARVELL=y # CONFIG_LIBERTAS is not set # CONFIG_LIBERTAS_THINFIRM is not set # CONFIG_MWIFIEX is not set # CONFIG_MWL8K is not set CONFIG_WLAN_VENDOR_MEDIATEK=y # CONFIG_MT7601U is not set # CONFIG_MT76x0U is not set # CONFIG_MT76x0E is not set # CONFIG_MT76x2E is not set # CONFIG_MT76x2U is not set # CONFIG_MT7603E is not set # CONFIG_MT7615E is not set # CONFIG_MT7663U is not set # CONFIG_MT7663S is not set # CONFIG_MT7915E is not set # CONFIG_MT7921E is not set CONFIG_WLAN_VENDOR_MICROCHIP=y # CONFIG_WILC1000_SDIO is not set # CONFIG_WILC1000_SPI is not set CONFIG_WLAN_VENDOR_RALINK=y # CONFIG_RT2X00 is not set CONFIG_WLAN_VENDOR_REALTEK=y # CONFIG_RTL8180 is not set # CONFIG_RTL8187 is not set CONFIG_RTL_CARDS=m # CONFIG_RTL8192CE is not set # CONFIG_RTL8192SE is not set # CONFIG_RTL8192DE is not set # CONFIG_RTL8723AE is not set # CONFIG_RTL8723BE is not set # CONFIG_RTL8188EE is not set # CONFIG_RTL8192EE is not set # CONFIG_RTL8821AE is not set # CONFIG_RTL8192CU is not set # CONFIG_RTL8XXXU is not set # CONFIG_RTW88 is not set CONFIG_WLAN_VENDOR_RSI=y # CONFIG_RSI_91X is not set CONFIG_WLAN_VENDOR_ST=y # CONFIG_CW1200 is not set CONFIG_WLAN_VENDOR_TI=y # CONFIG_WL1251 is not set # CONFIG_WL12XX is not set # CONFIG_WL18XX is not set # CONFIG_WLCORE is not set 
CONFIG_WLAN_VENDOR_ZYDAS=y
# CONFIG_USB_ZD1201 is not set
# CONFIG_ZD1211RW is not set
CONFIG_WLAN_VENDOR_QUANTENNA=y
# CONFIG_QTNFMAC_PCIE is not set
CONFIG_MAC80211_HWSIM=m
# CONFIG_USB_NET_RNDIS_WLAN is not set
# CONFIG_VIRT_WIFI is not set
# CONFIG_WAN is not set
CONFIG_IEEE802154_DRIVERS=m
# CONFIG_IEEE802154_FAKELB is not set
# CONFIG_IEEE802154_AT86RF230 is not set
# CONFIG_IEEE802154_MRF24J40 is not set
# CONFIG_IEEE802154_CC2520 is not set
# CONFIG_IEEE802154_ATUSB is not set
# CONFIG_IEEE802154_ADF7242 is not set
# CONFIG_IEEE802154_CA8210 is not set
# CONFIG_IEEE802154_MCR20A is not set
# CONFIG_IEEE802154_HWSIM is not set
CONFIG_XEN_NETDEV_FRONTEND=y
# CONFIG_VMXNET3 is not set
# CONFIG_FUJITSU_ES is not set
# CONFIG_HYPERV_NET is not set
CONFIG_NETDEVSIM=m
CONFIG_NET_FAILOVER=m
# CONFIG_ISDN is not set
# CONFIG_NVM is not set

#
# Input device support
#
CONFIG_INPUT=y
CONFIG_INPUT_LEDS=y
CONFIG_INPUT_FF_MEMLESS=m
CONFIG_INPUT_SPARSEKMAP=m
# CONFIG_INPUT_MATRIXKMAP is not set

#
# Userland interfaces
#
CONFIG_INPUT_MOUSEDEV=y
# CONFIG_INPUT_MOUSEDEV_PSAUX is not set
CONFIG_INPUT_MOUSEDEV_SCREEN_X=1024
CONFIG_INPUT_MOUSEDEV_SCREEN_Y=768
CONFIG_INPUT_JOYDEV=m
CONFIG_INPUT_EVDEV=y
# CONFIG_INPUT_EVBUG is not set

#
# Input Device Drivers
#
CONFIG_INPUT_KEYBOARD=y
# CONFIG_KEYBOARD_ADP5588 is not set
# CONFIG_KEYBOARD_ADP5589 is not set
# CONFIG_KEYBOARD_APPLESPI is not set
CONFIG_KEYBOARD_ATKBD=y
# CONFIG_KEYBOARD_QT1050 is not set
# CONFIG_KEYBOARD_QT1070 is not set
# CONFIG_KEYBOARD_QT2160 is not set
# CONFIG_KEYBOARD_DLINK_DIR685 is not set
# CONFIG_KEYBOARD_LKKBD is not set
# CONFIG_KEYBOARD_GPIO is not set
# CONFIG_KEYBOARD_GPIO_POLLED is not set
# CONFIG_KEYBOARD_TCA6416 is not set
# CONFIG_KEYBOARD_TCA8418 is not set
# CONFIG_KEYBOARD_MATRIX is not set
# CONFIG_KEYBOARD_LM8323 is not set
# CONFIG_KEYBOARD_LM8333 is not set
# CONFIG_KEYBOARD_MAX7359 is not set
# CONFIG_KEYBOARD_MCS is not set
# CONFIG_KEYBOARD_MPR121 is not set
# CONFIG_KEYBOARD_NEWTON is not set
# CONFIG_KEYBOARD_OPENCORES is not set
# CONFIG_KEYBOARD_SAMSUNG is not set
# CONFIG_KEYBOARD_STOWAWAY is not set
# CONFIG_KEYBOARD_SUNKBD is not set
# CONFIG_KEYBOARD_TM2_TOUCHKEY is not set
# CONFIG_KEYBOARD_XTKBD is not set
CONFIG_INPUT_MOUSE=y
CONFIG_MOUSE_PS2=y
CONFIG_MOUSE_PS2_ALPS=y
CONFIG_MOUSE_PS2_BYD=y
CONFIG_MOUSE_PS2_LOGIPS2PP=y
CONFIG_MOUSE_PS2_SYNAPTICS=y
CONFIG_MOUSE_PS2_SYNAPTICS_SMBUS=y
CONFIG_MOUSE_PS2_CYPRESS=y
CONFIG_MOUSE_PS2_LIFEBOOK=y
CONFIG_MOUSE_PS2_TRACKPOINT=y
CONFIG_MOUSE_PS2_ELANTECH=y
CONFIG_MOUSE_PS2_ELANTECH_SMBUS=y
CONFIG_MOUSE_PS2_SENTELIC=y
# CONFIG_MOUSE_PS2_TOUCHKIT is not set
CONFIG_MOUSE_PS2_FOCALTECH=y
CONFIG_MOUSE_PS2_VMMOUSE=y
CONFIG_MOUSE_PS2_SMBUS=y
CONFIG_MOUSE_SERIAL=m
# CONFIG_MOUSE_APPLETOUCH is not set
# CONFIG_MOUSE_BCM5974 is not set
CONFIG_MOUSE_CYAPA=m
CONFIG_MOUSE_ELAN_I2C=m
CONFIG_MOUSE_ELAN_I2C_I2C=y
CONFIG_MOUSE_ELAN_I2C_SMBUS=y
CONFIG_MOUSE_VSXXXAA=m
# CONFIG_MOUSE_GPIO is not set
CONFIG_MOUSE_SYNAPTICS_I2C=m
# CONFIG_MOUSE_SYNAPTICS_USB is not set
# CONFIG_INPUT_JOYSTICK is not set
# CONFIG_INPUT_TABLET is not set
# CONFIG_INPUT_TOUCHSCREEN is not set
# CONFIG_INPUT_MISC is not set
CONFIG_RMI4_CORE=m
CONFIG_RMI4_I2C=m
CONFIG_RMI4_SPI=m
CONFIG_RMI4_SMB=m
CONFIG_RMI4_F03=y
CONFIG_RMI4_F03_SERIO=m
CONFIG_RMI4_2D_SENSOR=y
CONFIG_RMI4_F11=y
CONFIG_RMI4_F12=y
CONFIG_RMI4_F30=y
CONFIG_RMI4_F34=y
# CONFIG_RMI4_F3A is not set
# CONFIG_RMI4_F54 is not set
CONFIG_RMI4_F55=y

#
# Hardware I/O ports
#
CONFIG_SERIO=y
CONFIG_ARCH_MIGHT_HAVE_PC_SERIO=y
CONFIG_SERIO_I8042=y
CONFIG_SERIO_SERPORT=y
# CONFIG_SERIO_CT82C710 is not set
# CONFIG_SERIO_PARKBD is not set
# CONFIG_SERIO_PCIPS2 is not set
CONFIG_SERIO_LIBPS2=y
CONFIG_SERIO_RAW=m
CONFIG_SERIO_ALTERA_PS2=m
# CONFIG_SERIO_PS2MULT is not set
CONFIG_SERIO_ARC_PS2=m
CONFIG_HYPERV_KEYBOARD=m
# CONFIG_SERIO_GPIO_PS2 is not set
# CONFIG_USERIO is not set
# CONFIG_GAMEPORT is not set
# end of Hardware I/O ports
# end of Input device support

#
# Character devices
#
CONFIG_TTY=y
CONFIG_VT=y
CONFIG_CONSOLE_TRANSLATIONS=y
CONFIG_VT_CONSOLE=y
CONFIG_VT_CONSOLE_SLEEP=y
CONFIG_HW_CONSOLE=y
CONFIG_VT_HW_CONSOLE_BINDING=y
CONFIG_UNIX98_PTYS=y
# CONFIG_LEGACY_PTYS is not set
CONFIG_LDISC_AUTOLOAD=y

#
# Serial drivers
#
CONFIG_SERIAL_EARLYCON=y
CONFIG_SERIAL_8250=y
# CONFIG_SERIAL_8250_DEPRECATED_OPTIONS is not set
CONFIG_SERIAL_8250_PNP=y
# CONFIG_SERIAL_8250_16550A_VARIANTS is not set
# CONFIG_SERIAL_8250_FINTEK is not set
CONFIG_SERIAL_8250_CONSOLE=y
CONFIG_SERIAL_8250_DMA=y
CONFIG_SERIAL_8250_PCI=y
CONFIG_SERIAL_8250_EXAR=y
CONFIG_SERIAL_8250_NR_UARTS=64
CONFIG_SERIAL_8250_RUNTIME_UARTS=4
CONFIG_SERIAL_8250_EXTENDED=y
CONFIG_SERIAL_8250_MANY_PORTS=y
CONFIG_SERIAL_8250_SHARE_IRQ=y
# CONFIG_SERIAL_8250_DETECT_IRQ is not set
CONFIG_SERIAL_8250_RSA=y
CONFIG_SERIAL_8250_DWLIB=y
CONFIG_SERIAL_8250_DW=y
# CONFIG_SERIAL_8250_RT288X is not set
CONFIG_SERIAL_8250_LPSS=y
CONFIG_SERIAL_8250_MID=y

#
# Non-8250 serial port support
#
# CONFIG_SERIAL_MAX3100 is not set
# CONFIG_SERIAL_MAX310X is not set
# CONFIG_SERIAL_UARTLITE is not set
CONFIG_SERIAL_CORE=y
CONFIG_SERIAL_CORE_CONSOLE=y
CONFIG_SERIAL_JSM=m
# CONFIG_SERIAL_LANTIQ is not set
# CONFIG_SERIAL_SCCNXP is not set
# CONFIG_SERIAL_SC16IS7XX is not set
# CONFIG_SERIAL_BCM63XX is not set
# CONFIG_SERIAL_ALTERA_JTAGUART is not set
# CONFIG_SERIAL_ALTERA_UART is not set
CONFIG_SERIAL_ARC=m
CONFIG_SERIAL_ARC_NR_PORTS=1
# CONFIG_SERIAL_RP2 is not set
# CONFIG_SERIAL_FSL_LPUART is not set
# CONFIG_SERIAL_FSL_LINFLEXUART is not set
# CONFIG_SERIAL_SPRD is not set
# end of Serial drivers

CONFIG_SERIAL_MCTRL_GPIO=y
CONFIG_SERIAL_NONSTANDARD=y
# CONFIG_ROCKETPORT is not set
CONFIG_CYCLADES=m
# CONFIG_CYZ_INTR is not set
# CONFIG_MOXA_INTELLIO is not set
# CONFIG_MOXA_SMARTIO is not set
CONFIG_SYNCLINK_GT=m
# CONFIG_ISI is not set
CONFIG_N_HDLC=m
CONFIG_N_GSM=m
CONFIG_NOZOMI=m
# CONFIG_NULL_TTY is not set
# CONFIG_TRACE_SINK is not set
CONFIG_HVC_DRIVER=y
CONFIG_HVC_IRQ=y
CONFIG_HVC_XEN=y
CONFIG_HVC_XEN_FRONTEND=y
# CONFIG_SERIAL_DEV_BUS is not set
CONFIG_PRINTER=m
# CONFIG_LP_CONSOLE is not set
CONFIG_PPDEV=m
CONFIG_VIRTIO_CONSOLE=m
CONFIG_IPMI_HANDLER=m
CONFIG_IPMI_DMI_DECODE=y
CONFIG_IPMI_PLAT_DATA=y
CONFIG_IPMI_PANIC_EVENT=y
CONFIG_IPMI_PANIC_STRING=y
CONFIG_IPMI_DEVICE_INTERFACE=m
CONFIG_IPMI_SI=m
CONFIG_IPMI_SSIF=m
CONFIG_IPMI_WATCHDOG=m
CONFIG_IPMI_POWEROFF=m
CONFIG_HW_RANDOM=y
CONFIG_HW_RANDOM_TIMERIOMEM=m
CONFIG_HW_RANDOM_INTEL=m
CONFIG_HW_RANDOM_AMD=m
# CONFIG_HW_RANDOM_BA431 is not set
CONFIG_HW_RANDOM_VIA=m
CONFIG_HW_RANDOM_VIRTIO=y
# CONFIG_HW_RANDOM_XIPHERA is not set
# CONFIG_APPLICOM is not set
# CONFIG_MWAVE is not set
CONFIG_DEVMEM=y
# CONFIG_DEVKMEM is not set
CONFIG_NVRAM=y
CONFIG_RAW_DRIVER=y
CONFIG_MAX_RAW_DEVS=8192
CONFIG_DEVPORT=y
CONFIG_HPET=y
CONFIG_HPET_MMAP=y
# CONFIG_HPET_MMAP_DEFAULT is not set
CONFIG_HANGCHECK_TIMER=m
CONFIG_UV_MMTIMER=m
CONFIG_TCG_TPM=y
CONFIG_HW_RANDOM_TPM=y
CONFIG_TCG_TIS_CORE=y
CONFIG_TCG_TIS=y
# CONFIG_TCG_TIS_SPI is not set
# CONFIG_TCG_TIS_I2C_CR50 is not set
CONFIG_TCG_TIS_I2C_ATMEL=m
CONFIG_TCG_TIS_I2C_INFINEON=m
CONFIG_TCG_TIS_I2C_NUVOTON=m
CONFIG_TCG_NSC=m
CONFIG_TCG_ATMEL=m
CONFIG_TCG_INFINEON=m
# CONFIG_TCG_XEN is not set
CONFIG_TCG_CRB=y
# CONFIG_TCG_VTPM_PROXY is not set
CONFIG_TCG_TIS_ST33ZP24=m
CONFIG_TCG_TIS_ST33ZP24_I2C=m
# CONFIG_TCG_TIS_ST33ZP24_SPI is not set
CONFIG_TELCLOCK=m
# CONFIG_XILLYBUS is not set
# end of Character devices

# CONFIG_RANDOM_TRUST_CPU is not set
# CONFIG_RANDOM_TRUST_BOOTLOADER is not set

#
# I2C support
#
CONFIG_I2C=y
CONFIG_ACPI_I2C_OPREGION=y
CONFIG_I2C_BOARDINFO=y
CONFIG_I2C_COMPAT=y
CONFIG_I2C_CHARDEV=m
CONFIG_I2C_MUX=m

#
# Multiplexer I2C Chip support
#
# CONFIG_I2C_MUX_GPIO is not set
# CONFIG_I2C_MUX_LTC4306 is not set
# CONFIG_I2C_MUX_PCA9541 is not set
# CONFIG_I2C_MUX_PCA954x is not set
# CONFIG_I2C_MUX_REG is not set
CONFIG_I2C_MUX_MLXCPLD=m
# end of Multiplexer I2C Chip support

CONFIG_I2C_HELPER_AUTO=y
CONFIG_I2C_SMBUS=y
CONFIG_I2C_ALGOBIT=y
CONFIG_I2C_ALGOPCA=m

#
# I2C Hardware Bus support
#

#
# PC SMBus host controller drivers
#
# CONFIG_I2C_ALI1535 is not set
# CONFIG_I2C_ALI1563 is not set
# CONFIG_I2C_ALI15X3 is not set
CONFIG_I2C_AMD756=m
CONFIG_I2C_AMD756_S4882=m
CONFIG_I2C_AMD8111=m
# CONFIG_I2C_AMD_MP2 is not set
CONFIG_I2C_I801=y
CONFIG_I2C_ISCH=m
CONFIG_I2C_ISMT=m
CONFIG_I2C_PIIX4=m
CONFIG_I2C_NFORCE2=m
CONFIG_I2C_NFORCE2_S4985=m
# CONFIG_I2C_NVIDIA_GPU is not set
# CONFIG_I2C_SIS5595 is not set
# CONFIG_I2C_SIS630 is not set
CONFIG_I2C_SIS96X=m
CONFIG_I2C_VIA=m
CONFIG_I2C_VIAPRO=m

#
# ACPI drivers
#
CONFIG_I2C_SCMI=m

#
# I2C system bus drivers (mostly embedded / system-on-chip)
#
# CONFIG_I2C_CBUS_GPIO is not set
CONFIG_I2C_DESIGNWARE_CORE=m
# CONFIG_I2C_DESIGNWARE_SLAVE is not set
CONFIG_I2C_DESIGNWARE_PLATFORM=m
CONFIG_I2C_DESIGNWARE_BAYTRAIL=y
# CONFIG_I2C_DESIGNWARE_PCI is not set
# CONFIG_I2C_EMEV2 is not set
# CONFIG_I2C_GPIO is not set
# CONFIG_I2C_OCORES is not set
CONFIG_I2C_PCA_PLATFORM=m
CONFIG_I2C_SIMTEC=m
# CONFIG_I2C_XILINX is not set

#
# External I2C/SMBus adapter drivers
#
# CONFIG_I2C_DIOLAN_U2C is not set
CONFIG_I2C_PARPORT=m
# CONFIG_I2C_ROBOTFUZZ_OSIF is not set
# CONFIG_I2C_TAOS_EVM is not set
# CONFIG_I2C_TINY_USB is not set

#
# Other I2C/SMBus bus drivers
#
CONFIG_I2C_MLXCPLD=m
# end of I2C Hardware Bus support

CONFIG_I2C_STUB=m
# CONFIG_I2C_SLAVE is not set
# CONFIG_I2C_DEBUG_CORE is not set
# CONFIG_I2C_DEBUG_ALGO is not set
# CONFIG_I2C_DEBUG_BUS is not set
# end of I2C support

# CONFIG_I3C is not set
CONFIG_SPI=y
# CONFIG_SPI_DEBUG is not set
CONFIG_SPI_MASTER=y
# CONFIG_SPI_MEM is not set

#
# SPI Master Controller Drivers
#
# CONFIG_SPI_ALTERA is not set
# CONFIG_SPI_AXI_SPI_ENGINE is not set
# CONFIG_SPI_BITBANG is not set
# CONFIG_SPI_BUTTERFLY is not set
# CONFIG_SPI_CADENCE is not set
# CONFIG_SPI_DESIGNWARE is not set
# CONFIG_SPI_NXP_FLEXSPI is not set
# CONFIG_SPI_GPIO is not set
# CONFIG_SPI_LM70_LLP is not set
# CONFIG_SPI_LANTIQ_SSC is not set
# CONFIG_SPI_OC_TINY is not set
# CONFIG_SPI_PXA2XX is not set
# CONFIG_SPI_ROCKCHIP is not set
# CONFIG_SPI_SC18IS602 is not set
# CONFIG_SPI_SIFIVE is not set
# CONFIG_SPI_MXIC is not set
# CONFIG_SPI_XCOMM is not set
# CONFIG_SPI_XILINX is not set
# CONFIG_SPI_ZYNQMP_GQSPI is not set
# CONFIG_SPI_AMD is not set

#
# SPI Multiplexer support
#
# CONFIG_SPI_MUX is not set

#
# SPI Protocol Masters
#
# CONFIG_SPI_SPIDEV is not set
# CONFIG_SPI_LOOPBACK_TEST is not set
# CONFIG_SPI_TLE62X0 is not set
# CONFIG_SPI_SLAVE is not set
CONFIG_SPI_DYNAMIC=y
# CONFIG_SPMI is not set
# CONFIG_HSI is not set
CONFIG_PPS=y
# CONFIG_PPS_DEBUG is not set

#
# PPS clients support
#
# CONFIG_PPS_CLIENT_KTIMER is not set
CONFIG_PPS_CLIENT_LDISC=m
CONFIG_PPS_CLIENT_PARPORT=m
CONFIG_PPS_CLIENT_GPIO=m

#
# PPS generators support
#

#
# PTP clock support
#
CONFIG_PTP_1588_CLOCK=y
# CONFIG_DP83640_PHY is not set
# CONFIG_PTP_1588_CLOCK_INES is not set
CONFIG_PTP_1588_CLOCK_KVM=m
# CONFIG_PTP_1588_CLOCK_IDT82P33 is not set
# CONFIG_PTP_1588_CLOCK_IDTCM is not set
# CONFIG_PTP_1588_CLOCK_VMW is not set
# CONFIG_PTP_1588_CLOCK_OCP is not set
# end of PTP clock support

CONFIG_PINCTRL=y
CONFIG_PINMUX=y
CONFIG_PINCONF=y
CONFIG_GENERIC_PINCONF=y
# CONFIG_DEBUG_PINCTRL is not set
CONFIG_PINCTRL_AMD=m
# CONFIG_PINCTRL_MCP23S08 is not set
# CONFIG_PINCTRL_SX150X is not set
CONFIG_PINCTRL_BAYTRAIL=y
# CONFIG_PINCTRL_CHERRYVIEW is not set
# CONFIG_PINCTRL_LYNXPOINT is not set
CONFIG_PINCTRL_INTEL=y
# CONFIG_PINCTRL_ALDERLAKE is not set
CONFIG_PINCTRL_BROXTON=m
CONFIG_PINCTRL_CANNONLAKE=m
CONFIG_PINCTRL_CEDARFORK=m
CONFIG_PINCTRL_DENVERTON=m
# CONFIG_PINCTRL_ELKHARTLAKE is not set
# CONFIG_PINCTRL_EMMITSBURG is not set
CONFIG_PINCTRL_GEMINILAKE=m
# CONFIG_PINCTRL_ICELAKE is not set
# CONFIG_PINCTRL_JASPERLAKE is not set
# CONFIG_PINCTRL_LAKEFIELD is not set
CONFIG_PINCTRL_LEWISBURG=m
CONFIG_PINCTRL_SUNRISEPOINT=m
# CONFIG_PINCTRL_TIGERLAKE is not set

#
# Renesas pinctrl drivers
#
# end of Renesas pinctrl drivers

CONFIG_GPIOLIB=y
CONFIG_GPIOLIB_FASTPATH_LIMIT=512
CONFIG_GPIO_ACPI=y
CONFIG_GPIOLIB_IRQCHIP=y
# CONFIG_DEBUG_GPIO is not set
CONFIG_GPIO_CDEV=y
CONFIG_GPIO_CDEV_V1=y
CONFIG_GPIO_GENERIC=m

#
# Memory mapped GPIO drivers
#
CONFIG_GPIO_AMDPT=m
# CONFIG_GPIO_DWAPB is not set
# CONFIG_GPIO_EXAR is not set
# CONFIG_GPIO_GENERIC_PLATFORM is not set
CONFIG_GPIO_ICH=m
# CONFIG_GPIO_MB86S7X is not set
# CONFIG_GPIO_VX855 is not set
# CONFIG_GPIO_AMD_FCH is not set
# end of Memory mapped GPIO drivers

#
# Port-mapped I/O GPIO drivers
#
# CONFIG_GPIO_F7188X is not set
# CONFIG_GPIO_IT87 is not set
# CONFIG_GPIO_SCH is not set
# CONFIG_GPIO_SCH311X is not set
# CONFIG_GPIO_WINBOND is not set
# CONFIG_GPIO_WS16C48 is not set
# end of Port-mapped I/O GPIO drivers

#
# I2C GPIO expanders
#
# CONFIG_GPIO_ADP5588 is not set
# CONFIG_GPIO_MAX7300 is not set
# CONFIG_GPIO_MAX732X is not set
# CONFIG_GPIO_PCA953X is not set
# CONFIG_GPIO_PCA9570 is not set
# CONFIG_GPIO_PCF857X is not set
# CONFIG_GPIO_TPIC2810 is not set
# end of I2C GPIO expanders

#
# MFD GPIO expanders
#
# end of MFD GPIO expanders

#
# PCI GPIO expanders
#
# CONFIG_GPIO_AMD8111 is not set
# CONFIG_GPIO_BT8XX is not set
# CONFIG_GPIO_ML_IOH is not set
# CONFIG_GPIO_PCI_IDIO_16 is not set
# CONFIG_GPIO_PCIE_IDIO_24 is not set
# CONFIG_GPIO_RDC321X is not set
# end of PCI GPIO expanders

#
# SPI GPIO expanders
#
# CONFIG_GPIO_MAX3191X is not set
# CONFIG_GPIO_MAX7301 is not set
# CONFIG_GPIO_MC33880 is not set
# CONFIG_GPIO_PISOSR is not set
# CONFIG_GPIO_XRA1403 is not set
# end of SPI GPIO expanders

#
# USB GPIO expanders
#
# end of USB GPIO expanders

#
# Virtual GPIO drivers
#
# CONFIG_GPIO_AGGREGATOR is not set
# CONFIG_GPIO_MOCKUP is not set
# end of Virtual GPIO drivers

# CONFIG_W1 is not set
CONFIG_POWER_RESET=y
# CONFIG_POWER_RESET_RESTART is not set
CONFIG_POWER_SUPPLY=y
# CONFIG_POWER_SUPPLY_DEBUG is not set
CONFIG_POWER_SUPPLY_HWMON=y
# CONFIG_PDA_POWER is not set
# CONFIG_TEST_POWER is not set
# CONFIG_CHARGER_ADP5061 is not set
# CONFIG_BATTERY_CW2015 is not set
# CONFIG_BATTERY_DS2780 is not set
# CONFIG_BATTERY_DS2781 is not set
# CONFIG_BATTERY_DS2782 is not set
# CONFIG_BATTERY_SBS is not set
# CONFIG_CHARGER_SBS is not set
# CONFIG_MANAGER_SBS is not set
# CONFIG_BATTERY_BQ27XXX is not set
# CONFIG_BATTERY_MAX17040 is not set
# CONFIG_BATTERY_MAX17042 is not set
# CONFIG_CHARGER_MAX8903 is not set
# CONFIG_CHARGER_LP8727 is not set
# CONFIG_CHARGER_GPIO is not set
# CONFIG_CHARGER_LT3651 is not set
# CONFIG_CHARGER_LTC4162L is not set
# CONFIG_CHARGER_BQ2415X is not set
# CONFIG_CHARGER_BQ24257 is not set
# CONFIG_CHARGER_BQ24735 is not set
# CONFIG_CHARGER_BQ2515X is not set
# CONFIG_CHARGER_BQ25890 is not set
# CONFIG_CHARGER_BQ25980 is not set
# CONFIG_CHARGER_BQ256XX is not set
CONFIG_CHARGER_SMB347=m
# CONFIG_BATTERY_GAUGE_LTC2941 is not set
# CONFIG_CHARGER_RT9455 is not set
# CONFIG_CHARGER_BD99954 is not set
CONFIG_HWMON=y
CONFIG_HWMON_VID=m
# CONFIG_HWMON_DEBUG_CHIP is not set

#
# Native drivers
#
CONFIG_SENSORS_ABITUGURU=m
CONFIG_SENSORS_ABITUGURU3=m
# CONFIG_SENSORS_AD7314 is not set
CONFIG_SENSORS_AD7414=m
CONFIG_SENSORS_AD7418=m
CONFIG_SENSORS_ADM1021=m
CONFIG_SENSORS_ADM1025=m
CONFIG_SENSORS_ADM1026=m
CONFIG_SENSORS_ADM1029=m
CONFIG_SENSORS_ADM1031=m
# CONFIG_SENSORS_ADM1177 is not set
CONFIG_SENSORS_ADM9240=m
CONFIG_SENSORS_ADT7X10=m
# CONFIG_SENSORS_ADT7310 is not set
CONFIG_SENSORS_ADT7410=m
CONFIG_SENSORS_ADT7411=m
CONFIG_SENSORS_ADT7462=m
CONFIG_SENSORS_ADT7470=m
CONFIG_SENSORS_ADT7475=m
# CONFIG_SENSORS_AHT10 is not set
# CONFIG_SENSORS_AS370 is not set
CONFIG_SENSORS_ASC7621=m
# CONFIG_SENSORS_AXI_FAN_CONTROL is not set
CONFIG_SENSORS_K8TEMP=m
CONFIG_SENSORS_K10TEMP=m
CONFIG_SENSORS_FAM15H_POWER=m
# CONFIG_SENSORS_AMD_ENERGY is not set
CONFIG_SENSORS_APPLESMC=m
CONFIG_SENSORS_ASB100=m
# CONFIG_SENSORS_ASPEED is not set
CONFIG_SENSORS_ATXP1=m
# CONFIG_SENSORS_CORSAIR_CPRO is not set
# CONFIG_SENSORS_CORSAIR_PSU is not set
# CONFIG_SENSORS_DRIVETEMP is not set
CONFIG_SENSORS_DS620=m
CONFIG_SENSORS_DS1621=m
CONFIG_SENSORS_DELL_SMM=m
CONFIG_SENSORS_I5K_AMB=m
CONFIG_SENSORS_F71805F=m
CONFIG_SENSORS_F71882FG=m
CONFIG_SENSORS_F75375S=m
CONFIG_SENSORS_FSCHMD=m
# CONFIG_SENSORS_FTSTEUTATES is not set
CONFIG_SENSORS_GL518SM=m
CONFIG_SENSORS_GL520SM=m
CONFIG_SENSORS_G760A=m
# CONFIG_SENSORS_G762 is not set
# CONFIG_SENSORS_HIH6130 is not set
CONFIG_SENSORS_IBMAEM=m
CONFIG_SENSORS_IBMPEX=m
CONFIG_SENSORS_I5500=m
CONFIG_SENSORS_CORETEMP=m
CONFIG_SENSORS_IT87=m
CONFIG_SENSORS_JC42=m
# CONFIG_SENSORS_POWR1220 is not set
CONFIG_SENSORS_LINEAGE=m
# CONFIG_SENSORS_LTC2945 is not set
# CONFIG_SENSORS_LTC2947_I2C is not set
# CONFIG_SENSORS_LTC2947_SPI is not set
# CONFIG_SENSORS_LTC2990 is not set
# CONFIG_SENSORS_LTC2992 is not set
CONFIG_SENSORS_LTC4151=m
CONFIG_SENSORS_LTC4215=m
# CONFIG_SENSORS_LTC4222 is not set
CONFIG_SENSORS_LTC4245=m
# CONFIG_SENSORS_LTC4260 is not set
CONFIG_SENSORS_LTC4261=m
# CONFIG_SENSORS_MAX1111 is not set
# CONFIG_SENSORS_MAX127 is not set
CONFIG_SENSORS_MAX16065=m
CONFIG_SENSORS_MAX1619=m
CONFIG_SENSORS_MAX1668=m
CONFIG_SENSORS_MAX197=m
# CONFIG_SENSORS_MAX31722 is not set
# CONFIG_SENSORS_MAX31730 is not set
# CONFIG_SENSORS_MAX6621 is not set
CONFIG_SENSORS_MAX6639=m
CONFIG_SENSORS_MAX6642=m
CONFIG_SENSORS_MAX6650=m
CONFIG_SENSORS_MAX6697=m
# CONFIG_SENSORS_MAX31790 is not set
CONFIG_SENSORS_MCP3021=m
# CONFIG_SENSORS_MLXREG_FAN is not set
# CONFIG_SENSORS_TC654 is not set
# CONFIG_SENSORS_TPS23861 is not set
# CONFIG_SENSORS_MR75203 is not set
# CONFIG_SENSORS_ADCXX is not set
CONFIG_SENSORS_LM63=m
# CONFIG_SENSORS_LM70 is not set
CONFIG_SENSORS_LM73=m
CONFIG_SENSORS_LM75=m
CONFIG_SENSORS_LM77=m
CONFIG_SENSORS_LM78=m
CONFIG_SENSORS_LM80=m
CONFIG_SENSORS_LM83=m
CONFIG_SENSORS_LM85=m
CONFIG_SENSORS_LM87=m
CONFIG_SENSORS_LM90=m
CONFIG_SENSORS_LM92=m
CONFIG_SENSORS_LM93=m
CONFIG_SENSORS_LM95234=m
CONFIG_SENSORS_LM95241=m
CONFIG_SENSORS_LM95245=m
CONFIG_SENSORS_PC87360=m
CONFIG_SENSORS_PC87427=m
CONFIG_SENSORS_NTC_THERMISTOR=m
# CONFIG_SENSORS_NCT6683 is not set
CONFIG_SENSORS_NCT6775=m
# CONFIG_SENSORS_NCT7802 is not set
# CONFIG_SENSORS_NCT7904 is not set
# CONFIG_SENSORS_NPCM7XX is not set
CONFIG_SENSORS_PCF8591=m
CONFIG_PMBUS=m
CONFIG_SENSORS_PMBUS=m
# CONFIG_SENSORS_ADM1266 is not set
CONFIG_SENSORS_ADM1275=m
# CONFIG_SENSORS_BEL_PFE is not set
# CONFIG_SENSORS_IBM_CFFPS is not set
# CONFIG_SENSORS_INSPUR_IPSPS is not set
# CONFIG_SENSORS_IR35221 is not set
# CONFIG_SENSORS_IR38064 is not set
# CONFIG_SENSORS_IRPS5401 is not set
# CONFIG_SENSORS_ISL68137 is not set
CONFIG_SENSORS_LM25066=m
CONFIG_SENSORS_LTC2978=m
# CONFIG_SENSORS_LTC3815 is not set
CONFIG_SENSORS_MAX16064=m
# CONFIG_SENSORS_MAX16601 is not set
# CONFIG_SENSORS_MAX20730 is not set
# CONFIG_SENSORS_MAX20751 is not set
# CONFIG_SENSORS_MAX31785 is not set
CONFIG_SENSORS_MAX34440=m
CONFIG_SENSORS_MAX8688=m
# CONFIG_SENSORS_MP2975 is not set
# CONFIG_SENSORS_PM6764TR is not set
# CONFIG_SENSORS_PXE1610 is not set
# CONFIG_SENSORS_Q54SJ108A2 is not set
# CONFIG_SENSORS_TPS40422 is not set
# CONFIG_SENSORS_TPS53679 is not set
CONFIG_SENSORS_UCD9000=m
CONFIG_SENSORS_UCD9200=m
# CONFIG_SENSORS_XDPE122 is not set
CONFIG_SENSORS_ZL6100=m
# CONFIG_SENSORS_SBTSI is not set
CONFIG_SENSORS_SHT15=m
CONFIG_SENSORS_SHT21=m
# CONFIG_SENSORS_SHT3x is not set
# CONFIG_SENSORS_SHTC1 is not set
CONFIG_SENSORS_SIS5595=m
CONFIG_SENSORS_DME1737=m
CONFIG_SENSORS_EMC1403=m
# CONFIG_SENSORS_EMC2103 is not set
CONFIG_SENSORS_EMC6W201=m
CONFIG_SENSORS_SMSC47M1=m
CONFIG_SENSORS_SMSC47M192=m
CONFIG_SENSORS_SMSC47B397=m
CONFIG_SENSORS_SCH56XX_COMMON=m
CONFIG_SENSORS_SCH5627=m
CONFIG_SENSORS_SCH5636=m
# CONFIG_SENSORS_STTS751 is not set
# CONFIG_SENSORS_SMM665 is not set
# CONFIG_SENSORS_ADC128D818 is not set
CONFIG_SENSORS_ADS7828=m
# CONFIG_SENSORS_ADS7871 is not set
CONFIG_SENSORS_AMC6821=m
CONFIG_SENSORS_INA209=m
CONFIG_SENSORS_INA2XX=m
# CONFIG_SENSORS_INA3221 is not set
# CONFIG_SENSORS_TC74 is not set
CONFIG_SENSORS_THMC50=m
CONFIG_SENSORS_TMP102=m
# CONFIG_SENSORS_TMP103 is not set
# CONFIG_SENSORS_TMP108 is not set
CONFIG_SENSORS_TMP401=m
CONFIG_SENSORS_TMP421=m
# CONFIG_SENSORS_TMP513 is not set
CONFIG_SENSORS_VIA_CPUTEMP=m
CONFIG_SENSORS_VIA686A=m
CONFIG_SENSORS_VT1211=m
CONFIG_SENSORS_VT8231=m
# CONFIG_SENSORS_W83773G is not set
CONFIG_SENSORS_W83781D=m
CONFIG_SENSORS_W83791D=m
CONFIG_SENSORS_W83792D=m
CONFIG_SENSORS_W83793=m
CONFIG_SENSORS_W83795=m
# CONFIG_SENSORS_W83795_FANCTRL is not set
CONFIG_SENSORS_W83L785TS=m
CONFIG_SENSORS_W83L786NG=m
CONFIG_SENSORS_W83627HF=m
CONFIG_SENSORS_W83627EHF=m
# CONFIG_SENSORS_XGENE is not set

#
# ACPI drivers
#
CONFIG_SENSORS_ACPI_POWER=m
CONFIG_SENSORS_ATK0110=m
CONFIG_THERMAL=y
# CONFIG_THERMAL_NETLINK is not set
# CONFIG_THERMAL_STATISTICS is not set
CONFIG_THERMAL_EMERGENCY_POWEROFF_DELAY_MS=0
CONFIG_THERMAL_HWMON=y
CONFIG_THERMAL_WRITABLE_TRIPS=y
CONFIG_THERMAL_DEFAULT_GOV_STEP_WISE=y
# CONFIG_THERMAL_DEFAULT_GOV_FAIR_SHARE is not set
# CONFIG_THERMAL_DEFAULT_GOV_USER_SPACE is not set
CONFIG_THERMAL_GOV_FAIR_SHARE=y
CONFIG_THERMAL_GOV_STEP_WISE=y
CONFIG_THERMAL_GOV_BANG_BANG=y
CONFIG_THERMAL_GOV_USER_SPACE=y
# CONFIG_THERMAL_EMULATION is not set

#
# Intel thermal drivers
#
CONFIG_INTEL_POWERCLAMP=m
CONFIG_X86_THERMAL_VECTOR=y
CONFIG_X86_PKG_TEMP_THERMAL=m
CONFIG_INTEL_SOC_DTS_IOSF_CORE=m
# CONFIG_INTEL_SOC_DTS_THERMAL is not set

#
# ACPI INT340X thermal drivers
#
CONFIG_INT340X_THERMAL=m
CONFIG_ACPI_THERMAL_REL=m
# CONFIG_INT3406_THERMAL is not set
CONFIG_PROC_THERMAL_MMIO_RAPL=m
# end of ACPI INT340X thermal drivers

CONFIG_INTEL_PCH_THERMAL=m
# end of Intel thermal drivers

CONFIG_WATCHDOG=y
CONFIG_WATCHDOG_CORE=y
# CONFIG_WATCHDOG_NOWAYOUT is not set
CONFIG_WATCHDOG_HANDLE_BOOT_ENABLED=y
CONFIG_WATCHDOG_OPEN_TIMEOUT=0
CONFIG_WATCHDOG_SYSFS=y

#
# Watchdog Pretimeout Governors
#
# CONFIG_WATCHDOG_PRETIMEOUT_GOV is not set

#
# Watchdog Device Drivers
#
CONFIG_SOFT_WATCHDOG=m
CONFIG_WDAT_WDT=m
# CONFIG_XILINX_WATCHDOG is not set
# CONFIG_ZIIRAVE_WATCHDOG is not set
# CONFIG_MLX_WDT is not set
# CONFIG_CADENCE_WATCHDOG is not set
# CONFIG_DW_WATCHDOG is not set
# CONFIG_MAX63XX_WATCHDOG is not set
# CONFIG_ACQUIRE_WDT is not set
# CONFIG_ADVANTECH_WDT is not set
CONFIG_ALIM1535_WDT=m
CONFIG_ALIM7101_WDT=m
# CONFIG_EBC_C384_WDT is not set
CONFIG_F71808E_WDT=m
CONFIG_SP5100_TCO=m
CONFIG_SBC_FITPC2_WATCHDOG=m
# CONFIG_EUROTECH_WDT is not set
CONFIG_IB700_WDT=m
CONFIG_IBMASR=m
# CONFIG_WAFER_WDT is not set
CONFIG_I6300ESB_WDT=y
CONFIG_IE6XX_WDT=m
CONFIG_ITCO_WDT=y
CONFIG_ITCO_VENDOR_SUPPORT=y
CONFIG_IT8712F_WDT=m
CONFIG_IT87_WDT=m
CONFIG_HP_WATCHDOG=m
CONFIG_HPWDT_NMI_DECODING=y
# CONFIG_SC1200_WDT is not set
# CONFIG_PC87413_WDT is not set
CONFIG_NV_TCO=m
# CONFIG_60XX_WDT is not set
# CONFIG_CPU5_WDT is not set
CONFIG_SMSC_SCH311X_WDT=m
# CONFIG_SMSC37B787_WDT is not set
# CONFIG_TQMX86_WDT is not set
CONFIG_VIA_WDT=m
CONFIG_W83627HF_WDT=m
CONFIG_W83877F_WDT=m
CONFIG_W83977F_WDT=m
CONFIG_MACHZ_WDT=m
# CONFIG_SBC_EPX_C3_WATCHDOG is not set
CONFIG_INTEL_MEI_WDT=m
# CONFIG_NI903X_WDT is not set
# CONFIG_NIC7018_WDT is not set
# CONFIG_MEN_A21_WDT is not set
CONFIG_XEN_WDT=m

#
# PCI-based Watchdog Cards
#
CONFIG_PCIPCWATCHDOG=m
CONFIG_WDTPCI=m

#
# USB-based Watchdog Cards
#
# CONFIG_USBPCWATCHDOG is not set
CONFIG_SSB_POSSIBLE=y
# CONFIG_SSB is not set
CONFIG_BCMA_POSSIBLE=y
CONFIG_BCMA=m
CONFIG_BCMA_HOST_PCI_POSSIBLE=y
CONFIG_BCMA_HOST_PCI=y
# CONFIG_BCMA_HOST_SOC is not set
CONFIG_BCMA_DRIVER_PCI=y
CONFIG_BCMA_DRIVER_GMAC_CMN=y
CONFIG_BCMA_DRIVER_GPIO=y
# CONFIG_BCMA_DEBUG is not set

#
# Multifunction device drivers
#
CONFIG_MFD_CORE=y
# CONFIG_MFD_AS3711 is not set
# CONFIG_PMIC_ADP5520 is not set
# CONFIG_MFD_AAT2870_CORE is not set
# CONFIG_MFD_BCM590XX is not set
# CONFIG_MFD_BD9571MWV is not set
# CONFIG_MFD_AXP20X_I2C is not set
# CONFIG_MFD_MADERA is not set
# CONFIG_PMIC_DA903X is not set
# CONFIG_MFD_DA9052_SPI is not set
# CONFIG_MFD_DA9052_I2C is not set
# CONFIG_MFD_DA9055 is not set
# CONFIG_MFD_DA9062 is not set
# CONFIG_MFD_DA9063 is not set
# CONFIG_MFD_DA9150 is not set
# CONFIG_MFD_DLN2 is not set
# CONFIG_MFD_MC13XXX_SPI is not set
# CONFIG_MFD_MC13XXX_I2C is not set
# CONFIG_MFD_MP2629 is not set
# CONFIG_HTC_PASIC3 is not set
# CONFIG_HTC_I2CPLD is not set
# CONFIG_MFD_INTEL_QUARK_I2C_GPIO is not set
CONFIG_LPC_ICH=y
CONFIG_LPC_SCH=m
# CONFIG_INTEL_SOC_PMIC_CHTDC_TI is not set
CONFIG_MFD_INTEL_LPSS=y
CONFIG_MFD_INTEL_LPSS_ACPI=y
CONFIG_MFD_INTEL_LPSS_PCI=y
# CONFIG_MFD_INTEL_PMC_BXT is not set
# CONFIG_MFD_INTEL_PMT is not set
# CONFIG_MFD_IQS62X is not set
# CONFIG_MFD_JANZ_CMODIO is not set
# CONFIG_MFD_KEMPLD is not set
# CONFIG_MFD_88PM800 is not set
# CONFIG_MFD_88PM805 is not set
# CONFIG_MFD_88PM860X is not set
# CONFIG_MFD_MAX14577 is not set
# CONFIG_MFD_MAX77693 is not set
# CONFIG_MFD_MAX77843 is not set
# CONFIG_MFD_MAX8907 is not set
# CONFIG_MFD_MAX8925 is not set
# CONFIG_MFD_MAX8997 is not set
# CONFIG_MFD_MAX8998 is not set
# CONFIG_MFD_MT6360 is not set
# CONFIG_MFD_MT6397 is not set
# CONFIG_MFD_MENF21BMC is not set
# CONFIG_EZX_PCAP is not set
# CONFIG_MFD_VIPERBOARD is not set
# CONFIG_MFD_RETU is not set
# CONFIG_MFD_PCF50633 is not set
# CONFIG_MFD_RDC321X is not set
# CONFIG_MFD_RT5033 is not set
# CONFIG_MFD_RC5T583 is not set
# CONFIG_MFD_SEC_CORE is not set
# CONFIG_MFD_SI476X_CORE is not set
CONFIG_MFD_SM501=m
CONFIG_MFD_SM501_GPIO=y
# CONFIG_MFD_SKY81452 is not set
# CONFIG_ABX500_CORE is not set
# CONFIG_MFD_SYSCON is not set
# CONFIG_MFD_TI_AM335X_TSCADC is not set
# CONFIG_MFD_LP3943 is not set
# CONFIG_MFD_LP8788 is not set
# CONFIG_MFD_TI_LMU is not set
# CONFIG_MFD_PALMAS is not set
# CONFIG_TPS6105X is not set
# CONFIG_TPS65010 is not set
# CONFIG_TPS6507X is not set
# CONFIG_MFD_TPS65086 is not set
# CONFIG_MFD_TPS65090 is not set
# CONFIG_MFD_TI_LP873X is not set
# CONFIG_MFD_TPS6586X is not set
# CONFIG_MFD_TPS65910 is not set
# CONFIG_MFD_TPS65912_I2C is not set
# CONFIG_MFD_TPS65912_SPI is not set
# CONFIG_MFD_TPS80031 is not set
# CONFIG_TWL4030_CORE is not set
# CONFIG_TWL6040_CORE is not set
# CONFIG_MFD_WL1273_CORE is not set
# CONFIG_MFD_LM3533 is not set
# CONFIG_MFD_TQMX86 is not set
CONFIG_MFD_VX855=m
# CONFIG_MFD_ARIZONA_I2C is not set
# CONFIG_MFD_ARIZONA_SPI is not set
# CONFIG_MFD_WM8400 is not set
# CONFIG_MFD_WM831X_I2C is not set
# CONFIG_MFD_WM831X_SPI is not set
# CONFIG_MFD_WM8350_I2C is not set
# CONFIG_MFD_WM8994 is not set
# CONFIG_MFD_INTEL_M10_BMC is not set
# end of Multifunction device drivers

# CONFIG_REGULATOR is not set
CONFIG_RC_CORE=m
CONFIG_RC_MAP=m
CONFIG_LIRC=y
CONFIG_RC_DECODERS=y
CONFIG_IR_NEC_DECODER=m
CONFIG_IR_RC5_DECODER=m
CONFIG_IR_RC6_DECODER=m
CONFIG_IR_JVC_DECODER=m
CONFIG_IR_SONY_DECODER=m
CONFIG_IR_SANYO_DECODER=m
# CONFIG_IR_SHARP_DECODER is not set
CONFIG_IR_MCE_KBD_DECODER=m
# CONFIG_IR_XMP_DECODER is not set
CONFIG_IR_IMON_DECODER=m
# CONFIG_IR_RCMM_DECODER is not set
CONFIG_RC_DEVICES=y
# CONFIG_RC_ATI_REMOTE is not set
CONFIG_IR_ENE=m
# CONFIG_IR_IMON is not set
# CONFIG_IR_IMON_RAW is not set
# CONFIG_IR_MCEUSB is not set
CONFIG_IR_ITE_CIR=m
CONFIG_IR_FINTEK=m
CONFIG_IR_NUVOTON=m
# CONFIG_IR_REDRAT3 is not set
# CONFIG_IR_STREAMZAP is not set
CONFIG_IR_WINBOND_CIR=m
# CONFIG_IR_IGORPLUGUSB is not set
# CONFIG_IR_IGUANA is not set
# CONFIG_IR_TTUSBIR is not set
# CONFIG_RC_LOOPBACK is not set
CONFIG_IR_SERIAL=m
CONFIG_IR_SERIAL_TRANSMITTER=y
CONFIG_IR_SIR=m
# CONFIG_RC_XBOX_DVD is not set
# CONFIG_IR_TOY is not set
CONFIG_MEDIA_CEC_SUPPORT=y
# CONFIG_CEC_CH7322 is not set
# CONFIG_CEC_SECO is not set
# CONFIG_USB_PULSE8_CEC is not set
# CONFIG_USB_RAINSHADOW_CEC is not set
CONFIG_MEDIA_SUPPORT=m
# CONFIG_MEDIA_SUPPORT_FILTER is not set
# CONFIG_MEDIA_SUBDRV_AUTOSELECT is not set

#
# Media device types
#
CONFIG_MEDIA_CAMERA_SUPPORT=y
CONFIG_MEDIA_ANALOG_TV_SUPPORT=y
CONFIG_MEDIA_DIGITAL_TV_SUPPORT=y
CONFIG_MEDIA_RADIO_SUPPORT=y
CONFIG_MEDIA_SDR_SUPPORT=y
CONFIG_MEDIA_PLATFORM_SUPPORT=y
CONFIG_MEDIA_TEST_SUPPORT=y
# end of Media device types

#
# Media core support
#
CONFIG_VIDEO_DEV=m
CONFIG_MEDIA_CONTROLLER=y
CONFIG_DVB_CORE=m
# end of Media core support

#
# Video4Linux options
#
CONFIG_VIDEO_V4L2=m
CONFIG_VIDEO_V4L2_I2C=y
CONFIG_VIDEO_V4L2_SUBDEV_API=y
# CONFIG_VIDEO_ADV_DEBUG is not set
# CONFIG_VIDEO_FIXED_MINOR_RANGES is not set
# end of Video4Linux options

#
# Media controller options
#
# CONFIG_MEDIA_CONTROLLER_DVB is not set
# end of Media controller options

#
# Digital TV options
#
# CONFIG_DVB_MMAP is not set
CONFIG_DVB_NET=y
CONFIG_DVB_MAX_ADAPTERS=16
CONFIG_DVB_DYNAMIC_MINORS=y
# CONFIG_DVB_DEMUX_SECTION_LOSS_LOG is not set
# CONFIG_DVB_ULE_DEBUG is not set
# end of Digital TV options

#
# Media drivers
#
# CONFIG_MEDIA_USB_SUPPORT is not set
# CONFIG_MEDIA_PCI_SUPPORT is not set
CONFIG_RADIO_ADAPTERS=y
# CONFIG_RADIO_SI470X is not set
# CONFIG_RADIO_SI4713 is not set
# CONFIG_USB_MR800 is not set
# CONFIG_USB_DSBR is not set
# CONFIG_RADIO_MAXIRADIO is not set
# CONFIG_RADIO_SHARK is not set
# CONFIG_RADIO_SHARK2 is not set
# CONFIG_USB_KEENE is not set
# CONFIG_USB_RAREMONO is not set
# CONFIG_USB_MA901 is not set
# CONFIG_RADIO_TEA5764 is not set
# CONFIG_RADIO_SAA7706H is not set
# CONFIG_RADIO_TEF6862 is not set
# CONFIG_RADIO_WL1273 is not set
CONFIG_VIDEOBUF2_CORE=m
CONFIG_VIDEOBUF2_V4L2=m
CONFIG_VIDEOBUF2_MEMOPS=m
CONFIG_VIDEOBUF2_VMALLOC=m
# CONFIG_V4L_PLATFORM_DRIVERS is not set
# CONFIG_V4L_MEM2MEM_DRIVERS is not set
# CONFIG_DVB_PLATFORM_DRIVERS is not set
# CONFIG_SDR_PLATFORM_DRIVERS is not set

#
# MMC/SDIO DVB adapters
#
# CONFIG_SMS_SDIO_DRV is not set
# CONFIG_V4L_TEST_DRIVERS is not set
# CONFIG_DVB_TEST_DRIVERS is not set

#
# FireWire (IEEE 1394) Adapters
#
# CONFIG_DVB_FIREDTV is not set
# end of Media drivers

#
# Media ancillary drivers
#
CONFIG_MEDIA_ATTACH=y
CONFIG_VIDEO_IR_I2C=m

#
# Audio decoders, processors and mixers
#
# CONFIG_VIDEO_TVAUDIO is not set
# CONFIG_VIDEO_TDA7432 is not set # CONFIG_VIDEO_TDA9840 is not set # CONFIG_VIDEO_TEA6415C is not set # CONFIG_VIDEO_TEA6420 is not set # CONFIG_VIDEO_MSP3400 is not set # CONFIG_VIDEO_CS3308 is not set # CONFIG_VIDEO_CS5345 is not set # CONFIG_VIDEO_CS53L32A is not set # CONFIG_VIDEO_TLV320AIC23B is not set # CONFIG_VIDEO_UDA1342 is not set # CONFIG_VIDEO_WM8775 is not set # CONFIG_VIDEO_WM8739 is not set # CONFIG_VIDEO_VP27SMPX is not set # CONFIG_VIDEO_SONY_BTF_MPX is not set # end of Audio decoders, processors and mixers # # RDS decoders # # CONFIG_VIDEO_SAA6588 is not set # end of RDS decoders # # Video decoders # # CONFIG_VIDEO_ADV7180 is not set # CONFIG_VIDEO_ADV7183 is not set # CONFIG_VIDEO_ADV7604 is not set # CONFIG_VIDEO_ADV7842 is not set # CONFIG_VIDEO_BT819 is not set # CONFIG_VIDEO_BT856 is not set # CONFIG_VIDEO_BT866 is not set # CONFIG_VIDEO_KS0127 is not set # CONFIG_VIDEO_ML86V7667 is not set # CONFIG_VIDEO_SAA7110 is not set # CONFIG_VIDEO_SAA711X is not set # CONFIG_VIDEO_TC358743 is not set # CONFIG_VIDEO_TVP514X is not set # CONFIG_VIDEO_TVP5150 is not set # CONFIG_VIDEO_TVP7002 is not set # CONFIG_VIDEO_TW2804 is not set # CONFIG_VIDEO_TW9903 is not set # CONFIG_VIDEO_TW9906 is not set # CONFIG_VIDEO_TW9910 is not set # CONFIG_VIDEO_VPX3220 is not set # # Video and audio decoders # # CONFIG_VIDEO_SAA717X is not set # CONFIG_VIDEO_CX25840 is not set # end of Video decoders # # Video encoders # # CONFIG_VIDEO_SAA7127 is not set # CONFIG_VIDEO_SAA7185 is not set # CONFIG_VIDEO_ADV7170 is not set # CONFIG_VIDEO_ADV7175 is not set # CONFIG_VIDEO_ADV7343 is not set # CONFIG_VIDEO_ADV7393 is not set # CONFIG_VIDEO_ADV7511 is not set # CONFIG_VIDEO_AD9389B is not set # CONFIG_VIDEO_AK881X is not set # CONFIG_VIDEO_THS8200 is not set # end of Video encoders # # Video improvement chips # # CONFIG_VIDEO_UPD64031A is not set # CONFIG_VIDEO_UPD64083 is not set # end of Video improvement chips # # Audio/Video compression chips # # 
CONFIG_VIDEO_SAA6752HS is not set # end of Audio/Video compression chips # # SDR tuner chips # # CONFIG_SDR_MAX2175 is not set # end of SDR tuner chips # # Miscellaneous helper chips # # CONFIG_VIDEO_THS7303 is not set # CONFIG_VIDEO_M52790 is not set # CONFIG_VIDEO_I2C is not set # CONFIG_VIDEO_ST_MIPID02 is not set # end of Miscellaneous helper chips # # Camera sensor devices # # CONFIG_VIDEO_HI556 is not set # CONFIG_VIDEO_IMX214 is not set # CONFIG_VIDEO_IMX219 is not set # CONFIG_VIDEO_IMX258 is not set # CONFIG_VIDEO_IMX274 is not set # CONFIG_VIDEO_IMX290 is not set # CONFIG_VIDEO_IMX319 is not set # CONFIG_VIDEO_IMX355 is not set # CONFIG_VIDEO_OV02A10 is not set # CONFIG_VIDEO_OV2640 is not set # CONFIG_VIDEO_OV2659 is not set # CONFIG_VIDEO_OV2680 is not set # CONFIG_VIDEO_OV2685 is not set # CONFIG_VIDEO_OV2740 is not set # CONFIG_VIDEO_OV5647 is not set # CONFIG_VIDEO_OV5648 is not set # CONFIG_VIDEO_OV6650 is not set # CONFIG_VIDEO_OV5670 is not set # CONFIG_VIDEO_OV5675 is not set # CONFIG_VIDEO_OV5695 is not set # CONFIG_VIDEO_OV7251 is not set # CONFIG_VIDEO_OV772X is not set # CONFIG_VIDEO_OV7640 is not set # CONFIG_VIDEO_OV7670 is not set # CONFIG_VIDEO_OV7740 is not set # CONFIG_VIDEO_OV8856 is not set # CONFIG_VIDEO_OV8865 is not set # CONFIG_VIDEO_OV9640 is not set # CONFIG_VIDEO_OV9650 is not set # CONFIG_VIDEO_OV9734 is not set # CONFIG_VIDEO_OV13858 is not set # CONFIG_VIDEO_VS6624 is not set # CONFIG_VIDEO_MT9M001 is not set # CONFIG_VIDEO_MT9M032 is not set # CONFIG_VIDEO_MT9M111 is not set # CONFIG_VIDEO_MT9P031 is not set # CONFIG_VIDEO_MT9T001 is not set # CONFIG_VIDEO_MT9T112 is not set # CONFIG_VIDEO_MT9V011 is not set # CONFIG_VIDEO_MT9V032 is not set # CONFIG_VIDEO_MT9V111 is not set # CONFIG_VIDEO_SR030PC30 is not set # CONFIG_VIDEO_NOON010PC30 is not set # CONFIG_VIDEO_M5MOLS is not set # CONFIG_VIDEO_RDACM20 is not set # CONFIG_VIDEO_RDACM21 is not set # CONFIG_VIDEO_RJ54N1 is not set # CONFIG_VIDEO_S5K6AA is not set # 
CONFIG_VIDEO_S5K6A3 is not set # CONFIG_VIDEO_S5K4ECGX is not set # CONFIG_VIDEO_S5K5BAF is not set # CONFIG_VIDEO_CCS is not set # CONFIG_VIDEO_ET8EK8 is not set # CONFIG_VIDEO_S5C73M3 is not set # end of Camera sensor devices # # Lens drivers # # CONFIG_VIDEO_AD5820 is not set # CONFIG_VIDEO_AK7375 is not set # CONFIG_VIDEO_DW9714 is not set # CONFIG_VIDEO_DW9768 is not set # CONFIG_VIDEO_DW9807_VCM is not set # end of Lens drivers # # Flash devices # # CONFIG_VIDEO_ADP1653 is not set # CONFIG_VIDEO_LM3560 is not set # CONFIG_VIDEO_LM3646 is not set # end of Flash devices # # SPI helper chips # # CONFIG_VIDEO_GS1662 is not set # end of SPI helper chips # # Media SPI Adapters # CONFIG_CXD2880_SPI_DRV=m # end of Media SPI Adapters CONFIG_MEDIA_TUNER=m # # Customize TV tuners # CONFIG_MEDIA_TUNER_SIMPLE=m CONFIG_MEDIA_TUNER_TDA18250=m CONFIG_MEDIA_TUNER_TDA8290=m CONFIG_MEDIA_TUNER_TDA827X=m CONFIG_MEDIA_TUNER_TDA18271=m CONFIG_MEDIA_TUNER_TDA9887=m CONFIG_MEDIA_TUNER_TEA5761=m CONFIG_MEDIA_TUNER_TEA5767=m CONFIG_MEDIA_TUNER_MSI001=m CONFIG_MEDIA_TUNER_MT20XX=m CONFIG_MEDIA_TUNER_MT2060=m CONFIG_MEDIA_TUNER_MT2063=m CONFIG_MEDIA_TUNER_MT2266=m CONFIG_MEDIA_TUNER_MT2131=m CONFIG_MEDIA_TUNER_QT1010=m CONFIG_MEDIA_TUNER_XC2028=m CONFIG_MEDIA_TUNER_XC5000=m CONFIG_MEDIA_TUNER_XC4000=m CONFIG_MEDIA_TUNER_MXL5005S=m CONFIG_MEDIA_TUNER_MXL5007T=m CONFIG_MEDIA_TUNER_MC44S803=m CONFIG_MEDIA_TUNER_MAX2165=m CONFIG_MEDIA_TUNER_TDA18218=m CONFIG_MEDIA_TUNER_FC0011=m CONFIG_MEDIA_TUNER_FC0012=m CONFIG_MEDIA_TUNER_FC0013=m CONFIG_MEDIA_TUNER_TDA18212=m CONFIG_MEDIA_TUNER_E4000=m CONFIG_MEDIA_TUNER_FC2580=m CONFIG_MEDIA_TUNER_M88RS6000T=m CONFIG_MEDIA_TUNER_TUA9001=m CONFIG_MEDIA_TUNER_SI2157=m CONFIG_MEDIA_TUNER_IT913X=m CONFIG_MEDIA_TUNER_R820T=m CONFIG_MEDIA_TUNER_MXL301RF=m CONFIG_MEDIA_TUNER_QM1D1C0042=m CONFIG_MEDIA_TUNER_QM1D1B0004=m # end of Customize TV tuners # # Customise DVB Frontends # # # Multistandard (satellite) frontends # CONFIG_DVB_STB0899=m CONFIG_DVB_STB6100=m 
CONFIG_DVB_STV090x=m CONFIG_DVB_STV0910=m CONFIG_DVB_STV6110x=m CONFIG_DVB_STV6111=m CONFIG_DVB_MXL5XX=m CONFIG_DVB_M88DS3103=m # # Multistandard (cable + terrestrial) frontends # CONFIG_DVB_DRXK=m CONFIG_DVB_TDA18271C2DD=m CONFIG_DVB_SI2165=m CONFIG_DVB_MN88472=m CONFIG_DVB_MN88473=m # # DVB-S (satellite) frontends # CONFIG_DVB_CX24110=m CONFIG_DVB_CX24123=m CONFIG_DVB_MT312=m CONFIG_DVB_ZL10036=m CONFIG_DVB_ZL10039=m CONFIG_DVB_S5H1420=m CONFIG_DVB_STV0288=m CONFIG_DVB_STB6000=m CONFIG_DVB_STV0299=m CONFIG_DVB_STV6110=m CONFIG_DVB_STV0900=m CONFIG_DVB_TDA8083=m CONFIG_DVB_TDA10086=m CONFIG_DVB_TDA8261=m CONFIG_DVB_VES1X93=m CONFIG_DVB_TUNER_ITD1000=m CONFIG_DVB_TUNER_CX24113=m CONFIG_DVB_TDA826X=m CONFIG_DVB_TUA6100=m CONFIG_DVB_CX24116=m CONFIG_DVB_CX24117=m CONFIG_DVB_CX24120=m CONFIG_DVB_SI21XX=m CONFIG_DVB_TS2020=m CONFIG_DVB_DS3000=m CONFIG_DVB_MB86A16=m CONFIG_DVB_TDA10071=m # # DVB-T (terrestrial) frontends # CONFIG_DVB_SP8870=m CONFIG_DVB_SP887X=m CONFIG_DVB_CX22700=m CONFIG_DVB_CX22702=m CONFIG_DVB_S5H1432=m CONFIG_DVB_DRXD=m CONFIG_DVB_L64781=m CONFIG_DVB_TDA1004X=m CONFIG_DVB_NXT6000=m CONFIG_DVB_MT352=m CONFIG_DVB_ZL10353=m CONFIG_DVB_DIB3000MB=m CONFIG_DVB_DIB3000MC=m CONFIG_DVB_DIB7000M=m CONFIG_DVB_DIB7000P=m CONFIG_DVB_DIB9000=m CONFIG_DVB_TDA10048=m CONFIG_DVB_AF9013=m CONFIG_DVB_EC100=m CONFIG_DVB_STV0367=m CONFIG_DVB_CXD2820R=m CONFIG_DVB_CXD2841ER=m CONFIG_DVB_RTL2830=m CONFIG_DVB_RTL2832=m CONFIG_DVB_RTL2832_SDR=m CONFIG_DVB_SI2168=m CONFIG_DVB_ZD1301_DEMOD=m CONFIG_DVB_CXD2880=m # # DVB-C (cable) frontends # CONFIG_DVB_VES1820=m CONFIG_DVB_TDA10021=m CONFIG_DVB_TDA10023=m CONFIG_DVB_STV0297=m # # ATSC (North American/Korean Terrestrial/Cable DTV) frontends # CONFIG_DVB_NXT200X=m CONFIG_DVB_OR51211=m CONFIG_DVB_OR51132=m CONFIG_DVB_BCM3510=m CONFIG_DVB_LGDT330X=m CONFIG_DVB_LGDT3305=m CONFIG_DVB_LGDT3306A=m CONFIG_DVB_LG2160=m CONFIG_DVB_S5H1409=m CONFIG_DVB_AU8522=m CONFIG_DVB_AU8522_DTV=m CONFIG_DVB_AU8522_V4L=m CONFIG_DVB_S5H1411=m 
CONFIG_DVB_MXL692=m # # ISDB-T (terrestrial) frontends # CONFIG_DVB_S921=m CONFIG_DVB_DIB8000=m CONFIG_DVB_MB86A20S=m # # ISDB-S (satellite) & ISDB-T (terrestrial) frontends # CONFIG_DVB_TC90522=m CONFIG_DVB_MN88443X=m # # Digital terrestrial only tuners/PLL # CONFIG_DVB_PLL=m CONFIG_DVB_TUNER_DIB0070=m CONFIG_DVB_TUNER_DIB0090=m # # SEC control devices for DVB-S # CONFIG_DVB_DRX39XYJ=m CONFIG_DVB_LNBH25=m CONFIG_DVB_LNBH29=m CONFIG_DVB_LNBP21=m CONFIG_DVB_LNBP22=m CONFIG_DVB_ISL6405=m CONFIG_DVB_ISL6421=m CONFIG_DVB_ISL6423=m CONFIG_DVB_A8293=m CONFIG_DVB_LGS8GL5=m CONFIG_DVB_LGS8GXX=m CONFIG_DVB_ATBM8830=m CONFIG_DVB_TDA665x=m CONFIG_DVB_IX2505V=m CONFIG_DVB_M88RS2000=m CONFIG_DVB_AF9033=m CONFIG_DVB_HORUS3A=m CONFIG_DVB_ASCOT2E=m CONFIG_DVB_HELENE=m # # Common Interface (EN50221) controller drivers # CONFIG_DVB_CXD2099=m CONFIG_DVB_SP2=m # end of Customise DVB Frontends # # Tools to develop new frontends # # CONFIG_DVB_DUMMY_FE is not set # end of Media ancillary drivers # # Graphics support # # CONFIG_AGP is not set CONFIG_INTEL_GTT=m CONFIG_VGA_ARB=y CONFIG_VGA_ARB_MAX_GPUS=64 CONFIG_VGA_SWITCHEROO=y CONFIG_DRM=m CONFIG_DRM_MIPI_DSI=y CONFIG_DRM_DP_AUX_CHARDEV=y # CONFIG_DRM_DEBUG_SELFTEST is not set CONFIG_DRM_KMS_HELPER=m CONFIG_DRM_KMS_FB_HELPER=y CONFIG_DRM_FBDEV_EMULATION=y CONFIG_DRM_FBDEV_OVERALLOC=100 CONFIG_DRM_LOAD_EDID_FIRMWARE=y # CONFIG_DRM_DP_CEC is not set CONFIG_DRM_TTM=m CONFIG_DRM_VRAM_HELPER=m CONFIG_DRM_TTM_HELPER=m CONFIG_DRM_GEM_SHMEM_HELPER=y # # I2C encoder or helper chips # CONFIG_DRM_I2C_CH7006=m CONFIG_DRM_I2C_SIL164=m # CONFIG_DRM_I2C_NXP_TDA998X is not set # CONFIG_DRM_I2C_NXP_TDA9950 is not set # end of I2C encoder or helper chips # # ARM devices # # end of ARM devices # CONFIG_DRM_RADEON is not set # CONFIG_DRM_AMDGPU is not set # CONFIG_DRM_NOUVEAU is not set CONFIG_DRM_I915=m CONFIG_DRM_I915_FORCE_PROBE="" CONFIG_DRM_I915_CAPTURE_ERROR=y CONFIG_DRM_I915_COMPRESS_ERROR=y CONFIG_DRM_I915_USERPTR=y CONFIG_DRM_I915_GVT=y 
CONFIG_DRM_I915_GVT_KVMGT=m CONFIG_DRM_I915_FENCE_TIMEOUT=10000 CONFIG_DRM_I915_USERFAULT_AUTOSUSPEND=250 CONFIG_DRM_I915_HEARTBEAT_INTERVAL=2500 CONFIG_DRM_I915_PREEMPT_TIMEOUT=640 CONFIG_DRM_I915_MAX_REQUEST_BUSYWAIT=8000 CONFIG_DRM_I915_STOP_TIMEOUT=100 CONFIG_DRM_I915_TIMESLICE_DURATION=1 # CONFIG_DRM_VGEM is not set # CONFIG_DRM_VKMS is not set CONFIG_DRM_VMWGFX=m CONFIG_DRM_VMWGFX_FBCON=y CONFIG_DRM_GMA500=m CONFIG_DRM_GMA600=y # CONFIG_DRM_UDL is not set CONFIG_DRM_AST=m CONFIG_DRM_MGAG200=m CONFIG_DRM_QXL=m CONFIG_DRM_BOCHS=m CONFIG_DRM_VIRTIO_GPU=m CONFIG_DRM_PANEL=y # # Display Panels # # CONFIG_DRM_PANEL_RASPBERRYPI_TOUCHSCREEN is not set # end of Display Panels CONFIG_DRM_BRIDGE=y CONFIG_DRM_PANEL_BRIDGE=y # # Display Interface Bridges # # CONFIG_DRM_ANALOGIX_ANX78XX is not set # end of Display Interface Bridges # CONFIG_DRM_ETNAVIV is not set CONFIG_DRM_CIRRUS_QEMU=m # CONFIG_DRM_GM12U320 is not set # CONFIG_TINYDRM_HX8357D is not set # CONFIG_TINYDRM_ILI9225 is not set # CONFIG_TINYDRM_ILI9341 is not set # CONFIG_TINYDRM_ILI9486 is not set # CONFIG_TINYDRM_MI0283QT is not set # CONFIG_TINYDRM_REPAPER is not set # CONFIG_TINYDRM_ST7586 is not set # CONFIG_TINYDRM_ST7735R is not set # CONFIG_DRM_XEN is not set # CONFIG_DRM_VBOXVIDEO is not set # CONFIG_DRM_LEGACY is not set CONFIG_DRM_PANEL_ORIENTATION_QUIRKS=y # # Frame buffer Devices # CONFIG_FB_CMDLINE=y CONFIG_FB_NOTIFY=y CONFIG_FB=y # CONFIG_FIRMWARE_EDID is not set CONFIG_FB_BOOT_VESA_SUPPORT=y CONFIG_FB_CFB_FILLRECT=y CONFIG_FB_CFB_COPYAREA=y CONFIG_FB_CFB_IMAGEBLIT=y CONFIG_FB_SYS_FILLRECT=m CONFIG_FB_SYS_COPYAREA=m CONFIG_FB_SYS_IMAGEBLIT=m # CONFIG_FB_FOREIGN_ENDIAN is not set CONFIG_FB_SYS_FOPS=m CONFIG_FB_DEFERRED_IO=y # CONFIG_FB_MODE_HELPERS is not set CONFIG_FB_TILEBLITTING=y # # Frame buffer hardware drivers # # CONFIG_FB_CIRRUS is not set # CONFIG_FB_PM2 is not set # CONFIG_FB_CYBER2000 is not set # CONFIG_FB_ARC is not set # CONFIG_FB_ASILIANT is not set # CONFIG_FB_IMSTT is not set # 
CONFIG_FB_VGA16 is not set # CONFIG_FB_UVESA is not set CONFIG_FB_VESA=y CONFIG_FB_EFI=y # CONFIG_FB_N411 is not set # CONFIG_FB_HGA is not set # CONFIG_FB_OPENCORES is not set # CONFIG_FB_S1D13XXX is not set # CONFIG_FB_NVIDIA is not set # CONFIG_FB_RIVA is not set # CONFIG_FB_I740 is not set # CONFIG_FB_LE80578 is not set # CONFIG_FB_MATROX is not set # CONFIG_FB_RADEON is not set # CONFIG_FB_ATY128 is not set # CONFIG_FB_ATY is not set # CONFIG_FB_S3 is not set # CONFIG_FB_SAVAGE is not set # CONFIG_FB_SIS is not set # CONFIG_FB_VIA is not set # CONFIG_FB_NEOMAGIC is not set # CONFIG_FB_KYRO is not set # CONFIG_FB_3DFX is not set # CONFIG_FB_VOODOO1 is not set # CONFIG_FB_VT8623 is not set # CONFIG_FB_TRIDENT is not set # CONFIG_FB_ARK is not set # CONFIG_FB_PM3 is not set # CONFIG_FB_CARMINE is not set # CONFIG_FB_SM501 is not set # CONFIG_FB_SMSCUFX is not set # CONFIG_FB_UDL is not set # CONFIG_FB_IBM_GXT4500 is not set # CONFIG_FB_VIRTUAL is not set # CONFIG_XEN_FBDEV_FRONTEND is not set # CONFIG_FB_METRONOME is not set # CONFIG_FB_MB862XX is not set CONFIG_FB_HYPERV=m # CONFIG_FB_SIMPLE is not set # CONFIG_FB_SM712 is not set # end of Frame buffer Devices # # Backlight & LCD device support # CONFIG_LCD_CLASS_DEVICE=m # CONFIG_LCD_L4F00242T03 is not set # CONFIG_LCD_LMS283GF05 is not set # CONFIG_LCD_LTV350QV is not set # CONFIG_LCD_ILI922X is not set # CONFIG_LCD_ILI9320 is not set # CONFIG_LCD_TDO24M is not set # CONFIG_LCD_VGG2432A4 is not set CONFIG_LCD_PLATFORM=m # CONFIG_LCD_AMS369FG06 is not set # CONFIG_LCD_LMS501KF03 is not set # CONFIG_LCD_HX8357 is not set # CONFIG_LCD_OTM3225A is not set CONFIG_BACKLIGHT_CLASS_DEVICE=y # CONFIG_BACKLIGHT_KTD253 is not set # CONFIG_BACKLIGHT_PWM is not set CONFIG_BACKLIGHT_APPLE=m # CONFIG_BACKLIGHT_QCOM_WLED is not set # CONFIG_BACKLIGHT_SAHARA is not set # CONFIG_BACKLIGHT_ADP8860 is not set # CONFIG_BACKLIGHT_ADP8870 is not set # CONFIG_BACKLIGHT_LM3630A is not set # CONFIG_BACKLIGHT_LM3639 is not set 
CONFIG_BACKLIGHT_LP855X=m # CONFIG_BACKLIGHT_GPIO is not set # CONFIG_BACKLIGHT_LV5207LP is not set # CONFIG_BACKLIGHT_BD6107 is not set # CONFIG_BACKLIGHT_ARCXCNN is not set # end of Backlight & LCD device support CONFIG_HDMI=y # # Console display driver support # CONFIG_VGA_CONSOLE=y CONFIG_DUMMY_CONSOLE=y CONFIG_DUMMY_CONSOLE_COLUMNS=80 CONFIG_DUMMY_CONSOLE_ROWS=25 CONFIG_FRAMEBUFFER_CONSOLE=y CONFIG_FRAMEBUFFER_CONSOLE_DETECT_PRIMARY=y CONFIG_FRAMEBUFFER_CONSOLE_ROTATION=y # CONFIG_FRAMEBUFFER_CONSOLE_DEFERRED_TAKEOVER is not set # end of Console display driver support CONFIG_LOGO=y # CONFIG_LOGO_LINUX_MONO is not set # CONFIG_LOGO_LINUX_VGA16 is not set CONFIG_LOGO_LINUX_CLUT224=y # end of Graphics support # CONFIG_SOUND is not set # # HID support # CONFIG_HID=y CONFIG_HID_BATTERY_STRENGTH=y CONFIG_HIDRAW=y CONFIG_UHID=m CONFIG_HID_GENERIC=y # # Special HID drivers # CONFIG_HID_A4TECH=m # CONFIG_HID_ACCUTOUCH is not set CONFIG_HID_ACRUX=m # CONFIG_HID_ACRUX_FF is not set CONFIG_HID_APPLE=m # CONFIG_HID_APPLEIR is not set CONFIG_HID_ASUS=m CONFIG_HID_AUREAL=m CONFIG_HID_BELKIN=m # CONFIG_HID_BETOP_FF is not set # CONFIG_HID_BIGBEN_FF is not set CONFIG_HID_CHERRY=m CONFIG_HID_CHICONY=m # CONFIG_HID_CORSAIR is not set # CONFIG_HID_COUGAR is not set # CONFIG_HID_MACALLY is not set CONFIG_HID_CMEDIA=m # CONFIG_HID_CP2112 is not set # CONFIG_HID_CREATIVE_SB0540 is not set CONFIG_HID_CYPRESS=m CONFIG_HID_DRAGONRISE=m # CONFIG_DRAGONRISE_FF is not set # CONFIG_HID_EMS_FF is not set # CONFIG_HID_ELAN is not set CONFIG_HID_ELECOM=m # CONFIG_HID_ELO is not set CONFIG_HID_EZKEY=m CONFIG_HID_GEMBIRD=m CONFIG_HID_GFRM=m # CONFIG_HID_GLORIOUS is not set # CONFIG_HID_HOLTEK is not set # CONFIG_HID_VIVALDI is not set # CONFIG_HID_GT683R is not set CONFIG_HID_KEYTOUCH=m CONFIG_HID_KYE=m # CONFIG_HID_UCLOGIC is not set CONFIG_HID_WALTOP=m # CONFIG_HID_VIEWSONIC is not set CONFIG_HID_GYRATION=m CONFIG_HID_ICADE=m CONFIG_HID_ITE=m CONFIG_HID_JABRA=m CONFIG_HID_TWINHAN=m 
CONFIG_HID_KENSINGTON=m CONFIG_HID_LCPOWER=m CONFIG_HID_LED=m CONFIG_HID_LENOVO=m CONFIG_HID_LOGITECH=m CONFIG_HID_LOGITECH_DJ=m CONFIG_HID_LOGITECH_HIDPP=m # CONFIG_LOGITECH_FF is not set # CONFIG_LOGIRUMBLEPAD2_FF is not set # CONFIG_LOGIG940_FF is not set # CONFIG_LOGIWHEELS_FF is not set CONFIG_HID_MAGICMOUSE=y # CONFIG_HID_MALTRON is not set # CONFIG_HID_MAYFLASH is not set # CONFIG_HID_REDRAGON is not set CONFIG_HID_MICROSOFT=m CONFIG_HID_MONTEREY=m CONFIG_HID_MULTITOUCH=m CONFIG_HID_NTI=m # CONFIG_HID_NTRIG is not set CONFIG_HID_ORTEK=m CONFIG_HID_PANTHERLORD=m # CONFIG_PANTHERLORD_FF is not set # CONFIG_HID_PENMOUNT is not set CONFIG_HID_PETALYNX=m CONFIG_HID_PICOLCD=m CONFIG_HID_PICOLCD_FB=y CONFIG_HID_PICOLCD_BACKLIGHT=y CONFIG_HID_PICOLCD_LCD=y CONFIG_HID_PICOLCD_LEDS=y CONFIG_HID_PICOLCD_CIR=y CONFIG_HID_PLANTRONICS=m # CONFIG_HID_PLAYSTATION is not set CONFIG_HID_PRIMAX=m # CONFIG_HID_RETRODE is not set # CONFIG_HID_ROCCAT is not set CONFIG_HID_SAITEK=m CONFIG_HID_SAMSUNG=m # CONFIG_HID_SONY is not set CONFIG_HID_SPEEDLINK=m # CONFIG_HID_STEAM is not set CONFIG_HID_STEELSERIES=m CONFIG_HID_SUNPLUS=m CONFIG_HID_RMI=m CONFIG_HID_GREENASIA=m # CONFIG_GREENASIA_FF is not set CONFIG_HID_HYPERV_MOUSE=m CONFIG_HID_SMARTJOYPLUS=m # CONFIG_SMARTJOYPLUS_FF is not set CONFIG_HID_TIVO=m CONFIG_HID_TOPSEED=m CONFIG_HID_THINGM=m CONFIG_HID_THRUSTMASTER=m # CONFIG_THRUSTMASTER_FF is not set # CONFIG_HID_UDRAW_PS3 is not set # CONFIG_HID_U2FZERO is not set # CONFIG_HID_WACOM is not set CONFIG_HID_WIIMOTE=m CONFIG_HID_XINMO=m CONFIG_HID_ZEROPLUS=m # CONFIG_ZEROPLUS_FF is not set CONFIG_HID_ZYDACRON=m CONFIG_HID_SENSOR_HUB=y CONFIG_HID_SENSOR_CUSTOM_SENSOR=m CONFIG_HID_ALPS=m # CONFIG_HID_MCP2221 is not set # end of Special HID drivers # # USB HID support # CONFIG_USB_HID=y # CONFIG_HID_PID is not set # CONFIG_USB_HIDDEV is not set # end of USB HID support # # I2C HID support # # CONFIG_I2C_HID_ACPI is not set # end of I2C HID support # # Intel ISH HID support # 
CONFIG_INTEL_ISH_HID=m # CONFIG_INTEL_ISH_FIRMWARE_DOWNLOADER is not set # end of Intel ISH HID support # # AMD SFH HID Support # # CONFIG_AMD_SFH_HID is not set # end of AMD SFH HID Support # end of HID support CONFIG_USB_OHCI_LITTLE_ENDIAN=y CONFIG_USB_SUPPORT=y CONFIG_USB_COMMON=y # CONFIG_USB_LED_TRIG is not set # CONFIG_USB_ULPI_BUS is not set # CONFIG_USB_CONN_GPIO is not set CONFIG_USB_ARCH_HAS_HCD=y CONFIG_USB=y CONFIG_USB_PCI=y CONFIG_USB_ANNOUNCE_NEW_DEVICES=y # # Miscellaneous USB options # CONFIG_USB_DEFAULT_PERSIST=y # CONFIG_USB_FEW_INIT_RETRIES is not set # CONFIG_USB_DYNAMIC_MINORS is not set # CONFIG_USB_OTG is not set # CONFIG_USB_OTG_PRODUCTLIST is not set CONFIG_USB_LEDS_TRIGGER_USBPORT=y CONFIG_USB_AUTOSUSPEND_DELAY=2 CONFIG_USB_MON=y # # USB Host Controller Drivers # # CONFIG_USB_C67X00_HCD is not set CONFIG_USB_XHCI_HCD=y # CONFIG_USB_XHCI_DBGCAP is not set CONFIG_USB_XHCI_PCI=y # CONFIG_USB_XHCI_PCI_RENESAS is not set # CONFIG_USB_XHCI_PLATFORM is not set CONFIG_USB_EHCI_HCD=y CONFIG_USB_EHCI_ROOT_HUB_TT=y CONFIG_USB_EHCI_TT_NEWSCHED=y CONFIG_USB_EHCI_PCI=y # CONFIG_USB_EHCI_FSL is not set # CONFIG_USB_EHCI_HCD_PLATFORM is not set # CONFIG_USB_OXU210HP_HCD is not set # CONFIG_USB_ISP116X_HCD is not set # CONFIG_USB_FOTG210_HCD is not set # CONFIG_USB_MAX3421_HCD is not set CONFIG_USB_OHCI_HCD=y CONFIG_USB_OHCI_HCD_PCI=y # CONFIG_USB_OHCI_HCD_PLATFORM is not set CONFIG_USB_UHCI_HCD=y # CONFIG_USB_SL811_HCD is not set # CONFIG_USB_R8A66597_HCD is not set # CONFIG_USB_HCD_BCMA is not set # CONFIG_USB_HCD_TEST_MODE is not set # # USB Device Class drivers # # CONFIG_USB_ACM is not set # CONFIG_USB_PRINTER is not set # CONFIG_USB_WDM is not set # CONFIG_USB_TMC is not set # # NOTE: USB_STORAGE depends on SCSI but BLK_DEV_SD may # # # also be needed; see USB_STORAGE Help for more info # CONFIG_USB_STORAGE=m # CONFIG_USB_STORAGE_DEBUG is not set # CONFIG_USB_STORAGE_REALTEK is not set # CONFIG_USB_STORAGE_DATAFAB is not set # 
CONFIG_USB_STORAGE_FREECOM is not set # CONFIG_USB_STORAGE_ISD200 is not set # CONFIG_USB_STORAGE_USBAT is not set # CONFIG_USB_STORAGE_SDDR09 is not set # CONFIG_USB_STORAGE_SDDR55 is not set # CONFIG_USB_STORAGE_JUMPSHOT is not set # CONFIG_USB_STORAGE_ALAUDA is not set # CONFIG_USB_STORAGE_ONETOUCH is not set # CONFIG_USB_STORAGE_KARMA is not set # CONFIG_USB_STORAGE_CYPRESS_ATACB is not set # CONFIG_USB_STORAGE_ENE_UB6250 is not set # CONFIG_USB_UAS is not set # # USB Imaging devices # # CONFIG_USB_MDC800 is not set # CONFIG_USB_MICROTEK is not set # CONFIG_USBIP_CORE is not set # CONFIG_USB_CDNS_SUPPORT is not set # CONFIG_USB_MUSB_HDRC is not set # CONFIG_USB_DWC3 is not set # CONFIG_USB_DWC2 is not set # CONFIG_USB_CHIPIDEA is not set # CONFIG_USB_ISP1760 is not set # # USB port drivers # # CONFIG_USB_USS720 is not set CONFIG_USB_SERIAL=m CONFIG_USB_SERIAL_GENERIC=y # CONFIG_USB_SERIAL_SIMPLE is not set # CONFIG_USB_SERIAL_AIRCABLE is not set # CONFIG_USB_SERIAL_ARK3116 is not set # CONFIG_USB_SERIAL_BELKIN is not set # CONFIG_USB_SERIAL_CH341 is not set # CONFIG_USB_SERIAL_WHITEHEAT is not set # CONFIG_USB_SERIAL_DIGI_ACCELEPORT is not set # CONFIG_USB_SERIAL_CP210X is not set # CONFIG_USB_SERIAL_CYPRESS_M8 is not set # CONFIG_USB_SERIAL_EMPEG is not set # CONFIG_USB_SERIAL_FTDI_SIO is not set # CONFIG_USB_SERIAL_VISOR is not set # CONFIG_USB_SERIAL_IPAQ is not set # CONFIG_USB_SERIAL_IR is not set # CONFIG_USB_SERIAL_EDGEPORT is not set # CONFIG_USB_SERIAL_EDGEPORT_TI is not set # CONFIG_USB_SERIAL_F81232 is not set # CONFIG_USB_SERIAL_F8153X is not set # CONFIG_USB_SERIAL_GARMIN is not set # CONFIG_USB_SERIAL_IPW is not set # CONFIG_USB_SERIAL_IUU is not set # CONFIG_USB_SERIAL_KEYSPAN_PDA is not set # CONFIG_USB_SERIAL_KEYSPAN is not set # CONFIG_USB_SERIAL_KLSI is not set # CONFIG_USB_SERIAL_KOBIL_SCT is not set # CONFIG_USB_SERIAL_MCT_U232 is not set # CONFIG_USB_SERIAL_METRO is not set # CONFIG_USB_SERIAL_MOS7720 is not set # CONFIG_USB_SERIAL_MOS7840 
is not set # CONFIG_USB_SERIAL_MXUPORT is not set # CONFIG_USB_SERIAL_NAVMAN is not set # CONFIG_USB_SERIAL_PL2303 is not set # CONFIG_USB_SERIAL_OTI6858 is not set # CONFIG_USB_SERIAL_QCAUX is not set # CONFIG_USB_SERIAL_QUALCOMM is not set # CONFIG_USB_SERIAL_SPCP8X5 is not set # CONFIG_USB_SERIAL_SAFE is not set # CONFIG_USB_SERIAL_SIERRAWIRELESS is not set # CONFIG_USB_SERIAL_SYMBOL is not set # CONFIG_USB_SERIAL_TI is not set # CONFIG_USB_SERIAL_CYBERJACK is not set # CONFIG_USB_SERIAL_OPTION is not set # CONFIG_USB_SERIAL_OMNINET is not set # CONFIG_USB_SERIAL_OPTICON is not set # CONFIG_USB_SERIAL_XSENS_MT is not set # CONFIG_USB_SERIAL_WISHBONE is not set # CONFIG_USB_SERIAL_SSU100 is not set # CONFIG_USB_SERIAL_QT2 is not set # CONFIG_USB_SERIAL_UPD78F0730 is not set # CONFIG_USB_SERIAL_XR is not set CONFIG_USB_SERIAL_DEBUG=m # # USB Miscellaneous drivers # # CONFIG_USB_EMI62 is not set # CONFIG_USB_EMI26 is not set # CONFIG_USB_ADUTUX is not set # CONFIG_USB_SEVSEG is not set # CONFIG_USB_LEGOTOWER is not set # CONFIG_USB_LCD is not set # CONFIG_USB_CYPRESS_CY7C63 is not set # CONFIG_USB_CYTHERM is not set # CONFIG_USB_IDMOUSE is not set # CONFIG_USB_FTDI_ELAN is not set # CONFIG_USB_APPLEDISPLAY is not set # CONFIG_APPLE_MFI_FASTCHARGE is not set # CONFIG_USB_SISUSBVGA is not set # CONFIG_USB_LD is not set # CONFIG_USB_TRANCEVIBRATOR is not set # CONFIG_USB_IOWARRIOR is not set # CONFIG_USB_TEST is not set # CONFIG_USB_EHSET_TEST_FIXTURE is not set # CONFIG_USB_ISIGHTFW is not set # CONFIG_USB_YUREX is not set # CONFIG_USB_EZUSB_FX2 is not set # CONFIG_USB_HUB_USB251XB is not set # CONFIG_USB_HSIC_USB3503 is not set # CONFIG_USB_HSIC_USB4604 is not set # CONFIG_USB_LINK_LAYER_TEST is not set # CONFIG_USB_CHAOSKEY is not set # CONFIG_USB_ATM is not set # # USB Physical Layer drivers # # CONFIG_NOP_USB_XCEIV is not set # CONFIG_USB_GPIO_VBUS is not set # CONFIG_USB_ISP1301 is not set # end of USB Physical Layer drivers # CONFIG_USB_GADGET is not set 
CONFIG_TYPEC=y # CONFIG_TYPEC_TCPM is not set CONFIG_TYPEC_UCSI=y # CONFIG_UCSI_CCG is not set CONFIG_UCSI_ACPI=y # CONFIG_TYPEC_TPS6598X is not set # CONFIG_TYPEC_STUSB160X is not set # # USB Type-C Multiplexer/DeMultiplexer Switch support # # CONFIG_TYPEC_MUX_PI3USB30532 is not set # end of USB Type-C Multiplexer/DeMultiplexer Switch support # # USB Type-C Alternate Mode drivers # # CONFIG_TYPEC_DP_ALTMODE is not set # end of USB Type-C Alternate Mode drivers # CONFIG_USB_ROLE_SWITCH is not set CONFIG_MMC=m CONFIG_MMC_BLOCK=m CONFIG_MMC_BLOCK_MINORS=8 CONFIG_SDIO_UART=m # CONFIG_MMC_TEST is not set # # MMC/SD/SDIO Host Controller Drivers # # CONFIG_MMC_DEBUG is not set CONFIG_MMC_SDHCI=m CONFIG_MMC_SDHCI_IO_ACCESSORS=y CONFIG_MMC_SDHCI_PCI=m CONFIG_MMC_RICOH_MMC=y CONFIG_MMC_SDHCI_ACPI=m CONFIG_MMC_SDHCI_PLTFM=m # CONFIG_MMC_SDHCI_F_SDH30 is not set # CONFIG_MMC_WBSD is not set # CONFIG_MMC_TIFM_SD is not set # CONFIG_MMC_SPI is not set # CONFIG_MMC_CB710 is not set # CONFIG_MMC_VIA_SDMMC is not set # CONFIG_MMC_VUB300 is not set # CONFIG_MMC_USHC is not set # CONFIG_MMC_USDHI6ROL0 is not set # CONFIG_MMC_REALTEK_PCI is not set CONFIG_MMC_CQHCI=m # CONFIG_MMC_HSQ is not set # CONFIG_MMC_TOSHIBA_PCI is not set # CONFIG_MMC_MTK is not set # CONFIG_MMC_SDHCI_XENON is not set # CONFIG_MEMSTICK is not set CONFIG_NEW_LEDS=y CONFIG_LEDS_CLASS=y # CONFIG_LEDS_CLASS_FLASH is not set # CONFIG_LEDS_CLASS_MULTICOLOR is not set # CONFIG_LEDS_BRIGHTNESS_HW_CHANGED is not set # # LED drivers # # CONFIG_LEDS_APU is not set CONFIG_LEDS_LM3530=m # CONFIG_LEDS_LM3532 is not set # CONFIG_LEDS_LM3642 is not set # CONFIG_LEDS_PCA9532 is not set # CONFIG_LEDS_GPIO is not set CONFIG_LEDS_LP3944=m # CONFIG_LEDS_LP3952 is not set # CONFIG_LEDS_LP50XX is not set CONFIG_LEDS_CLEVO_MAIL=m # CONFIG_LEDS_PCA955X is not set # CONFIG_LEDS_PCA963X is not set # CONFIG_LEDS_DAC124S085 is not set # CONFIG_LEDS_PWM is not set # CONFIG_LEDS_BD2802 is not set CONFIG_LEDS_INTEL_SS4200=m # 
CONFIG_LEDS_TCA6507 is not set # CONFIG_LEDS_TLC591XX is not set # CONFIG_LEDS_LM355x is not set # # LED driver for blink(1) USB RGB LED is under Special HID drivers (HID_THINGM) # CONFIG_LEDS_BLINKM=m CONFIG_LEDS_MLXCPLD=m # CONFIG_LEDS_MLXREG is not set # CONFIG_LEDS_USER is not set # CONFIG_LEDS_NIC78BX is not set # CONFIG_LEDS_TI_LMU_COMMON is not set # # Flash and Torch LED drivers # # # LED Triggers # CONFIG_LEDS_TRIGGERS=y CONFIG_LEDS_TRIGGER_TIMER=m CONFIG_LEDS_TRIGGER_ONESHOT=m # CONFIG_LEDS_TRIGGER_DISK is not set CONFIG_LEDS_TRIGGER_HEARTBEAT=m CONFIG_LEDS_TRIGGER_BACKLIGHT=m # CONFIG_LEDS_TRIGGER_CPU is not set # CONFIG_LEDS_TRIGGER_ACTIVITY is not set CONFIG_LEDS_TRIGGER_GPIO=m CONFIG_LEDS_TRIGGER_DEFAULT_ON=m # # iptables trigger is under Netfilter config (LED target) # CONFIG_LEDS_TRIGGER_TRANSIENT=m CONFIG_LEDS_TRIGGER_CAMERA=m # CONFIG_LEDS_TRIGGER_PANIC is not set # CONFIG_LEDS_TRIGGER_NETDEV is not set # CONFIG_LEDS_TRIGGER_PATTERN is not set CONFIG_LEDS_TRIGGER_AUDIO=m # CONFIG_LEDS_TRIGGER_TTY is not set # # LED Blink # # CONFIG_LEDS_BLINK is not set # CONFIG_ACCESSIBILITY is not set CONFIG_INFINIBAND=m CONFIG_INFINIBAND_USER_MAD=m CONFIG_INFINIBAND_USER_ACCESS=m CONFIG_INFINIBAND_USER_MEM=y CONFIG_INFINIBAND_ON_DEMAND_PAGING=y CONFIG_INFINIBAND_ADDR_TRANS=y CONFIG_INFINIBAND_ADDR_TRANS_CONFIGFS=y CONFIG_INFINIBAND_VIRT_DMA=y # CONFIG_INFINIBAND_MTHCA is not set # CONFIG_INFINIBAND_EFA is not set # CONFIG_INFINIBAND_I40IW is not set # CONFIG_MLX4_INFINIBAND is not set # CONFIG_INFINIBAND_OCRDMA is not set # CONFIG_INFINIBAND_USNIC is not set # CONFIG_INFINIBAND_BNXT_RE is not set # CONFIG_INFINIBAND_RDMAVT is not set CONFIG_RDMA_RXE=m CONFIG_RDMA_SIW=m CONFIG_INFINIBAND_IPOIB=m # CONFIG_INFINIBAND_IPOIB_CM is not set CONFIG_INFINIBAND_IPOIB_DEBUG=y # CONFIG_INFINIBAND_IPOIB_DEBUG_DATA is not set CONFIG_INFINIBAND_SRP=m CONFIG_INFINIBAND_SRPT=m # CONFIG_INFINIBAND_ISER is not set # CONFIG_INFINIBAND_ISERT is not set # 
CONFIG_INFINIBAND_RTRS_CLIENT is not set # CONFIG_INFINIBAND_RTRS_SERVER is not set # CONFIG_INFINIBAND_OPA_VNIC is not set CONFIG_EDAC_ATOMIC_SCRUB=y CONFIG_EDAC_SUPPORT=y CONFIG_EDAC=y CONFIG_EDAC_LEGACY_SYSFS=y # CONFIG_EDAC_DEBUG is not set CONFIG_EDAC_DECODE_MCE=m CONFIG_EDAC_GHES=y CONFIG_EDAC_AMD64=m CONFIG_EDAC_E752X=m CONFIG_EDAC_I82975X=m CONFIG_EDAC_I3000=m CONFIG_EDAC_I3200=m CONFIG_EDAC_IE31200=m CONFIG_EDAC_X38=m CONFIG_EDAC_I5400=m CONFIG_EDAC_I7CORE=m CONFIG_EDAC_I5000=m CONFIG_EDAC_I5100=m CONFIG_EDAC_I7300=m CONFIG_EDAC_SBRIDGE=m CONFIG_EDAC_SKX=m # CONFIG_EDAC_I10NM is not set CONFIG_EDAC_PND2=m # CONFIG_EDAC_IGEN6 is not set CONFIG_RTC_LIB=y CONFIG_RTC_MC146818_LIB=y CONFIG_RTC_CLASS=y CONFIG_RTC_HCTOSYS=y CONFIG_RTC_HCTOSYS_DEVICE="rtc0" # CONFIG_RTC_SYSTOHC is not set # CONFIG_RTC_DEBUG is not set CONFIG_RTC_NVMEM=y # # RTC interfaces # CONFIG_RTC_INTF_SYSFS=y CONFIG_RTC_INTF_PROC=y CONFIG_RTC_INTF_DEV=y # CONFIG_RTC_INTF_DEV_UIE_EMUL is not set # CONFIG_RTC_DRV_TEST is not set # # I2C RTC drivers # # CONFIG_RTC_DRV_ABB5ZES3 is not set # CONFIG_RTC_DRV_ABEOZ9 is not set # CONFIG_RTC_DRV_ABX80X is not set CONFIG_RTC_DRV_DS1307=m # CONFIG_RTC_DRV_DS1307_CENTURY is not set CONFIG_RTC_DRV_DS1374=m # CONFIG_RTC_DRV_DS1374_WDT is not set CONFIG_RTC_DRV_DS1672=m CONFIG_RTC_DRV_MAX6900=m CONFIG_RTC_DRV_RS5C372=m CONFIG_RTC_DRV_ISL1208=m CONFIG_RTC_DRV_ISL12022=m CONFIG_RTC_DRV_X1205=m CONFIG_RTC_DRV_PCF8523=m # CONFIG_RTC_DRV_PCF85063 is not set # CONFIG_RTC_DRV_PCF85363 is not set CONFIG_RTC_DRV_PCF8563=m CONFIG_RTC_DRV_PCF8583=m CONFIG_RTC_DRV_M41T80=m CONFIG_RTC_DRV_M41T80_WDT=y CONFIG_RTC_DRV_BQ32K=m # CONFIG_RTC_DRV_S35390A is not set CONFIG_RTC_DRV_FM3130=m # CONFIG_RTC_DRV_RX8010 is not set CONFIG_RTC_DRV_RX8581=m CONFIG_RTC_DRV_RX8025=m CONFIG_RTC_DRV_EM3027=m # CONFIG_RTC_DRV_RV3028 is not set # CONFIG_RTC_DRV_RV3032 is not set # CONFIG_RTC_DRV_RV8803 is not set # CONFIG_RTC_DRV_SD3078 is not set # # SPI RTC drivers # # CONFIG_RTC_DRV_M41T93 
is not set
# CONFIG_RTC_DRV_M41T94 is not set
# CONFIG_RTC_DRV_DS1302 is not set
# CONFIG_RTC_DRV_DS1305 is not set
# CONFIG_RTC_DRV_DS1343 is not set
# CONFIG_RTC_DRV_DS1347 is not set
# CONFIG_RTC_DRV_DS1390 is not set
# CONFIG_RTC_DRV_MAX6916 is not set
# CONFIG_RTC_DRV_R9701 is not set
CONFIG_RTC_DRV_RX4581=m
# CONFIG_RTC_DRV_RS5C348 is not set
# CONFIG_RTC_DRV_MAX6902 is not set
# CONFIG_RTC_DRV_PCF2123 is not set
# CONFIG_RTC_DRV_MCP795 is not set
CONFIG_RTC_I2C_AND_SPI=y

#
# SPI and I2C RTC drivers
#
CONFIG_RTC_DRV_DS3232=m
CONFIG_RTC_DRV_DS3232_HWMON=y
# CONFIG_RTC_DRV_PCF2127 is not set
CONFIG_RTC_DRV_RV3029C2=m
# CONFIG_RTC_DRV_RV3029_HWMON is not set
# CONFIG_RTC_DRV_RX6110 is not set

#
# Platform RTC drivers
#
CONFIG_RTC_DRV_CMOS=y
CONFIG_RTC_DRV_DS1286=m
CONFIG_RTC_DRV_DS1511=m
CONFIG_RTC_DRV_DS1553=m
# CONFIG_RTC_DRV_DS1685_FAMILY is not set
CONFIG_RTC_DRV_DS1742=m
CONFIG_RTC_DRV_DS2404=m
CONFIG_RTC_DRV_STK17TA8=m
# CONFIG_RTC_DRV_M48T86 is not set
CONFIG_RTC_DRV_M48T35=m
CONFIG_RTC_DRV_M48T59=m
CONFIG_RTC_DRV_MSM6242=m
CONFIG_RTC_DRV_BQ4802=m
CONFIG_RTC_DRV_RP5C01=m
CONFIG_RTC_DRV_V3020=m

#
# on-CPU RTC drivers
#
# CONFIG_RTC_DRV_FTRTC010 is not set

#
# HID Sensor RTC drivers
#
CONFIG_DMADEVICES=y
# CONFIG_DMADEVICES_DEBUG is not set

#
# DMA Devices
#
CONFIG_DMA_ENGINE=y
CONFIG_DMA_VIRTUAL_CHANNELS=y
CONFIG_DMA_ACPI=y
# CONFIG_ALTERA_MSGDMA is not set
CONFIG_INTEL_IDMA64=m
# CONFIG_INTEL_IDXD is not set
CONFIG_INTEL_IOATDMA=m
# CONFIG_PLX_DMA is not set
# CONFIG_XILINX_ZYNQMP_DPDMA is not set
# CONFIG_QCOM_HIDMA_MGMT is not set
# CONFIG_QCOM_HIDMA is not set
CONFIG_DW_DMAC_CORE=y
CONFIG_DW_DMAC=m
CONFIG_DW_DMAC_PCI=y
# CONFIG_DW_EDMA is not set
# CONFIG_DW_EDMA_PCIE is not set
CONFIG_HSU_DMA=y
# CONFIG_SF_PDMA is not set
# CONFIG_INTEL_LDMA is not set

#
# DMA Clients
#
CONFIG_ASYNC_TX_DMA=y
CONFIG_DMATEST=m
CONFIG_DMA_ENGINE_RAID=y

#
# DMABUF options
#
CONFIG_SYNC_FILE=y
# CONFIG_SW_SYNC is not set
# CONFIG_UDMABUF is not set
# CONFIG_DMABUF_MOVE_NOTIFY is not set
# CONFIG_DMABUF_DEBUG is not set
# CONFIG_DMABUF_SELFTESTS is not set
# CONFIG_DMABUF_HEAPS is not set
# end of DMABUF options

CONFIG_DCA=m
# CONFIG_AUXDISPLAY is not set
# CONFIG_PANEL is not set
CONFIG_UIO=m
CONFIG_UIO_CIF=m
CONFIG_UIO_PDRV_GENIRQ=m
# CONFIG_UIO_DMEM_GENIRQ is not set
CONFIG_UIO_AEC=m
CONFIG_UIO_SERCOS3=m
CONFIG_UIO_PCI_GENERIC=m
# CONFIG_UIO_NETX is not set
# CONFIG_UIO_PRUSS is not set
# CONFIG_UIO_MF624 is not set
CONFIG_UIO_HV_GENERIC=m
CONFIG_VFIO_IOMMU_TYPE1=m
CONFIG_VFIO_VIRQFD=m
CONFIG_VFIO=m
CONFIG_VFIO_NOIOMMU=y
CONFIG_VFIO_PCI=m
# CONFIG_VFIO_PCI_VGA is not set
CONFIG_VFIO_PCI_MMAP=y
CONFIG_VFIO_PCI_INTX=y
# CONFIG_VFIO_PCI_IGD is not set
CONFIG_VFIO_MDEV=m
CONFIG_VFIO_MDEV_DEVICE=m
CONFIG_IRQ_BYPASS_MANAGER=m
# CONFIG_VIRT_DRIVERS is not set
CONFIG_VIRTIO=y
CONFIG_VIRTIO_PCI_LIB=y
CONFIG_VIRTIO_MENU=y
CONFIG_VIRTIO_PCI=y
CONFIG_VIRTIO_PCI_LEGACY=y
# CONFIG_VIRTIO_PMEM is not set
CONFIG_VIRTIO_BALLOON=m
CONFIG_VIRTIO_MEM=m
CONFIG_VIRTIO_INPUT=m
# CONFIG_VIRTIO_MMIO is not set
CONFIG_VIRTIO_DMA_SHARED_BUFFER=m
# CONFIG_VDPA is not set
CONFIG_VHOST_IOTLB=m
CONFIG_VHOST=m
CONFIG_VHOST_MENU=y
CONFIG_VHOST_NET=m
# CONFIG_VHOST_SCSI is not set
CONFIG_VHOST_VSOCK=m
# CONFIG_VHOST_CROSS_ENDIAN_LEGACY is not set

#
# Microsoft Hyper-V guest support
#
CONFIG_HYPERV=m
CONFIG_HYPERV_TIMER=y
CONFIG_HYPERV_UTILS=m
CONFIG_HYPERV_BALLOON=m
# end of Microsoft Hyper-V guest support

#
# Xen driver support
#
# CONFIG_XEN_BALLOON is not set
CONFIG_XEN_DEV_EVTCHN=m
# CONFIG_XEN_BACKEND is not set
CONFIG_XENFS=m
CONFIG_XEN_COMPAT_XENFS=y
CONFIG_XEN_SYS_HYPERVISOR=y
CONFIG_XEN_XENBUS_FRONTEND=y
# CONFIG_XEN_GNTDEV is not set
# CONFIG_XEN_GRANT_DEV_ALLOC is not set
# CONFIG_XEN_GRANT_DMA_ALLOC is not set
CONFIG_SWIOTLB_XEN=y
# CONFIG_XEN_PVCALLS_FRONTEND is not set
CONFIG_XEN_PRIVCMD=m
CONFIG_XEN_EFI=y
CONFIG_XEN_AUTO_XLATE=y
CONFIG_XEN_ACPI=y
# CONFIG_XEN_UNPOPULATED_ALLOC is not set
# end of Xen driver support

# CONFIG_GREYBUS is not set
# CONFIG_STAGING is not set
CONFIG_X86_PLATFORM_DEVICES=y
CONFIG_ACPI_WMI=m
CONFIG_WMI_BMOF=m
# CONFIG_HUAWEI_WMI is not set
# CONFIG_UV_SYSFS is not set
# CONFIG_INTEL_WMI_SBL_FW_UPDATE is not set
CONFIG_INTEL_WMI_THUNDERBOLT=m
CONFIG_MXM_WMI=m
# CONFIG_PEAQ_WMI is not set
# CONFIG_XIAOMI_WMI is not set
CONFIG_ACERHDF=m
# CONFIG_ACER_WIRELESS is not set
CONFIG_ACER_WMI=m
# CONFIG_AMD_PMC is not set
CONFIG_APPLE_GMUX=m
CONFIG_ASUS_LAPTOP=m
# CONFIG_ASUS_WIRELESS is not set
CONFIG_ASUS_WMI=m
CONFIG_ASUS_NB_WMI=m
CONFIG_EEEPC_LAPTOP=m
CONFIG_EEEPC_WMI=m
# CONFIG_X86_PLATFORM_DRIVERS_DELL is not set
CONFIG_AMILO_RFKILL=m
CONFIG_FUJITSU_LAPTOP=m
CONFIG_FUJITSU_TABLET=m
# CONFIG_GPD_POCKET_FAN is not set
CONFIG_HP_ACCEL=m
CONFIG_HP_WIRELESS=m
CONFIG_HP_WMI=m
# CONFIG_IBM_RTL is not set
CONFIG_IDEAPAD_LAPTOP=m
CONFIG_SENSORS_HDAPS=m
CONFIG_THINKPAD_ACPI=m
# CONFIG_THINKPAD_ACPI_DEBUGFACILITIES is not set
# CONFIG_THINKPAD_ACPI_DEBUG is not set
# CONFIG_THINKPAD_ACPI_UNSAFE_LEDS is not set
CONFIG_THINKPAD_ACPI_VIDEO=y
CONFIG_THINKPAD_ACPI_HOTKEY_POLL=y
# CONFIG_INTEL_ATOMISP2_PM is not set
CONFIG_INTEL_HID_EVENT=m
# CONFIG_INTEL_INT0002_VGPIO is not set
# CONFIG_INTEL_MENLOW is not set
CONFIG_INTEL_OAKTRAIL=m
CONFIG_INTEL_VBTN=m
CONFIG_MSI_LAPTOP=m
CONFIG_MSI_WMI=m
# CONFIG_PCENGINES_APU2 is not set
CONFIG_SAMSUNG_LAPTOP=m
CONFIG_SAMSUNG_Q10=m
CONFIG_TOSHIBA_BT_RFKILL=m
# CONFIG_TOSHIBA_HAPS is not set
# CONFIG_TOSHIBA_WMI is not set
CONFIG_ACPI_CMPC=m
CONFIG_COMPAL_LAPTOP=m
# CONFIG_LG_LAPTOP is not set
CONFIG_PANASONIC_LAPTOP=m
CONFIG_SONY_LAPTOP=m
CONFIG_SONYPI_COMPAT=y
# CONFIG_SYSTEM76_ACPI is not set
CONFIG_TOPSTAR_LAPTOP=m
# CONFIG_I2C_MULTI_INSTANTIATE is not set
CONFIG_MLX_PLATFORM=m
CONFIG_INTEL_IPS=m
CONFIG_INTEL_RST=m
# CONFIG_INTEL_SMARTCONNECT is not set

#
# Intel Speed Select Technology interface support
#
# CONFIG_INTEL_SPEED_SELECT_INTERFACE is not set
# end of Intel Speed Select Technology interface support

CONFIG_INTEL_TURBO_MAX_3=y
# CONFIG_INTEL_UNCORE_FREQ_CONTROL is not set
CONFIG_INTEL_PMC_CORE=m
# CONFIG_INTEL_PUNIT_IPC is not set
# CONFIG_INTEL_SCU_PCI is not set
# CONFIG_INTEL_SCU_PLATFORM is not set
CONFIG_PMC_ATOM=y
# CONFIG_CHROME_PLATFORMS is not set
CONFIG_MELLANOX_PLATFORM=y
CONFIG_MLXREG_HOTPLUG=m
# CONFIG_MLXREG_IO is not set
CONFIG_SURFACE_PLATFORMS=y
# CONFIG_SURFACE3_WMI is not set
# CONFIG_SURFACE_3_POWER_OPREGION is not set
# CONFIG_SURFACE_GPE is not set
# CONFIG_SURFACE_HOTPLUG is not set
# CONFIG_SURFACE_PRO3_BUTTON is not set
CONFIG_HAVE_CLK=y
CONFIG_CLKDEV_LOOKUP=y
CONFIG_HAVE_CLK_PREPARE=y
CONFIG_COMMON_CLK=y
# CONFIG_COMMON_CLK_MAX9485 is not set
# CONFIG_COMMON_CLK_SI5341 is not set
# CONFIG_COMMON_CLK_SI5351 is not set
# CONFIG_COMMON_CLK_SI544 is not set
# CONFIG_COMMON_CLK_CDCE706 is not set
# CONFIG_COMMON_CLK_CS2000_CP is not set
# CONFIG_COMMON_CLK_PWM is not set
# CONFIG_XILINX_VCU is not set
CONFIG_HWSPINLOCK=y

#
# Clock Source drivers
#
CONFIG_CLKEVT_I8253=y
CONFIG_I8253_LOCK=y
CONFIG_CLKBLD_I8253=y
# end of Clock Source drivers

CONFIG_MAILBOX=y
CONFIG_PCC=y
# CONFIG_ALTERA_MBOX is not set
CONFIG_IOMMU_IOVA=y
CONFIG_IOASID=y
CONFIG_IOMMU_API=y
CONFIG_IOMMU_SUPPORT=y

#
# Generic IOMMU Pagetable Support
#
CONFIG_IOMMU_IO_PGTABLE=y
# end of Generic IOMMU Pagetable Support

# CONFIG_IOMMU_DEBUGFS is not set
# CONFIG_IOMMU_DEFAULT_PASSTHROUGH is not set
CONFIG_IOMMU_DMA=y
CONFIG_AMD_IOMMU=y
CONFIG_AMD_IOMMU_V2=m
CONFIG_DMAR_TABLE=y
CONFIG_INTEL_IOMMU=y
# CONFIG_INTEL_IOMMU_SVM is not set
# CONFIG_INTEL_IOMMU_DEFAULT_ON is not set
CONFIG_INTEL_IOMMU_FLOPPY_WA=y
# CONFIG_INTEL_IOMMU_SCALABLE_MODE_DEFAULT_ON is not set
CONFIG_IRQ_REMAP=y
CONFIG_HYPERV_IOMMU=y

#
# Remoteproc drivers
#
# CONFIG_REMOTEPROC is not set
# end of Remoteproc drivers

#
# Rpmsg drivers
#
# CONFIG_RPMSG_QCOM_GLINK_RPM is not set
# CONFIG_RPMSG_VIRTIO is not set
# end of Rpmsg drivers

# CONFIG_SOUNDWIRE is not set

#
# SOC (System On Chip) specific Drivers
#

#
# Amlogic SoC drivers
#
# end of Amlogic SoC drivers

#
# Broadcom SoC drivers
#
# end of Broadcom SoC drivers

#
# NXP/Freescale QorIQ SoC drivers
#
# end of NXP/Freescale QorIQ SoC drivers

#
# i.MX SoC drivers
#
# end of i.MX SoC drivers

#
# Enable LiteX SoC Builder specific drivers
#
# end of Enable LiteX SoC Builder specific drivers

#
# Qualcomm SoC drivers
#
# end of Qualcomm SoC drivers

# CONFIG_SOC_TI is not set

#
# Xilinx SoC drivers
#
# end of Xilinx SoC drivers
# end of SOC (System On Chip) specific Drivers

# CONFIG_PM_DEVFREQ is not set
# CONFIG_EXTCON is not set
# CONFIG_MEMORY is not set
# CONFIG_IIO is not set
CONFIG_NTB=m
# CONFIG_NTB_MSI is not set
# CONFIG_NTB_AMD is not set
# CONFIG_NTB_IDT is not set
# CONFIG_NTB_INTEL is not set
# CONFIG_NTB_EPF is not set
# CONFIG_NTB_SWITCHTEC is not set
# CONFIG_NTB_PINGPONG is not set
# CONFIG_NTB_TOOL is not set
# CONFIG_NTB_PERF is not set
# CONFIG_NTB_TRANSPORT is not set
# CONFIG_VME_BUS is not set
CONFIG_PWM=y
CONFIG_PWM_SYSFS=y
# CONFIG_PWM_DEBUG is not set
# CONFIG_PWM_DWC is not set
CONFIG_PWM_LPSS=m
CONFIG_PWM_LPSS_PCI=m
CONFIG_PWM_LPSS_PLATFORM=m
# CONFIG_PWM_PCA9685 is not set

#
# IRQ chip support
#
# end of IRQ chip support

# CONFIG_IPACK_BUS is not set
# CONFIG_RESET_CONTROLLER is not set

#
# PHY Subsystem
#
# CONFIG_GENERIC_PHY is not set
# CONFIG_USB_LGM_PHY is not set
# CONFIG_BCM_KONA_USB2_PHY is not set
# CONFIG_PHY_PXA_28NM_HSIC is not set
# CONFIG_PHY_PXA_28NM_USB2 is not set
# CONFIG_PHY_INTEL_LGM_EMMC is not set
# end of PHY Subsystem

CONFIG_POWERCAP=y
CONFIG_INTEL_RAPL_CORE=m
CONFIG_INTEL_RAPL=m
# CONFIG_IDLE_INJECT is not set
# CONFIG_DTPM is not set
# CONFIG_MCB is not set

#
# Performance monitor support
#
# end of Performance monitor support

CONFIG_RAS=y
# CONFIG_RAS_CEC is not set
# CONFIG_USB4 is not set

#
# Android
#
# CONFIG_ANDROID is not set
# end of Android

CONFIG_LIBNVDIMM=m
CONFIG_BLK_DEV_PMEM=m
CONFIG_ND_BLK=m
CONFIG_ND_CLAIM=y
CONFIG_ND_BTT=m
CONFIG_BTT=y
CONFIG_ND_PFN=m
CONFIG_NVDIMM_PFN=y
CONFIG_NVDIMM_DAX=y
CONFIG_NVDIMM_KEYS=y
CONFIG_DAX_DRIVER=y
CONFIG_DAX=y
CONFIG_DEV_DAX=m
CONFIG_DEV_DAX_PMEM=m
CONFIG_DEV_DAX_KMEM=m
CONFIG_DEV_DAX_PMEM_COMPAT=m
CONFIG_NVMEM=y
CONFIG_NVMEM_SYSFS=y
# CONFIG_NVMEM_RMEM is not set

#
# HW tracing support
#
CONFIG_STM=m
# CONFIG_STM_PROTO_BASIC is not set
# CONFIG_STM_PROTO_SYS_T is not set
CONFIG_STM_DUMMY=m
CONFIG_STM_SOURCE_CONSOLE=m
CONFIG_STM_SOURCE_HEARTBEAT=m
CONFIG_STM_SOURCE_FTRACE=m
CONFIG_INTEL_TH=m
CONFIG_INTEL_TH_PCI=m
CONFIG_INTEL_TH_ACPI=m
CONFIG_INTEL_TH_GTH=m
CONFIG_INTEL_TH_STH=m
CONFIG_INTEL_TH_MSU=m
CONFIG_INTEL_TH_PTI=m
# CONFIG_INTEL_TH_DEBUG is not set
# end of HW tracing support

# CONFIG_FPGA is not set
# CONFIG_TEE is not set
# CONFIG_UNISYS_VISORBUS is not set
# CONFIG_SIOX is not set
# CONFIG_SLIMBUS is not set
# CONFIG_INTERCONNECT is not set
# CONFIG_COUNTER is not set
# CONFIG_MOST is not set
# end of Device Drivers

#
# File systems
#
CONFIG_DCACHE_WORD_ACCESS=y
# CONFIG_VALIDATE_FS_PARSER is not set
CONFIG_FS_IOMAP=y
CONFIG_EXT2_FS=m
CONFIG_EXT2_FS_XATTR=y
CONFIG_EXT2_FS_POSIX_ACL=y
CONFIG_EXT2_FS_SECURITY=y
# CONFIG_EXT3_FS is not set
CONFIG_EXT4_FS=y
CONFIG_EXT4_FS_POSIX_ACL=y
CONFIG_EXT4_FS_SECURITY=y
# CONFIG_EXT4_DEBUG is not set
CONFIG_EXT4_KUNIT_TESTS=m
CONFIG_JBD2=y
# CONFIG_JBD2_DEBUG is not set
CONFIG_FS_MBCACHE=y
# CONFIG_REISERFS_FS is not set
# CONFIG_JFS_FS is not set
CONFIG_XFS_FS=m
CONFIG_XFS_SUPPORT_V4=y
CONFIG_XFS_QUOTA=y
CONFIG_XFS_POSIX_ACL=y
CONFIG_XFS_RT=y
CONFIG_XFS_ONLINE_SCRUB=y
CONFIG_XFS_ONLINE_REPAIR=y
CONFIG_XFS_DEBUG=y
CONFIG_XFS_ASSERT_FATAL=y
CONFIG_GFS2_FS=m
CONFIG_GFS2_FS_LOCKING_DLM=y
CONFIG_OCFS2_FS=m
CONFIG_OCFS2_FS_O2CB=m
CONFIG_OCFS2_FS_USERSPACE_CLUSTER=m
CONFIG_OCFS2_FS_STATS=y
CONFIG_OCFS2_DEBUG_MASKLOG=y
# CONFIG_OCFS2_DEBUG_FS is not set
CONFIG_BTRFS_FS=m
CONFIG_BTRFS_FS_POSIX_ACL=y
# CONFIG_BTRFS_FS_CHECK_INTEGRITY is not set
# CONFIG_BTRFS_FS_RUN_SANITY_TESTS is not set
# CONFIG_BTRFS_DEBUG is not set
# CONFIG_BTRFS_ASSERT is not set
# CONFIG_BTRFS_FS_REF_VERIFY is not set
# CONFIG_NILFS2_FS is not set
CONFIG_F2FS_FS=m
CONFIG_F2FS_STAT_FS=y
CONFIG_F2FS_FS_XATTR=y
CONFIG_F2FS_FS_POSIX_ACL=y
CONFIG_F2FS_FS_SECURITY=y
# CONFIG_F2FS_CHECK_FS is not set
# CONFIG_F2FS_FAULT_INJECTION is not set
# CONFIG_F2FS_FS_COMPRESSION is not set
# CONFIG_ZONEFS_FS is not set
CONFIG_FS_DAX=y
CONFIG_FS_DAX_PMD=y
CONFIG_FS_POSIX_ACL=y
CONFIG_EXPORTFS=y
CONFIG_EXPORTFS_BLOCK_OPS=y
CONFIG_FILE_LOCKING=y
CONFIG_MANDATORY_FILE_LOCKING=y
CONFIG_FS_ENCRYPTION=y
CONFIG_FS_ENCRYPTION_ALGS=y
# CONFIG_FS_VERITY is not set
CONFIG_FSNOTIFY=y
CONFIG_DNOTIFY=y
CONFIG_INOTIFY_USER=y
CONFIG_FANOTIFY=y
CONFIG_FANOTIFY_ACCESS_PERMISSIONS=y
CONFIG_QUOTA=y
CONFIG_QUOTA_NETLINK_INTERFACE=y
CONFIG_PRINT_QUOTA_WARNING=y
# CONFIG_QUOTA_DEBUG is not set
CONFIG_QUOTA_TREE=y
# CONFIG_QFMT_V1 is not set
CONFIG_QFMT_V2=y
CONFIG_QUOTACTL=y
CONFIG_AUTOFS4_FS=y
CONFIG_AUTOFS_FS=y
CONFIG_FUSE_FS=m
CONFIG_CUSE=m
# CONFIG_VIRTIO_FS is not set
CONFIG_OVERLAY_FS=m
# CONFIG_OVERLAY_FS_REDIRECT_DIR is not set
# CONFIG_OVERLAY_FS_REDIRECT_ALWAYS_FOLLOW is not set
# CONFIG_OVERLAY_FS_INDEX is not set
# CONFIG_OVERLAY_FS_XINO_AUTO is not set
# CONFIG_OVERLAY_FS_METACOPY is not set

#
# Caches
#
CONFIG_FSCACHE=m
CONFIG_FSCACHE_STATS=y
# CONFIG_FSCACHE_HISTOGRAM is not set
# CONFIG_FSCACHE_DEBUG is not set
# CONFIG_FSCACHE_OBJECT_LIST is not set
CONFIG_CACHEFILES=m
# CONFIG_CACHEFILES_DEBUG is not set
# CONFIG_CACHEFILES_HISTOGRAM is not set
# end of Caches

#
# CD-ROM/DVD Filesystems
#
CONFIG_ISO9660_FS=m
CONFIG_JOLIET=y
CONFIG_ZISOFS=y
CONFIG_UDF_FS=m
# end of CD-ROM/DVD Filesystems

#
# DOS/FAT/EXFAT/NT Filesystems
#
CONFIG_FAT_FS=m
CONFIG_MSDOS_FS=m
CONFIG_VFAT_FS=m
CONFIG_FAT_DEFAULT_CODEPAGE=437
CONFIG_FAT_DEFAULT_IOCHARSET="ascii"
# CONFIG_FAT_DEFAULT_UTF8 is not set
# CONFIG_EXFAT_FS is not set
# CONFIG_NTFS_FS is not set
# end of DOS/FAT/EXFAT/NT Filesystems

#
# Pseudo filesystems
#
CONFIG_PROC_FS=y
CONFIG_PROC_KCORE=y
CONFIG_PROC_VMCORE=y
CONFIG_PROC_VMCORE_DEVICE_DUMP=y
CONFIG_PROC_SYSCTL=y
CONFIG_PROC_PAGE_MONITOR=y
CONFIG_PROC_CHILDREN=y
CONFIG_PROC_PID_ARCH_STATUS=y
CONFIG_PROC_CPU_RESCTRL=y
CONFIG_KERNFS=y
CONFIG_SYSFS=y
CONFIG_TMPFS=y
CONFIG_TMPFS_POSIX_ACL=y
CONFIG_TMPFS_XATTR=y
# CONFIG_TMPFS_INODE64 is not set
CONFIG_HUGETLBFS=y
CONFIG_HUGETLB_PAGE=y
CONFIG_MEMFD_CREATE=y
CONFIG_ARCH_HAS_GIGANTIC_PAGE=y
CONFIG_CONFIGFS_FS=y
CONFIG_EFIVAR_FS=y
# end of Pseudo filesystems

CONFIG_MISC_FILESYSTEMS=y
# CONFIG_ORANGEFS_FS is not set
# CONFIG_ADFS_FS is not set
# CONFIG_AFFS_FS is not set
# CONFIG_ECRYPT_FS is not set
# CONFIG_HFS_FS is not set
# CONFIG_HFSPLUS_FS is not set
# CONFIG_BEFS_FS is not set
# CONFIG_BFS_FS is not set
# CONFIG_EFS_FS is not set
CONFIG_CRAMFS=m
CONFIG_CRAMFS_BLOCKDEV=y
CONFIG_SQUASHFS=m
# CONFIG_SQUASHFS_FILE_CACHE is not set
CONFIG_SQUASHFS_FILE_DIRECT=y
# CONFIG_SQUASHFS_DECOMP_SINGLE is not set
# CONFIG_SQUASHFS_DECOMP_MULTI is not set
CONFIG_SQUASHFS_DECOMP_MULTI_PERCPU=y
CONFIG_SQUASHFS_XATTR=y
CONFIG_SQUASHFS_ZLIB=y
# CONFIG_SQUASHFS_LZ4 is not set
CONFIG_SQUASHFS_LZO=y
CONFIG_SQUASHFS_XZ=y
# CONFIG_SQUASHFS_ZSTD is not set
# CONFIG_SQUASHFS_4K_DEVBLK_SIZE is not set
# CONFIG_SQUASHFS_EMBEDDED is not set
CONFIG_SQUASHFS_FRAGMENT_CACHE_SIZE=3
# CONFIG_VXFS_FS is not set
CONFIG_MINIX_FS=m
# CONFIG_OMFS_FS is not set
# CONFIG_HPFS_FS is not set
# CONFIG_QNX4FS_FS is not set
# CONFIG_QNX6FS_FS is not set
# CONFIG_ROMFS_FS is not set
CONFIG_PSTORE=y
CONFIG_PSTORE_DEFAULT_KMSG_BYTES=10240
CONFIG_PSTORE_DEFLATE_COMPRESS=y
# CONFIG_PSTORE_LZO_COMPRESS is not set
# CONFIG_PSTORE_LZ4_COMPRESS is not set
# CONFIG_PSTORE_LZ4HC_COMPRESS is not set
# CONFIG_PSTORE_842_COMPRESS is not set
# CONFIG_PSTORE_ZSTD_COMPRESS is not set
CONFIG_PSTORE_COMPRESS=y
CONFIG_PSTORE_DEFLATE_COMPRESS_DEFAULT=y
CONFIG_PSTORE_COMPRESS_DEFAULT="deflate"
# CONFIG_PSTORE_CONSOLE is not set
# CONFIG_PSTORE_PMSG is not set
# CONFIG_PSTORE_FTRACE is not set
CONFIG_PSTORE_RAM=m
# CONFIG_PSTORE_BLK is not set
# CONFIG_SYSV_FS is not set
# CONFIG_UFS_FS is not set
# CONFIG_EROFS_FS is not set
CONFIG_NETWORK_FILESYSTEMS=y
CONFIG_NFS_FS=y
# CONFIG_NFS_V2 is not set
CONFIG_NFS_V3=y
CONFIG_NFS_V3_ACL=y
CONFIG_NFS_V4=m
# CONFIG_NFS_SWAP is not set
CONFIG_NFS_V4_1=y
CONFIG_NFS_V4_2=y
CONFIG_PNFS_FILE_LAYOUT=m
CONFIG_PNFS_BLOCK=m
CONFIG_PNFS_FLEXFILE_LAYOUT=m
CONFIG_NFS_V4_1_IMPLEMENTATION_ID_DOMAIN="kernel.org"
# CONFIG_NFS_V4_1_MIGRATION is not set
CONFIG_NFS_V4_SECURITY_LABEL=y
CONFIG_ROOT_NFS=y
# CONFIG_NFS_USE_LEGACY_DNS is not set
CONFIG_NFS_USE_KERNEL_DNS=y
CONFIG_NFS_DEBUG=y
CONFIG_NFS_DISABLE_UDP_SUPPORT=y
# CONFIG_NFS_V4_2_READ_PLUS is not set
CONFIG_NFSD=m
CONFIG_NFSD_V2_ACL=y
CONFIG_NFSD_V3=y
CONFIG_NFSD_V3_ACL=y
CONFIG_NFSD_V4=y
CONFIG_NFSD_PNFS=y
# CONFIG_NFSD_BLOCKLAYOUT is not set
CONFIG_NFSD_SCSILAYOUT=y
# CONFIG_NFSD_FLEXFILELAYOUT is not set
# CONFIG_NFSD_V4_2_INTER_SSC is not set
CONFIG_NFSD_V4_SECURITY_LABEL=y
CONFIG_GRACE_PERIOD=y
CONFIG_LOCKD=y
CONFIG_LOCKD_V4=y
CONFIG_NFS_ACL_SUPPORT=y
CONFIG_NFS_COMMON=y
CONFIG_NFS_V4_2_SSC_HELPER=y
CONFIG_SUNRPC=y
CONFIG_SUNRPC_GSS=m
CONFIG_SUNRPC_BACKCHANNEL=y
CONFIG_RPCSEC_GSS_KRB5=m
# CONFIG_SUNRPC_DISABLE_INSECURE_ENCTYPES is not set
CONFIG_SUNRPC_DEBUG=y
CONFIG_SUNRPC_XPRT_RDMA=m
CONFIG_CEPH_FS=m
# CONFIG_CEPH_FSCACHE is not set
CONFIG_CEPH_FS_POSIX_ACL=y
# CONFIG_CEPH_FS_SECURITY_LABEL is not set
CONFIG_CIFS=m
# CONFIG_CIFS_STATS2 is not set
CONFIG_CIFS_ALLOW_INSECURE_LEGACY=y
CONFIG_CIFS_WEAK_PW_HASH=y
CONFIG_CIFS_UPCALL=y
CONFIG_CIFS_XATTR=y
CONFIG_CIFS_POSIX=y
CONFIG_CIFS_DEBUG=y
# CONFIG_CIFS_DEBUG2 is not set
# CONFIG_CIFS_DEBUG_DUMP_KEYS is not set
CONFIG_CIFS_DFS_UPCALL=y
# CONFIG_CIFS_SWN_UPCALL is not set
# CONFIG_CIFS_SMB_DIRECT is not set
# CONFIG_CIFS_FSCACHE is not set
# CONFIG_CODA_FS is not set
# CONFIG_AFS_FS is not set
# CONFIG_9P_FS is not set
CONFIG_NLS=y
CONFIG_NLS_DEFAULT="utf8"
CONFIG_NLS_CODEPAGE_437=y
CONFIG_NLS_CODEPAGE_737=m
CONFIG_NLS_CODEPAGE_775=m
CONFIG_NLS_CODEPAGE_850=m
CONFIG_NLS_CODEPAGE_852=m
CONFIG_NLS_CODEPAGE_855=m
CONFIG_NLS_CODEPAGE_857=m
CONFIG_NLS_CODEPAGE_860=m
CONFIG_NLS_CODEPAGE_861=m
CONFIG_NLS_CODEPAGE_862=m
CONFIG_NLS_CODEPAGE_863=m
CONFIG_NLS_CODEPAGE_864=m
CONFIG_NLS_CODEPAGE_865=m
CONFIG_NLS_CODEPAGE_866=m
CONFIG_NLS_CODEPAGE_869=m
CONFIG_NLS_CODEPAGE_936=m
CONFIG_NLS_CODEPAGE_950=m
CONFIG_NLS_CODEPAGE_932=m
CONFIG_NLS_CODEPAGE_949=m
CONFIG_NLS_CODEPAGE_874=m
CONFIG_NLS_ISO8859_8=m
CONFIG_NLS_CODEPAGE_1250=m
CONFIG_NLS_CODEPAGE_1251=m
CONFIG_NLS_ASCII=y
CONFIG_NLS_ISO8859_1=m
CONFIG_NLS_ISO8859_2=m
CONFIG_NLS_ISO8859_3=m
CONFIG_NLS_ISO8859_4=m
CONFIG_NLS_ISO8859_5=m
CONFIG_NLS_ISO8859_6=m
CONFIG_NLS_ISO8859_7=m
CONFIG_NLS_ISO8859_9=m
CONFIG_NLS_ISO8859_13=m
CONFIG_NLS_ISO8859_14=m
CONFIG_NLS_ISO8859_15=m
CONFIG_NLS_KOI8_R=m
CONFIG_NLS_KOI8_U=m
CONFIG_NLS_MAC_ROMAN=m
CONFIG_NLS_MAC_CELTIC=m
CONFIG_NLS_MAC_CENTEURO=m
CONFIG_NLS_MAC_CROATIAN=m
CONFIG_NLS_MAC_CYRILLIC=m
CONFIG_NLS_MAC_GAELIC=m
CONFIG_NLS_MAC_GREEK=m
CONFIG_NLS_MAC_ICELAND=m
CONFIG_NLS_MAC_INUIT=m
CONFIG_NLS_MAC_ROMANIAN=m
CONFIG_NLS_MAC_TURKISH=m
CONFIG_NLS_UTF8=m
CONFIG_DLM=m
CONFIG_DLM_DEBUG=y
# CONFIG_UNICODE is not set
CONFIG_IO_WQ=y
# end of File systems

#
# Security options
#
CONFIG_KEYS=y
# CONFIG_KEYS_REQUEST_CACHE is not set
CONFIG_PERSISTENT_KEYRINGS=y
CONFIG_TRUSTED_KEYS=y
CONFIG_ENCRYPTED_KEYS=y
# CONFIG_KEY_DH_OPERATIONS is not set
# CONFIG_SECURITY_DMESG_RESTRICT is not set
CONFIG_SECURITY=y
CONFIG_SECURITY_WRITABLE_HOOKS=y
CONFIG_SECURITYFS=y
CONFIG_SECURITY_NETWORK=y
CONFIG_PAGE_TABLE_ISOLATION=y
# CONFIG_SECURITY_INFINIBAND is not set
CONFIG_SECURITY_NETWORK_XFRM=y
CONFIG_SECURITY_PATH=y
CONFIG_INTEL_TXT=y
CONFIG_LSM_MMAP_MIN_ADDR=65535
CONFIG_HAVE_HARDENED_USERCOPY_ALLOCATOR=y
CONFIG_HARDENED_USERCOPY=y
CONFIG_HARDENED_USERCOPY_FALLBACK=y
CONFIG_FORTIFY_SOURCE=y
# CONFIG_STATIC_USERMODEHELPER is not set
CONFIG_SECURITY_SELINUX=y
CONFIG_SECURITY_SELINUX_BOOTPARAM=y
CONFIG_SECURITY_SELINUX_DISABLE=y
CONFIG_SECURITY_SELINUX_DEVELOP=y
CONFIG_SECURITY_SELINUX_AVC_STATS=y
CONFIG_SECURITY_SELINUX_CHECKREQPROT_VALUE=1
CONFIG_SECURITY_SELINUX_SIDTAB_HASH_BITS=9
CONFIG_SECURITY_SELINUX_SID2STR_CACHE_SIZE=256
# CONFIG_SECURITY_SMACK is not set
# CONFIG_SECURITY_TOMOYO is not set
CONFIG_SECURITY_APPARMOR=y
CONFIG_SECURITY_APPARMOR_HASH=y
CONFIG_SECURITY_APPARMOR_HASH_DEFAULT=y
# CONFIG_SECURITY_APPARMOR_DEBUG is not set
# CONFIG_SECURITY_APPARMOR_KUNIT_TEST is not set
# CONFIG_SECURITY_LOADPIN is not set
CONFIG_SECURITY_YAMA=y
# CONFIG_SECURITY_SAFESETID is not set
# CONFIG_SECURITY_LOCKDOWN_LSM is not set
CONFIG_INTEGRITY=y
CONFIG_INTEGRITY_SIGNATURE=y
CONFIG_INTEGRITY_ASYMMETRIC_KEYS=y
CONFIG_INTEGRITY_TRUSTED_KEYRING=y
# CONFIG_INTEGRITY_PLATFORM_KEYRING is not set
CONFIG_INTEGRITY_AUDIT=y
CONFIG_IMA=y
CONFIG_IMA_MEASURE_PCR_IDX=10
CONFIG_IMA_LSM_RULES=y
# CONFIG_IMA_TEMPLATE is not set
CONFIG_IMA_NG_TEMPLATE=y
# CONFIG_IMA_SIG_TEMPLATE is not set
CONFIG_IMA_DEFAULT_TEMPLATE="ima-ng"
CONFIG_IMA_DEFAULT_HASH_SHA1=y
# CONFIG_IMA_DEFAULT_HASH_SHA256 is not set
# CONFIG_IMA_DEFAULT_HASH_SHA512 is not set
CONFIG_IMA_DEFAULT_HASH="sha1"
# CONFIG_IMA_WRITE_POLICY is not set
# CONFIG_IMA_READ_POLICY is not set
CONFIG_IMA_APPRAISE=y
# CONFIG_IMA_ARCH_POLICY is not set
# CONFIG_IMA_APPRAISE_BUILD_POLICY is not set
CONFIG_IMA_APPRAISE_BOOTPARAM=y
# CONFIG_IMA_APPRAISE_MODSIG is not set
CONFIG_IMA_TRUSTED_KEYRING=y
# CONFIG_IMA_BLACKLIST_KEYRING is not set
# CONFIG_IMA_LOAD_X509 is not set
CONFIG_IMA_MEASURE_ASYMMETRIC_KEYS=y
CONFIG_IMA_QUEUE_EARLY_BOOT_KEYS=y
# CONFIG_IMA_SECURE_AND_OR_TRUSTED_BOOT is not set
CONFIG_EVM=y
CONFIG_EVM_ATTR_FSUUID=y
# CONFIG_EVM_ADD_XATTRS is not set
# CONFIG_EVM_LOAD_X509 is not set
CONFIG_DEFAULT_SECURITY_SELINUX=y
# CONFIG_DEFAULT_SECURITY_APPARMOR is not set
# CONFIG_DEFAULT_SECURITY_DAC is not set
CONFIG_LSM="lockdown,yama,loadpin,safesetid,integrity,selinux,smack,tomoyo,apparmor,bpf"

#
# Kernel hardening options
#

#
# Memory initialization
#
CONFIG_INIT_STACK_NONE=y
# CONFIG_INIT_ON_ALLOC_DEFAULT_ON is not set
# CONFIG_INIT_ON_FREE_DEFAULT_ON is not set
# end of Memory initialization
# end of Kernel hardening options
# end of Security options

CONFIG_XOR_BLOCKS=m
CONFIG_ASYNC_CORE=m
CONFIG_ASYNC_MEMCPY=m
CONFIG_ASYNC_XOR=m
CONFIG_ASYNC_PQ=m
CONFIG_ASYNC_RAID6_RECOV=m
CONFIG_CRYPTO=y

#
# Crypto core or helper
#
CONFIG_CRYPTO_ALGAPI=y
CONFIG_CRYPTO_ALGAPI2=y
CONFIG_CRYPTO_AEAD=y
CONFIG_CRYPTO_AEAD2=y
CONFIG_CRYPTO_SKCIPHER=y
CONFIG_CRYPTO_SKCIPHER2=y
CONFIG_CRYPTO_HASH=y
CONFIG_CRYPTO_HASH2=y
CONFIG_CRYPTO_RNG=y
CONFIG_CRYPTO_RNG2=y
CONFIG_CRYPTO_RNG_DEFAULT=y
CONFIG_CRYPTO_AKCIPHER2=y
CONFIG_CRYPTO_AKCIPHER=y
CONFIG_CRYPTO_KPP2=y
CONFIG_CRYPTO_KPP=m
CONFIG_CRYPTO_ACOMP2=y
CONFIG_CRYPTO_MANAGER=y
CONFIG_CRYPTO_MANAGER2=y
CONFIG_CRYPTO_USER=m
CONFIG_CRYPTO_MANAGER_DISABLE_TESTS=y
CONFIG_CRYPTO_GF128MUL=y
CONFIG_CRYPTO_NULL=y
CONFIG_CRYPTO_NULL2=y
CONFIG_CRYPTO_PCRYPT=m
CONFIG_CRYPTO_CRYPTD=y
CONFIG_CRYPTO_AUTHENC=m
CONFIG_CRYPTO_TEST=m
CONFIG_CRYPTO_SIMD=y

#
# Public-key cryptography
#
CONFIG_CRYPTO_RSA=y
CONFIG_CRYPTO_DH=m
CONFIG_CRYPTO_ECC=m
CONFIG_CRYPTO_ECDH=m
# CONFIG_CRYPTO_ECRDSA is not set
# CONFIG_CRYPTO_SM2 is not set
# CONFIG_CRYPTO_CURVE25519 is not set
# CONFIG_CRYPTO_CURVE25519_X86 is not set

#
# Authenticated Encryption with Associated Data
#
CONFIG_CRYPTO_CCM=m
CONFIG_CRYPTO_GCM=y
CONFIG_CRYPTO_CHACHA20POLY1305=m
# CONFIG_CRYPTO_AEGIS128 is not set
# CONFIG_CRYPTO_AEGIS128_AESNI_SSE2 is not set
CONFIG_CRYPTO_SEQIV=y
CONFIG_CRYPTO_ECHAINIV=m

#
# Block modes
#
CONFIG_CRYPTO_CBC=y
CONFIG_CRYPTO_CFB=y
CONFIG_CRYPTO_CTR=y
CONFIG_CRYPTO_CTS=y
CONFIG_CRYPTO_ECB=y
CONFIG_CRYPTO_LRW=m
# CONFIG_CRYPTO_OFB is not set
CONFIG_CRYPTO_PCBC=m
CONFIG_CRYPTO_XTS=y
# CONFIG_CRYPTO_KEYWRAP is not set
# CONFIG_CRYPTO_NHPOLY1305_SSE2 is not set
# CONFIG_CRYPTO_NHPOLY1305_AVX2 is not set
# CONFIG_CRYPTO_ADIANTUM is not set
CONFIG_CRYPTO_ESSIV=m

#
# Hash modes
#
CONFIG_CRYPTO_CMAC=m
CONFIG_CRYPTO_HMAC=y
CONFIG_CRYPTO_XCBC=m
CONFIG_CRYPTO_VMAC=m

#
# Digest
#
CONFIG_CRYPTO_CRC32C=y
CONFIG_CRYPTO_CRC32C_INTEL=m
CONFIG_CRYPTO_CRC32=m
CONFIG_CRYPTO_CRC32_PCLMUL=m
CONFIG_CRYPTO_XXHASH=m
CONFIG_CRYPTO_BLAKE2B=m
# CONFIG_CRYPTO_BLAKE2S is not set
# CONFIG_CRYPTO_BLAKE2S_X86 is not set
CONFIG_CRYPTO_CRCT10DIF=y
CONFIG_CRYPTO_CRCT10DIF_PCLMUL=m
CONFIG_CRYPTO_GHASH=y
CONFIG_CRYPTO_POLY1305=m
CONFIG_CRYPTO_POLY1305_X86_64=m
CONFIG_CRYPTO_MD4=m
CONFIG_CRYPTO_MD5=y
CONFIG_CRYPTO_MICHAEL_MIC=m
CONFIG_CRYPTO_RMD160=m
CONFIG_CRYPTO_SHA1=y
CONFIG_CRYPTO_SHA1_SSSE3=y
CONFIG_CRYPTO_SHA256_SSSE3=y
CONFIG_CRYPTO_SHA512_SSSE3=m
CONFIG_CRYPTO_SHA256=y
CONFIG_CRYPTO_SHA512=y
CONFIG_CRYPTO_SHA3=m
# CONFIG_CRYPTO_SM3 is not set
# CONFIG_CRYPTO_STREEBOG is not set
CONFIG_CRYPTO_WP512=m
CONFIG_CRYPTO_GHASH_CLMUL_NI_INTEL=m

#
# Ciphers
#
CONFIG_CRYPTO_AES=y
# CONFIG_CRYPTO_AES_TI is not set
CONFIG_CRYPTO_AES_NI_INTEL=y
CONFIG_CRYPTO_ANUBIS=m
CONFIG_CRYPTO_ARC4=m
CONFIG_CRYPTO_BLOWFISH=m
CONFIG_CRYPTO_BLOWFISH_COMMON=m
CONFIG_CRYPTO_BLOWFISH_X86_64=m
CONFIG_CRYPTO_CAMELLIA=m
CONFIG_CRYPTO_CAMELLIA_X86_64=m
CONFIG_CRYPTO_CAMELLIA_AESNI_AVX_X86_64=m
CONFIG_CRYPTO_CAMELLIA_AESNI_AVX2_X86_64=m
CONFIG_CRYPTO_CAST_COMMON=m
CONFIG_CRYPTO_CAST5=m
CONFIG_CRYPTO_CAST5_AVX_X86_64=m
CONFIG_CRYPTO_CAST6=m
CONFIG_CRYPTO_CAST6_AVX_X86_64=m
CONFIG_CRYPTO_DES=m
CONFIG_CRYPTO_DES3_EDE_X86_64=m
CONFIG_CRYPTO_FCRYPT=m
CONFIG_CRYPTO_KHAZAD=m
CONFIG_CRYPTO_CHACHA20=m
CONFIG_CRYPTO_CHACHA20_X86_64=m
CONFIG_CRYPTO_SEED=m
CONFIG_CRYPTO_SERPENT=m
CONFIG_CRYPTO_SERPENT_SSE2_X86_64=m
CONFIG_CRYPTO_SERPENT_AVX_X86_64=m
CONFIG_CRYPTO_SERPENT_AVX2_X86_64=m
# CONFIG_CRYPTO_SM4 is not set
CONFIG_CRYPTO_TEA=m
CONFIG_CRYPTO_TWOFISH=m
CONFIG_CRYPTO_TWOFISH_COMMON=m
CONFIG_CRYPTO_TWOFISH_X86_64=m
CONFIG_CRYPTO_TWOFISH_X86_64_3WAY=m
CONFIG_CRYPTO_TWOFISH_AVX_X86_64=m

#
# Compression
#
CONFIG_CRYPTO_DEFLATE=y
CONFIG_CRYPTO_LZO=y
# CONFIG_CRYPTO_842 is not set
# CONFIG_CRYPTO_LZ4 is not set
# CONFIG_CRYPTO_LZ4HC is not set
# CONFIG_CRYPTO_ZSTD is not set

#
# Random Number Generation
#
CONFIG_CRYPTO_ANSI_CPRNG=m
CONFIG_CRYPTO_DRBG_MENU=y
CONFIG_CRYPTO_DRBG_HMAC=y
CONFIG_CRYPTO_DRBG_HASH=y
CONFIG_CRYPTO_DRBG_CTR=y
CONFIG_CRYPTO_DRBG=y
CONFIG_CRYPTO_JITTERENTROPY=y
CONFIG_CRYPTO_USER_API=y
CONFIG_CRYPTO_USER_API_HASH=y
CONFIG_CRYPTO_USER_API_SKCIPHER=y
CONFIG_CRYPTO_USER_API_RNG=y
# CONFIG_CRYPTO_USER_API_RNG_CAVP is not set
CONFIG_CRYPTO_USER_API_AEAD=y
CONFIG_CRYPTO_USER_API_ENABLE_OBSOLETE=y
# CONFIG_CRYPTO_STATS is not set
CONFIG_CRYPTO_HASH_INFO=y

#
# Crypto library routines
#
CONFIG_CRYPTO_LIB_AES=y
CONFIG_CRYPTO_LIB_ARC4=m
# CONFIG_CRYPTO_LIB_BLAKE2S is not set
CONFIG_CRYPTO_ARCH_HAVE_LIB_CHACHA=m
CONFIG_CRYPTO_LIB_CHACHA_GENERIC=m
# CONFIG_CRYPTO_LIB_CHACHA is not set
# CONFIG_CRYPTO_LIB_CURVE25519 is not set
CONFIG_CRYPTO_LIB_DES=m
CONFIG_CRYPTO_LIB_POLY1305_RSIZE=11
CONFIG_CRYPTO_ARCH_HAVE_LIB_POLY1305=m
CONFIG_CRYPTO_LIB_POLY1305_GENERIC=m
# CONFIG_CRYPTO_LIB_POLY1305 is not set
# CONFIG_CRYPTO_LIB_CHACHA20POLY1305 is not set
CONFIG_CRYPTO_LIB_SHA256=y
CONFIG_CRYPTO_HW=y
CONFIG_CRYPTO_DEV_PADLOCK=m
CONFIG_CRYPTO_DEV_PADLOCK_AES=m
CONFIG_CRYPTO_DEV_PADLOCK_SHA=m
# CONFIG_CRYPTO_DEV_ATMEL_ECC is not set
# CONFIG_CRYPTO_DEV_ATMEL_SHA204A is not set
CONFIG_CRYPTO_DEV_CCP=y
CONFIG_CRYPTO_DEV_CCP_DD=m
CONFIG_CRYPTO_DEV_SP_CCP=y
CONFIG_CRYPTO_DEV_CCP_CRYPTO=m
CONFIG_CRYPTO_DEV_SP_PSP=y
# CONFIG_CRYPTO_DEV_CCP_DEBUGFS is not set
CONFIG_CRYPTO_DEV_QAT=m
CONFIG_CRYPTO_DEV_QAT_DH895xCC=m
CONFIG_CRYPTO_DEV_QAT_C3XXX=m
CONFIG_CRYPTO_DEV_QAT_C62X=m
# CONFIG_CRYPTO_DEV_QAT_4XXX is not set
CONFIG_CRYPTO_DEV_QAT_DH895xCCVF=m
CONFIG_CRYPTO_DEV_QAT_C3XXXVF=m
CONFIG_CRYPTO_DEV_QAT_C62XVF=m
CONFIG_CRYPTO_DEV_NITROX=m
CONFIG_CRYPTO_DEV_NITROX_CNN55XX=m
# CONFIG_CRYPTO_DEV_VIRTIO is not set
# CONFIG_CRYPTO_DEV_SAFEXCEL is not set
# CONFIG_CRYPTO_DEV_AMLOGIC_GXL is not set
CONFIG_ASYMMETRIC_KEY_TYPE=y
CONFIG_ASYMMETRIC_PUBLIC_KEY_SUBTYPE=y
# CONFIG_ASYMMETRIC_TPM_KEY_SUBTYPE is not set
CONFIG_X509_CERTIFICATE_PARSER=y
# CONFIG_PKCS8_PRIVATE_KEY_PARSER is not set
CONFIG_PKCS7_MESSAGE_PARSER=y
# CONFIG_PKCS7_TEST_KEY is not set
CONFIG_SIGNED_PE_FILE_VERIFICATION=y

#
# Certificates for signature checking
#
CONFIG_MODULE_SIG_KEY="certs/signing_key.pem"
CONFIG_SYSTEM_TRUSTED_KEYRING=y
CONFIG_SYSTEM_TRUSTED_KEYS=""
# CONFIG_SYSTEM_EXTRA_CERTIFICATE is not set
# CONFIG_SECONDARY_TRUSTED_KEYRING is not set
CONFIG_SYSTEM_BLACKLIST_KEYRING=y
CONFIG_SYSTEM_BLACKLIST_HASH_LIST=""
# end of Certificates for signature checking

CONFIG_BINARY_PRINTF=y

#
# Library routines
#
CONFIG_RAID6_PQ=m
CONFIG_RAID6_PQ_BENCHMARK=y
# CONFIG_PACKING is not set
CONFIG_BITREVERSE=y
CONFIG_GENERIC_STRNCPY_FROM_USER=y
CONFIG_GENERIC_STRNLEN_USER=y
CONFIG_GENERIC_NET_UTILS=y
CONFIG_GENERIC_FIND_FIRST_BIT=y
CONFIG_CORDIC=m
# CONFIG_PRIME_NUMBERS is not set
CONFIG_RATIONAL=y
CONFIG_GENERIC_PCI_IOMAP=y
CONFIG_GENERIC_IOMAP=y
CONFIG_ARCH_USE_CMPXCHG_LOCKREF=y
CONFIG_ARCH_HAS_FAST_MULTIPLIER=y
CONFIG_ARCH_USE_SYM_ANNOTATIONS=y
CONFIG_CRC_CCITT=y
CONFIG_CRC16=y
CONFIG_CRC_T10DIF=y
CONFIG_CRC_ITU_T=m
CONFIG_CRC32=y
# CONFIG_CRC32_SELFTEST is not set
CONFIG_CRC32_SLICEBY8=y
# CONFIG_CRC32_SLICEBY4 is not set
# CONFIG_CRC32_SARWATE is not set
# CONFIG_CRC32_BIT is not set
# CONFIG_CRC64 is not set
# CONFIG_CRC4 is not set
CONFIG_CRC7=m
CONFIG_LIBCRC32C=m
CONFIG_CRC8=m
CONFIG_XXHASH=y
# CONFIG_RANDOM32_SELFTEST is not set
CONFIG_ZLIB_INFLATE=y
CONFIG_ZLIB_DEFLATE=y
CONFIG_LZO_COMPRESS=y
CONFIG_LZO_DECOMPRESS=y
CONFIG_LZ4_DECOMPRESS=y
CONFIG_ZSTD_COMPRESS=m
CONFIG_ZSTD_DECOMPRESS=y
CONFIG_XZ_DEC=y
CONFIG_XZ_DEC_X86=y
CONFIG_XZ_DEC_POWERPC=y
CONFIG_XZ_DEC_IA64=y
CONFIG_XZ_DEC_ARM=y
CONFIG_XZ_DEC_ARMTHUMB=y
CONFIG_XZ_DEC_SPARC=y
CONFIG_XZ_DEC_BCJ=y
# CONFIG_XZ_DEC_TEST is not set
CONFIG_DECOMPRESS_GZIP=y
CONFIG_DECOMPRESS_BZIP2=y
CONFIG_DECOMPRESS_LZMA=y
CONFIG_DECOMPRESS_XZ=y
CONFIG_DECOMPRESS_LZO=y
CONFIG_DECOMPRESS_LZ4=y
CONFIG_DECOMPRESS_ZSTD=y
CONFIG_GENERIC_ALLOCATOR=y
CONFIG_REED_SOLOMON=m
CONFIG_REED_SOLOMON_ENC8=y
CONFIG_REED_SOLOMON_DEC8=y
CONFIG_TEXTSEARCH=y
CONFIG_TEXTSEARCH_KMP=m
CONFIG_TEXTSEARCH_BM=m
CONFIG_TEXTSEARCH_FSM=m
CONFIG_INTERVAL_TREE=y
CONFIG_XARRAY_MULTI=y
CONFIG_ASSOCIATIVE_ARRAY=y
CONFIG_HAS_IOMEM=y
CONFIG_HAS_IOPORT_MAP=y
CONFIG_HAS_DMA=y
CONFIG_DMA_OPS=y
CONFIG_NEED_SG_DMA_LENGTH=y
CONFIG_NEED_DMA_MAP_STATE=y
CONFIG_ARCH_DMA_ADDR_T_64BIT=y
CONFIG_ARCH_HAS_FORCE_DMA_UNENCRYPTED=y
CONFIG_SWIOTLB=y
CONFIG_DMA_COHERENT_POOL=y
CONFIG_DMA_CMA=y
# CONFIG_DMA_PERNUMA_CMA is not set

#
# Default contiguous memory area size:
#
CONFIG_CMA_SIZE_MBYTES=200
CONFIG_CMA_SIZE_SEL_MBYTES=y
# CONFIG_CMA_SIZE_SEL_PERCENTAGE is not set
# CONFIG_CMA_SIZE_SEL_MIN is not set
# CONFIG_CMA_SIZE_SEL_MAX is not set
CONFIG_CMA_ALIGNMENT=8
# CONFIG_DMA_API_DEBUG is not set
# CONFIG_DMA_MAP_BENCHMARK is not set
CONFIG_SGL_ALLOC=y
CONFIG_CHECK_SIGNATURE=y
CONFIG_CPUMASK_OFFSTACK=y
CONFIG_CPU_RMAP=y
CONFIG_DQL=y
CONFIG_GLOB=y
# CONFIG_GLOB_SELFTEST is not set
CONFIG_NLATTR=y
CONFIG_CLZ_TAB=y
CONFIG_IRQ_POLL=y
CONFIG_MPILIB=y
CONFIG_SIGNATURE=y
CONFIG_DIMLIB=y
CONFIG_OID_REGISTRY=y
CONFIG_UCS2_STRING=y
CONFIG_HAVE_GENERIC_VDSO=y
CONFIG_GENERIC_GETTIMEOFDAY=y
CONFIG_GENERIC_VDSO_TIME_NS=y
CONFIG_FONT_SUPPORT=y
# CONFIG_FONTS is not set
CONFIG_FONT_8x8=y
CONFIG_FONT_8x16=y
CONFIG_SG_POOL=y
CONFIG_ARCH_HAS_PMEM_API=y
CONFIG_MEMREGION=y
CONFIG_ARCH_HAS_UACCESS_FLUSHCACHE=y
CONFIG_ARCH_HAS_COPY_MC=y
CONFIG_ARCH_STACKWALK=y
CONFIG_SBITMAP=y
# CONFIG_STRING_SELFTEST is not set
# end of Library routines

#
# Kernel hacking
#

#
# printk and dmesg options
#
CONFIG_PRINTK_TIME=y
# CONFIG_PRINTK_CALLER is not set
CONFIG_CONSOLE_LOGLEVEL_DEFAULT=7
CONFIG_CONSOLE_LOGLEVEL_QUIET=4
CONFIG_MESSAGE_LOGLEVEL_DEFAULT=4
CONFIG_BOOT_PRINTK_DELAY=y
CONFIG_DYNAMIC_DEBUG=y
CONFIG_DYNAMIC_DEBUG_CORE=y
CONFIG_SYMBOLIC_ERRNAME=y
CONFIG_DEBUG_BUGVERBOSE=y
# end of printk and dmesg options

#
# Compile-time checks and compiler options
#
CONFIG_DEBUG_INFO=y
CONFIG_DEBUG_INFO_REDUCED=y
# CONFIG_DEBUG_INFO_COMPRESSED is not set
# CONFIG_DEBUG_INFO_SPLIT is not set
# CONFIG_DEBUG_INFO_DWARF_TOOLCHAIN_DEFAULT is not set
CONFIG_DEBUG_INFO_DWARF4=y
# CONFIG_DEBUG_INFO_DWARF5 is not set
# CONFIG_GDB_SCRIPTS is not set
CONFIG_FRAME_WARN=2048
CONFIG_STRIP_ASM_SYMS=y
# CONFIG_READABLE_ASM is not set
# CONFIG_HEADERS_INSTALL is not set
CONFIG_DEBUG_SECTION_MISMATCH=y
CONFIG_SECTION_MISMATCH_WARN_ONLY=y
CONFIG_STACK_VALIDATION=y
# CONFIG_DEBUG_FORCE_WEAK_PER_CPU is not set
# end of Compile-time checks and compiler options

#
# Generic Kernel Debugging Instruments
#
CONFIG_MAGIC_SYSRQ=y
CONFIG_MAGIC_SYSRQ_DEFAULT_ENABLE=0x1
CONFIG_MAGIC_SYSRQ_SERIAL=y
CONFIG_MAGIC_SYSRQ_SERIAL_SEQUENCE=""
CONFIG_DEBUG_FS=y
CONFIG_DEBUG_FS_ALLOW_ALL=y
# CONFIG_DEBUG_FS_DISALLOW_MOUNT is not set
# CONFIG_DEBUG_FS_ALLOW_NONE is not set
CONFIG_HAVE_ARCH_KGDB=y
# CONFIG_KGDB is not set
CONFIG_ARCH_HAS_UBSAN_SANITIZE_ALL=y
# CONFIG_UBSAN is not set
CONFIG_HAVE_ARCH_KCSAN=y
# end of Generic Kernel Debugging Instruments

CONFIG_DEBUG_KERNEL=y
CONFIG_DEBUG_MISC=y

#
# Memory Debugging
#
# CONFIG_PAGE_EXTENSION is not set
# CONFIG_DEBUG_PAGEALLOC is not set
# CONFIG_PAGE_OWNER is not set
# CONFIG_PAGE_POISONING is not set
# CONFIG_DEBUG_PAGE_REF is not set
# CONFIG_DEBUG_RODATA_TEST is not set
CONFIG_ARCH_HAS_DEBUG_WX=y
# CONFIG_DEBUG_WX is not set
CONFIG_GENERIC_PTDUMP=y
# CONFIG_PTDUMP_DEBUGFS is not set
# CONFIG_DEBUG_OBJECTS is not set
# CONFIG_SLUB_DEBUG_ON is not set
# CONFIG_SLUB_STATS is not set
CONFIG_HAVE_DEBUG_KMEMLEAK=y
# CONFIG_DEBUG_KMEMLEAK is not set
# CONFIG_DEBUG_STACK_USAGE is not set
# CONFIG_SCHED_STACK_END_CHECK is not set
CONFIG_ARCH_HAS_DEBUG_VM_PGTABLE=y
# CONFIG_DEBUG_VM is not set
# CONFIG_DEBUG_VM_PGTABLE is not set
CONFIG_ARCH_HAS_DEBUG_VIRTUAL=y
# CONFIG_DEBUG_VIRTUAL is not set
CONFIG_DEBUG_MEMORY_INIT=y
# CONFIG_DEBUG_PER_CPU_MAPS is not set
CONFIG_HAVE_ARCH_KASAN=y
CONFIG_HAVE_ARCH_KASAN_VMALLOC=y
CONFIG_CC_HAS_KASAN_GENERIC=y
CONFIG_CC_HAS_WORKING_NOSANITIZE_ADDRESS=y
# CONFIG_KASAN is not set
CONFIG_HAVE_ARCH_KFENCE=y
# CONFIG_KFENCE is not set
# end of Memory Debugging

CONFIG_DEBUG_SHIRQ=y

#
# Debug Oops, Lockups and Hangs
#
CONFIG_PANIC_ON_OOPS=y
CONFIG_PANIC_ON_OOPS_VALUE=1
CONFIG_PANIC_TIMEOUT=0
CONFIG_LOCKUP_DETECTOR=y
CONFIG_SOFTLOCKUP_DETECTOR=y
# CONFIG_BOOTPARAM_SOFTLOCKUP_PANIC is not set
CONFIG_BOOTPARAM_SOFTLOCKUP_PANIC_VALUE=0
CONFIG_HARDLOCKUP_DETECTOR_PERF=y
CONFIG_HARDLOCKUP_CHECK_TIMESTAMP=y
CONFIG_HARDLOCKUP_DETECTOR=y
CONFIG_BOOTPARAM_HARDLOCKUP_PANIC=y
CONFIG_BOOTPARAM_HARDLOCKUP_PANIC_VALUE=1
# CONFIG_DETECT_HUNG_TASK is not set
# CONFIG_WQ_WATCHDOG is not set
# CONFIG_TEST_LOCKUP is not set
# end of Debug Oops, Lockups and Hangs

#
# Scheduler Debugging
#
CONFIG_SCHED_DEBUG=y
CONFIG_SCHED_INFO=y
CONFIG_SCHEDSTATS=y
# end of Scheduler Debugging

# CONFIG_DEBUG_TIMEKEEPING is not set

#
# Lock Debugging (spinlocks, mutexes, etc...)
#
CONFIG_LOCK_DEBUGGING_SUPPORT=y
# CONFIG_PROVE_LOCKING is not set
# CONFIG_LOCK_STAT is not set
# CONFIG_DEBUG_RT_MUTEXES is not set
# CONFIG_DEBUG_SPINLOCK is not set
# CONFIG_DEBUG_MUTEXES is not set
# CONFIG_DEBUG_WW_MUTEX_SLOWPATH is not set
# CONFIG_DEBUG_RWSEMS is not set
# CONFIG_DEBUG_LOCK_ALLOC is not set
CONFIG_DEBUG_ATOMIC_SLEEP=y
# CONFIG_DEBUG_LOCKING_API_SELFTESTS is not set
CONFIG_LOCK_TORTURE_TEST=m
# CONFIG_WW_MUTEX_SELFTEST is not set
# CONFIG_SCF_TORTURE_TEST is not set
# CONFIG_CSD_LOCK_WAIT_DEBUG is not set
# end of Lock Debugging (spinlocks, mutexes, etc...)

# CONFIG_DEBUG_IRQFLAGS is not set
CONFIG_STACKTRACE=y
# CONFIG_WARN_ALL_UNSEEDED_RANDOM is not set
# CONFIG_DEBUG_KOBJECT is not set

#
# Debug kernel data structures
#
CONFIG_DEBUG_LIST=y
# CONFIG_DEBUG_PLIST is not set
# CONFIG_DEBUG_SG is not set
# CONFIG_DEBUG_NOTIFIERS is not set
CONFIG_BUG_ON_DATA_CORRUPTION=y
# end of Debug kernel data structures

# CONFIG_DEBUG_CREDENTIALS is not set

#
# RCU Debugging
#
CONFIG_TORTURE_TEST=m
CONFIG_RCU_SCALE_TEST=m
CONFIG_RCU_TORTURE_TEST=m
# CONFIG_RCU_REF_SCALE_TEST is not set
CONFIG_RCU_CPU_STALL_TIMEOUT=60
# CONFIG_RCU_TRACE is not set
# CONFIG_RCU_EQS_DEBUG is not set
# end of RCU Debugging

# CONFIG_DEBUG_WQ_FORCE_RR_CPU is not set
# CONFIG_DEBUG_BLOCK_EXT_DEVT is not set
# CONFIG_CPU_HOTPLUG_STATE_CONTROL is not set
CONFIG_LATENCYTOP=y
CONFIG_USER_STACKTRACE_SUPPORT=y
CONFIG_NOP_TRACER=y
CONFIG_HAVE_FUNCTION_TRACER=y
CONFIG_HAVE_FUNCTION_GRAPH_TRACER=y
CONFIG_HAVE_DYNAMIC_FTRACE=y
CONFIG_HAVE_DYNAMIC_FTRACE_WITH_REGS=y
CONFIG_HAVE_DYNAMIC_FTRACE_WITH_DIRECT_CALLS=y
CONFIG_HAVE_DYNAMIC_FTRACE_WITH_ARGS=y
CONFIG_HAVE_FTRACE_MCOUNT_RECORD=y
CONFIG_HAVE_SYSCALL_TRACEPOINTS=y
CONFIG_HAVE_FENTRY=y
CONFIG_HAVE_OBJTOOL_MCOUNT=y
CONFIG_HAVE_C_RECORDMCOUNT=y
CONFIG_TRACER_MAX_TRACE=y
CONFIG_TRACE_CLOCK=y
CONFIG_RING_BUFFER=y
CONFIG_EVENT_TRACING=y
CONFIG_CONTEXT_SWITCH_TRACER=y
CONFIG_TRACING=y
CONFIG_GENERIC_TRACER=y
CONFIG_TRACING_SUPPORT=y
CONFIG_FTRACE=y
# CONFIG_BOOTTIME_TRACING is not set
CONFIG_FUNCTION_TRACER=y
CONFIG_FUNCTION_GRAPH_TRACER=y
CONFIG_DYNAMIC_FTRACE=y
CONFIG_DYNAMIC_FTRACE_WITH_REGS=y
CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS=y
CONFIG_FUNCTION_PROFILER=y
CONFIG_STACK_TRACER=y
# CONFIG_IRQSOFF_TRACER is not set
CONFIG_SCHED_TRACER=y
CONFIG_HWLAT_TRACER=y
# CONFIG_MMIOTRACE is not set
CONFIG_FTRACE_SYSCALLS=y
CONFIG_TRACER_SNAPSHOT=y
# CONFIG_TRACER_SNAPSHOT_PER_CPU_SWAP is not set
CONFIG_BRANCH_PROFILE_NONE=y
# CONFIG_PROFILE_ANNOTATED_BRANCHES is not set
CONFIG_BLK_DEV_IO_TRACE=y
CONFIG_KPROBE_EVENTS=y
# CONFIG_KPROBE_EVENTS_ON_NOTRACE is not set
CONFIG_UPROBE_EVENTS=y
CONFIG_BPF_EVENTS=y
CONFIG_DYNAMIC_EVENTS=y
CONFIG_PROBE_EVENTS=y
# CONFIG_BPF_KPROBE_OVERRIDE is not set
CONFIG_FTRACE_MCOUNT_RECORD=y
CONFIG_FTRACE_MCOUNT_USE_CC=y
CONFIG_TRACING_MAP=y
CONFIG_SYNTH_EVENTS=y
CONFIG_HIST_TRIGGERS=y
# CONFIG_TRACE_EVENT_INJECT is not set
# CONFIG_TRACEPOINT_BENCHMARK is not set
CONFIG_RING_BUFFER_BENCHMARK=m
# CONFIG_TRACE_EVAL_MAP_FILE is not set
# CONFIG_FTRACE_RECORD_RECURSION is not set
# CONFIG_FTRACE_STARTUP_TEST is not set
# CONFIG_RING_BUFFER_STARTUP_TEST is not set
# CONFIG_RING_BUFFER_VALIDATE_TIME_DELTAS is not set
# CONFIG_PREEMPTIRQ_DELAY_TEST is not set
# CONFIG_SYNTH_EVENT_GEN_TEST is not set
# CONFIG_KPROBE_EVENT_GEN_TEST is not set
# CONFIG_HIST_TRIGGERS_DEBUG is not set
CONFIG_PROVIDE_OHCI1394_DMA_INIT=y
# CONFIG_SAMPLES is not set
CONFIG_ARCH_HAS_DEVMEM_IS_ALLOWED=y
CONFIG_STRICT_DEVMEM=y
# CONFIG_IO_STRICT_DEVMEM is not set

#
# x86 Debugging
#
CONFIG_TRACE_IRQFLAGS_SUPPORT=y
CONFIG_TRACE_IRQFLAGS_NMI_SUPPORT=y
CONFIG_EARLY_PRINTK_USB=y
CONFIG_X86_VERBOSE_BOOTUP=y
CONFIG_EARLY_PRINTK=y
CONFIG_EARLY_PRINTK_DBGP=y
CONFIG_EARLY_PRINTK_USB_XDBC=y
# CONFIG_EFI_PGT_DUMP is not set
# CONFIG_DEBUG_TLBFLUSH is not set
CONFIG_HAVE_MMIOTRACE_SUPPORT=y
CONFIG_X86_DECODER_SELFTEST=y
CONFIG_IO_DELAY_0X80=y
# CONFIG_IO_DELAY_0XED is not set
# CONFIG_IO_DELAY_UDELAY is not set
# CONFIG_IO_DELAY_NONE is not set
CONFIG_DEBUG_BOOT_PARAMS=y
# CONFIG_CPA_DEBUG is not set
# CONFIG_DEBUG_ENTRY is not set
# CONFIG_DEBUG_NMI_SELFTEST is not set
# CONFIG_X86_DEBUG_FPU is not set
# CONFIG_PUNIT_ATOM_DEBUG is not set
CONFIG_UNWINDER_ORC=y
# CONFIG_UNWINDER_FRAME_POINTER is not set
# end of x86 Debugging

#
# Kernel Testing and Coverage
#
CONFIG_KUNIT=y
# CONFIG_KUNIT_DEBUGFS is not set
CONFIG_KUNIT_TEST=m
CONFIG_KUNIT_EXAMPLE_TEST=m
# CONFIG_KUNIT_ALL_TESTS is not set
# CONFIG_NOTIFIER_ERROR_INJECTION is not set
CONFIG_FUNCTION_ERROR_INJECTION=y
CONFIG_FAULT_INJECTION=y
# CONFIG_FAILSLAB is not set
# CONFIG_FAIL_PAGE_ALLOC is not set
# CONFIG_FAULT_INJECTION_USERCOPY is not set
CONFIG_FAIL_MAKE_REQUEST=y
# CONFIG_FAIL_IO_TIMEOUT is not set
# CONFIG_FAIL_FUTEX is not set
CONFIG_FAULT_INJECTION_DEBUG_FS=y
# CONFIG_FAIL_FUNCTION is not set
# CONFIG_FAIL_MMC_REQUEST is not set
CONFIG_ARCH_HAS_KCOV=y
CONFIG_CC_HAS_SANCOV_TRACE_PC=y
# CONFIG_KCOV is not set
CONFIG_RUNTIME_TESTING_MENU=y
# CONFIG_LKDTM is not set
# CONFIG_TEST_LIST_SORT is not set
# CONFIG_TEST_MIN_HEAP is not set
# CONFIG_TEST_SORT is not set
# CONFIG_KPROBES_SANITY_TEST is not set
# CONFIG_BACKTRACE_SELF_TEST is not set
# CONFIG_RBTREE_TEST is not set
# CONFIG_REED_SOLOMON_TEST is not set
# CONFIG_INTERVAL_TREE_TEST is not set
# CONFIG_PERCPU_TEST is not set
CONFIG_ATOMIC64_SELFTEST=y
# CONFIG_ASYNC_RAID6_TEST is not set
# CONFIG_TEST_HEXDUMP is not set
# CONFIG_TEST_STRING_HELPERS is not set
# CONFIG_TEST_STRSCPY is not set
# CONFIG_TEST_KSTRTOX is not set
# CONFIG_TEST_PRINTF is not set
# CONFIG_TEST_BITMAP is not set
# CONFIG_TEST_UUID is not set
# CONFIG_TEST_XARRAY is not set
# CONFIG_TEST_OVERFLOW is not set
# CONFIG_TEST_RHASHTABLE is not set
# CONFIG_TEST_HASH is not set
# CONFIG_TEST_IDA is not set
# CONFIG_TEST_LKM is not set
# CONFIG_TEST_BITOPS is not set
# CONFIG_TEST_VMALLOC is not set
# CONFIG_TEST_USER_COPY is not set
CONFIG_TEST_BPF=m
# CONFIG_TEST_BLACKHOLE_DEV is not set
# CONFIG_FIND_BIT_BENCHMARK is not set
# CONFIG_TEST_FIRMWARE is not set
# CONFIG_TEST_SYSCTL is not set
# CONFIG_BITFIELD_KUNIT is not set
# CONFIG_RESOURCE_KUNIT_TEST is not set
CONFIG_SYSCTL_KUNIT_TEST=m
CONFIG_LIST_KUNIT_TEST=m
# CONFIG_LINEAR_RANGES_TEST is not set
# CONFIG_CMDLINE_KUNIT_TEST is not set
# CONFIG_BITS_TEST is not set
# CONFIG_TEST_UDELAY is not set
# CONFIG_TEST_STATIC_KEYS is not set
# CONFIG_TEST_KMOD is not set
# CONFIG_TEST_MEMCAT_P is not set
# CONFIG_TEST_LIVEPATCH is not set
# CONFIG_TEST_STACKINIT is not set
# CONFIG_TEST_MEMINIT is not set
#
CONFIG_TEST_HMM is not set # CONFIG_TEST_FREE_PAGES is not set # CONFIG_TEST_FPU is not set # CONFIG_MEMTEST is not set # CONFIG_HYPERV_TESTING is not set # end of Kernel Testing and Coverage # end of Kernel hacking [-- Attachment #3: job-script --] [-- Type: text/plain, Size: 7854 bytes --] #!/bin/sh export_top_env() { export suite='will-it-scale' export testcase='will-it-scale' export category='benchmark' export nr_task=16 export job_origin='will-it-scale-part1.yaml' export queue_cmdline_keys='branch commit queue_at_least_once' export queue='validate' export testbox='lkp-csl-2ap2' export tbox_group='lkp-csl-2ap2' export kconfig='x86_64-rhel-8.3' export submit_id='605da8b32dfd4fd161472adc' export job_file='/lkp/jobs/scheduled/lkp-csl-2ap2/will-it-scale-performance-process-16-eventfd1-ucode=0x5003006-monitor=70d6d718-debian-10.4-x86_64-20200603.cgz-43b2a76b1a5abcc98-20210326-53601-iellhf-4.yaml' export id='6759afa599b90f3acded768e1afcb27dc10200d6' export queuer_version='/lkp-src' export model='Cascade Lake' export nr_node=4 export nr_cpu=192 export memory='192G' export nr_ssd_partitions=1 export hdd_partitions= export ssd_partitions='/dev/disk/by-id/nvme-INTEL_SSDPE2KX010T8_BTLJ910200B21P0FGN-part4' export rootfs_partition='LABEL=LKP-ROOTFS' export kernel_cmdline_hw='acpi_rsdp=0x67f44014' export brand='Intel(R) Xeon(R) Platinum 9242 CPU @ 2.30GHz' export commit='43b2a76b1a5abcc9833b463bef137d35cbb85cdd' export need_kconfig_hw='CONFIG_IGB=y CONFIG_BLK_DEV_NVME' export ucode='0x5003006' export enqueue_time='2021-03-26 17:26:11 +0800' export _id='605da8b72dfd4fd161472add' export _rt='/result/will-it-scale/performance-process-16-eventfd1-ucode=0x5003006-monitor=70d6d718/lkp-csl-2ap2/debian-10.4-x86_64-20200603.cgz/x86_64-rhel-8.3/gcc-9/43b2a76b1a5abcc9833b463bef137d35cbb85cdd' export user='lkp' export compiler='gcc-9' export LKP_SERVER='internal-lkp-server' export head_commit='3a0766dd8f176ac602c8e48bebb4874537ffc570' export 
base_commit='0d02ec6b3136c73c09e7859f0d0e4e2c4c07b49b' export branch='linux-review/Jens-Axboe/Don-t-show-PF_IO_WORKER-in-proc-pid-task/20210326-004554' export rootfs='debian-10.4-x86_64-20200603.cgz' export monitor_sha='70d6d718' export result_root='/result/will-it-scale/performance-process-16-eventfd1-ucode=0x5003006-monitor=70d6d718/lkp-csl-2ap2/debian-10.4-x86_64-20200603.cgz/x86_64-rhel-8.3/gcc-9/43b2a76b1a5abcc9833b463bef137d35cbb85cdd/3' export scheduler_version='/lkp/lkp/.src-20210326-165251' export arch='x86_64' export max_uptime=2100 export initrd='/osimage/debian/debian-10.4-x86_64-20200603.cgz' export bootloader_append='root=/dev/ram0 user=lkp job=/lkp/jobs/scheduled/lkp-csl-2ap2/will-it-scale-performance-process-16-eventfd1-ucode=0x5003006-monitor=70d6d718-debian-10.4-x86_64-20200603.cgz-43b2a76b1a5abcc98-20210326-53601-iellhf-4.yaml ARCH=x86_64 kconfig=x86_64-rhel-8.3 branch=linux-review/Jens-Axboe/Don-t-show-PF_IO_WORKER-in-proc-pid-task/20210326-004554 commit=43b2a76b1a5abcc9833b463bef137d35cbb85cdd BOOT_IMAGE=/pkg/linux/x86_64-rhel-8.3/gcc-9/43b2a76b1a5abcc9833b463bef137d35cbb85cdd/vmlinuz-5.12.0-rc2-00298-g43b2a76b1a5a acpi_rsdp=0x67f44014 max_uptime=2100 RESULT_ROOT=/result/will-it-scale/performance-process-16-eventfd1-ucode=0x5003006-monitor=70d6d718/lkp-csl-2ap2/debian-10.4-x86_64-20200603.cgz/x86_64-rhel-8.3/gcc-9/43b2a76b1a5abcc9833b463bef137d35cbb85cdd/3 LKP_SERVER=internal-lkp-server nokaslr selinux=0 debug apic=debug sysrq_always_enabled rcupdate.rcu_cpu_stall_timeout=100 net.ifnames=0 printk.devkmsg=on panic=-1 softlockup_panic=1 nmi_watchdog=panic oops=panic load_ramdisk=2 prompt_ramdisk=0 drbd.minor_count=8 systemd.log_level=err ignore_loglevel console=tty0 earlyprintk=ttyS0,115200 console=ttyS0,115200 vga=normal rw' export modules_initrd='/pkg/linux/x86_64-rhel-8.3/gcc-9/43b2a76b1a5abcc9833b463bef137d35cbb85cdd/modules.cgz' export 
bm_initrd='/osimage/deps/debian-10.4-x86_64-20200603.cgz/run-ipconfig_20200608.cgz,/osimage/deps/debian-10.4-x86_64-20200603.cgz/lkp_20201211.cgz,/osimage/deps/debian-10.4-x86_64-20200603.cgz/rsync-rootfs_20200608.cgz,/osimage/deps/debian-10.4-x86_64-20200603.cgz/will-it-scale_20210108.cgz,/osimage/pkg/debian-10.4-x86_64-20200603.cgz/will-it-scale-x86_64-6b6f1f6-1_20210108.cgz,/osimage/deps/debian-10.4-x86_64-20200603.cgz/mpstat_20200714.cgz,/osimage/deps/debian-10.4-x86_64-20200603.cgz/perf_20201126.cgz,/osimage/pkg/debian-10.4-x86_64-20200603.cgz/perf-x86_64-e71ba9452f0b-1_20210106.cgz,/osimage/pkg/debian-10.4-x86_64-20200603.cgz/sar-x86_64-34c92ae-1_20200702.cgz,/osimage/deps/debian-10.4-x86_64-20200603.cgz/hw_20200715.cgz' export ucode_initrd='/osimage/ucode/intel-ucode-20210222.cgz' export lkp_initrd='/osimage/user/lkp/lkp-x86_64.cgz' export site='inn' export LKP_CGI_PORT=80 export LKP_CIFS_PORT=139 export last_kernel='5.12.0-rc4' export repeat_to=6 export queue_at_least_once=1 export kernel='/pkg/linux/x86_64-rhel-8.3/gcc-9/43b2a76b1a5abcc9833b463bef137d35cbb85cdd/vmlinuz-5.12.0-rc2-00298-g43b2a76b1a5a' export dequeue_time='2021-03-26 17:27:47 +0800' export job_initrd='/lkp/jobs/scheduled/lkp-csl-2ap2/will-it-scale-performance-process-16-eventfd1-ucode=0x5003006-monitor=70d6d718-debian-10.4-x86_64-20200603.cgz-43b2a76b1a5abcc98-20210326-53601-iellhf-4.cgz' [ -n "$LKP_SRC" ] || export LKP_SRC=/lkp/${user:-lkp}/src } run_job() { echo $$ > $TMP/run-job.pid . $LKP_SRC/lib/http.sh . $LKP_SRC/lib/job.sh . 
$LKP_SRC/lib/env.sh export_top_env run_setup $LKP_SRC/setup/cpufreq_governor 'performance' run_monitor $LKP_SRC/monitors/wrapper kmsg run_monitor $LKP_SRC/monitors/no-stdout/wrapper boot-time run_monitor $LKP_SRC/monitors/wrapper uptime run_monitor $LKP_SRC/monitors/wrapper iostat run_monitor $LKP_SRC/monitors/wrapper heartbeat run_monitor $LKP_SRC/monitors/wrapper vmstat run_monitor $LKP_SRC/monitors/wrapper numa-numastat run_monitor $LKP_SRC/monitors/wrapper numa-vmstat run_monitor $LKP_SRC/monitors/wrapper numa-meminfo run_monitor $LKP_SRC/monitors/wrapper proc-vmstat run_monitor $LKP_SRC/monitors/wrapper proc-stat run_monitor $LKP_SRC/monitors/wrapper meminfo run_monitor $LKP_SRC/monitors/wrapper slabinfo run_monitor $LKP_SRC/monitors/wrapper interrupts run_monitor $LKP_SRC/monitors/wrapper lock_stat run_monitor lite_mode=1 $LKP_SRC/monitors/wrapper perf-sched run_monitor $LKP_SRC/monitors/wrapper softirqs run_monitor $LKP_SRC/monitors/one-shot/wrapper bdi_dev_mapping run_monitor $LKP_SRC/monitors/wrapper diskstats run_monitor $LKP_SRC/monitors/wrapper nfsstat run_monitor $LKP_SRC/monitors/wrapper cpuidle run_monitor $LKP_SRC/monitors/wrapper cpufreq-stats run_monitor $LKP_SRC/monitors/wrapper sched_debug run_monitor $LKP_SRC/monitors/wrapper perf-stat run_monitor $LKP_SRC/monitors/wrapper mpstat run_monitor $LKP_SRC/monitors/no-stdout/wrapper perf-profile run_monitor $LKP_SRC/monitors/wrapper oom-killer run_monitor $LKP_SRC/monitors/plain/watchdog run_test mode='process' test='eventfd1' $LKP_SRC/tests/wrapper will-it-scale } extract_stats() { export stats_part_begin= export stats_part_end= env mode='process' test='eventfd1' $LKP_SRC/stats/wrapper will-it-scale $LKP_SRC/stats/wrapper kmsg $LKP_SRC/stats/wrapper boot-time $LKP_SRC/stats/wrapper uptime $LKP_SRC/stats/wrapper iostat $LKP_SRC/stats/wrapper vmstat $LKP_SRC/stats/wrapper numa-numastat $LKP_SRC/stats/wrapper numa-vmstat $LKP_SRC/stats/wrapper numa-meminfo $LKP_SRC/stats/wrapper proc-vmstat 
$LKP_SRC/stats/wrapper meminfo $LKP_SRC/stats/wrapper slabinfo $LKP_SRC/stats/wrapper interrupts $LKP_SRC/stats/wrapper lock_stat env lite_mode=1 $LKP_SRC/stats/wrapper perf-sched $LKP_SRC/stats/wrapper softirqs $LKP_SRC/stats/wrapper diskstats $LKP_SRC/stats/wrapper nfsstat $LKP_SRC/stats/wrapper cpuidle $LKP_SRC/stats/wrapper sched_debug $LKP_SRC/stats/wrapper perf-stat $LKP_SRC/stats/wrapper mpstat $LKP_SRC/stats/wrapper perf-profile $LKP_SRC/stats/wrapper time will-it-scale.time $LKP_SRC/stats/wrapper dmesg $LKP_SRC/stats/wrapper kmsg $LKP_SRC/stats/wrapper last_state $LKP_SRC/stats/wrapper stderr $LKP_SRC/stats/wrapper time } "$@" [-- Attachment #4: job.yaml --] [-- Type: text/plain, Size: 5239 bytes --] --- #! jobs/will-it-scale-part1.yaml suite: will-it-scale testcase: will-it-scale category: benchmark nr_task: 16 will-it-scale: mode: process test: eventfd1 job_origin: will-it-scale-part1.yaml #! queue options queue_cmdline_keys: - branch - commit - queue_at_least_once queue: bisect testbox: lkp-csl-2ap2 tbox_group: lkp-csl-2ap2 kconfig: x86_64-rhel-8.3 submit_id: 605d82422dfd4fc3edae6490 job_file: "/lkp/jobs/scheduled/lkp-csl-2ap2/will-it-scale-performance-process-16-eventfd1-ucode=0x5003006-monitor=70d6d718-debian-10.4-x86_64-20200603.cgz-43b2a76b1a5abcc98-20210326-50157-1p2y29d-2.yaml" id: 9571265938dc509c6313b0cdbd8b098695048604 queuer_version: "/lkp-src" #! hosts/lkp-csl-2ap2 model: Cascade Lake nr_node: 4 nr_cpu: 192 memory: 192G nr_ssd_partitions: 1 hdd_partitions: ssd_partitions: "/dev/disk/by-id/nvme-INTEL_SSDPE2KX010T8_BTLJ910200B21P0FGN-part4" rootfs_partition: LABEL=LKP-ROOTFS kernel_cmdline_hw: acpi_rsdp=0x67f44014 brand: Intel(R) Xeon(R) Platinum 9242 CPU @ 2.30GHz #! 
include/category/benchmark kmsg: boot-time: uptime: iostat: heartbeat: vmstat: numa-numastat: numa-vmstat: numa-meminfo: proc-vmstat: proc-stat: meminfo: slabinfo: interrupts: lock_stat: perf-sched: lite_mode: 1 softirqs: bdi_dev_mapping: diskstats: nfsstat: cpuidle: cpufreq-stats: sched_debug: perf-stat: mpstat: perf-profile: #! include/category/ALL cpufreq_governor: performance #! include/queue/cyclic commit: 43b2a76b1a5abcc9833b463bef137d35cbb85cdd #! include/testbox/lkp-csl-2ap2 need_kconfig_hw: - CONFIG_IGB=y - CONFIG_BLK_DEV_NVME ucode: '0x5003006' enqueue_time: 2021-03-26 14:42:11.116571201 +08:00 _id: 605d82462dfd4fc3edae6492 _rt: "/result/will-it-scale/performance-process-16-eventfd1-ucode=0x5003006-monitor=70d6d718/lkp-csl-2ap2/debian-10.4-x86_64-20200603.cgz/x86_64-rhel-8.3/gcc-9/43b2a76b1a5abcc9833b463bef137d35cbb85cdd" #! schedule options user: lkp compiler: gcc-9 LKP_SERVER: internal-lkp-server head_commit: 3a0766dd8f176ac602c8e48bebb4874537ffc570 base_commit: 0d02ec6b3136c73c09e7859f0d0e4e2c4c07b49b branch: linux-devel/devel-hourly-20210326-010219 rootfs: debian-10.4-x86_64-20200603.cgz monitor_sha: 70d6d718 result_root: "/result/will-it-scale/performance-process-16-eventfd1-ucode=0x5003006-monitor=70d6d718/lkp-csl-2ap2/debian-10.4-x86_64-20200603.cgz/x86_64-rhel-8.3/gcc-9/43b2a76b1a5abcc9833b463bef137d35cbb85cdd/0" scheduler_version: "/lkp/lkp/.src-20210325-154047" arch: x86_64 max_uptime: 2100 initrd: "/osimage/debian/debian-10.4-x86_64-20200603.cgz" bootloader_append: - root=/dev/ram0 - user=lkp - job=/lkp/jobs/scheduled/lkp-csl-2ap2/will-it-scale-performance-process-16-eventfd1-ucode=0x5003006-monitor=70d6d718-debian-10.4-x86_64-20200603.cgz-43b2a76b1a5abcc98-20210326-50157-1p2y29d-2.yaml - ARCH=x86_64 - kconfig=x86_64-rhel-8.3 - branch=linux-devel/devel-hourly-20210326-010219 - commit=43b2a76b1a5abcc9833b463bef137d35cbb85cdd - 
BOOT_IMAGE=/pkg/linux/x86_64-rhel-8.3/gcc-9/43b2a76b1a5abcc9833b463bef137d35cbb85cdd/vmlinuz-5.12.0-rc2-00298-g43b2a76b1a5a - acpi_rsdp=0x67f44014 - max_uptime=2100 - RESULT_ROOT=/result/will-it-scale/performance-process-16-eventfd1-ucode=0x5003006-monitor=70d6d718/lkp-csl-2ap2/debian-10.4-x86_64-20200603.cgz/x86_64-rhel-8.3/gcc-9/43b2a76b1a5abcc9833b463bef137d35cbb85cdd/0 - LKP_SERVER=internal-lkp-server - nokaslr - selinux=0 - debug - apic=debug - sysrq_always_enabled - rcupdate.rcu_cpu_stall_timeout=100 - net.ifnames=0 - printk.devkmsg=on - panic=-1 - softlockup_panic=1 - nmi_watchdog=panic - oops=panic - load_ramdisk=2 - prompt_ramdisk=0 - drbd.minor_count=8 - systemd.log_level=err - ignore_loglevel - console=tty0 - earlyprintk=ttyS0,115200 - console=ttyS0,115200 - vga=normal - rw modules_initrd: "/pkg/linux/x86_64-rhel-8.3/gcc-9/43b2a76b1a5abcc9833b463bef137d35cbb85cdd/modules.cgz" bm_initrd: "/osimage/deps/debian-10.4-x86_64-20200603.cgz/run-ipconfig_20200608.cgz,/osimage/deps/debian-10.4-x86_64-20200603.cgz/lkp_20201211.cgz,/osimage/deps/debian-10.4-x86_64-20200603.cgz/rsync-rootfs_20200608.cgz,/osimage/deps/debian-10.4-x86_64-20200603.cgz/will-it-scale_20210108.cgz,/osimage/pkg/debian-10.4-x86_64-20200603.cgz/will-it-scale-x86_64-6b6f1f6-1_20210108.cgz,/osimage/deps/debian-10.4-x86_64-20200603.cgz/mpstat_20200714.cgz,/osimage/deps/debian-10.4-x86_64-20200603.cgz/perf_20201126.cgz,/osimage/pkg/debian-10.4-x86_64-20200603.cgz/perf-x86_64-e71ba9452f0b-1_20210106.cgz,/osimage/pkg/debian-10.4-x86_64-20200603.cgz/sar-x86_64-34c92ae-1_20200702.cgz,/osimage/deps/debian-10.4-x86_64-20200603.cgz/hw_20200715.cgz" ucode_initrd: "/osimage/ucode/intel-ucode-20210222.cgz" lkp_initrd: "/osimage/user/lkp/lkp-x86_64.cgz" site: inn #! /lkp/lkp/.src-20210325-154047/include/site/inn LKP_CGI_PORT: 80 LKP_CIFS_PORT: 139 oom-killer: watchdog: #! runtime status last_kernel: 5.12.0-rc4-04823-g7114033ed278 repeat_to: 3 #! 
user overrides
queue_at_least_once: 0
kernel: "/pkg/linux/x86_64-rhel-8.3/gcc-9/43b2a76b1a5abcc9833b463bef137d35cbb85cdd/vmlinuz-5.12.0-rc2-00298-g43b2a76b1a5a"
dequeue_time: 2021-03-26 14:43:36.638385139 +08:00
job_state: finished
loadavg: 13.46 11.04 5.06 3/1357 13045
start_time: '1616741078'
end_time: '1616741379'
version: "/lkp/lkp/.src-20210325-154117:8248bbcb:8c5341e8d"

[-- Attachment #5: reproduce --]
[-- Type: text/plain, Size: 339 bytes --]

for cpu_dir in /sys/devices/system/cpu/cpu[0-9]*
do
	online_file="$cpu_dir"/online
	[ -f "$online_file" ] && [ "$(cat "$online_file")" -eq 0 ] && continue

	file="$cpu_dir"/cpufreq/scaling_governor
	[ -f "$file" ] && echo "performance" > "$file"
done

"/lkp/benchmarks/python3/bin/python3" "./runtest.py" "eventfd1" "295" "process" "16"

^ permalink raw reply	[flat|nested] 31+ messages in thread
* Re: [PATCH 0/2] Don't show PF_IO_WORKER in /proc/<pid>/task/
  2021-03-25 16:43 [PATCH 0/2] Don't show PF_IO_WORKER in /proc/<pid>/task/ Jens Axboe
  2021-03-25 16:43 ` [PATCH 1/2] kernel: don't include PF_IO_WORKERs as part of same_thread_group() Jens Axboe
  2021-03-25 16:43 ` [PATCH 2/2] proc: don't show PF_IO_WORKER threads as threads in /proc/<pid>/task/ Jens Axboe
@ 2021-03-25 19:33 ` Eric W. Biederman
  2021-03-25 19:38   ` Linus Torvalds
  2021-03-25 19:40   ` Jens Axboe
  2 siblings, 2 replies; 31+ messages in thread
From: Eric W. Biederman @ 2021-03-25 19:33 UTC (permalink / raw)
  To: Jens Axboe; +Cc: io-uring, torvalds, linux-kernel, oleg, metze

Jens Axboe <axboe@kernel.dk> writes:

> Hi,
>
> Stefan reports that attaching to a task with io_uring will leave gdb
> very confused and just repeatedly attempting to attach to the IO threads,
> even though it receives an -EPERM every time. This patchset proposes to
> skip PF_IO_WORKER threads as same_thread_group(), except for accounting
> purposes which we still desire.
>
> We also skip listing the IO threads in /proc/<pid>/task/ so that gdb
> doesn't think it should stop and attach to them. This makes us consistent
> with earlier kernels, where these async threads were not related to the
> ring owning task, and hence gdb (and others) ignored them anyway.
>
> Seems to me that this is the right approach, but open to comments on if
> others agree with this. Oleg, I did see your messages as well on SIGSTOP,
> and as was discussed with Eric as well, this is something we most
> certainly can revisit. I do think that the visibility of these threads
> is a separate issue. Even with SIGSTOP implemented (which I did try as
> well), we're never going to allow ptrace attach and hence gdb would still
> be broken. Hence I'd rather treat them as separate issues to attack.

A quick skim shows that these threads are not showing up anywhere in
proc, which appears to be a problem, as it hides them from top.
Sysadmins need the ability to dig into a system and find out where all
their cpu usage or io's have gone when there is a problem.  In general I
think this argues that these threads should show up as threads of the
process, so I am not even certain this is the right fix to deal with gdb.

Eric

^ permalink raw reply	[flat|nested] 31+ messages in thread
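Eric's point is concrete: `top`, `ps -eLf`, and most ad-hoc accounting all work by walking `/proc/<pid>/task/`, so a thread hidden from that directory silently disappears from per-thread CPU accounting. A minimal sketch of such a walk — the `list_threads` helper name is made up for illustration, but the procfs layout it reads is standard:

```shell
# Walk /proc/<pid>/task and report per-thread CPU ticks -- the same data
# source top and ps use. A thread hidden from task/ never shows up here.
list_threads() {
    pid=${1:-$$}
    for tdir in /proc/"$pid"/task/*/; do
        tid=${tdir%/}
        tid=${tid##*/}
        comm=$(cat "$tdir/comm")
        # Fields 14 and 15 of /proc/<tid>/stat are utime and stime, in
        # clock ticks. (Assumes comm contains no spaces, which would
        # otherwise shift the field positions.)
        set -- $(cat "$tdir/stat")
        printf 'tid=%s comm=%s utime=%s stime=%s\n' "$tid" "$comm" "${14}" "${15}"
    done
}

list_threads $$
```

Run against an io_uring-using process, a kernel with this patchset applied would list only the userspace threads here, while the unpatched 5.12-rc kernels also show the IO workers.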
* Re: [PATCH 0/2] Don't show PF_IO_WORKER in /proc/<pid>/task/
  2021-03-25 19:33 ` [PATCH 0/2] Don't show PF_IO_WORKER in /proc/<pid>/task/ Eric W. Biederman
@ 2021-03-25 19:38   ` Linus Torvalds
  2021-03-25 19:40     ` Jens Axboe
  2021-03-25 19:42     ` Linus Torvalds
  2021-03-25 19:40   ` Jens Axboe
  1 sibling, 2 replies; 31+ messages in thread
From: Linus Torvalds @ 2021-03-25 19:38 UTC (permalink / raw)
  To: Eric W. Biederman
  Cc: Jens Axboe, io-uring, Linux Kernel Mailing List, Oleg Nesterov,
	Stefan Metzmacher

On Thu, Mar 25, 2021 at 12:34 PM Eric W. Biederman
<ebiederm@xmission.com> wrote:
>
> A quick skim shows that these threads are not showing up anywhere in
> proc which appears to be a problem, as it hides them from top.
>
> Sysadmins need the ability to dig into a system and find out where all
> their cpu usage or io's have gone when there is a problem.  In general I
> think this argues that these threads should show up as threads of the
> process so I am not even certain this is the right fix to deal with gdb.

Yeah, I do think that hiding them is the wrong model, because it also
hides them from "ps" etc, which is very wrong.

I don't know what the gdb logic is, but maybe there's some other
option that makes gdb not react to them?

            Linus

^ permalink raw reply	[flat|nested] 31+ messages in thread
* Re: [PATCH 0/2] Don't show PF_IO_WORKER in /proc/<pid>/task/
  2021-03-25 19:38 ` Linus Torvalds
@ 2021-03-25 19:40   ` Jens Axboe
  2021-03-25 19:42   ` Linus Torvalds
  1 sibling, 0 replies; 31+ messages in thread
From: Jens Axboe @ 2021-03-25 19:40 UTC (permalink / raw)
  To: Linus Torvalds, Eric W. Biederman
  Cc: io-uring, Linux Kernel Mailing List, Oleg Nesterov,
	Stefan Metzmacher

On 3/25/21 1:38 PM, Linus Torvalds wrote:
> On Thu, Mar 25, 2021 at 12:34 PM Eric W. Biederman
> <ebiederm@xmission.com> wrote:
>>
>> A quick skim shows that these threads are not showing up anywhere in
>> proc which appears to be a problem, as it hides them from top.
>>
>> Sysadmins need the ability to dig into a system and find out where all
>> their cpu usage or io's have gone when there is a problem.  In general I
>> think this argues that these threads should show up as threads of the
>> process so I am not even certain this is the right fix to deal with gdb.
>
> Yeah, I do think that hiding them is the wrong model, because it also
> hides them from "ps" etc, which is very wrong.

Totally agree.

> I don't know what the gdb logic is, but maybe there's some other
> option that makes gdb not react to them?

Guess it's time to dig out the gdb source... I'll take a look.

-- 
Jens Axboe

^ permalink raw reply	[flat|nested] 31+ messages in thread
* Re: [PATCH 0/2] Don't show PF_IO_WORKER in /proc/<pid>/task/
  2021-03-25 19:38 ` Linus Torvalds
  2021-03-25 19:40   ` Jens Axboe
@ 2021-03-25 19:42   ` Linus Torvalds
  2021-03-25 19:46     ` Jens Axboe
  2021-03-25 20:12     ` Linus Torvalds
  1 sibling, 2 replies; 31+ messages in thread
From: Linus Torvalds @ 2021-03-25 19:42 UTC (permalink / raw)
  To: Eric W. Biederman
  Cc: Jens Axboe, io-uring, Linux Kernel Mailing List, Oleg Nesterov,
	Stefan Metzmacher

On Thu, Mar 25, 2021 at 12:38 PM Linus Torvalds
<torvalds@linux-foundation.org> wrote:
>
> I don't know what the gdb logic is, but maybe there's some other
> option that makes gdb not react to them?

.. maybe we could have a different name for them under the task/
subdirectory, for example (not just the pid)?  Although that probably
messes up 'ps' too..

               Linus

^ permalink raw reply	[flat|nested] 31+ messages in thread
* Re: [PATCH 0/2] Don't show PF_IO_WORKER in /proc/<pid>/task/
  2021-03-25 19:42 ` Linus Torvalds
@ 2021-03-25 19:46   ` Jens Axboe
  2021-03-25 20:21     ` Eric W. Biederman
  2021-03-25 20:12   ` Linus Torvalds
  1 sibling, 1 reply; 31+ messages in thread
From: Jens Axboe @ 2021-03-25 19:46 UTC (permalink / raw)
  To: Linus Torvalds, Eric W. Biederman
  Cc: io-uring, Linux Kernel Mailing List, Oleg Nesterov,
	Stefan Metzmacher

On 3/25/21 1:42 PM, Linus Torvalds wrote:
> On Thu, Mar 25, 2021 at 12:38 PM Linus Torvalds
> <torvalds@linux-foundation.org> wrote:
>>
>> I don't know what the gdb logic is, but maybe there's some other
>> option that makes gdb not react to them?
>
> .. maybe we could have a different name for them under the task/
> subdirectory, for example (not just the pid)?  Although that probably
> messes up 'ps' too..

Heh, I can try, but my guess is that it would mess up _something_, if
not ps/top.

-- 
Jens Axboe

^ permalink raw reply	[flat|nested] 31+ messages in thread
* Re: [PATCH 0/2] Don't show PF_IO_WORKER in /proc/<pid>/task/
  2021-03-25 19:46 ` Jens Axboe
@ 2021-03-25 20:21   ` Eric W. Biederman
  2021-03-25 20:40     ` Oleg Nesterov
  2021-03-25 20:42     ` Jens Axboe
  0 siblings, 2 replies; 31+ messages in thread
From: Eric W. Biederman @ 2021-03-25 20:21 UTC (permalink / raw)
  To: Jens Axboe
  Cc: Linus Torvalds, io-uring, Linux Kernel Mailing List,
	Oleg Nesterov, Stefan Metzmacher

Jens Axboe <axboe@kernel.dk> writes:

> On 3/25/21 1:42 PM, Linus Torvalds wrote:
>> On Thu, Mar 25, 2021 at 12:38 PM Linus Torvalds
>> <torvalds@linux-foundation.org> wrote:
>>>
>>> I don't know what the gdb logic is, but maybe there's some other
>>> option that makes gdb not react to them?
>>
>> .. maybe we could have a different name for them under the task/
>> subdirectory, for example (not just the pid)?  Although that probably
>> messes up 'ps' too..
>
> Heh, I can try, but my guess is that it would mess up _something_, if
> not ps/top.

Hmm.

So looking quickly, the flip side of the coin is that gdb (and other
debuggers) needs a way to know these threads are special, so it can know
not to attach.

I suspect getting -EPERM (or possibly a different error code) when
attempting attach is the right way to know that a thread is not
available to be debugged.

Eric

^ permalink raw reply	[flat|nested] 31+ messages in thread
* Re: [PATCH 0/2] Don't show PF_IO_WORKER in /proc/<pid>/task/
  2021-03-25 20:21 ` Eric W. Biederman
@ 2021-03-25 20:40   ` Oleg Nesterov
  2021-03-25 20:43     ` Jens Axboe
  2021-03-25 20:48     ` Eric W. Biederman
  1 sibling, 2 replies; 31+ messages in thread
From: Oleg Nesterov @ 2021-03-25 20:40 UTC (permalink / raw)
  To: Eric W. Biederman
  Cc: Jens Axboe, Linus Torvalds, io-uring, Linux Kernel Mailing List,
	Stefan Metzmacher

On 03/25, Eric W. Biederman wrote:
>
> So looking quickly the flip side of the coin is gdb (and other
> debuggers) needs a way to know these threads are special, so it can know
> not to attach.

may be,

> I suspect getting -EPERM (or possibly a different error code) when
> attempting attach is the right way to know that a thread is not
> available to be debugged.

may be.

But I don't think we can blame gdb. The kernel changed the rules, and this
broke gdb. IOW, I don't agree this is a gdb bug.

Oleg.

^ permalink raw reply	[flat|nested] 31+ messages in thread
* Re: [PATCH 0/2] Don't show PF_IO_WORKER in /proc/<pid>/task/
  2021-03-25 20:40 ` Oleg Nesterov
@ 2021-03-25 20:43   ` Jens Axboe
  0 siblings, 0 replies; 31+ messages in thread
From: Jens Axboe @ 2021-03-25 20:43 UTC (permalink / raw)
  To: Oleg Nesterov, Eric W. Biederman
  Cc: Linus Torvalds, io-uring, Linux Kernel Mailing List,
	Stefan Metzmacher

On 3/25/21 2:40 PM, Oleg Nesterov wrote:
> On 03/25, Eric W. Biederman wrote:
>>
>> So looking quickly the flip side of the coin is gdb (and other
>> debuggers) needs a way to know these threads are special, so it can know
>> not to attach.
>
> may be,
>
>> I suspect getting -EPERM (or possibly a different error code) when
>> attempting attach is the right way to know that a thread is not
>> available to be debugged.
>
> may be.
>
> But I don't think we can blame gdb. The kernel changed the rules, and this
> broke gdb. IOW, I don't agree this is a gdb bug.

Right, that's what I was getting at too - and it's likely not just gdb.
We have to ensure that we don't break this use case, which seems to
imply that we:

1) Just make it work, or
2) Make them hidden in such a way that gdb doesn't see them, but regular
   tooling does

#2 seems fraught with peril, and maybe not even possible.

-- 
Jens Axboe

^ permalink raw reply	[flat|nested] 31+ messages in thread
* Re: [PATCH 0/2] Don't show PF_IO_WORKER in /proc/<pid>/task/
  2021-03-25 20:40 ` Oleg Nesterov
  2021-03-25 20:43   ` Jens Axboe
@ 2021-03-25 20:48   ` Eric W. Biederman
  1 sibling, 0 replies; 31+ messages in thread
From: Eric W. Biederman @ 2021-03-25 20:48 UTC (permalink / raw)
  To: Oleg Nesterov
  Cc: Jens Axboe, Linus Torvalds, io-uring, Linux Kernel Mailing List,
	Stefan Metzmacher

Oleg Nesterov <oleg@redhat.com> writes:

> On 03/25, Eric W. Biederman wrote:
>>
>> So looking quickly the flip side of the coin is gdb (and other
>> debuggers) needs a way to know these threads are special, so it can know
>> not to attach.
>
> may be,
>
>> I suspect getting -EPERM (or possibly a different error code) when
>> attempting attach is the right way to know that a thread is not
>> available to be debugged.
>
> may be.
>
> But I don't think we can blame gdb. The kernel changed the rules, and this
> broke gdb. IOW, I don't agree this is a gdb bug.

My point would be that it is not strictly a regression either.  It is gdb
not handling new functionality.

If we can be backwards compatible and make ptrace_attach work, that is
preferable.  If we can't, saying the handful of ptrace-using applications
need an upgrade to support processes that use io_uring may be acceptable.

I don't see any easy-to-implement path that is guaranteed to work.

Eric

^ permalink raw reply	[flat|nested] 31+ messages in thread
* Re: [PATCH 0/2] Don't show PF_IO_WORKER in /proc/<pid>/task/
  2021-03-25 20:21 ` Eric W. Biederman
  2021-03-25 20:40   ` Oleg Nesterov
@ 2021-03-25 20:42   ` Jens Axboe
  1 sibling, 0 replies; 31+ messages in thread
From: Jens Axboe @ 2021-03-25 20:42 UTC (permalink / raw)
  To: Eric W. Biederman
  Cc: Linus Torvalds, io-uring, Linux Kernel Mailing List,
	Oleg Nesterov, Stefan Metzmacher

On 3/25/21 2:21 PM, Eric W. Biederman wrote:
> Jens Axboe <axboe@kernel.dk> writes:
>
>> On 3/25/21 1:42 PM, Linus Torvalds wrote:
>>> On Thu, Mar 25, 2021 at 12:38 PM Linus Torvalds
>>> <torvalds@linux-foundation.org> wrote:
>>>>
>>>> I don't know what the gdb logic is, but maybe there's some other
>>>> option that makes gdb not react to them?
>>>
>>> .. maybe we could have a different name for them under the task/
>>> subdirectory, for example (not just the pid)?  Although that probably
>>> messes up 'ps' too..
>>
>> Heh, I can try, but my guess is that it would mess up _something_, if
>> not ps/top.
>
> Hmm.
>
> So looking quickly the flip side of the coin is gdb (and other
> debuggers) needs a way to know these threads are special, so it can know
> not to attach.
>
> I suspect getting -EPERM (or possibly a different error code) when
> attempting attach is the right way to know that a thread is not
> available to be debugged.

But that's what's being returned right now, and gdb seemingly doesn't
really handle that. And even if it was just a gdb issue that could be
fixed (it definitely is), I'd still greatly prefer not having to do
that. It just takes too long for packages to get updated in distros,
and it'd be years until it got fixed widely.

Secondly, I'm even more worried about cases that we haven't seen yet.
I doubt that gdb is the only thing that'd fall over, not expecting
threads in there that it cannot attach to.

-- 
Jens Axboe

^ permalink raw reply	[flat|nested] 31+ messages in thread
* Re: [PATCH 0/2] Don't show PF_IO_WORKER in /proc/<pid>/task/
  2021-03-25 19:42 ` Linus Torvalds
  2021-03-25 19:46   ` Jens Axboe
@ 2021-03-25 20:12   ` Linus Torvalds
  2021-03-25 20:40     ` Jens Axboe
  ` (2 more replies)
  1 sibling, 3 replies; 31+ messages in thread
From: Linus Torvalds @ 2021-03-25 20:12 UTC (permalink / raw)
  To: Eric W. Biederman
  Cc: Jens Axboe, io-uring, Linux Kernel Mailing List, Oleg Nesterov,
	Stefan Metzmacher

On Thu, Mar 25, 2021 at 12:42 PM Linus Torvalds
<torvalds@linux-foundation.org> wrote:
>
> On Thu, Mar 25, 2021 at 12:38 PM Linus Torvalds
> <torvalds@linux-foundation.org> wrote:
>>
>> I don't know what the gdb logic is, but maybe there's some other
>> option that makes gdb not react to them?
>
> .. maybe we could have a different name for them under the task/
> subdirectory, for example (not just the pid)?  Although that probably
> messes up 'ps' too..

Actually, maybe the right model is to simply make all the io threads
take signals, and get rid of all the special cases.

Sure, the signals will never be delivered to user space, but if we

 - just made the thread loop do "get_signal()" when there are pending signals

 - allowed ptrace_attach on them

they'd look pretty much like regular threads that just never do the
user-space part of signal handling.

The whole "signals are very special for IO threads" thing has caused
so many problems, that maybe the solution is simply to _not_ make them
special?

            Linus

^ permalink raw reply	[flat|nested] 31+ messages in thread
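The worker loop Linus describes might look roughly like the following. This is kernel-style pseudocode only — the helper names `io_worker_should_exit()` and `io_worker_handle_work()` are hypothetical stand-ins, not the actual io-wq code — but it shows the shape: the IO thread dequeues pending signals like any other kernel thread with a signal context, instead of being special-cased.

```
/* Pseudocode sketch, not the real io_wq_worker(): the worker
 * participates in normal signal handling instead of being special. */
static int io_wq_worker(void *data)
{
	struct io_worker *worker = data;

	while (!io_worker_should_exit(worker)) {
		io_worker_handle_work(worker);

		if (signal_pending(current)) {
			struct ksignal ksig;

			/*
			 * There is no user space to deliver the signal to;
			 * SIGSTOP and ptrace stops are handled inside
			 * get_signal() itself. A fatal signal terminates
			 * the worker -- sketched here as breaking out of
			 * the loop.
			 */
			if (get_signal(&ksig))
				break;
		}
	}
	return 0;
}
```

With the loop doing `get_signal()` and ptrace attach allowed, gdb would see these as ordinary threads that simply never execute a userspace signal handler, which is exactly the "don't make them special" direction.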
* Re: [PATCH 0/2] Don't show PF_IO_WORKER in /proc/<pid>/task/ 2021-03-25 20:12 ` Linus Torvalds @ 2021-03-25 20:40 ` Jens Axboe 2021-03-25 21:44 ` Jens Axboe 2021-03-25 20:43 ` Eric W. Biederman 2021-03-25 20:44 ` Oleg Nesterov 2 siblings, 1 reply; 31+ messages in thread From: Jens Axboe @ 2021-03-25 20:40 UTC (permalink / raw) To: Linus Torvalds, Eric W. Biederman Cc: io-uring, Linux Kernel Mailing List, Oleg Nesterov, Stefan Metzmacher On 3/25/21 2:12 PM, Linus Torvalds wrote: > On Thu, Mar 25, 2021 at 12:42 PM Linus Torvalds > <torvalds@linux-foundation.org> wrote: >> >> On Thu, Mar 25, 2021 at 12:38 PM Linus Torvalds >> <torvalds@linux-foundation.org> wrote: >>> >>> I don't know what the gdb logic is, but maybe there's some other >>> option that makes gdb not react to them? >> >> .. maybe we could have a different name for them under the task/ >> subdirectory, for example (not just the pid)? Although that probably >> messes up 'ps' too.. > > Actually, maybe the right model is to simply make all the io threads > take signals, and get rid of all the special cases. > > Sure, the signals will never be delivered to user space, but if we > > - just made the thread loop do "get_signal()" when there are pending signals > > - allowed ptrace_attach on them > > they'd look pretty much like regular threads that just never do the > user-space part of signal handling. > > The whole "signals are very special for IO threads" thing has caused > so many problems, that maybe the solution is simply to _not_ make them > special? Just to wrap up the previous one, yes it broke all sorts of things to make the 'tid' directory different. They just end up being hidden anyway through that, for both ps and top. Yes, I do think that maybe it's better to just embrace the signals, and have everything just work by default. It's better than continually trying to make the threads special. I'll see if there are some demons lurking down that path.
-- Jens Axboe ^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: [PATCH 0/2] Don't show PF_IO_WORKER in /proc/<pid>/task/ 2021-03-25 20:40 ` Jens Axboe @ 2021-03-25 21:44 ` Jens Axboe 2021-03-25 21:57 ` Stefan Metzmacher 2021-03-25 22:37 ` Linus Torvalds 0 siblings, 2 replies; 31+ messages in thread From: Jens Axboe @ 2021-03-25 21:44 UTC (permalink / raw) To: Linus Torvalds, Eric W. Biederman Cc: io-uring, Linux Kernel Mailing List, Oleg Nesterov, Stefan Metzmacher On 3/25/21 2:40 PM, Jens Axboe wrote: > On 3/25/21 2:12 PM, Linus Torvalds wrote: >> On Thu, Mar 25, 2021 at 12:42 PM Linus Torvalds >> <torvalds@linux-foundation.org> wrote: >>> >>> On Thu, Mar 25, 2021 at 12:38 PM Linus Torvalds >>> <torvalds@linux-foundation.org> wrote: >>>> >>>> I don't know what the gdb logic is, but maybe there's some other >>>> option that makes gdb not react to them? >>> >>> .. maybe we could have a different name for them under the task/ >>> subdirectory, for example (not just the pid)? Although that probably >>> messes up 'ps' too.. >> >> Actually, maybe the right model is to simply make all the io threads >> take signals, and get rid of all the special cases. >> >> Sure, the signals will never be delivered to user space, but if we >> >> - just made the thread loop do "get_signal()" when there are pending signals >> >> - allowed ptrace_attach on them >> >> they'd look pretty much like regular threads that just never do the >> user-space part of signal handling. >> >> The whole "signals are very special for IO threads" thing has caused >> so many problems, that maybe the solution is simply to _not_ make them >> special? > > Just to wrap up the previous one, yes it broke all sorts of things to > make the 'tid' directory different. They just end up being hidden anyway > through that, for both ps and top. > > Yes, I do think that maybe it's better to just embrace maybe just > embrace the signals, and have everything just work by default. It's > better than continually trying to make the threads special. 
I'll see > if there are some demons lurking down that path. In the spirit of "let's just try it", I ran with the below patch. With that, I can gdb attach just fine to a test case that creates an io_uring and a regular thread with pthread_create(). The regular thread uses the ring, so you end up with two iou-mgr threads. Attach: [root@archlinux ~]# gdb -p 360 [snip gdb noise] Attaching to process 360 [New LWP 361] [New LWP 362] [New LWP 363] warning: Selected architecture i386:x86-64 is not compatible with reported target architecture i386 warning: Architecture rejected target-supplied description Error while reading shared library symbols for /usr/lib/libpthread.so.0: Cannot find user-level thread for LWP 363: generic error 0x00007f7aa526e125 in clock_nanosleep@GLIBC_2.2.5 () from /usr/lib/libc.so.6 (gdb) info threads Id Target Id Frame * 1 LWP 360 "io_uring" 0x00007f7aa526e125 in clock_nanosleep@GLIBC_2.2.5 () from /usr/lib/libc.so.6 2 LWP 361 "iou-mgr-360" 0x0000000000000000 in ?? () 3 LWP 362 "io_uring" 0x00007f7aa52a0a9d in syscall () from /usr/lib/libc.so.6 4 LWP 363 "iou-mgr-362" 0x0000000000000000 in ?? () (gdb) thread 2 [Switching to thread 2 (LWP 361)] #0 0x0000000000000000 in ?? () (gdb) bt #0 0x0000000000000000 in ?? () Backtrace stopped: Cannot access memory at address 0x0 (gdb) cont Continuing. ^C Thread 1 "io_uring" received signal SIGINT, Interrupt. [Switching to LWP 360] 0x00007f7aa526e125 in clock_nanosleep@GLIBC_2.2.5 () from /usr/lib/libc.so.6 (gdb) q A debugging session is active. Inferior 1 [process 360] will be detached. Quit anyway? (y or n) y Detaching from program: /root/git/fio/t/io_uring, process 360 [Inferior 1 (process 360) detached] The iou-mgr-x threads are stopped just fine, gdb obviously can't get any real info out of them. But it works... Regular test cases work fine too, just a sanity check. Didn't expect them not to. 
Only thing that I dislike a bit, but I guess that's just a Linuxism, is that if can now kill an io_uring owning task by sending a signal to one of its IO thread workers. diff --git a/fs/io-wq.c b/fs/io-wq.c index b7c1fa932cb3..2dbdc552f3ba 100644 --- a/fs/io-wq.c +++ b/fs/io-wq.c @@ -505,8 +505,14 @@ static int io_wqe_worker(void *data) ret = schedule_timeout(WORKER_IDLE_TIMEOUT); if (try_to_freeze() || ret) continue; - if (fatal_signal_pending(current)) - break; + if (signal_pending(current)) { + struct ksignal ksig; + + if (fatal_signal_pending(current)) + break; + get_signal(&ksig); + continue; + } /* timed out, exit unless we're the fixed worker */ if (test_bit(IO_WQ_BIT_EXIT, &wq->state) || !(worker->flags & IO_WORKER_F_FIXED)) @@ -715,8 +721,15 @@ static int io_wq_manager(void *data) io_wq_check_workers(wq); schedule_timeout(HZ); try_to_freeze(); - if (fatal_signal_pending(current)) - set_bit(IO_WQ_BIT_EXIT, &wq->state); + if (signal_pending(current)) { + struct ksignal ksig; + + if (fatal_signal_pending(current)) + set_bit(IO_WQ_BIT_EXIT, &wq->state); + else + get_signal(&ksig); + continue; + } } while (!test_bit(IO_WQ_BIT_EXIT, &wq->state)); io_wq_check_workers(wq); diff --git a/fs/io_uring.c b/fs/io_uring.c index 54ea561db4a5..3a9d021db328 100644 --- a/fs/io_uring.c +++ b/fs/io_uring.c @@ -6765,8 +6765,14 @@ static int io_sq_thread(void *data) timeout = jiffies + sqd->sq_thread_idle; continue; } - if (fatal_signal_pending(current)) - break; + if (signal_pending(current)) { + struct ksignal ksig; + + if (fatal_signal_pending(current)) + break; + get_signal(&ksig); + continue; + } sqt_spin = false; cap_entries = !list_is_singular(&sqd->ctx_list); list_for_each_entry(ctx, &sqd->ctx_list, sqd_list) { diff --git a/kernel/fork.c b/kernel/fork.c index d3171e8e88e5..3b45d0f04044 100644 --- a/kernel/fork.c +++ b/kernel/fork.c @@ -2436,6 +2436,7 @@ struct task_struct *create_io_thread(int (*fn)(void *), void *arg, int node) if (!IS_ERR(tsk)) { 
sigfillset(&tsk->blocked); sigdelsetmask(&tsk->blocked, sigmask(SIGKILL)); + sigdelsetmask(&tsk->blocked, sigmask(SIGSTOP)); } return tsk; } diff --git a/kernel/ptrace.c b/kernel/ptrace.c index 821cf1723814..61db50f7ca86 100644 --- a/kernel/ptrace.c +++ b/kernel/ptrace.c @@ -375,7 +375,7 @@ static int ptrace_attach(struct task_struct *task, long request, audit_ptrace(task); retval = -EPERM; - if (unlikely(task->flags & (PF_KTHREAD | PF_IO_WORKER))) + if (unlikely(task->flags & PF_KTHREAD)) goto out; if (same_thread_group(task, current)) goto out; diff --git a/kernel/signal.c b/kernel/signal.c index f2a1b898da29..a5700557eb50 100644 --- a/kernel/signal.c +++ b/kernel/signal.c @@ -91,7 +91,7 @@ static bool sig_task_ignored(struct task_struct *t, int sig, bool force) return true; /* Only allow kernel generated signals to this kthread */ - if (unlikely((t->flags & (PF_KTHREAD | PF_IO_WORKER)) && + if (unlikely((t->flags & PF_KTHREAD) && (handler == SIG_KTHREAD_KERNEL) && !force)) return true; @@ -288,8 +288,7 @@ bool task_set_jobctl_pending(struct task_struct *task, unsigned long mask) JOBCTL_STOP_SIGMASK | JOBCTL_TRAPPING)); BUG_ON((mask & JOBCTL_TRAPPING) && !(mask & JOBCTL_PENDING_MASK)); - if (unlikely(fatal_signal_pending(task) || - (task->flags & (PF_EXITING | PF_IO_WORKER)))) + if (unlikely(fatal_signal_pending(task) || task->flags & PF_EXITING)) return false; if (mask & JOBCTL_STOP_SIGMASK) @@ -834,9 +833,6 @@ static int check_kill_permission(int sig, struct kernel_siginfo *info, if (!valid_signal(sig)) return -EINVAL; - /* PF_IO_WORKER threads don't take any signals */ - if (t->flags & PF_IO_WORKER) - return -ESRCH; if (!si_fromuser(info)) return 0; -- Jens Axboe ^ permalink raw reply related [flat|nested] 31+ messages in thread
* Re: [PATCH 0/2] Don't show PF_IO_WORKER in /proc/<pid>/task/ 2021-03-25 21:44 ` Jens Axboe @ 2021-03-25 21:57 ` Stefan Metzmacher 2021-03-26 0:11 ` Jens Axboe 2021-03-25 22:37 ` Linus Torvalds 1 sibling, 1 reply; 31+ messages in thread From: Stefan Metzmacher @ 2021-03-25 21:57 UTC (permalink / raw) To: Jens Axboe, Linus Torvalds, Eric W. Biederman Cc: io-uring, Linux Kernel Mailing List, Oleg Nesterov Am 25.03.21 um 22:44 schrieb Jens Axboe: > On 3/25/21 2:40 PM, Jens Axboe wrote: >> On 3/25/21 2:12 PM, Linus Torvalds wrote: >>> On Thu, Mar 25, 2021 at 12:42 PM Linus Torvalds >>> <torvalds@linux-foundation.org> wrote: >>>> >>>> On Thu, Mar 25, 2021 at 12:38 PM Linus Torvalds >>>> <torvalds@linux-foundation.org> wrote: >>>>> >>>>> I don't know what the gdb logic is, but maybe there's some other >>>>> option that makes gdb not react to them? >>>> >>>> .. maybe we could have a different name for them under the task/ >>>> subdirectory, for example (not just the pid)? Although that probably >>>> messes up 'ps' too.. >>> >>> Actually, maybe the right model is to simply make all the io threads >>> take signals, and get rid of all the special cases. >>> >>> Sure, the signals will never be delivered to user space, but if we >>> >>> - just made the thread loop do "get_signal()" when there are pending signals >>> >>> - allowed ptrace_attach on them >>> >>> they'd look pretty much like regular threads that just never do the >>> user-space part of signal handling. >>> >>> The whole "signals are very special for IO threads" thing has caused >>> so many problems, that maybe the solution is simply to _not_ make them >>> special? >> >> Just to wrap up the previous one, yes it broke all sorts of things to >> make the 'tid' directory different. They just end up being hidden anyway >> through that, for both ps and top. >> >> Yes, I do think that maybe it's better to just embrace maybe just >> embrace the signals, and have everything just work by default. 
It's >> better than continually trying to make the threads special. I'll see >> if there are some demons lurking down that path. > > In the spirit of "let's just try it", I ran with the below patch. With > that, I can gdb attach just fine to a test case that creates an io_uring > and a regular thread with pthread_create(). The regular thread uses > the ring, so you end up with two iou-mgr threads. Attach: > > [root@archlinux ~]# gdb -p 360 > [snip gdb noise] > Attaching to process 360 > [New LWP 361] > [New LWP 362] > [New LWP 363] > > warning: Selected architecture i386:x86-64 is not compatible with reported target architecture i386 > > warning: Architecture rejected target-supplied description > Error while reading shared library symbols for /usr/lib/libpthread.so.0: > Cannot find user-level thread for LWP 363: generic error > 0x00007f7aa526e125 in clock_nanosleep@GLIBC_2.2.5 () from /usr/lib/libc.so.6 > (gdb) info threads > Id Target Id Frame > * 1 LWP 360 "io_uring" 0x00007f7aa526e125 in clock_nanosleep@GLIBC_2.2.5 () > from /usr/lib/libc.so.6 > 2 LWP 361 "iou-mgr-360" 0x0000000000000000 in ?? () > 3 LWP 362 "io_uring" 0x00007f7aa52a0a9d in syscall () from /usr/lib/libc.so.6 > 4 LWP 363 "iou-mgr-362" 0x0000000000000000 in ?? () > (gdb) thread 2 > [Switching to thread 2 (LWP 361)] > #0 0x0000000000000000 in ?? () > (gdb) bt > #0 0x0000000000000000 in ?? () > Backtrace stopped: Cannot access memory at address 0x0 > (gdb) cont > Continuing. > ^C > Thread 1 "io_uring" received signal SIGINT, Interrupt. > [Switching to LWP 360] > 0x00007f7aa526e125 in clock_nanosleep@GLIBC_2.2.5 () from /usr/lib/libc.so.6 > (gdb) q > A debugging session is active. > > Inferior 1 [process 360] will be detached. > > Quit anyway? (y or n) y > Detaching from program: /root/git/fio/t/io_uring, process 360 > [Inferior 1 (process 360) detached] > > The iou-mgr-x threads are stopped just fine, gdb obviously can't get any > real info out of them. But it works... 
Regular test cases work fine too, > just a sanity check. Didn't expect them not to. I guess that's basically what I tried to describe when I said they should look like a userspace process that is blocked in a syscall forever. > Only thing that I dislike a bit, but I guess that's just a Linuxism, is > that if can now kill an io_uring owning task by sending a signal to one > of its IO thread workers. Can't we just only allow SIGSTOP, which will only be delivered to the iothread itself? And also SIGKILL should not be allowed from userspace. And /proc/$iothread/ should be read only and owned by root with "cmdline" and "exe" being empty. Thanks! metze ^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: [PATCH 0/2] Don't show PF_IO_WORKER in /proc/<pid>/task/ 2021-03-25 21:57 ` Stefan Metzmacher @ 2021-03-26 0:11 ` Jens Axboe 2021-03-26 11:59 ` Stefan Metzmacher 0 siblings, 1 reply; 31+ messages in thread From: Jens Axboe @ 2021-03-26 0:11 UTC (permalink / raw) To: Stefan Metzmacher, Linus Torvalds, Eric W. Biederman Cc: io-uring, Linux Kernel Mailing List, Oleg Nesterov On 3/25/21 3:57 PM, Stefan Metzmacher wrote: > > Am 25.03.21 um 22:44 schrieb Jens Axboe: >> On 3/25/21 2:40 PM, Jens Axboe wrote: >>> On 3/25/21 2:12 PM, Linus Torvalds wrote: >>>> On Thu, Mar 25, 2021 at 12:42 PM Linus Torvalds >>>> <torvalds@linux-foundation.org> wrote: >>>>> >>>>> On Thu, Mar 25, 2021 at 12:38 PM Linus Torvalds >>>>> <torvalds@linux-foundation.org> wrote: >>>>>> >>>>>> I don't know what the gdb logic is, but maybe there's some other >>>>>> option that makes gdb not react to them? >>>>> >>>>> .. maybe we could have a different name for them under the task/ >>>>> subdirectory, for example (not just the pid)? Although that probably >>>>> messes up 'ps' too.. >>>> >>>> Actually, maybe the right model is to simply make all the io threads >>>> take signals, and get rid of all the special cases. >>>> >>>> Sure, the signals will never be delivered to user space, but if we >>>> >>>> - just made the thread loop do "get_signal()" when there are pending signals >>>> >>>> - allowed ptrace_attach on them >>>> >>>> they'd look pretty much like regular threads that just never do the >>>> user-space part of signal handling. >>>> >>>> The whole "signals are very special for IO threads" thing has caused >>>> so many problems, that maybe the solution is simply to _not_ make them >>>> special? >>> >>> Just to wrap up the previous one, yes it broke all sorts of things to >>> make the 'tid' directory different. They just end up being hidden anyway >>> through that, for both ps and top. 
>>> >>> Yes, I do think that maybe it's better to just embrace maybe just >>> embrace the signals, and have everything just work by default. It's >>> better than continually trying to make the threads special. I'll see >>> if there are some demons lurking down that path. >> >> In the spirit of "let's just try it", I ran with the below patch. With >> that, I can gdb attach just fine to a test case that creates an io_uring >> and a regular thread with pthread_create(). The regular thread uses >> the ring, so you end up with two iou-mgr threads. Attach: >> >> [root@archlinux ~]# gdb -p 360 >> [snip gdb noise] >> Attaching to process 360 >> [New LWP 361] >> [New LWP 362] >> [New LWP 363] >> >> warning: Selected architecture i386:x86-64 is not compatible with reported target architecture i386 >> >> warning: Architecture rejected target-supplied description >> Error while reading shared library symbols for /usr/lib/libpthread.so.0: >> Cannot find user-level thread for LWP 363: generic error >> 0x00007f7aa526e125 in clock_nanosleep@GLIBC_2.2.5 () from /usr/lib/libc.so.6 >> (gdb) info threads >> Id Target Id Frame >> * 1 LWP 360 "io_uring" 0x00007f7aa526e125 in clock_nanosleep@GLIBC_2.2.5 () >> from /usr/lib/libc.so.6 >> 2 LWP 361 "iou-mgr-360" 0x0000000000000000 in ?? () >> 3 LWP 362 "io_uring" 0x00007f7aa52a0a9d in syscall () from /usr/lib/libc.so.6 >> 4 LWP 363 "iou-mgr-362" 0x0000000000000000 in ?? () >> (gdb) thread 2 >> [Switching to thread 2 (LWP 361)] >> #0 0x0000000000000000 in ?? () >> (gdb) bt >> #0 0x0000000000000000 in ?? () >> Backtrace stopped: Cannot access memory at address 0x0 >> (gdb) cont >> Continuing. >> ^C >> Thread 1 "io_uring" received signal SIGINT, Interrupt. >> [Switching to LWP 360] >> 0x00007f7aa526e125 in clock_nanosleep@GLIBC_2.2.5 () from /usr/lib/libc.so.6 >> (gdb) q >> A debugging session is active. >> >> Inferior 1 [process 360] will be detached. >> >> Quit anyway? 
(y or n) y >> Detaching from program: /root/git/fio/t/io_uring, process 360 >> [Inferior 1 (process 360) detached] >> >> The iou-mgr-x threads are stopped just fine, gdb obviously can't get any >> real info out of them. But it works... Regular test cases work fine too, >> just a sanity check. Didn't expect them not to. > > I guess that's basically what I tried to describe when I said they > should look like a userspace process that is blocked in a syscall > forever. Right, that's almost what they look like; in practice, that is what they look like. >> Only thing that I dislike a bit, but I guess that's just a Linuxism, is >> that if can now kill an io_uring owning task by sending a signal to one >> of its IO thread workers. > > Can't we just only allow SIGSTOP, which will be only delivered to > the iothread itself? And also SIGKILL should not be allowed from userspace. I don't think we can sanely block them, and we need to clean up and tear down normally regardless of who gets the signal (owner or one of the threads). So I'm not _too_ hung up on the "io thread gets signal goes to owner" as that is what happens with normal threads too, though I would prefer if that wasn't the case. But overall I feel better just embracing the thread model, rather than having something that kinda sorta looks like a thread, but differs in odd ways. > And /proc/$iothread/ should be read only and owned by root with > "cmdline" and "exe" being empty. I know you brought this one up as part of your series, not sure I get why you want it owned by root and read-only? cmdline and exe, yeah those could be hidden, but is there really any point? Maybe I'm missing something here, if so, do clue me in! -- Jens Axboe ^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: [PATCH 0/2] Don't show PF_IO_WORKER in /proc/<pid>/task/ 2021-03-26 0:11 ` Jens Axboe @ 2021-03-26 11:59 ` Stefan Metzmacher 2021-04-01 14:40 ` Stefan Metzmacher 0 siblings, 1 reply; 31+ messages in thread From: Stefan Metzmacher @ 2021-03-26 11:59 UTC (permalink / raw) To: Jens Axboe, Linus Torvalds, Eric W. Biederman Cc: io-uring, Linux Kernel Mailing List, Oleg Nesterov Hi Jens, >> And /proc/$iothread/ should be read only and owned by root with >> "cmdline" and "exe" being empty. > > I know you brought this one up as part of your series, not sure I get > why you want it owned by root and read-only? cmdline and exe, yeah those > could be hidden, but is there really any point? > > Maybe I'm missing something here, if so, do clue me in! I looked through /proc and I think it's mostly similar to the unshare() case: if userspace wants to do stupid things like changing "comm" of iothreads, it gets what was asked for. But the "cmdline" hiding would be very useful. While most tools use "comm" by default, ps -eLf or 'iotop' use "cmdline". Some processes use setproctitle to change "cmdline" in order to identify the process better, without the 15-char comm restriction. That's why I very often press 'c' in 'top' to see the cmdline; in that case it would be very helpful to see '[iou-wrk-1234]' instead of seeing the cmdline. So I'd very much prefer if this could be applied: https://lore.kernel.org/io-uring/d4487f959c778d0b1d4c5738b75bcff17d21df5b.1616197787.git.metze@samba.org/T/#u If you want I can add a comment and a more verbose commit message... metze ^ permalink raw reply [flat|nested] 31+ messages in thread
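[Editor's note: to make the comm/cmdline distinction concrete, the two values come from different /proc files, and which one a tool reads decides whether a thread is recognizable. A minimal sketch — the helper name is mine, not from this thread — that prints both for a given /proc entry:]

```c
#include <assert.h>
#include <stdio.h>

/* Print the two identity sources tools disagree on: /proc/<pid>/comm
 * (the 15-char thread name, what ps and top show by default) and
 * /proc/<pid>/cmdline (argv, what "ps -eLf" or top with 'c' show). */
static void dump_names(const char *pid)
{
	char path[64], buf[256];
	FILE *f;
	size_t n, i;

	snprintf(path, sizeof(path), "/proc/%s/comm", pid);
	f = fopen(path, "r");
	if (f) {
		if (fgets(buf, sizeof(buf), f))
			printf("comm:    %s", buf);   /* includes trailing newline */
		fclose(f);
	}

	snprintf(path, sizeof(path), "/proc/%s/cmdline", pid);
	f = fopen(path, "r");
	if (f) {
		n = fread(buf, 1, sizeof(buf) - 1, f);
		for (i = 0; i < n; i++)
			if (buf[i] == '\0')          /* argv entries are NUL-separated */
				buf[i] = ' ';
		buf[n] = '\0';
		printf("cmdline: %s\n", buf);
		fclose(f);
	}
}
```

Since the io workers share the owner's mm, their comm would show e.g. iou-wrk-1234 while their cmdline would still show the owner's argv — which is exactly the confusion Stefan describes seeing in top with 'c'.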
* Re: [PATCH 0/2] Don't show PF_IO_WORKER in /proc/<pid>/task/ 2021-03-26 11:59 ` Stefan Metzmacher @ 2021-04-01 14:40 ` Stefan Metzmacher 0 siblings, 0 replies; 31+ messages in thread From: Stefan Metzmacher @ 2021-04-01 14:40 UTC (permalink / raw) To: Jens Axboe, Linus Torvalds, Eric W. Biederman Cc: io-uring, Linux Kernel Mailing List, Oleg Nesterov Hi Jens, >> I know you brought this one up as part of your series, not sure I get >> why you want it owned by root and read-only? cmdline and exe, yeah those >> could be hidden, but is there really any point? >> >> Maybe I'm missing something here, if so, do clue me in! > > I looked through /proc and I think it's mostly similar to > the unshare() case, if userspace wants to do stupid things > like changing "comm" of iothreads, it gets what was asked for. > > But the "cmdline" hiding would be very useful. > > While most tools use "comm", by default. > > ps -eLf or 'iotop' use "cmdline". > > Some processes use setproctitle to change "cmdline" in order > to identify the process better, without the 15 chars comm restriction, > that's why I very often press 'c' in 'top' to see the cmdline, > in that case it would be very helpful to see '[iou-wrk-1234]' > instead of the seeing the cmdline. > > So I'd very much prefer if this could be applied: > https://lore.kernel.org/io-uring/d4487f959c778d0b1d4c5738b75bcff17d21df5b.1616197787.git.metze@samba.org/T/#u > > If you want I can add a comment and a more verbose commit message... I noticed that 'iotop' actually appends ' [iou-wrk-1234]' to the cmdline value, so that leaves us with 'ps -eLf' and 'top' (with 'c'). pstree -a -t -p is also fine: │ └─io_uring-cp,1315 /root/kernel/linux-image-5.12.0-rc2+-dbg_5.12.0-rc2+-5_amd64.deb file │ ├─{iou-mgr-1315},1316 │ ├─{iou-wrk-1315},1317 │ ├─{iou-wrk-1315},1318 │ ├─{iou-wrk-1315},1319 │ ├─{iou-wrk-1315},1320 In the spirit of "avoid special PF_IO_WORKER checks" I guess it's ok to leave it as is...
metze ^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: [PATCH 0/2] Don't show PF_IO_WORKER in /proc/<pid>/task/ 2021-03-25 21:44 ` Jens Axboe 2021-03-25 21:57 ` Stefan Metzmacher @ 2021-03-25 22:37 ` Linus Torvalds 2021-03-26 0:08 ` Jens Axboe 1 sibling, 1 reply; 31+ messages in thread From: Linus Torvalds @ 2021-03-25 22:37 UTC (permalink / raw) To: Jens Axboe Cc: Eric W. Biederman, io-uring, Linux Kernel Mailing List, Oleg Nesterov, Stefan Metzmacher On Thu, Mar 25, 2021 at 2:44 PM Jens Axboe <axboe@kernel.dk> wrote: > > In the spirit of "let's just try it", I ran with the below patch. With > that, I can gdb attach just fine to a test case that creates an io_uring > and a regular thread with pthread_create(). The regular thread uses > the ring, so you end up with two iou-mgr threads. Attach: > > [root@archlinux ~]# gdb -p 360 > [snip gdb noise] > Attaching to process 360 > [New LWP 361] > [New LWP 362] > [New LWP 363] [..] Looks fairly sane to me. I think this ends up being the right approach - just the final part (famous last words) of "io_uring threads act like normal threads". Doing it for VM and FS got rid of all the special cases there, and now doing it for signal handling gets rid of all these ptrace etc issues. And the fact that a noticeable part of the patch was removing the PF_IO_WORKER tests again looks like a very good sign to me. In fact, I think you could now remove all the freezer hacks too - because get_signal() will now do the proper try_to_freeze(), so all those freezer things are stale as well. Yeah, it's still going to be different in that there's no real user space return, and so it will never look _entirely_ like a normal thread, but on the whole I really like how this does seem to get rid of another batch of special cases. Linus ^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: [PATCH 0/2] Don't show PF_IO_WORKER in /proc/<pid>/task/ 2021-03-25 22:37 ` Linus Torvalds @ 2021-03-26 0:08 ` Jens Axboe 0 siblings, 0 replies; 31+ messages in thread From: Jens Axboe @ 2021-03-26 0:08 UTC (permalink / raw) To: Linus Torvalds Cc: Eric W. Biederman, io-uring, Linux Kernel Mailing List, Oleg Nesterov, Stefan Metzmacher On 3/25/21 4:37 PM, Linus Torvalds wrote: > On Thu, Mar 25, 2021 at 2:44 PM Jens Axboe <axboe@kernel.dk> wrote: >> >> In the spirit of "let's just try it", I ran with the below patch. With >> that, I can gdb attach just fine to a test case that creates an io_uring >> and a regular thread with pthread_create(). The regular thread uses >> the ring, so you end up with two iou-mgr threads. Attach: >> >> [root@archlinux ~]# gdb -p 360 >> [snip gdb noise] >> Attaching to process 360 >> [New LWP 361] >> [New LWP 362] >> [New LWP 363] > [..] > > Looks fairly sane to me. > > I think this ends up being the right approach - just the final part > (famous last words) of "io_uring threads act like normal threads". > > Doing it for VM and FS got rid of all the special cases there, and now > doing it for signal handling gets rid of all these ptrace etc issues. > > And the fact that a noticeable part of the patch was removing the > PF_IO_WORKER tests again looks like a very good sign to me. I agree, and in fact there are more PF_IO_WORKER checks that can go too. The patch is just the bare minimum. > In fact, I think you could now remove all the freezer hacks too - > because get_signal() will now do the proper try_to_freeze(), so all > those freezer things are stale as well. Yep > Yeah, it's still going to be different in that there's no real user > space return, and so it will never look _entirely_ like a normal > thread, but on the whole I really like how this does seem to get rid > of another batch of special cases. That's what makes me feel better too. 
I think I was so hung up on the "never take signals" idea that it just didn't occur to me to go this route instead. I'll send out a clean series. -- Jens Axboe ^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: [PATCH 0/2] Don't show PF_IO_WORKER in /proc/<pid>/task/ 2021-03-25 20:12 ` Linus Torvalds 2021-03-25 20:40 ` Jens Axboe @ 2021-03-25 20:43 ` Eric W. Biederman 2021-03-25 21:50 ` Jens Axboe 2021-03-25 20:44 ` Oleg Nesterov 2 siblings, 1 reply; 31+ messages in thread From: Eric W. Biederman @ 2021-03-25 20:43 UTC (permalink / raw) To: Linus Torvalds Cc: Jens Axboe, io-uring, Linux Kernel Mailing List, Oleg Nesterov, Stefan Metzmacher Linus Torvalds <torvalds@linux-foundation.org> writes: > On Thu, Mar 25, 2021 at 12:42 PM Linus Torvalds > <torvalds@linux-foundation.org> wrote: >> >> On Thu, Mar 25, 2021 at 12:38 PM Linus Torvalds >> <torvalds@linux-foundation.org> wrote: >> > >> > I don't know what the gdb logic is, but maybe there's some other >> > option that makes gdb not react to them? >> >> .. maybe we could have a different name for them under the task/ >> subdirectory, for example (not just the pid)? Although that probably >> messes up 'ps' too.. > > Actually, maybe the right model is to simply make all the io threads > take signals, and get rid of all the special cases. > > Sure, the signals will never be delivered to user space, but if we > > - just made the thread loop do "get_signal()" when there are pending signals > > - allowed ptrace_attach on them > > they'd look pretty much like regular threads that just never do the > user-space part of signal handling. > > The whole "signals are very special for IO threads" thing has caused > so many problems, that maybe the solution is simply to _not_ make them > special? The special case in check_kill_permission is certainly unnecessary. Having the signal blocked is enough to prevent signal_pending() from being true. The most straight forward thing I can see is to allow ptrace_attach and to modify ptrace_check_attach to always return -ESRCH for io workers unless ignore_state is set causing none of the other ptrace operations to work. 
That is what a long-running in-kernel thread would do today, so user-space (aka gdb) may actually cope with it. We might be able to support it if io workers start supporting SIGSTOP, but I am not at all certain. Eric ^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: [PATCH 0/2] Don't show PF_IO_WORKER in /proc/<pid>/task/ 2021-03-25 20:43 ` Eric W. Biederman @ 2021-03-25 21:50 ` Jens Axboe 0 siblings, 0 replies; 31+ messages in thread From: Jens Axboe @ 2021-03-25 21:50 UTC (permalink / raw) To: Eric W. Biederman, Linus Torvalds Cc: io-uring, Linux Kernel Mailing List, Oleg Nesterov, Stefan Metzmacher On 3/25/21 2:43 PM, Eric W. Biederman wrote: > Linus Torvalds <torvalds@linux-foundation.org> writes: > >> On Thu, Mar 25, 2021 at 12:42 PM Linus Torvalds >> <torvalds@linux-foundation.org> wrote: >>> >>> On Thu, Mar 25, 2021 at 12:38 PM Linus Torvalds >>> <torvalds@linux-foundation.org> wrote: >>>> >>>> I don't know what the gdb logic is, but maybe there's some other >>>> option that makes gdb not react to them? >>> >>> .. maybe we could have a different name for them under the task/ >>> subdirectory, for example (not just the pid)? Although that probably >>> messes up 'ps' too.. >> >> Actually, maybe the right model is to simply make all the io threads >> take signals, and get rid of all the special cases. >> >> Sure, the signals will never be delivered to user space, but if we >> >> - just made the thread loop do "get_signal()" when there are pending signals >> >> - allowed ptrace_attach on them >> >> they'd look pretty much like regular threads that just never do the >> user-space part of signal handling. >> >> The whole "signals are very special for IO threads" thing has caused >> so many problems, that maybe the solution is simply to _not_ make them >> special? > > The special case in check_kill_permission is certainly unnecessary. > Having the signal blocked is enough to prevent signal_pending() from > being true. > > > The most straight forward thing I can see is to allow ptrace_attach and > to modify ptrace_check_attach to always return -ESRCH for io workers > unless ignore_state is set causing none of the other ptrace operations > to work. 
> > That is what a long running in-kernel thread would do today so > user-space aka gdb may actually cope with it. > > We might be able to support if io workers start supporting SIGSTOP but I > am not at all certain. See the patch just sent out as a POC, mostly, not fully sanitized yet. But I did try to return -ESRCH from ptrace_check_attach() if it's an IO thread and ignore_state isn't set: if (!ignore_state && child->flags & PF_IO_WORKER) return -ESRCH; and that causes gdb to abort at that thread. For the same test case as in the previous email, you get: Attaching to process 358 [New LWP 359] [New LWP 360] [New LWP 361] Couldn't get CS register: No such process. (gdb) 0x00007ffa58537125 in ?? () (gdb) bt #0 0x00007ffa58537125 in ?? () #1 0x0000000000000000 in ?? () (gdb) info threads Id Target Id Frame * 1 LWP 358 "io_uring" 0x00007ffa58537125 in ?? () 2 LWP 359 "iou-mgr-358" Couldn't get registers: No such process. (gdb) q A debugging session is active. Inferior 1 [process 358] will be detached. Quit anyway? (y or n) y Couldn't write debug register: No such process. where 360 here is a regular pthread-created thread, and 361 is another iou-mgr-x task. While gdb behaves better in this case, it does still prevent you from inspecting thread 3, which would be totally valid. -- Jens Axboe ^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: [PATCH 0/2] Don't show PF_IO_WORKER in /proc/<pid>/task/ 2021-03-25 20:12 ` Linus Torvalds 2021-03-25 20:40 ` Jens Axboe 2021-03-25 20:43 ` Eric W. Biederman @ 2021-03-25 20:44 ` Oleg Nesterov 2021-03-25 20:55 ` Eric W. Biederman 2 siblings, 1 reply; 31+ messages in thread From: Oleg Nesterov @ 2021-03-25 20:44 UTC (permalink / raw) To: Linus Torvalds Cc: Eric W. Biederman, Jens Axboe, io-uring, Linux Kernel Mailing List, Stefan Metzmacher On 03/25, Linus Torvalds wrote: > > The whole "signals are very special for IO threads" thing has caused > so many problems, that maybe the solution is simply to _not_ make them > special? Or maybe IO threads should not abuse CLONE_THREAD? Why does create_io_thread() abuse CLONE_THREAD? One reason (I think) is that this implies SIGKILL when the process exits/execs; anything else? Oleg. ^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: [PATCH 0/2] Don't show PF_IO_WORKER in /proc/<pid>/task/ 2021-03-25 20:44 ` Oleg Nesterov @ 2021-03-25 20:55 ` Eric W. Biederman 2021-03-25 21:20 ` Stefan Metzmacher 0 siblings, 1 reply; 31+ messages in thread From: Eric W. Biederman @ 2021-03-25 20:55 UTC (permalink / raw) To: Oleg Nesterov Cc: Linus Torvalds, Jens Axboe, io-uring, Linux Kernel Mailing List, Stefan Metzmacher Oleg Nesterov <oleg@redhat.com> writes: > On 03/25, Linus Torvalds wrote: >> >> The whole "signals are very special for IO threads" thing has caused >> so many problems, that maybe the solution is simply to _not_ make them >> special? > > Or may be IO threads should not abuse CLONE_THREAD? > > Why does create_io_thread() abuse CLONE_THREAD ? > > One reason (I think) is that this implies SIGKILL when the process exits/execs, anything else? A lot. The io workers perform work on behalf of the ordinary userspace threads. Some of that work is opening files. For things like rlimits to work properly you need to share the signal_struct. But odds are, if you find anything in signal_struct (not counting signals), there will be an io_uring code path that can exercise it, as io_uring can traverse the filesystem and open, read, and write files. So io_uring can exercise all of proc. Using create_io_thread with CLONE_THREAD is the least problematic way (even counting all of the signal and ptrace problems we are looking at right now) to implement the io worker threads. They _really_ are threads of the process that just never execute any code in userspace. Eric ^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: [PATCH 0/2] Don't show PF_IO_WORKER in /proc/<pid>/task/ 2021-03-25 20:55 ` Eric W. Biederman @ 2021-03-25 21:20 ` Stefan Metzmacher 2021-03-25 21:48 ` Stefan Metzmacher 0 siblings, 1 reply; 31+ messages in thread From: Stefan Metzmacher @ 2021-03-25 21:20 UTC (permalink / raw) To: Eric W. Biederman, Oleg Nesterov Cc: Linus Torvalds, Jens Axboe, io-uring, Linux Kernel Mailing List Am 25.03.21 um 21:55 schrieb Eric W. Biederman: > Oleg Nesterov <oleg@redhat.com> writes: > >> On 03/25, Linus Torvalds wrote: >>> >>> The whole "signals are very special for IO threads" thing has caused >>> so many problems, that maybe the solution is simply to _not_ make them >>> special? >> >> Or may be IO threads should not abuse CLONE_THREAD? >> >> Why does create_io_thread() abuse CLONE_THREAD ? >> >> One reason (I think) is that this implies SIGKILL when the process exits/execs, >> anything else? > > A lot. > > The io workers perform work on behave of the ordinary userspace threads. > Some of that work is opening files. For things like rlimits to work > properly you need to share the signal_struct. But odds are if you find > anything in signal_struct (not counting signals) there will be an > io_uring code path that can exercise it as io_uring can traverse the > filesystem, open files and read/write files. So io_uring can exercise > all of proc. > > Using create_io_thread with CLONE_THREAD is the least problematic way > (including all of the signal and ptrace problems we are looking at right > now) to implement the io worker threads. > > They _really_ are threads of the process that just never execute any > code in userspace. So they should look like a userspace thread sitting in something like epoll_pwait() with all signals blocked, which will never return to userspace again? 
I think that would be useful, but I also think that userspace should see:

- /proc/$tidofiothread/cmdline as empty (in order to let ps and top use [iou-wrk-$tidofuserspacethread])
- /proc/$tidofiothread/exe as a symlink whose target does not exist
- all of /proc/$tidofiothread/ showing root.root as owner and group

and entries which still allow write access, like /proc/$tidofiothread/comm and similar files with rw permissions, should still disallow modifications.

For the other kernel threads, e.g. "[cryptd]", I see the following:

LANG=C ls -l /proc/653 | grep rw
ls: cannot read symbolic link '/proc/653/exe': No such file or directory
-rw-r--r-- 1 root root 0 Mar 25 22:09 autogroup
-rw-r--r-- 1 root root 0 Mar 25 22:09 comm
-rw-r--r-- 1 root root 0 Mar 25 22:09 coredump_filter
lrwxrwxrwx 1 root root 0 Mar 25 22:09 cwd -> /
lrwxrwxrwx 1 root root 0 Mar 25 22:09 exe
-rw-r--r-- 1 root root 0 Mar 25 22:09 gid_map
-rw-r--r-- 1 root root 0 Mar 25 22:09 loginuid
-rw------- 1 root root 0 Mar 25 22:09 mem
-rw-r--r-- 1 root root 0 Mar 25 22:09 oom_adj
-rw-r--r-- 1 root root 0 Mar 25 22:09 oom_score_adj
-rw-r--r-- 1 root root 0 Mar 25 22:09 projid_map
lrwxrwxrwx 1 root root 0 Mar 25 22:09 root -> /
-rw-r--r-- 1 root root 0 Mar 25 22:09 sched
-rw-r--r-- 1 root root 0 Mar 25 22:09 setgroups
-rw-r--r-- 1 root root 0 Mar 25 22:09 timens_offsets
-rw-rw-rw- 1 root root 0 Mar 25 22:09 timerslack_ns
-rw-r--r-- 1 root root 0 Mar 25 22:09 uid_map

And this:

LANG=C echo "bla" > /proc/653/comm
-bash: echo: write error: Invalid argument

LANG=C echo "bla" > /proc/653/gid_map
-bash: echo: write error: Operation not permitted

Can't we do the same for iothreads regarding /proc? Just make things read-only there and empty "cmdline"/"exe"?

Maybe I'm too naive, but that's what I'd assume as a userspace developer/admin.

Does at least part of it make sense?

metze ^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: [PATCH 0/2] Don't show PF_IO_WORKER in /proc/<pid>/task/ 2021-03-25 21:20 ` Stefan Metzmacher @ 2021-03-25 21:48 ` Stefan Metzmacher 0 siblings, 0 replies; 31+ messages in thread From: Stefan Metzmacher @ 2021-03-25 21:48 UTC (permalink / raw) To: Eric W. Biederman, Oleg Nesterov Cc: Linus Torvalds, Jens Axboe, io-uring, Linux Kernel Mailing List Am 25.03.21 um 22:20 schrieb Stefan Metzmacher: > > Am 25.03.21 um 21:55 schrieb Eric W. Biederman: >> Oleg Nesterov <oleg@redhat.com> writes: >> >>> On 03/25, Linus Torvalds wrote: >>>> >>>> The whole "signals are very special for IO threads" thing has caused >>>> so many problems, that maybe the solution is simply to _not_ make them >>>> special? >>> >>> Or may be IO threads should not abuse CLONE_THREAD? >>> >>> Why does create_io_thread() abuse CLONE_THREAD ? >>> >>> One reason (I think) is that this implies SIGKILL when the process exits/execs, >>> anything else? >> >> A lot. >> >> The io workers perform work on behave of the ordinary userspace threads. >> Some of that work is opening files. For things like rlimits to work >> properly you need to share the signal_struct. But odds are if you find >> anything in signal_struct (not counting signals) there will be an >> io_uring code path that can exercise it as io_uring can traverse the >> filesystem, open files and read/write files. So io_uring can exercise >> all of proc. >> >> Using create_io_thread with CLONE_THREAD is the least problematic way >> (including all of the signal and ptrace problems we are looking at right >> now) to implement the io worker threads. >> >> They _really_ are threads of the process that just never execute any >> code in userspace. > > So they should look like a userspace thread sitting in something like > epoll_pwait() with all signals blocked, which will never return to userspace again? Would gdb work with that? The question is what backtrace gdb would show for that thread. Is it possible to block SIGSTOP/SIGCONT? 
I also think that all signals to an iothread should not be delivered to other threads, and that it may only react to a direct SIGSTOP/SIGCONT. I guess even SIGKILL should be ignored, as the shutdown should happen via the exit path of the iothread parent only.

> I think that would be useful, but I also think that userspace should see:
> - /proc/$tidofiothread/cmdline as empty (in order to let ps and top use [iou-wrk-$tidofuserspacethread])
> - /proc/$tidofiothread/exe as symlink to that not exists
> - all of /proc/$tidofiothread/ shows root.root as owner and group
> and things which still allow write access to /proc/$tidofiothread/comm similar things
> with rw permissions should still disallow modifications:
>
> For the other kernel threads e.g. "[cryptd]" I see the following:
>
> LANG=C ls -l /proc/653 | grep rw
> ls: cannot read symbolic link '/proc/653/exe': No such file or directory
> -rw-r--r-- 1 root root 0 Mar 25 22:09 autogroup
> -rw-r--r-- 1 root root 0 Mar 25 22:09 comm
> -rw-r--r-- 1 root root 0 Mar 25 22:09 coredump_filter
> lrwxrwxrwx 1 root root 0 Mar 25 22:09 cwd -> /
> lrwxrwxrwx 1 root root 0 Mar 25 22:09 exe
> -rw-r--r-- 1 root root 0 Mar 25 22:09 gid_map
> -rw-r--r-- 1 root root 0 Mar 25 22:09 loginuid
> -rw------- 1 root root 0 Mar 25 22:09 mem
> -rw-r--r-- 1 root root 0 Mar 25 22:09 oom_adj
> -rw-r--r-- 1 root root 0 Mar 25 22:09 oom_score_adj
> -rw-r--r-- 1 root root 0 Mar 25 22:09 projid_map
> lrwxrwxrwx 1 root root 0 Mar 25 22:09 root -> /
> -rw-r--r-- 1 root root 0 Mar 25 22:09 sched
> -rw-r--r-- 1 root root 0 Mar 25 22:09 setgroups
> -rw-r--r-- 1 root root 0 Mar 25 22:09 timens_offsets
> -rw-rw-rw- 1 root root 0 Mar 25 22:09 timerslack_ns
> -rw-r--r-- 1 root root 0 Mar 25 22:09 uid_map
>
> And this:
>
> LANG=C echo "bla" > /proc/653/comm
> -bash: echo: write error: Invalid argument
>
> LANG=C echo "bla" > /proc/653/gid_map
> -bash: echo: write error: Operation not permitted
>
> Can't we do the same for iothreads regarding /proc?
> Just make things read only there and empty "cmdline"/"exe"?
>
> Maybe I'm too naive, but that what I'd assume as a userspace developer/admin.
>
> Does at least parts of it make any sense?

I think the strange glibc setuid() behavior should also be tested here; I guess we don't want that to reset the credentials of an iothread!

Another idea would be to have the iothreads as a child process with its threads, but again I'm only looking, as an admin, at what I'd expect to see under /proc via ps and top.

metze ^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: [PATCH 0/2] Don't show PF_IO_WORKER in /proc/<pid>/task/ 2021-03-25 19:33 ` [PATCH 0/2] Don't show PF_IO_WORKER in /proc/<pid>/task/ Eric W. Biederman 2021-03-25 19:38 ` Linus Torvalds @ 2021-03-25 19:40 ` Jens Axboe 2021-03-25 20:32 ` Oleg Nesterov 1 sibling, 1 reply; 31+ messages in thread From: Jens Axboe @ 2021-03-25 19:40 UTC (permalink / raw) To: Eric W. Biederman; +Cc: io-uring, torvalds, linux-kernel, oleg, metze On 3/25/21 1:33 PM, Eric W. Biederman wrote: > Jens Axboe <axboe@kernel.dk> writes: > >> Hi, >> >> Stefan reports that attaching to a task with io_uring will leave gdb >> very confused and just repeatedly attempting to attach to the IO threads, >> even though it receives an -EPERM every time. This patchset proposes to >> skip PF_IO_WORKER threads as same_thread_group(), except for accounting >> purposes which we still desire. >> >> We also skip listing the IO threads in /proc/<pid>/task/ so that gdb >> doesn't think it should stop and attach to them. This makes us consistent >> with earlier kernels, where these async threads were not related to the >> ring owning task, and hence gdb (and others) ignored them anyway. >> >> Seems to me that this is the right approach, but open to comments on if >> others agree with this. Oleg, I did see your messages as well on SIGSTOP, >> and as was discussed with Eric as well, this is something we most >> certainly can revisit. I do think that the visibility of these threads >> is a separate issue. Even with SIGSTOP implemented (which I did try as >> well), we're never going to allow ptrace attach and hence gdb would still >> be broken. Hence I'd rather treat them as separate issues to attack. > > A quick skim shows that these threads are not showing up anywhere in > proc which appears to be a problem, as it hides them from top. > > Sysadmins need the ability to dig into a system and find out where all > their cpu usage or io's have gone when there is a problem. 
I general I > think this argues that these threads should show up as threads of the > process so I am not even certain this is the right fix to deal with gdb. That's a good point, overall hiding was not really what I desired, just getting them out of gdb's hands. And arguably it _is_ a gdb bug, but... -- Jens Axboe ^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: [PATCH 0/2] Don't show PF_IO_WORKER in /proc/<pid>/task/ 2021-03-25 19:40 ` Jens Axboe @ 2021-03-25 20:32 ` Oleg Nesterov 0 siblings, 0 replies; 31+ messages in thread From: Oleg Nesterov @ 2021-03-25 20:32 UTC (permalink / raw) To: Jens Axboe; +Cc: Eric W. Biederman, io-uring, torvalds, linux-kernel, metze I didn't even try to read this series yet, will try tomorrow. But sorry, I can't resist... On 03/25, Jens Axboe wrote: > > On 3/25/21 1:33 PM, Eric W. Biederman wrote: > > Jens Axboe <axboe@kernel.dk> writes: > > > >> Hi, > >> > >> Stefan reports that attaching to a task with io_uring will leave gdb > >> very confused and just repeatedly attempting to attach to the IO threads, > >> even though it receives an -EPERM every time. Heh. As expected :/ > And arguably it _is_ a gdb bug, but... Why do you think so? Oleg. ^ permalink raw reply [flat|nested] 31+ messages in thread
end of thread, other threads:[~2021-04-01 18:32 UTC | newest] Thread overview: 31+ messages (download: mbox.gz / follow: Atom feed) -- links below jump to the message on this page -- 2021-03-25 16:43 [PATCH 0/2] Don't show PF_IO_WORKER in /proc/<pid>/task/ Jens Axboe 2021-03-25 16:43 ` [PATCH 1/2] kernel: don't include PF_IO_WORKERs as part of same_thread_group() Jens Axboe 2021-03-25 16:43 ` [PATCH 2/2] proc: don't show PF_IO_WORKER threads as threads in /proc/<pid>/task/ Jens Axboe 2021-03-29 1:57 ` [proc] 43b2a76b1a: will-it-scale.per_process_ops -11.3% regression kernel test robot 2021-03-25 19:33 ` [PATCH 0/2] Don't show PF_IO_WORKER in /proc/<pid>/task/ Eric W. Biederman 2021-03-25 19:38 ` Linus Torvalds 2021-03-25 19:40 ` Jens Axboe 2021-03-25 19:42 ` Linus Torvalds 2021-03-25 19:46 ` Jens Axboe 2021-03-25 20:21 ` Eric W. Biederman 2021-03-25 20:40 ` Oleg Nesterov 2021-03-25 20:43 ` Jens Axboe 2021-03-25 20:48 ` Eric W. Biederman 2021-03-25 20:42 ` Jens Axboe 2021-03-25 20:12 ` Linus Torvalds 2021-03-25 20:40 ` Jens Axboe 2021-03-25 21:44 ` Jens Axboe 2021-03-25 21:57 ` Stefan Metzmacher 2021-03-26 0:11 ` Jens Axboe 2021-03-26 11:59 ` Stefan Metzmacher 2021-04-01 14:40 ` Stefan Metzmacher 2021-03-25 22:37 ` Linus Torvalds 2021-03-26 0:08 ` Jens Axboe 2021-03-25 20:43 ` Eric W. Biederman 2021-03-25 21:50 ` Jens Axboe 2021-03-25 20:44 ` Oleg Nesterov 2021-03-25 20:55 ` Eric W. Biederman 2021-03-25 21:20 ` Stefan Metzmacher 2021-03-25 21:48 ` Stefan Metzmacher 2021-03-25 19:40 ` Jens Axboe 2021-03-25 20:32 ` Oleg Nesterov