* Worker threads in D state since c5a94a618e7ac86 (workqueue: Use TASK_IDLE)
@ 2017-09-10 7:36 Markus Trippelsdorf
2017-09-11 13:11 ` Tejun Heo
0 siblings, 1 reply; 11+ messages in thread
From: Markus Trippelsdorf @ 2017-09-10 7:36 UTC (permalink / raw)
To: Peter Zijlstra; +Cc: Tejun Heo, linux-kernel
[-- Attachment #1: Type: text/plain, Size: 429 bytes --]
Since:
commit c5a94a618e7ac86b20f53d947f68d7cee6a4c6bc
Author: Peter Zijlstra <peterz@infradead.org>
Date: Wed Aug 23 13:58:44 2017 +0200
workqueue: Use TASK_IDLE
all worker threads are in D state. They all show up when using "magic
SysRq" w (show blocked state). In htop they all have a big fat red 'D'
in the state column. Is this really desirable?
I have attached the output of "ps aux" after boot and the SysRq-w
output.
--
Markus
[-- Attachment #2: ps_aux --]
[-- Type: text/plain, Size: 13591 bytes --]
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 7.5 0.0 180 36 ? S 09:21 0:02 /sbin/minit
root 2 0.0 0.0 0 0 ? S 09:21 0:00 [kthreadd]
root 3 0.2 0.0 0 0 ? D 09:21 0:00 [kworker/0:0]
root 4 0.0 0.0 0 0 ? D< 09:21 0:00 [kworker/0:0H]
root 5 1.0 0.0 0 0 ? D 09:21 0:00 [kworker/u8:0]
root 6 0.0 0.0 0 0 ? D< 09:21 0:00 [mm_percpu_wq]
root 7 0.0 0.0 0 0 ? S 09:21 0:00 [ksoftirqd/0]
root 8 0.0 0.0 0 0 ? S 09:21 0:00 [rcu_sched]
root 9 0.0 0.0 0 0 ? S 09:21 0:00 [rcu_bh]
root 10 0.0 0.0 0 0 ? S 09:21 0:00 [migration/0]
root 11 0.0 0.0 0 0 ? S 09:21 0:00 [cpuhp/0]
root 12 0.0 0.0 0 0 ? S 09:21 0:00 [cpuhp/1]
root 13 0.0 0.0 0 0 ? S 09:21 0:00 [migration/1]
root 14 0.0 0.0 0 0 ? S 09:21 0:00 [ksoftirqd/1]
root 15 0.0 0.0 0 0 ? D 09:21 0:00 [kworker/1:0]
root 16 0.0 0.0 0 0 ? D< 09:21 0:00 [kworker/1:0H]
root 17 0.0 0.0 0 0 ? S 09:21 0:00 [cpuhp/2]
root 18 0.0 0.0 0 0 ? S 09:21 0:00 [migration/2]
root 19 0.0 0.0 0 0 ? S 09:21 0:00 [ksoftirqd/2]
root 20 0.2 0.0 0 0 ? D 09:21 0:00 [kworker/2:0]
root 21 0.0 0.0 0 0 ? D< 09:21 0:00 [kworker/2:0H]
root 22 0.0 0.0 0 0 ? S 09:21 0:00 [cpuhp/3]
root 23 0.0 0.0 0 0 ? S 09:21 0:00 [migration/3]
root 24 0.0 0.0 0 0 ? S 09:21 0:00 [ksoftirqd/3]
root 25 0.0 0.0 0 0 ? D 09:21 0:00 [kworker/3:0]
root 26 0.0 0.0 0 0 ? D< 09:21 0:00 [kworker/3:0H]
root 27 0.0 0.0 0 0 ? S 09:21 0:00 [kdevtmpfs]
root 28 0.0 0.0 0 0 ? D< 09:21 0:00 [netns]
root 29 0.1 0.0 0 0 ? D 09:21 0:00 [kworker/0:1]
root 30 0.0 0.0 0 0 ? D 09:21 0:00 [kworker/1:1]
root 31 0.0 0.0 0 0 ? D 09:21 0:00 [kworker/2:1]
root 32 0.0 0.0 0 0 ? S 09:21 0:00 [oom_reaper]
root 33 0.0 0.0 0 0 ? D< 09:21 0:00 [writeback]
root 34 0.0 0.0 0 0 ? S 09:21 0:00 [kcompactd0]
root 35 0.0 0.0 0 0 ? D< 09:21 0:00 [kblockd]
root 36 0.0 0.0 0 0 ? D< 09:21 0:00 [edac-poller]
root 37 0.1 0.0 0 0 ? D 09:21 0:00 [kworker/3:1]
root 38 0.0 0.0 0 0 ? S 09:21 0:00 [kswapd0]
root 39 0.0 0.0 0 0 ? D< 09:21 0:00 [ttm_swap]
root 40 0.0 0.0 0 0 ? S 09:21 0:00 [scsi_eh_0]
root 41 0.0 0.0 0 0 ? D< 09:21 0:00 [scsi_tmf_0]
root 42 0.0 0.0 0 0 ? S 09:21 0:00 [scsi_eh_1]
root 43 0.0 0.0 0 0 ? D< 09:21 0:00 [scsi_tmf_1]
root 44 0.0 0.0 0 0 ? S 09:21 0:00 [scsi_eh_2]
root 45 0.0 0.0 0 0 ? D< 09:21 0:00 [scsi_tmf_2]
root 46 0.0 0.0 0 0 ? S 09:21 0:00 [scsi_eh_3]
root 47 0.0 0.0 0 0 ? D< 09:21 0:00 [scsi_tmf_3]
root 48 0.0 0.0 0 0 ? S 09:21 0:00 [scsi_eh_4]
root 49 0.0 0.0 0 0 ? D< 09:21 0:00 [scsi_tmf_4]
root 50 0.0 0.0 0 0 ? S 09:21 0:00 [scsi_eh_5]
root 51 0.0 0.0 0 0 ? D< 09:21 0:00 [scsi_tmf_5]
root 52 0.5 0.0 0 0 ? D 09:21 0:00 [kworker/u8:1]
root 53 0.5 0.0 0 0 ? D 09:21 0:00 [kworker/u8:2]
root 54 0.5 0.0 0 0 ? D 09:21 0:00 [kworker/u8:3]
root 55 0.9 0.0 0 0 ? D 09:21 0:00 [kworker/u8:4]
root 56 2.4 0.0 0 0 ? D 09:21 0:00 [kworker/u8:5]
root 57 0.3 0.0 0 0 ? D 09:21 0:00 [kworker/u8:6]
root 58 0.0 0.0 0 0 ? D 09:21 0:00 [kworker/2:2]
root 59 0.0 0.0 0 0 ? D 09:21 0:00 [kworker/2:3]
root 60 0.8 0.0 0 0 ? D 09:21 0:00 [kworker/u8:7]
root 61 0.0 0.0 0 0 ? D< 09:21 0:00 [kworker/0:1H]
root 62 0.0 0.0 0 0 ? D< 09:21 0:00 [kworker/3:1H]
root 63 0.0 0.0 0 0 ? D< 09:21 0:00 [kworker/2:1H]
root 64 0.0 0.0 0 0 ? D 09:21 0:00 [kworker/0:2]
root 65 0.0 0.0 0 0 ? D< 09:21 0:00 [btrfs-worker]
root 66 0.0 0.0 0 0 ? D< 09:21 0:00 [kworker/u9:0]
root 67 0.0 0.0 0 0 ? D< 09:21 0:00 [btrfs-worker-hi]
root 68 0.0 0.0 0 0 ? D< 09:21 0:00 [btrfs-delalloc]
root 69 0.0 0.0 0 0 ? D< 09:21 0:00 [btrfs-flush_del]
root 70 0.0 0.0 0 0 ? D< 09:21 0:00 [btrfs-cache]
root 71 0.0 0.0 0 0 ? D< 09:21 0:00 [btrfs-submit]
root 72 0.0 0.0 0 0 ? D< 09:21 0:00 [btrfs-fixup]
root 73 0.0 0.0 0 0 ? D< 09:21 0:00 [btrfs-endio]
root 74 0.0 0.0 0 0 ? D< 09:21 0:00 [btrfs-endio-met]
root 75 0.0 0.0 0 0 ? D< 09:21 0:00 [btrfs-endio-met]
root 76 0.0 0.0 0 0 ? D< 09:21 0:00 [btrfs-endio-rai]
root 77 0.0 0.0 0 0 ? D< 09:21 0:00 [btrfs-endio-rep]
root 78 0.0 0.0 0 0 ? D< 09:21 0:00 [btrfs-rmw]
root 79 0.0 0.0 0 0 ? D< 09:21 0:00 [btrfs-endio-wri]
root 80 0.0 0.0 0 0 ? D< 09:21 0:00 [btrfs-freespace]
root 81 0.0 0.0 0 0 ? D< 09:21 0:00 [btrfs-delayed-m]
root 82 0.0 0.0 0 0 ? D< 09:21 0:00 [btrfs-readahead]
root 83 0.0 0.0 0 0 ? D< 09:21 0:00 [btrfs-qgroup-re]
root 84 0.0 0.0 0 0 ? D< 09:21 0:00 [btrfs-extent-re]
root 85 0.0 0.0 0 0 ? D< 09:21 0:00 [kworker/1:1H]
root 86 0.0 0.0 0 0 ? S 09:21 0:00 [btrfs-cleaner]
root 87 0.0 0.0 0 0 ? S 09:21 0:00 [btrfs-transacti]
root 94 0.0 0.0 0 0 ? S 09:21 0:00 [jbd2/sdc2-8]
root 95 0.0 0.0 0 0 ? D< 09:21 0:00 [ext4-rsv-conver]
root 96 0.0 0.0 0 0 ? D< 09:21 0:00 [btrfs-worker]
root 97 0.0 0.0 0 0 ? D< 09:21 0:00 [btrfs-worker-hi]
root 98 0.0 0.0 0 0 ? D< 09:21 0:00 [btrfs-delalloc]
root 99 0.0 0.0 0 0 ? D< 09:21 0:00 [btrfs-flush_del]
root 100 0.0 0.0 0 0 ? D< 09:21 0:00 [btrfs-cache]
root 101 0.0 0.0 0 0 ? D< 09:21 0:00 [btrfs-submit]
root 102 0.0 0.0 0 0 ? D< 09:21 0:00 [btrfs-fixup]
root 103 0.0 0.0 0 0 ? D< 09:21 0:00 [btrfs-endio]
root 104 0.0 0.0 0 0 ? D< 09:21 0:00 [btrfs-endio-met]
root 105 0.0 0.0 0 0 ? D< 09:21 0:00 [btrfs-endio-met]
root 106 0.0 0.0 0 0 ? D< 09:21 0:00 [btrfs-endio-rai]
root 107 0.0 0.0 0 0 ? D< 09:21 0:00 [btrfs-endio-rep]
root 108 0.0 0.0 0 0 ? D< 09:21 0:00 [btrfs-rmw]
root 109 0.0 0.0 0 0 ? D< 09:21 0:00 [btrfs-endio-wri]
root 110 0.0 0.0 0 0 ? D< 09:21 0:00 [btrfs-freespace]
root 111 0.0 0.0 0 0 ? D< 09:21 0:00 [btrfs-delayed-m]
root 112 0.0 0.0 0 0 ? D< 09:21 0:00 [btrfs-readahead]
root 113 0.0 0.0 0 0 ? D< 09:21 0:00 [btrfs-qgroup-re]
root 114 0.0 0.0 0 0 ? D< 09:21 0:00 [btrfs-extent-re]
root 115 0.0 0.0 0 0 ? D 09:21 0:00 [kworker/3:2]
root 117 0.0 0.0 0 0 ? D 09:21 0:00 [kworker/1:2]
root 118 0.0 0.0 0 0 ? S 09:21 0:00 [btrfs-cleaner]
root 119 0.0 0.0 0 0 ? S 09:21 0:00 [btrfs-transacti]
root 122 0.7 0.0 12268 3580 ? Ss 09:21 0:00 /sbin/udevd --daemon
root 148 0.5 0.0 0 0 ? D 09:21 0:00 [kworker/u8:8]
root 151 0.0 0.0 8948 6276 ? Ss 09:21 0:00 syslog-ng --foreground
mpd 157 1.0 0.2 409956 21728 ? Ssl 09:21 0:00 mpd --no-daemon
root 158 0.1 0.0 73352 7868 ? SLsl 09:21 0:00 ntpd -n
root 159 0.1 0.0 9724 5136 ? Ss 09:21 0:00 cupsd -f
root 161 0.0 0.0 2700 120 ? Ss 09:21 0:00 fcron -b
root 163 0.6 0.0 0 0 ? D 09:21 0:00 [kworker/u8:9]
root 164 1.1 0.0 0 0 ? D 09:21 0:00 [kworker/u8:10]
root 166 0.3 0.0 0 0 ? D 09:21 0:00 [kworker/u8:11]
root 167 1.6 0.0 0 0 ? D 09:21 0:00 [kworker/u8:12]
root 194 0.2 0.0 3780 2776 tty1 Ss 09:21 0:00 /bin/login -f
root 195 0.0 0.0 5536 1712 tty2 Ss+ 09:21 0:00 agetty 38400 tty2 linux
markus 198 0.0 0.0 8252 3952 tty1 S 09:21 0:00 -zsh
markus 199 0.0 0.0 7072 3160 tty1 S+ 09:21 0:00 /bin/sh /usr/bin/startx
markus 215 0.0 0.0 4172 2668 tty1 S+ 09:21 0:00 xinit /home/markus/.xinitrc -- /etc/X11/xinit/xserverrc :0 -auth /home/markus/.serverauth.199
root 216 3.0 0.8 275060 66748 tty3 Ssl+ 09:21 0:00 /usr/bin/X -nolisten tcp :0 -auth /home/markus/.serverauth.199
root 217 0.7 0.0 0 0 ? D 09:21 0:00 [kworker/u8:13]
markus 221 0.2 0.2 1073767784 16816 tty1 S 09:21 0:00 /home/markus/.xmonad/xmonad-x86_64-linux
root 225 0.0 0.0 0 0 ? D< 09:21 0:00 [kworker/1:2H]
markus 230 0.0 0.0 4188 2212 ? S 09:21 0:00 xbindkeys
markus 247 0.1 0.0 9196 5096 tty1 S 09:21 0:00 xscreensaver
markus 251 0.0 0.0 7072 1544 tty1 S 09:21 0:00 /bin/sh /home/markus/.xinitrc
markus 254 0.0 0.0 4428 2540 tty1 S 09:21 0:00 unclutter -noevents -idle 1
root 256 0.0 0.0 0 0 ? D< 09:21 0:00 [kworker/u9:1]
markus 257 0.2 0.2 1073770164 19108 ? Ss 09:21 0:00 xmobar
markus 265 3.9 0.9 490256 81336 tty1 Sl 09:21 0:00 konsole -e tmux attach-session
markus 271 0.0 0.0 4780 2272 tty1 S 09:21 0:00 dbus-launch --autolaunch 500fac476577f8a0a3d748f800000095 --binary-syntax --close-stderr
markus 272 0.0 0.0 3292 196 ? Ss 09:21 0:00 /usr/bin/dbus-daemon --fork --print-pid 5 --print-address 7 --session
markus 275 0.0 0.0 9796 4080 pts/0 Ss+ 09:21 0:00 /usr/bin/tmux attach-session
markus 278 0.7 0.0 10492 4632 ? Ss 09:21 0:00 /usr/bin/tmux attach-session
root 279 0.0 0.0 6688 3384 pts/1 Ss 09:21 0:00 sudo su
root 280 0.0 0.0 6688 3332 pts/2 Ss 09:21 0:00 sudo su
markus 281 0.0 0.0 7072 3148 pts/3 Ss+ 09:21 0:00 sh /home/markus/multitail
markus 282 0.3 0.3 62116 29028 pts/4 Ss+ 09:21 0:00 ncmpcpp -h 192.168.1.2 -p 55555
markus 283 1.7 0.4 43556 37856 pts/5 Ss+ 09:21 0:00 mutt
markus 284 1.2 0.2 20748 16656 pts/6 Ss 09:21 0:00 zsh
markus 285 1.0 0.1 20208 15956 pts/7 Ss+ 09:21 0:00 zsh
markus 286 1.0 0.1 20208 15996 pts/8 Ss+ 09:21 0:00 zsh
markus 287 1.0 0.1 20168 15920 pts/9 Ss+ 09:21 0:00 -zsh
markus 288 1.0 0.1 20168 16016 pts/10 Ss+ 09:21 0:00 -zsh
root 300 0.0 0.0 0 0 ? D< 09:21 0:00 [kworker/3:2H]
root 301 0.0 0.0 6312 2868 pts/1 S 09:21 0:00 su
root 302 0.0 0.0 6312 2592 pts/2 S 09:21 0:00 su
root 306 0.4 0.0 11776 7572 pts/2 S+ 09:21 0:00 zsh
root 347 0.5 0.0 11776 7580 pts/1 S+ 09:21 0:00 zsh
root 351 0.0 0.0 6688 3208 pts/3 S+ 09:21 0:00 sudo multitail --no-mark-change -M 0 -csn /var/log/messages -cS kernel /var/log/kern.log
root 352 0.2 0.1 14944 10760 pts/3 S+ 09:21 0:00 multitail --no-mark-change -M 0 -csn /var/log/messages -cS kernel /var/log/kern.log
root 353 0.0 0.0 5436 620 pts/3 S 09:21 0:00 tail --follow=name -n 172 /var/log/messages
root 354 0.0 0.0 5436 700 pts/3 S 09:21 0:00 tail --follow=name -n 50 /var/log/kern.log
root 355 0.0 0.0 0 0 ? D< 09:21 0:00 [kworker/u9:2]
markus 362 0.0 0.0 6328 2400 pts/6 R+ 09:21 0:00 ps aux
[-- Attachment #3: SysRq_w --]
[-- Type: text/plain, Size: 25515 bytes --]
sysrq: SysRq : Show Blocked State
task PC stack pid father
kworker/0:0 D 0 3 2 0x00000000
Call Trace:
? __schedule+0x16f/0x5b0
? schedule+0x2d/0x80
? worker_thread+0xaf/0x3f0
? process_one_work+0x340/0x340
? kthread+0x106/0x120
? __kthread_create_on_node+0x170/0x170
? ret_from_fork+0x22/0x30
kworker/0:0H D 0 4 2 0x00000000
Call Trace:
? __schedule+0x16f/0x5b0
? schedule+0x2d/0x80
? worker_thread+0xaf/0x3f0
? process_one_work+0x340/0x340
? kthread+0x106/0x120
? __kthread_create_on_node+0x170/0x170
? ret_from_fork+0x22/0x30
kworker/u8:0 D 0 5 2 0x00000000
Call Trace:
? __schedule+0x16f/0x5b0
? schedule+0x2d/0x80
? worker_thread+0xaf/0x3f0
? process_one_work+0x340/0x340
? kthread+0x106/0x120
? __kthread_create_on_node+0x170/0x170
? ret_from_fork+0x22/0x30
mm_percpu_wq D 0 6 2 0x00000000
Call Trace:
? __schedule+0x16f/0x5b0
? schedule+0x2d/0x80
? rescuer_thread+0x2f1/0x340
? __cancel_work+0x70/0x70
? kthread+0x106/0x120
? __kthread_create_on_node+0x170/0x170
? ret_from_fork+0x22/0x30
kworker/1:0 D 0 15 2 0x00000000
Call Trace:
? __schedule+0x16f/0x5b0
? schedule+0x2d/0x80
? worker_thread+0xaf/0x3f0
? process_one_work+0x340/0x340
? kthread+0x106/0x120
? __kthread_create_on_node+0x170/0x170
? ret_from_fork+0x22/0x30
kworker/1:0H D 0 16 2 0x00000000
Call Trace:
? __schedule+0x16f/0x5b0
? schedule+0x2d/0x80
? worker_thread+0xaf/0x3f0
? process_one_work+0x340/0x340
? kthread+0x106/0x120
? __kthread_create_on_node+0x170/0x170
? ret_from_fork+0x22/0x30
kworker/2:0 D 0 20 2 0x00000000
Call Trace:
? __schedule+0x16f/0x5b0
? schedule+0x2d/0x80
? worker_thread+0xaf/0x3f0
? process_one_work+0x340/0x340
? kthread+0x106/0x120
? __kthread_create_on_node+0x170/0x170
? ret_from_fork+0x22/0x30
kworker/2:0H D 0 21 2 0x00000000
Call Trace:
? __schedule+0x16f/0x5b0
? schedule+0x2d/0x80
? worker_thread+0xaf/0x3f0
? process_one_work+0x340/0x340
? kthread+0x106/0x120
? __kthread_create_on_node+0x170/0x170
? ret_from_fork+0x22/0x30
kworker/3:0 D 0 25 2 0x00000000
Call Trace:
? __schedule+0x16f/0x5b0
? schedule+0x2d/0x80
? worker_thread+0xaf/0x3f0
? process_one_work+0x340/0x340
? kthread+0x106/0x120
? __kthread_create_on_node+0x170/0x170
? ret_from_fork+0x22/0x30
kworker/3:0H D 0 26 2 0x00000000
Call Trace:
? __schedule+0x16f/0x5b0
? schedule+0x2d/0x80
? worker_thread+0xaf/0x3f0
? process_one_work+0x340/0x340
? kthread+0x106/0x120
? __kthread_create_on_node+0x170/0x170
? ret_from_fork+0x22/0x30
netns D 0 28 2 0x00000000
Call Trace:
? __schedule+0x16f/0x5b0
? schedule+0x2d/0x80
? rescuer_thread+0x2f1/0x340
? __cancel_work+0x70/0x70
? kthread+0x106/0x120
? __kthread_create_on_node+0x170/0x170
? ret_from_fork+0x22/0x30
kworker/0:1 D 0 29 2 0x00000000
Call Trace:
? __schedule+0x16f/0x5b0
? schedule+0x2d/0x80
? worker_thread+0xaf/0x3f0
? process_one_work+0x340/0x340
? kthread+0x106/0x120
? __kthread_create_on_node+0x170/0x170
? ret_from_fork+0x22/0x30
kworker/1:1 D 0 30 2 0x00000000
Call Trace:
? __schedule+0x16f/0x5b0
? schedule+0x2d/0x80
? worker_thread+0xaf/0x3f0
? process_one_work+0x340/0x340
? kthread+0x106/0x120
? __kthread_create_on_node+0x170/0x170
? ret_from_fork+0x22/0x30
kworker/2:1 D 0 31 2 0x00000000
Call Trace:
? __schedule+0x16f/0x5b0
? pm_runtime_work+0x79/0x80
? schedule+0x2d/0x80
? worker_thread+0xaf/0x3f0
? process_one_work+0x340/0x340
? kthread+0x106/0x120
? __kthread_create_on_node+0x170/0x170
? ret_from_fork+0x22/0x30
writeback D 0 33 2 0x00000000
Call Trace:
? __schedule+0x16f/0x5b0
? schedule+0x2d/0x80
? rescuer_thread+0x2f1/0x340
? __cancel_work+0x70/0x70
? kthread+0x106/0x120
? __kthread_create_on_node+0x170/0x170
? ret_from_fork+0x22/0x30
kblockd D 0 35 2 0x00000000
Call Trace:
? __schedule+0x16f/0x5b0
? schedule+0x2d/0x80
? rescuer_thread+0x2f1/0x340
? __cancel_work+0x70/0x70
? kthread+0x106/0x120
? __kthread_create_on_node+0x170/0x170
? ret_from_fork+0x22/0x30
edac-poller D 0 36 2 0x00000000
Call Trace:
? __schedule+0x16f/0x5b0
? schedule+0x2d/0x80
? rescuer_thread+0x2f1/0x340
? __cancel_work+0x70/0x70
? kthread+0x106/0x120
? __kthread_create_on_node+0x170/0x170
? ret_from_fork+0x22/0x30
kworker/3:1 D 0 37 2 0x00000000
Call Trace:
? __schedule+0x16f/0x5b0
? schedule+0x2d/0x80
? worker_thread+0xaf/0x3f0
? process_one_work+0x340/0x340
? kthread+0x106/0x120
? __kthread_create_on_node+0x170/0x170
? ret_from_fork+0x22/0x30
ttm_swap D 0 39 2 0x00000000
Call Trace:
? __schedule+0x16f/0x5b0
? schedule+0x2d/0x80
? rescuer_thread+0x2f1/0x340
? __cancel_work+0x70/0x70
? kthread+0x106/0x120
? __kthread_create_on_node+0x170/0x170
? ret_from_fork+0x22/0x30
scsi_tmf_0 D 0 41 2 0x00000000
Call Trace:
? __schedule+0x16f/0x5b0
? schedule+0x2d/0x80
? rescuer_thread+0x2f1/0x340
? __cancel_work+0x70/0x70
? kthread+0x106/0x120
? __kthread_create_on_node+0x170/0x170
? ret_from_fork+0x22/0x30
scsi_tmf_1 D 0 43 2 0x00000000
Call Trace:
? __schedule+0x16f/0x5b0
? schedule+0x2d/0x80
? rescuer_thread+0x2f1/0x340
? __cancel_work+0x70/0x70
? kthread+0x106/0x120
? __kthread_create_on_node+0x170/0x170
? ret_from_fork+0x22/0x30
scsi_tmf_2 D 0 45 2 0x00000000
Call Trace:
? __schedule+0x16f/0x5b0
? schedule+0x2d/0x80
? rescuer_thread+0x2f1/0x340
? __cancel_work+0x70/0x70
? kthread+0x106/0x120
? __kthread_create_on_node+0x170/0x170
? ret_from_fork+0x22/0x30
scsi_tmf_3 D 0 47 2 0x00000000
Call Trace:
? __schedule+0x16f/0x5b0
? schedule+0x2d/0x80
? rescuer_thread+0x2f1/0x340
? __cancel_work+0x70/0x70
? kthread+0x106/0x120
? __kthread_create_on_node+0x170/0x170
? ret_from_fork+0x22/0x30
scsi_tmf_4 D 0 49 2 0x00000000
Call Trace:
? __schedule+0x16f/0x5b0
? schedule+0x2d/0x80
? rescuer_thread+0x2f1/0x340
? __cancel_work+0x70/0x70
? kthread+0x106/0x120
? __kthread_create_on_node+0x170/0x170
? ret_from_fork+0x22/0x30
scsi_tmf_5 D 0 51 2 0x00000000
Call Trace:
? __schedule+0x16f/0x5b0
? schedule+0x2d/0x80
? rescuer_thread+0x2f1/0x340
? __cancel_work+0x70/0x70
? kthread+0x106/0x120
? __kthread_create_on_node+0x170/0x170
? ret_from_fork+0x22/0x30
kworker/u8:1 D 0 52 2 0x00000000
Call Trace:
? __schedule+0x16f/0x5b0
? schedule+0x2d/0x80
? worker_thread+0xaf/0x3f0
? process_one_work+0x340/0x340
? kthread+0x106/0x120
? __kthread_create_on_node+0x170/0x170
? ret_from_fork+0x22/0x30
kworker/u8:2 D 0 53 2 0x00000000
Call Trace:
? __schedule+0x16f/0x5b0
? edac_pci_workq_function+0x54/0x70
? schedule+0x2d/0x80
? worker_thread+0xaf/0x3f0
? process_one_work+0x340/0x340
? kthread+0x106/0x120
? __kthread_create_on_node+0x170/0x170
? ret_from_fork+0x22/0x30
kworker/u8:3 D 0 54 2 0x00000000
Call Trace:
? __schedule+0x16f/0x5b0
? schedule+0x2d/0x80
? worker_thread+0xaf/0x3f0
? process_one_work+0x340/0x340
? kthread+0x106/0x120
? __kthread_create_on_node+0x170/0x170
? ret_from_fork+0x22/0x30
kworker/u8:4 D 0 55 2 0x00000000
Call Trace:
? __schedule+0x16f/0x5b0
? schedule+0x2d/0x80
? worker_thread+0xaf/0x3f0
? process_one_work+0x340/0x340
? kthread+0x106/0x120
? __kthread_create_on_node+0x170/0x170
? ret_from_fork+0x22/0x30
kworker/u8:5 D 0 56 2 0x00000000
Call Trace:
? __schedule+0x16f/0x5b0
? schedule+0x2d/0x80
? worker_thread+0xaf/0x3f0
? process_one_work+0x340/0x340
? kthread+0x106/0x120
? __kthread_create_on_node+0x170/0x170
? ret_from_fork+0x22/0x30
kworker/u8:6 D 0 57 2 0x00000000
Call Trace:
? __schedule+0x16f/0x5b0
? schedule+0x2d/0x80
? worker_thread+0xaf/0x3f0
? process_one_work+0x340/0x340
? kthread+0x106/0x120
? __kthread_create_on_node+0x170/0x170
? ret_from_fork+0x22/0x30
kworker/2:2 D 0 58 2 0x00000000
Call Trace:
? __schedule+0x16f/0x5b0
? schedule+0x2d/0x80
? worker_thread+0xaf/0x3f0
? process_one_work+0x340/0x340
? kthread+0x106/0x120
? __kthread_create_on_node+0x170/0x170
? ret_from_fork+0x22/0x30
kworker/2:3 D 0 59 2 0x00000000
Call Trace:
? __schedule+0x16f/0x5b0
? schedule+0x2d/0x80
? worker_thread+0xaf/0x3f0
? process_one_work+0x340/0x340
? kthread+0x106/0x120
? __kthread_create_on_node+0x170/0x170
? ret_from_fork+0x22/0x30
kworker/u8:7 D 0 60 2 0x00000000
Call Trace:
? __schedule+0x16f/0x5b0
? schedule+0x2d/0x80
? worker_thread+0xaf/0x3f0
? process_one_work+0x340/0x340
? kthread+0x106/0x120
? __kthread_create_on_node+0x170/0x170
? ret_from_fork+0x22/0x30
kworker/0:1H D 0 61 2 0x00000000
Call Trace:
? __schedule+0x16f/0x5b0
? schedule+0x2d/0x80
? worker_thread+0xaf/0x3f0
? process_one_work+0x340/0x340
? kthread+0x106/0x120
? __kthread_create_on_node+0x170/0x170
? ret_from_fork+0x22/0x30
kworker/3:1H D 0 62 2 0x00000000
Call Trace:
? __schedule+0x16f/0x5b0
? schedule+0x2d/0x80
? worker_thread+0xaf/0x3f0
? process_one_work+0x340/0x340
? kthread+0x106/0x120
? __kthread_create_on_node+0x170/0x170
? ret_from_fork+0x22/0x30
kworker/2:1H D 0 63 2 0x00000000
Call Trace:
? __schedule+0x16f/0x5b0
? schedule+0x2d/0x80
? worker_thread+0xaf/0x3f0
? process_one_work+0x340/0x340
? kthread+0x106/0x120
? __kthread_create_on_node+0x170/0x170
? ret_from_fork+0x22/0x30
kworker/0:2 D 0 64 2 0x00000000
Call Trace:
? __schedule+0x16f/0x5b0
? schedule+0x2d/0x80
? worker_thread+0xaf/0x3f0
? process_one_work+0x340/0x340
? kthread+0x106/0x120
? __kthread_create_on_node+0x170/0x170
? ret_from_fork+0x22/0x30
btrfs-worker D 0 65 2 0x00000000
Call Trace:
? __schedule+0x16f/0x5b0
? schedule+0x2d/0x80
? rescuer_thread+0x2f1/0x340
? __cancel_work+0x70/0x70
? kthread+0x106/0x120
? __kthread_create_on_node+0x170/0x170
? ret_from_fork+0x22/0x30
kworker/u9:0 D 0 66 2 0x00000000
Call Trace:
? __schedule+0x16f/0x5b0
? schedule+0x2d/0x80
? worker_thread+0xaf/0x3f0
? process_one_work+0x340/0x340
? kthread+0x106/0x120
? __kthread_create_on_node+0x170/0x170
? ret_from_fork+0x22/0x30
btrfs-worker-hi D 0 67 2 0x00000000
Call Trace:
? __schedule+0x16f/0x5b0
? schedule+0x2d/0x80
? rescuer_thread+0x2f1/0x340
? __cancel_work+0x70/0x70
? kthread+0x106/0x120
? __kthread_create_on_node+0x170/0x170
? ret_from_fork+0x22/0x30
btrfs-delalloc D 0 68 2 0x00000000
Call Trace:
? __schedule+0x16f/0x5b0
? schedule+0x2d/0x80
? rescuer_thread+0x2f1/0x340
? __cancel_work+0x70/0x70
? kthread+0x106/0x120
? __kthread_create_on_node+0x170/0x170
? ret_from_fork+0x22/0x30
btrfs-flush_del D 0 69 2 0x00000000
Call Trace:
? __schedule+0x16f/0x5b0
? schedule+0x2d/0x80
? rescuer_thread+0x2f1/0x340
? __cancel_work+0x70/0x70
? kthread+0x106/0x120
? __kthread_create_on_node+0x170/0x170
? ret_from_fork+0x22/0x30
btrfs-cache D 0 70 2 0x00000000
Call Trace:
? __schedule+0x16f/0x5b0
? schedule+0x2d/0x80
? rescuer_thread+0x2f1/0x340
? __cancel_work+0x70/0x70
? kthread+0x106/0x120
? __kthread_create_on_node+0x170/0x170
? ret_from_fork+0x22/0x30
btrfs-submit D 0 71 2 0x00000000
Call Trace:
? __schedule+0x16f/0x5b0
? schedule+0x2d/0x80
? rescuer_thread+0x2f1/0x340
? __cancel_work+0x70/0x70
? kthread+0x106/0x120
? __kthread_create_on_node+0x170/0x170
? ret_from_fork+0x22/0x30
btrfs-fixup D 0 72 2 0x00000000
Call Trace:
? __schedule+0x16f/0x5b0
? schedule+0x2d/0x80
? rescuer_thread+0x2f1/0x340
? __cancel_work+0x70/0x70
? kthread+0x106/0x120
? __kthread_create_on_node+0x170/0x170
? ret_from_fork+0x22/0x30
btrfs-endio D 0 73 2 0x00000000
Call Trace:
? __schedule+0x16f/0x5b0
? schedule+0x2d/0x80
? rescuer_thread+0x2f1/0x340
? __cancel_work+0x70/0x70
? kthread+0x106/0x120
? __kthread_create_on_node+0x170/0x170
? ret_from_fork+0x22/0x30
btrfs-endio-met D 0 74 2 0x00000000
Call Trace:
? __schedule+0x16f/0x5b0
? schedule+0x2d/0x80
? rescuer_thread+0x2f1/0x340
? __cancel_work+0x70/0x70
? kthread+0x106/0x120
? __kthread_create_on_node+0x170/0x170
? ret_from_fork+0x22/0x30
btrfs-endio-met D 0 75 2 0x00000000
Call Trace:
? __schedule+0x16f/0x5b0
? schedule+0x2d/0x80
? rescuer_thread+0x2f1/0x340
? __cancel_work+0x70/0x70
? kthread+0x106/0x120
? __kthread_create_on_node+0x170/0x170
? ret_from_fork+0x22/0x30
btrfs-endio-rai D 0 76 2 0x00000000
Call Trace:
? __schedule+0x16f/0x5b0
? schedule+0x2d/0x80
? rescuer_thread+0x2f1/0x340
? __cancel_work+0x70/0x70
? kthread+0x106/0x120
? __kthread_create_on_node+0x170/0x170
? ret_from_fork+0x22/0x30
btrfs-endio-rep D 0 77 2 0x00000000
Call Trace:
? __schedule+0x16f/0x5b0
? schedule+0x2d/0x80
? rescuer_thread+0x2f1/0x340
? __cancel_work+0x70/0x70
? kthread+0x106/0x120
? __kthread_create_on_node+0x170/0x170
? ret_from_fork+0x22/0x30
btrfs-rmw D 0 78 2 0x00000000
Call Trace:
? __schedule+0x16f/0x5b0
? schedule+0x2d/0x80
? rescuer_thread+0x2f1/0x340
? __cancel_work+0x70/0x70
? kthread+0x106/0x120
? __kthread_create_on_node+0x170/0x170
? ret_from_fork+0x22/0x30
btrfs-endio-wri D 0 79 2 0x00000000
Call Trace:
? __schedule+0x16f/0x5b0
? schedule+0x2d/0x80
? rescuer_thread+0x2f1/0x340
? __cancel_work+0x70/0x70
? kthread+0x106/0x120
? __kthread_create_on_node+0x170/0x170
? ret_from_fork+0x22/0x30
btrfs-freespace D 0 80 2 0x00000000
Call Trace:
? __schedule+0x16f/0x5b0
? schedule+0x2d/0x80
? rescuer_thread+0x2f1/0x340
? __cancel_work+0x70/0x70
? kthread+0x106/0x120
? __kthread_create_on_node+0x170/0x170
? ret_from_fork+0x22/0x30
btrfs-delayed-m D 0 81 2 0x00000000
Call Trace:
? __schedule+0x16f/0x5b0
? schedule+0x2d/0x80
? rescuer_thread+0x2f1/0x340
? __cancel_work+0x70/0x70
? kthread+0x106/0x120
? __kthread_create_on_node+0x170/0x170
? ret_from_fork+0x22/0x30
btrfs-readahead D 0 82 2 0x00000000
Call Trace:
? __schedule+0x16f/0x5b0
? schedule+0x2d/0x80
? rescuer_thread+0x2f1/0x340
? __cancel_work+0x70/0x70
? kthread+0x106/0x120
? __kthread_create_on_node+0x170/0x170
? ret_from_fork+0x22/0x30
btrfs-qgroup-re D 0 83 2 0x00000000
Call Trace:
? __schedule+0x16f/0x5b0
? schedule+0x2d/0x80
? rescuer_thread+0x2f1/0x340
? __cancel_work+0x70/0x70
? kthread+0x106/0x120
? __kthread_create_on_node+0x170/0x170
? ret_from_fork+0x22/0x30
btrfs-extent-re D 0 84 2 0x00000000
Call Trace:
? __schedule+0x16f/0x5b0
? schedule+0x2d/0x80
? rescuer_thread+0x2f1/0x340
? __cancel_work+0x70/0x70
? kthread+0x106/0x120
? __kthread_create_on_node+0x170/0x170
? ret_from_fork+0x22/0x30
kworker/1:1H D 0 85 2 0x00000000
Call Trace:
? __schedule+0x16f/0x5b0
? schedule+0x2d/0x80
? worker_thread+0xaf/0x3f0
? process_one_work+0x340/0x340
? kthread+0x106/0x120
? __kthread_create_on_node+0x170/0x170
? ret_from_fork+0x22/0x30
ext4-rsv-conver D 0 95 2 0x00000000
Call Trace:
? __schedule+0x16f/0x5b0
? schedule+0x2d/0x80
? rescuer_thread+0x2f1/0x340
? __cancel_work+0x70/0x70
? kthread+0x106/0x120
? __kthread_create_on_node+0x170/0x170
? ret_from_fork+0x22/0x30
btrfs-worker D 0 96 2 0x00000000
Call Trace:
? __schedule+0x16f/0x5b0
? schedule+0x2d/0x80
? rescuer_thread+0x2f1/0x340
? __cancel_work+0x70/0x70
? kthread+0x106/0x120
? __kthread_create_on_node+0x170/0x170
? ret_from_fork+0x22/0x30
btrfs-worker-hi D 0 97 2 0x00000000
Call Trace:
? __schedule+0x16f/0x5b0
? schedule+0x2d/0x80
? rescuer_thread+0x2f1/0x340
? __cancel_work+0x70/0x70
? kthread+0x106/0x120
? __kthread_create_on_node+0x170/0x170
? ret_from_fork+0x22/0x30
btrfs-delalloc D 0 98 2 0x00000000
Call Trace:
? __schedule+0x16f/0x5b0
? schedule+0x2d/0x80
? rescuer_thread+0x2f1/0x340
? __cancel_work+0x70/0x70
? kthread+0x106/0x120
? __kthread_create_on_node+0x170/0x170
? ret_from_fork+0x22/0x30
btrfs-flush_del D 0 99 2 0x00000000
Call Trace:
? __schedule+0x16f/0x5b0
? schedule+0x2d/0x80
? rescuer_thread+0x2f1/0x340
? __cancel_work+0x70/0x70
? kthread+0x106/0x120
? __kthread_create_on_node+0x170/0x170
? do_group_exit+0x35/0xa0
? ret_from_fork+0x22/0x30
btrfs-cache D 0 100 2 0x00000000
Call Trace:
? __schedule+0x16f/0x5b0
? schedule+0x2d/0x80
? rescuer_thread+0x2f1/0x340
? __cancel_work+0x70/0x70
? kthread+0x106/0x120
? __kthread_create_on_node+0x170/0x170
? ret_from_fork+0x22/0x30
btrfs-submit D 0 101 2 0x00000000
Call Trace:
? __schedule+0x16f/0x5b0
? schedule+0x2d/0x80
? rescuer_thread+0x2f1/0x340
? __cancel_work+0x70/0x70
? kthread+0x106/0x120
? __kthread_create_on_node+0x170/0x170
? ret_from_fork+0x22/0x30
btrfs-fixup D 0 102 2 0x00000000
Call Trace:
? __schedule+0x16f/0x5b0
? schedule+0x2d/0x80
? rescuer_thread+0x2f1/0x340
? __cancel_work+0x70/0x70
? kthread+0x106/0x120
? __kthread_create_on_node+0x170/0x170
? ret_from_fork+0x22/0x30
btrfs-endio D 0 103 2 0x00000000
Call Trace:
? __schedule+0x16f/0x5b0
? schedule+0x2d/0x80
? rescuer_thread+0x2f1/0x340
? __cancel_work+0x70/0x70
? kthread+0x106/0x120
? __kthread_create_on_node+0x170/0x170
? ret_from_fork+0x22/0x30
btrfs-endio-met D 0 104 2 0x00000000
Call Trace:
? __schedule+0x16f/0x5b0
? schedule+0x2d/0x80
? rescuer_thread+0x2f1/0x340
? __cancel_work+0x70/0x70
? kthread+0x106/0x120
? __kthread_create_on_node+0x170/0x170
? ret_from_fork+0x22/0x30
btrfs-endio-met D 0 105 2 0x00000000
Call Trace:
? __schedule+0x16f/0x5b0
? schedule+0x2d/0x80
? rescuer_thread+0x2f1/0x340
? __cancel_work+0x70/0x70
? kthread+0x106/0x120
? __kthread_create_on_node+0x170/0x170
? ret_from_fork+0x22/0x30
btrfs-endio-rai D 0 106 2 0x00000000
Call Trace:
? __schedule+0x16f/0x5b0
? schedule+0x2d/0x80
? rescuer_thread+0x2f1/0x340
? __cancel_work+0x70/0x70
? kthread+0x106/0x120
? __kthread_create_on_node+0x170/0x170
? ret_from_fork+0x22/0x30
btrfs-endio-rep D 0 107 2 0x00000000
Call Trace:
? __schedule+0x16f/0x5b0
? schedule+0x2d/0x80
? rescuer_thread+0x2f1/0x340
? __cancel_work+0x70/0x70
? kthread+0x106/0x120
? __kthread_create_on_node+0x170/0x170
? do_group_exit+0x35/0xa0
? ret_from_fork+0x22/0x30
btrfs-rmw D 0 108 2 0x00000000
Call Trace:
? __schedule+0x16f/0x5b0
? schedule+0x2d/0x80
? rescuer_thread+0x2f1/0x340
? __cancel_work+0x70/0x70
? kthread+0x106/0x120
? __kthread_create_on_node+0x170/0x170
? ret_from_fork+0x22/0x30
btrfs-endio-wri D 0 109 2 0x00000000
Call Trace:
? __schedule+0x16f/0x5b0
? schedule+0x2d/0x80
? rescuer_thread+0x2f1/0x340
? __cancel_work+0x70/0x70
? kthread+0x106/0x120
? __kthread_create_on_node+0x170/0x170
? do_group_exit+0x35/0xa0
? ret_from_fork+0x22/0x30
btrfs-freespace D 0 110 2 0x00000000
Call Trace:
? __schedule+0x16f/0x5b0
? schedule+0x2d/0x80
? rescuer_thread+0x2f1/0x340
? __cancel_work+0x70/0x70
? kthread+0x106/0x120
? __kthread_create_on_node+0x170/0x170
? ret_from_fork+0x22/0x30
btrfs-delayed-m D 0 111 2 0x00000000
Call Trace:
? __schedule+0x16f/0x5b0
? schedule+0x2d/0x80
? rescuer_thread+0x2f1/0x340
? __cancel_work+0x70/0x70
? kthread+0x106/0x120
? __kthread_create_on_node+0x170/0x170
? ret_from_fork+0x22/0x30
btrfs-readahead D 0 112 2 0x00000000
Call Trace:
? __schedule+0x16f/0x5b0
? schedule+0x2d/0x80
? rescuer_thread+0x2f1/0x340
? __cancel_work+0x70/0x70
? kthread+0x106/0x120
? __kthread_create_on_node+0x170/0x170
? ret_from_fork+0x22/0x30
btrfs-qgroup-re D 0 113 2 0x00000000
Call Trace:
? __schedule+0x16f/0x5b0
? schedule+0x2d/0x80
? rescuer_thread+0x2f1/0x340
? __cancel_work+0x70/0x70
? kthread+0x106/0x120
? __kthread_create_on_node+0x170/0x170
? ret_from_fork+0x22/0x30
btrfs-extent-re D 0 114 2 0x00000000
Call Trace:
? __schedule+0x16f/0x5b0
? schedule+0x2d/0x80
? rescuer_thread+0x2f1/0x340
? __cancel_work+0x70/0x70
? kthread+0x106/0x120
? __kthread_create_on_node+0x170/0x170
? ret_from_fork+0x22/0x30
kworker/3:2 D 0 115 2 0x00000000
Call Trace:
? __schedule+0x16f/0x5b0
? schedule+0x2d/0x80
? worker_thread+0xaf/0x3f0
? process_one_work+0x340/0x340
? kthread+0x106/0x120
? __kthread_create_on_node+0x170/0x170
? ret_from_fork+0x22/0x30
kworker/1:2 D 0 117 2 0x00000000
Call Trace:
? __schedule+0x16f/0x5b0
? schedule+0x2d/0x80
? worker_thread+0xaf/0x3f0
? process_one_work+0x340/0x340
? kthread+0x106/0x120
? __kthread_create_on_node+0x170/0x170
? ret_from_fork+0x22/0x30
btrfs-transacti D 0 119 2 0x00000000
Call Trace:
? __schedule+0x16f/0x5b0
? schedule+0x2d/0x80
? schedule_timeout+0x1a9/0x240
? __schedule+0x177/0x5b0
? io_schedule_timeout+0x1e/0x40
? wait_for_completion_io+0x92/0xf0
? do_task_dead+0x40/0x40
? write_all_supers+0x9d5/0xac0
? btrfs_commit_transaction+0x685/0x860
? start_transaction+0x94/0x3a0
? transaction_kthread+0x185/0x1a0
? btrfs_cleanup_transaction+0x460/0x460
? kthread+0x106/0x120
? __kthread_create_on_node+0x170/0x170
? ret_from_fork+0x22/0x30
kworker/u8:8 D 0 148 2 0x00000000
Call Trace:
? __schedule+0x16f/0x5b0
? schedule+0x2d/0x80
? worker_thread+0xaf/0x3f0
? process_one_work+0x340/0x340
? kthread+0x106/0x120
? __kthread_create_on_node+0x170/0x170
? ret_from_fork+0x22/0x30
kworker/u8:9 D 0 163 2 0x00000000
Call Trace:
? __schedule+0x16f/0x5b0
? schedule+0x2d/0x80
? worker_thread+0xaf/0x3f0
? process_one_work+0x340/0x340
? kthread+0x106/0x120
? __kthread_create_on_node+0x170/0x170
? ret_from_fork+0x22/0x30
kworker/u8:10 D 0 164 2 0x00000000
Call Trace:
? __schedule+0x16f/0x5b0
? schedule+0x2d/0x80
? worker_thread+0xaf/0x3f0
? process_one_work+0x340/0x340
? kthread+0x106/0x120
? __kthread_create_on_node+0x170/0x170
? do_group_exit+0x35/0xa0
? ret_from_fork+0x22/0x30
kworker/u8:11 D 0 166 2 0x00000000
Call Trace:
? __schedule+0x16f/0x5b0
? schedule+0x2d/0x80
? worker_thread+0xaf/0x3f0
? process_one_work+0x340/0x340
? kthread+0x106/0x120
? __kthread_create_on_node+0x170/0x170
? do_group_exit+0x35/0xa0
? ret_from_fork+0x22/0x30
kworker/u8:12 D 0 167 2 0x00000000
Call Trace:
? __schedule+0x16f/0x5b0
? schedule+0x2d/0x80
? worker_thread+0xaf/0x3f0
? process_one_work+0x340/0x340
? kthread+0x106/0x120
? __kthread_create_on_node+0x170/0x170
? ret_from_fork+0x22/0x30
kworker/u8:13 D 0 217 2 0x00000000
Call Trace:
? __schedule+0x16f/0x5b0
? kmem_cache_free+0xdf/0x100
? schedule+0x2d/0x80
? worker_thread+0xaf/0x3f0
? process_one_work+0x340/0x340
? kthread+0x106/0x120
? __kthread_create_on_node+0x170/0x170
? ret_from_fork+0x22/0x30
kworker/1:2H D 0 225 2 0x00000000
Call Trace:
? __schedule+0x16f/0x5b0
? schedule+0x2d/0x80
? worker_thread+0xaf/0x3f0
? process_one_work+0x340/0x340
? kthread+0x106/0x120
? __kthread_create_on_node+0x170/0x170
? ret_from_fork+0x22/0x30
kworker/u9:1 D 0 256 2 0x00000000
Call Trace:
? __schedule+0x16f/0x5b0
? schedule+0x2d/0x80
? worker_thread+0xaf/0x3f0
? process_one_work+0x340/0x340
? kthread+0x106/0x120
? __kthread_create_on_node+0x170/0x170
? ret_from_fork+0x22/0x30
kworker/3:2H D 0 300 2 0x00000000
Call Trace:
? __schedule+0x16f/0x5b0
? schedule+0x2d/0x80
? worker_thread+0xaf/0x3f0
? process_one_work+0x340/0x340
? kthread+0x106/0x120
? __kthread_create_on_node+0x170/0x170
? do_group_exit+0x35/0xa0
? ret_from_fork+0x22/0x30
kworker/u9:2 D 0 355 2 0x00000000
Call Trace:
? __schedule+0x16f/0x5b0
? schedule+0x2d/0x80
? worker_thread+0xaf/0x3f0
? process_one_work+0x340/0x340
? kthread+0x106/0x120
? __kthread_create_on_node+0x170/0x170
? ret_from_fork+0x22/0x30
kworker/u9:3 D 0 419 2 0x00000000
Call Trace:
? __schedule+0x16f/0x5b0
? schedule+0x2d/0x80
? worker_thread+0xaf/0x3f0
? process_one_work+0x340/0x340
? kthread+0x106/0x120
? __kthread_create_on_node+0x170/0x170
? ret_from_fork+0x22/0x30
kworker/u9:4 D 0 420 2 0x00000000
Call Trace:
? __schedule+0x16f/0x5b0
? schedule+0x2d/0x80
? worker_thread+0xaf/0x3f0
? process_one_work+0x340/0x340
? kthread+0x106/0x120
? __kthread_create_on_node+0x170/0x170
? do_group_exit+0x35/0xa0
? ret_from_fork+0x22/0x30
^ permalink raw reply [flat|nested] 11+ messages in thread
* Re: Worker threads in D state since c5a94a618e7ac86 (workqueue: Use TASK_IDLE)
2017-09-10 7:36 Worker threads in D state since c5a94a618e7ac86 (workqueue: Use TASK_IDLE) Markus Trippelsdorf
@ 2017-09-11 13:11 ` Tejun Heo
2017-09-11 14:21 ` Markus Trippelsdorf
0 siblings, 1 reply; 11+ messages in thread
From: Tejun Heo @ 2017-09-11 13:11 UTC (permalink / raw)
To: Markus Trippelsdorf; +Cc: Peter Zijlstra, linux-kernel
Hello,
On Sun, Sep 10, 2017 at 09:36:53AM +0200, Markus Trippelsdorf wrote:
> Since:
>
> commit c5a94a618e7ac86b20f53d947f68d7cee6a4c6bc
> Author: Peter Zijlstra <peterz@infradead.org>
> Date: Wed Aug 23 13:58:44 2017 +0200
>
> workqueue: Use TASK_IDLE
>
>
> all worker threads are in D state. They all show up when using "magic
> SysRq w". In htop they all have big fat red 'D' in the state column.
> Is this really desirable?
>
> I have attached the output of "ps aux" after boot and the SysRq-w
> output.
Hmm.... looks like we better revert until we figure out how this
should get presented in debugging facilities / to userspace. Peter?
Thanks.
--
tejun
* Re: Worker threads in D state since c5a94a618e7ac86 (workqueue: Use TASK_IDLE)
2017-09-11 13:11 ` Tejun Heo
@ 2017-09-11 14:21 ` Markus Trippelsdorf
2017-09-21 11:08 ` Markus Trippelsdorf
0 siblings, 1 reply; 11+ messages in thread
From: Markus Trippelsdorf @ 2017-09-11 14:21 UTC (permalink / raw)
To: Tejun Heo
Cc: Peter Zijlstra, linux-kernel, Luis R. Rodriguez,
Eric W. Biederman, Paul E. McKenney
On 2017.09.11 at 06:11 -0700, Tejun Heo wrote:
> Hello,
>
> On Sun, Sep 10, 2017 at 09:36:53AM +0200, Markus Trippelsdorf wrote:
> > Since:
> >
> > commit c5a94a618e7ac86b20f53d947f68d7cee6a4c6bc
> > Author: Peter Zijlstra <peterz@infradead.org>
> > Date: Wed Aug 23 13:58:44 2017 +0200
> >
> > workqueue: Use TASK_IDLE
> >
> >
> > all worker threads are in D state. They all show up when using "magic
> > SysRq w". In htop they all have big fat red 'D' in the state column.
> > Is this really desirable?
> >
> > I have attached the output of "ps aux" after boot and the SysRq-w
> > output.
>
> Hmm.... looks like we better revert until we figure out how this
> should get presented in debugging facilities / to userspace. Peter?
BTW rcu recently introduced the same issue:
commit d5374226c3e444239e063f005dfb59cae4390db4
Author: Luis R. Rodriguez <mcgrof@kernel.org>
Date: Tue Jun 20 14:45:47 2017 -0700
rcu: Use idle versions of swait to make idle-hack clear
--
Markus
* Re: Worker threads in D state since c5a94a618e7ac86 (workqueue: Use TASK_IDLE)
2017-09-11 14:21 ` Markus Trippelsdorf
@ 2017-09-21 11:08 ` Markus Trippelsdorf
2017-09-21 12:30 ` Peter Zijlstra
0 siblings, 1 reply; 11+ messages in thread
From: Markus Trippelsdorf @ 2017-09-21 11:08 UTC (permalink / raw)
To: Tejun Heo
Cc: Peter Zijlstra, linux-kernel, Luis R. Rodriguez,
Eric W. Biederman, Paul E. McKenney, Linus Torvalds
On 2017.09.11 at 16:21 +0200, Markus Trippelsdorf wrote:
> On 2017.09.11 at 06:11 -0700, Tejun Heo wrote:
> > Hello,
> >
> > On Sun, Sep 10, 2017 at 09:36:53AM +0200, Markus Trippelsdorf wrote:
> > > Since:
> > >
> > > commit c5a94a618e7ac86b20f53d947f68d7cee6a4c6bc
> > > Author: Peter Zijlstra <peterz@infradead.org>
> > > Date: Wed Aug 23 13:58:44 2017 +0200
> > >
> > > workqueue: Use TASK_IDLE
> > >
> > >
> > > all worker threads are in D state. They all show up when using "magic
> > > SysRq w". In htop they all have big fat red 'D' in the state column.
> > > Is this really desirable?
> > >
> > > I have attached the output of "ps aux" after boot and the SysRq-w
> > > output.
> >
> > Hmm.... looks like we better revert until we figure out how this
> > should get presented in debugging facilities / to userspace. Peter?
>
> BTW rcu recently introduced the same issue:
>
> commit d5374226c3e444239e063f005dfb59cae4390db4
> Author: Luis R. Rodriguez <mcgrof@kernel.org>
> Date: Tue Jun 20 14:45:47 2017 -0700
>
> rcu: Use idle versions of swait to make idle-hack clear
Ping?
You may call it a cosmetic issue, but still it makes debugging much
harder. Finding "real" blocked tasks is now like finding a needle in a
haystack.
--
Markus
* Re: Worker threads in D state since c5a94a618e7ac86 (workqueue: Use TASK_IDLE)
2017-09-21 11:08 ` Markus Trippelsdorf
@ 2017-09-21 12:30 ` Peter Zijlstra
2017-09-21 14:41 ` Markus Trippelsdorf
0 siblings, 1 reply; 11+ messages in thread
From: Peter Zijlstra @ 2017-09-21 12:30 UTC (permalink / raw)
To: Markus Trippelsdorf
Cc: Tejun Heo, linux-kernel, Luis R. Rodriguez, Eric W. Biederman,
Paul E. McKenney, Linus Torvalds
On Thu, Sep 21, 2017 at 01:08:42PM +0200, Markus Trippelsdorf wrote:
> On 2017.09.11 at 16:21 +0200, Markus Trippelsdorf wrote:
> > On 2017.09.11 at 06:11 -0700, Tejun Heo wrote:
> > > Hello,
> > >
> > > On Sun, Sep 10, 2017 at 09:36:53AM +0200, Markus Trippelsdorf wrote:
> > > > Since:
> > > >
> > > > commit c5a94a618e7ac86b20f53d947f68d7cee6a4c6bc
> > > > Author: Peter Zijlstra <peterz@infradead.org>
> > > > Date: Wed Aug 23 13:58:44 2017 +0200
> > > >
> > > > workqueue: Use TASK_IDLE
> > > >
> > > >
> > > > all worker threads are in D state. They all show up when using "magic
> > > > SysRq w". In htop they all have big fat red 'D' in the state column.
> > > > Is this really desirable?
> > > >
> > > > I have attached the output of "ps aux" after boot and the SysRq-w
> > > > output.
> > >
> > > Hmm.... looks like we better revert until we figure out how this
> > > should get presented in debugging facilities / to userspace. Peter?
> >
> > BTW rcu recently introduced the same issue:
> >
> > commit d5374226c3e444239e063f005dfb59cae4390db4
> > Author: Luis R. Rodriguez <mcgrof@kernel.org>
> > Date: Tue Jun 20 14:45:47 2017 -0700
> >
> > rcu: Use idle versions of swait to make idle-hack clear
>
> Ping?
> You may call it a cosmetic issue, but still it makes debugging much
> harder. Finding "real" blocked tasks is now like finding a needle in a
> haystack.
Sorry, was out traveling. We can easily fix sysrq-w, not sure we can do
much about htop (I've never seen it).
I suppose we can try and make the state character not be D, is that
really worth the trouble, or would it simply break htop if we were to
return a new character?
* Re: Worker threads in D state since c5a94a618e7ac86 (workqueue: Use TASK_IDLE)
2017-09-21 12:30 ` Peter Zijlstra
@ 2017-09-21 14:41 ` Markus Trippelsdorf
2017-09-22 9:35 ` Markus Trippelsdorf
0 siblings, 1 reply; 11+ messages in thread
From: Markus Trippelsdorf @ 2017-09-21 14:41 UTC (permalink / raw)
To: Peter Zijlstra
Cc: Tejun Heo, linux-kernel, Luis R. Rodriguez, Eric W. Biederman,
Paul E. McKenney, Linus Torvalds
On 2017.09.21 at 14:30 +0200, Peter Zijlstra wrote:
> On Thu, Sep 21, 2017 at 01:08:42PM +0200, Markus Trippelsdorf wrote:
> > On 2017.09.11 at 16:21 +0200, Markus Trippelsdorf wrote:
> > > On 2017.09.11 at 06:11 -0700, Tejun Heo wrote:
> > > > Hello,
> > > >
> > > > On Sun, Sep 10, 2017 at 09:36:53AM +0200, Markus Trippelsdorf wrote:
> > > > > Since:
> > > > >
> > > > > commit c5a94a618e7ac86b20f53d947f68d7cee6a4c6bc
> > > > > Author: Peter Zijlstra <peterz@infradead.org>
> > > > > Date: Wed Aug 23 13:58:44 2017 +0200
> > > > >
> > > > > workqueue: Use TASK_IDLE
> > > > >
> > > > >
> > > > > all worker threads are in D state. They all show up when using "magic
> > > > > SysRq w". In htop they all have big fat red 'D' in the state column.
> > > > > Is this really desirable?
> > > > >
> > > > > I have attached the output of "ps aux" after boot and the SysRq-w
> > > > > output.
> > > >
> > > > Hmm.... looks like we better revert until we figure out how this
> > > > should get presented in debugging facilities / to userspace. Peter?
> > >
> > > BTW rcu recently introduced the same issue:
> > >
> > > commit d5374226c3e444239e063f005dfb59cae4390db4
> > > Author: Luis R. Rodriguez <mcgrof@kernel.org>
> > > Date: Tue Jun 20 14:45:47 2017 -0700
> > >
> > > rcu: Use idle versions of swait to make idle-hack clear
> >
> > Ping?
> > You may call it a cosmetic issue, but still it makes debugging much
> > harder. Finding "real" blocked tasks is now like finding a needle in a
> > haystack.
>
> Sorry, was out traveling. We can easily fix sysrq-w, not sure we can do
> much about htop (I've never seen it).
>
> I suppose we can try and make the state character not be D, is that
> really worth the trouble, or would it simply break htop if we were to
> return a new character?
It seems to work. Simply returning "I (idle)" from get_task_state() in
fs/proc/array.c when the state is TASK_IDLE does the trick.
I've tested top, htop and ps.
--
Markus
* Re: Worker threads in D state since c5a94a618e7ac86 (workqueue: Use TASK_IDLE)
2017-09-21 14:41 ` Markus Trippelsdorf
@ 2017-09-22 9:35 ` Markus Trippelsdorf
2017-09-22 11:54 ` [RFC][PATCH] sched: Cleanup task->state printing Peter Zijlstra
0 siblings, 1 reply; 11+ messages in thread
From: Markus Trippelsdorf @ 2017-09-22 9:35 UTC (permalink / raw)
To: Peter Zijlstra
Cc: Tejun Heo, linux-kernel, Luis R. Rodriguez, Eric W. Biederman,
Paul E. McKenney, Linus Torvalds
On 2017.09.21 at 16:41 +0200, Markus Trippelsdorf wrote:
> On 2017.09.21 at 14:30 +0200, Peter Zijlstra wrote:
> > On Thu, Sep 21, 2017 at 01:08:42PM +0200, Markus Trippelsdorf wrote:
> > > On 2017.09.11 at 16:21 +0200, Markus Trippelsdorf wrote:
> > > > On 2017.09.11 at 06:11 -0700, Tejun Heo wrote:
> > > > > Hello,
> > > > >
> > > > > On Sun, Sep 10, 2017 at 09:36:53AM +0200, Markus Trippelsdorf wrote:
> > > > > > Since:
> > > > > >
> > > > > > commit c5a94a618e7ac86b20f53d947f68d7cee6a4c6bc
> > > > > > Author: Peter Zijlstra <peterz@infradead.org>
> > > > > > Date: Wed Aug 23 13:58:44 2017 +0200
> > > > > >
> > > > > > workqueue: Use TASK_IDLE
> > > > > >
> > > > > >
> > > > > > all worker threads are in D state. They all show up when using "magic
> > > > > > SysRq w". In htop they all have big fat red 'D' in the state column.
> > > > > > Is this really desirable?
> > > > > >
> > > > > > I have attached the output of "ps aux" after boot and the SysRq-w
> > > > > > output.
> > > > >
> > > > > Hmm.... looks like we better revert until we figure out how this
> > > > > should get presented in debugging facilities / to userspace. Peter?
> > > >
> > > > BTW rcu recently introduced the same issue:
> > > >
> > > > commit d5374226c3e444239e063f005dfb59cae4390db4
> > > > Author: Luis R. Rodriguez <mcgrof@kernel.org>
> > > > Date: Tue Jun 20 14:45:47 2017 -0700
> > > >
> > > > rcu: Use idle versions of swait to make idle-hack clear
> > >
> > > Ping?
> > > You may call it a cosmetic issue, but still it makes debugging much
> > > harder. Finding "real" blocked tasks is now like finding a needle in a
> > > haystack.
> >
> > Sorry, was out traveling. We can easily fix sysrq-w, not sure we can do
> > much about htop (I've never seen it).
> >
> > I suppose we can try and make the state character not be D, is that
> > really worth the trouble, or would it simply break htop if we were to
> > return a new character?
>
> It seems to work. Simply returning "I (idle)" from get_task_state() in
> fs/proc/array.c when the state is TASK_IDLE does the trick.
> I've tested top, htop and ps.
So perhaps something like this:
diff --git a/fs/proc/array.c b/fs/proc/array.c
index 525157ca25cb..741687be3b0d 100644
--- a/fs/proc/array.c
+++ b/fs/proc/array.c
@@ -142,6 +142,9 @@ static inline const char *get_task_state(struct task_struct *tsk)
BUILD_BUG_ON(1 + ilog2(TASK_REPORT) != ARRAY_SIZE(task_state_array)-1);
+ if (tsk->state == TASK_IDLE)
+ return "I (idle)";
+
return task_state_array[fls(state)];
}
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 18a6966567da..83681990d3f9 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -5188,7 +5188,8 @@ void show_state_filter(unsigned long state_filter)
*/
touch_nmi_watchdog();
touch_all_softlockup_watchdogs();
- if (!state_filter || (p->state & state_filter))
+ if (!state_filter ||
+ (!(p->state == TASK_IDLE) && p->state & state_filter))
sched_show_task(p);
}
--
Markus
* [RFC][PATCH] sched: Cleanup task->state printing
2017-09-22 9:35 ` Markus Trippelsdorf
@ 2017-09-22 11:54 ` Peter Zijlstra
2017-09-22 12:40 ` Markus Trippelsdorf
2017-09-22 14:12 ` Steven Rostedt
0 siblings, 2 replies; 11+ messages in thread
From: Peter Zijlstra @ 2017-09-22 11:54 UTC (permalink / raw)
To: Markus Trippelsdorf
Cc: Tejun Heo, linux-kernel, Luis R. Rodriguez, Eric W. Biederman,
Paul E. McKenney, Linus Torvalds, Steven Rostedt,
Thomas Gleixner, Ingo Molnar
On Fri, Sep 22, 2017 at 11:35:33AM +0200, Markus Trippelsdorf wrote:
> > It seems to work. Simply returning "I (idle)" from get_task_state() in
> > fs/proc/array.c when the state is TASK_IDLE does the trick.
> > I've tested top, htop and ps.
I ended up with the below; there was quite a lot of inconsistent state
printing around it seems.
I should probably split this thing into a bunch of patches :/
Alongside an explicit idle state, this also exposes TASK_PARKED,
although arguably we could map that to idle too. Opinions?
---
fs/proc/array.c | 35 ++++++++++++-----------
include/linux/sched.h | 58 +++++++++++++++++++++++----------------
include/trace/events/sched.h | 24 +++++++++++-----
kernel/sched/core.c | 22 ++++++++++++++-
kernel/sched/debug.c | 2 --
kernel/trace/trace_output.c | 21 ++++----------
kernel/trace/trace_sched_wakeup.c | 12 ++++----
7 files changed, 103 insertions(+), 71 deletions(-)
diff --git a/fs/proc/array.c b/fs/proc/array.c
index 88c355574aa0..5a076854857f 100644
--- a/fs/proc/array.c
+++ b/fs/proc/array.c
@@ -118,28 +118,31 @@ static inline void task_name(struct seq_file *m, struct task_struct *p)
* simple bit tests.
*/
static const char * const task_state_array[] = {
- "R (running)", /* 0 */
- "S (sleeping)", /* 1 */
- "D (disk sleep)", /* 2 */
- "T (stopped)", /* 4 */
- "t (tracing stop)", /* 8 */
- "X (dead)", /* 16 */
- "Z (zombie)", /* 32 */
+ /* states inside TASK_REPORT */
+
+ "R (running)", /* 0x00 */
+ "S (sleeping)", /* 0x01 */
+ "D (disk sleep)", /* 0x02 */
+ "T (stopped)", /* 0x04 */
+ "t (tracing stop)", /* 0x08 */
+ "X (dead)", /* 0x10 */
+ "Z (zombie)", /* 0x20 */
+ "P (parked)", /* 0x40 */
+
+ /* extra states, beyond TASK_REPORT */
+
+ "I (idle)", /* 0x80 */
};
static inline const char *get_task_state(struct task_struct *tsk)
{
- unsigned int state = (tsk->state | tsk->exit_state) & TASK_REPORT;
+ unsigned int tsk_state = READ_ONCE(tsk->state);
+ unsigned int state = (tsk_state | tsk->exit_state) & TASK_REPORT;
- /*
- * Parked tasks do not run; they sit in __kthread_parkme().
- * Without this check, we would report them as running, which is
- * clearly wrong, so we report them as sleeping instead.
- */
- if (tsk->state == TASK_PARKED)
- state = TASK_INTERRUPTIBLE;
+ if (tsk_state == TASK_IDLE)
+ state = TASK_REPORT_MAX;
- BUILD_BUG_ON(1 + ilog2(TASK_REPORT) != ARRAY_SIZE(task_state_array)-1);
+ BUILD_BUG_ON(1 + ilog2(TASK_REPORT) != ARRAY_SIZE(task_state_array)-2);
return task_state_array[fls(state)];
}
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 68b38335d33c..7ae81efb17bd 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -65,25 +65,27 @@ struct task_group;
*/
/* Used in tsk->state: */
-#define TASK_RUNNING 0
-#define TASK_INTERRUPTIBLE 1
-#define TASK_UNINTERRUPTIBLE 2
-#define __TASK_STOPPED 4
-#define __TASK_TRACED 8
+#define TASK_RUNNING 0x0000
+#define TASK_INTERRUPTIBLE 0x0001
+#define TASK_UNINTERRUPTIBLE 0x0002
+#define __TASK_STOPPED 0x0004
+#define __TASK_TRACED 0x0008
/* Used in tsk->exit_state: */
-#define EXIT_DEAD 16
-#define EXIT_ZOMBIE 32
+#define EXIT_DEAD 0x0010
+#define EXIT_ZOMBIE 0x0020
#define EXIT_TRACE (EXIT_ZOMBIE | EXIT_DEAD)
/* Used in tsk->state again: */
-#define TASK_DEAD 64
-#define TASK_WAKEKILL 128
-#define TASK_WAKING 256
-#define TASK_PARKED 512
-#define TASK_NOLOAD 1024
-#define TASK_NEW 2048
-#define TASK_STATE_MAX 4096
+#define TASK_PARKED 0x0040
+#define TASK_REPORT_MAX 0x0080
-#define TASK_STATE_TO_CHAR_STR "RSDTtXZxKWPNn"
+/* Not in TASK_REPORT: */
+#define TASK_DEAD 0x0080
+#define TASK_WAKEKILL 0x0100
+#define TASK_WAKING 0x0200
+#define TASK_NOLOAD 0x0400
+#define TASK_NEW 0x0800
+
+#define TASK_STATE_MAX 0x1000
/* Convenience macros for the sake of set_current_state: */
#define TASK_KILLABLE (TASK_WAKEKILL | TASK_UNINTERRUPTIBLE)
@@ -96,10 +98,11 @@ struct task_group;
#define TASK_NORMAL (TASK_INTERRUPTIBLE | TASK_UNINTERRUPTIBLE)
#define TASK_ALL (TASK_NORMAL | __TASK_STOPPED | __TASK_TRACED)
-/* get_task_state(): */
+/* task_state_to_char(), get_task_state(), trace_sched_switch() */
#define TASK_REPORT (TASK_RUNNING | TASK_INTERRUPTIBLE | \
TASK_UNINTERRUPTIBLE | __TASK_STOPPED | \
- __TASK_TRACED | EXIT_ZOMBIE | EXIT_DEAD)
+ __TASK_TRACED | EXIT_DEAD | EXIT_ZOMBIE | \
+ TASK_PARKED)
#define task_is_traced(task) ((task->state & __TASK_TRACED) != 0)
@@ -1244,17 +1247,24 @@ static inline pid_t task_pgrp_nr(struct task_struct *tsk)
return task_pgrp_nr_ns(tsk, &init_pid_ns);
}
-static inline char task_state_to_char(struct task_struct *task)
+static inline char __task_state_to_char(unsigned int state)
{
- const char stat_nam[] = TASK_STATE_TO_CHAR_STR;
- unsigned long state = task->state;
+ static const char state_char[] = "RSDTtXZPI";
- state = state ? __ffs(state) + 1 : 0;
+ BUILD_BUG_ON(1 + ilog2(TASK_REPORT) != sizeof(state_char) - 3);
+
+ return state_char[fls(state)];
+}
+
+static inline char task_state_to_char(struct task_struct *task)
+{
+ unsigned int tsk_state = READ_ONCE(task->state);
+ unsigned int state = (tsk_state | task->exit_state) & TASK_REPORT;
- /* Make sure the string lines up properly with the number of task states: */
- BUILD_BUG_ON(sizeof(TASK_STATE_TO_CHAR_STR)-1 != ilog2(TASK_STATE_MAX)+1);
+ if (tsk_state == TASK_IDLE)
+ state = TASK_REPORT_MAX;
- return state < sizeof(stat_nam) - 1 ? stat_nam[state] : '?';
+ return __task_state_to_char(state);
}
/**
diff --git a/include/trace/events/sched.h b/include/trace/events/sched.h
index ae1409ffe99a..af1858a01335 100644
--- a/include/trace/events/sched.h
+++ b/include/trace/events/sched.h
@@ -106,6 +106,8 @@ DEFINE_EVENT(sched_wakeup_template, sched_wakeup_new,
#ifdef CREATE_TRACE_POINTS
static inline long __trace_sched_switch_state(bool preempt, struct task_struct *p)
{
+ unsigned int state = READ_ONCE(p->state);
+
#ifdef CONFIG_SCHED_DEBUG
BUG_ON(p != current);
#endif /* CONFIG_SCHED_DEBUG */
@@ -114,10 +116,18 @@ static inline long __trace_sched_switch_state(bool preempt, struct task_struct *
* Preemption ignores task state, therefore preempted tasks are always
* RUNNING (we will not have dequeued if state != RUNNING).
*/
- return preempt ? TASK_RUNNING | TASK_STATE_MAX : p->state;
+ if (preempt)
+ return TASK_REPORT_MAX << 1;
+
+ if (state == TASK_IDLE)
+ return TASK_REPORT_MAX;
+
+ return (state | p->exit_state) & TASK_REPORT;
}
#endif /* CREATE_TRACE_POINTS */
+#define TRACE_REPORT_MASK ((TASK_REPORT_MAX << 1) - 1)
+
/*
* Tracepoint for task switches, performed by the scheduler:
*/
@@ -152,13 +162,13 @@ TRACE_EVENT(sched_switch,
TP_printk("prev_comm=%s prev_pid=%d prev_prio=%d prev_state=%s%s ==> next_comm=%s next_pid=%d next_prio=%d",
__entry->prev_comm, __entry->prev_pid, __entry->prev_prio,
- __entry->prev_state & (TASK_STATE_MAX-1) ?
- __print_flags(__entry->prev_state & (TASK_STATE_MAX-1), "|",
+
+ __entry->prev_state & TRACE_REPORT_MASK ?
+ __print_flags(__entry->prev_state & TRACE_REPORT_MASK, "|",
{ 1, "S"} , { 2, "D" }, { 4, "T" }, { 8, "t" },
- { 16, "Z" }, { 32, "X" }, { 64, "x" },
- { 128, "K" }, { 256, "W" }, { 512, "P" },
- { 1024, "N" }) : "R",
- __entry->prev_state & TASK_STATE_MAX ? "+" : "",
+ { 16, "X" }, { 32, "Z" }, { 64, "P" },
+ { 128, "I" }) : "R",
+ __entry->prev_state & (TASK_REPORT_MAX << 1) ? "+" : "",
__entry->next_comm, __entry->next_pid, __entry->next_prio)
);
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 703f5831738e..431e2d6c709e 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -5164,6 +5164,26 @@ void sched_show_task(struct task_struct *p)
put_task_stack(p);
}
+static inline bool state_filter_match(unsigned long state_filter, struct task_struct *p)
+{
+ /* no filter, everything matches */
+ if (!state_filter)
+ return true;
+
+ /* filter, but doesn't match */
+ if (!(p->state & state_filter))
+ return false;
+
+ /*
+ * When looking for TASK_UNINTERRUPTIBLE, skip TASK_IDLE, but allow
+ * TASK_KILLABLE.
+ */
+ if (state_filter == TASK_UNINTERRUPTIBLE && p->state == TASK_IDLE)
+ return false;
+
+ return true;
+}
+
void show_state_filter(unsigned long state_filter)
{
struct task_struct *g, *p;
@@ -5186,7 +5206,7 @@ void show_state_filter(unsigned long state_filter)
*/
touch_nmi_watchdog();
touch_all_softlockup_watchdogs();
- if (!state_filter || (p->state & state_filter))
+ if (state_filter_match(state_filter, p))
sched_show_task(p);
}
diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c
index 4a23bbc3111b..244619e402cc 100644
--- a/kernel/sched/debug.c
+++ b/kernel/sched/debug.c
@@ -461,8 +461,6 @@ static char *task_group_path(struct task_group *tg)
}
#endif
-static const char stat_nam[] = TASK_STATE_TO_CHAR_STR;
-
static void
print_task(struct seq_file *m, struct rq *rq, struct task_struct *p)
{
diff --git a/kernel/trace/trace_output.c b/kernel/trace/trace_output.c
index bac629af2285..c738e764e2a5 100644
--- a/kernel/trace/trace_output.c
+++ b/kernel/trace/trace_output.c
@@ -656,15 +656,6 @@ int trace_print_lat_context(struct trace_iterator *iter)
return !trace_seq_has_overflowed(s);
}
-static const char state_to_char[] = TASK_STATE_TO_CHAR_STR;
-
-static int task_state_char(unsigned long state)
-{
- int bit = state ? __ffs(state) + 1 : 0;
-
- return bit < sizeof(state_to_char) - 1 ? state_to_char[bit] : '?';
-}
-
/**
* ftrace_find_event - find a registered event
* @type: the type of event to look for
@@ -930,8 +921,8 @@ static enum print_line_t trace_ctxwake_print(struct trace_iterator *iter,
trace_assign_type(field, iter->ent);
- T = task_state_char(field->next_state);
- S = task_state_char(field->prev_state);
+ T = __task_state_to_char(field->next_state);
+ S = __task_state_to_char(field->prev_state);
trace_find_cmdline(field->next_pid, comm);
trace_seq_printf(&iter->seq,
" %5d:%3d:%c %s [%03d] %5d:%3d:%c %s\n",
@@ -966,8 +957,8 @@ static int trace_ctxwake_raw(struct trace_iterator *iter, char S)
trace_assign_type(field, iter->ent);
if (!S)
- S = task_state_char(field->prev_state);
- T = task_state_char(field->next_state);
+ S = __task_state_to_char(field->prev_state);
+ T = __task_state_to_char(field->next_state);
trace_seq_printf(&iter->seq, "%d %d %c %d %d %d %c\n",
field->prev_pid,
field->prev_prio,
@@ -1002,8 +993,8 @@ static int trace_ctxwake_hex(struct trace_iterator *iter, char S)
trace_assign_type(field, iter->ent);
if (!S)
- S = task_state_char(field->prev_state);
- T = task_state_char(field->next_state);
+ S = __task_state_to_char(field->prev_state);
+ T = __task_state_to_char(field->next_state);
SEQ_PUT_HEX_FIELD(s, field->prev_pid);
SEQ_PUT_HEX_FIELD(s, field->prev_prio);
diff --git a/kernel/trace/trace_sched_wakeup.c b/kernel/trace/trace_sched_wakeup.c
index ddec53b67646..b14caa0afd35 100644
--- a/kernel/trace/trace_sched_wakeup.c
+++ b/kernel/trace/trace_sched_wakeup.c
@@ -380,7 +380,7 @@ probe_wakeup_migrate_task(void *ignore, struct task_struct *task, int cpu)
}
static void
-tracing_sched_switch_trace(struct trace_array *tr,
+tracing_sched_switch_trace(bool preempt, struct trace_array *tr,
struct task_struct *prev,
struct task_struct *next,
unsigned long flags, int pc)
@@ -397,10 +397,10 @@ tracing_sched_switch_trace(struct trace_array *tr,
entry = ring_buffer_event_data(event);
entry->prev_pid = prev->pid;
entry->prev_prio = prev->prio;
- entry->prev_state = prev->state;
+ entry->prev_state = __trace_sched_switch_state(preempt, prev);
entry->next_pid = next->pid;
entry->next_prio = next->prio;
- entry->next_state = next->state;
+ entry->next_state = __trace_sched_switch_state(false, next);
entry->next_cpu = task_cpu(next);
if (!call_filter_check_discard(call, entry, buffer, event))
@@ -425,10 +425,10 @@ tracing_sched_wakeup_trace(struct trace_array *tr,
entry = ring_buffer_event_data(event);
entry->prev_pid = curr->pid;
entry->prev_prio = curr->prio;
- entry->prev_state = curr->state;
+ entry->prev_state = __trace_sched_switch_state(false, curr);
entry->next_pid = wakee->pid;
entry->next_prio = wakee->prio;
- entry->next_state = wakee->state;
+ entry->next_state = __trace_sched_switch_state(false, wakee);
entry->next_cpu = task_cpu(wakee);
if (!call_filter_check_discard(call, entry, buffer, event))
@@ -482,7 +482,7 @@ probe_wakeup_sched_switch(void *ignore, bool preempt,
data = per_cpu_ptr(wakeup_trace->trace_buffer.data, wakeup_cpu);
__trace_function(wakeup_trace, CALLER_ADDR0, CALLER_ADDR1, flags, pc);
- tracing_sched_switch_trace(wakeup_trace, prev, next, flags, pc);
+ tracing_sched_switch_trace(preempt, wakeup_trace, prev, next, flags, pc);
T0 = data->preempt_timestamp;
T1 = ftrace_now(cpu);
* Re: [RFC][PATCH] sched: Cleanup task->state printing
2017-09-22 11:54 ` [RFC][PATCH] sched: Cleanup task->state printing Peter Zijlstra
@ 2017-09-22 12:40 ` Markus Trippelsdorf
2017-09-22 14:12 ` Steven Rostedt
1 sibling, 0 replies; 11+ messages in thread
From: Markus Trippelsdorf @ 2017-09-22 12:40 UTC (permalink / raw)
To: Peter Zijlstra
Cc: Tejun Heo, linux-kernel, Luis R. Rodriguez, Eric W. Biederman,
Paul E. McKenney, Linus Torvalds, Steven Rostedt,
Thomas Gleixner, Ingo Molnar
On 2017.09.22 at 13:54 +0200, Peter Zijlstra wrote:
> On Fri, Sep 22, 2017 at 11:35:33AM +0200, Markus Trippelsdorf wrote:
> > > It seems to work. Simply returning "I (idle)" from get_task_state() in
> > > fs/proc/array.c when the state is TASK_IDLE does the trick.
> > > I've tested top, htop and ps.
>
> I ended up with the below; there was quite a lot of inconsistent state
> printing around it seems.
>
> I should probably split this thing into a bunch of patches :/
>
> Alongside an explicit idle state, this also exposes TASK_PARKED,
> although arguably we could map that to idle too. Opinions?
Looks good to me and works as expected.
Many thanks.
--
Markus
* Re: [RFC][PATCH] sched: Cleanup task->state printing
2017-09-22 11:54 ` [RFC][PATCH] sched: Cleanup task->state printing Peter Zijlstra
2017-09-22 12:40 ` Markus Trippelsdorf
@ 2017-09-22 14:12 ` Steven Rostedt
2017-09-22 15:56 ` Peter Zijlstra
1 sibling, 1 reply; 11+ messages in thread
From: Steven Rostedt @ 2017-09-22 14:12 UTC (permalink / raw)
To: Peter Zijlstra
Cc: Markus Trippelsdorf, Tejun Heo, linux-kernel, Luis R. Rodriguez,
Eric W. Biederman, Paul E. McKenney, Linus Torvalds,
Thomas Gleixner, Ingo Molnar
On Fri, 22 Sep 2017 13:54:30 +0200
Peter Zijlstra <peterz@infradead.org> wrote:
> I should probably split this thing into a bunch of patches :/
Yes please. Convert from dec to hex in one patch and one patch only.
Because I'm not sure if you meant to change numbers or not.
> /* Used in tsk->state again: */
> -#define TASK_DEAD 64
> -#define TASK_WAKEKILL 128
> -#define TASK_WAKING 256
> -#define TASK_PARKED 512
> -#define TASK_NOLOAD 1024
> -#define TASK_NEW 2048
> -#define TASK_STATE_MAX 4096
> +#define TASK_PARKED 0x0040
> +#define TASK_REPORT_MAX 0x0080
>
> -#define TASK_STATE_TO_CHAR_STR "RSDTtXZxKWPNn"
> +/* Not in TASK_REPORT: */
> +#define TASK_DEAD 0x0080
TASK_DEAD went from 64 to 128 (0x40 to 0x80)
As well as all the defines below that. Was this on purpose?
> +#define TASK_WAKEKILL 0x0100
> +#define TASK_WAKING 0x0200
> +#define TASK_NOLOAD 0x0400
> +#define TASK_NEW 0x0800
> +
> +#define TASK_STATE_MAX 0x1000
-- Steve
* Re: [RFC][PATCH] sched: Cleanup task->state printing
2017-09-22 14:12 ` Steven Rostedt
@ 2017-09-22 15:56 ` Peter Zijlstra
0 siblings, 0 replies; 11+ messages in thread
From: Peter Zijlstra @ 2017-09-22 15:56 UTC (permalink / raw)
To: Steven Rostedt
Cc: Markus Trippelsdorf, Tejun Heo, linux-kernel, Luis R. Rodriguez,
Eric W. Biederman, Paul E. McKenney, Linus Torvalds,
Thomas Gleixner, Ingo Molnar
On Fri, Sep 22, 2017 at 10:12:45AM -0400, Steven Rostedt wrote:
> On Fri, 22 Sep 2017 13:54:30 +0200
> Peter Zijlstra <peterz@infradead.org> wrote:
>
> > I should probably split this thing into a bunch of patches :/
>
> Yes please. Convert form dec to hex in one patch and one patch only.
Yeah, was already on it, did more cleanups too.
> Because I'm not sure if you meant to change numbers or not.
>
>
> > /* Used in tsk->state again: */
> > -#define TASK_DEAD 64
> > -#define TASK_WAKEKILL 128
> > -#define TASK_WAKING 256
> > -#define TASK_PARKED 512
> > -#define TASK_NOLOAD 1024
> > -#define TASK_NEW 2048
> > -#define TASK_STATE_MAX 4096
> > +#define TASK_PARKED 0x0040
> > +#define TASK_REPORT_MAX 0x0080
> >
> > -#define TASK_STATE_TO_CHAR_STR "RSDTtXZxKWPNn"
> > +/* Not in TASK_REPORT: */
> > +#define TASK_DEAD 0x0080
>
> TASK_DEAD went from 64 to 128 (0x40 to 0x80)
>
> As well as all the defines below that. Was this on purpose?
Yes, was on purpose. I moved TASK_PARKED up, such that I could include
it in the TASK_REPORT mask and keep that contiguous.
2017-09-10 7:36 Worker threads in D state since c5a94a618e7ac86 (workqueue: Use TASK_IDLE) Markus Trippelsdorf
2017-09-11 13:11 ` Tejun Heo
2017-09-11 14:21 ` Markus Trippelsdorf
2017-09-21 11:08 ` Markus Trippelsdorf
2017-09-21 12:30 ` Peter Zijlstra
2017-09-21 14:41 ` Markus Trippelsdorf
2017-09-22 9:35 ` Markus Trippelsdorf
2017-09-22 11:54 ` [RFC][PATCH] sched: Cleanup task->state printing Peter Zijlstra
2017-09-22 12:40 ` Markus Trippelsdorf
2017-09-22 14:12 ` Steven Rostedt
2017-09-22 15:56 ` Peter Zijlstra